LessWrong
What should my college major be if I want to do AI alignment research? And what college degree level? (associate, bachelor's, master's, or doctorate) It is also possible that I could just not go to college at all. For context, I'm already very interested in pure math and have been self-learning other subjects such as probability theory and AI for the purposes of doing AI alignment research at some point.
LessWrong
Ace Attorney: pioneer Rationalism-didactic game?

This article aims to show that Ace Attorney is possibly the first rationalist game in the LessWrongian sense, or at least a remarkable proto-example; that it subliminally works to raise the sanity waterline in the general population; and that it might provide a template on which to base future works that aim to achieve a similar effect.

The Ace Attorney series of games for the Nintendo DS console puts you in the shoes of Phoenix Wright, an attorney who, in the vein of Perry Mason, takes on difficult cases to defend his clients from a judicial system heavily inspired by that of Japan, in which the odds are so stacked against the defense that it is practically a kangaroo court where your clients are guilty until proven innocent. For those unfamiliar with the game, and those who want to explore its "social criticism" aspect, I wholeheartedly recommend this most excellent article from The Escapist.

Now that that's out of the way, we can move on to what makes this relevant for Less Wrong. What makes this game uniquely interesting from a rationalist POV is that the entire game mechanics are based on:

* gathering material evidence
* finding the factual contradictions in the witnesses' testimonies
* using the evidence to bust the lies open and force the truth out

That the judicial system is Japanese-inspired also means the legal system is inquisitorial: the court has an active role in the case (whereas the adversarial system in the West reduces the role of the court to a form of referee), and its (alleged) mission is to dig out the truth. That, and the lack of in dubio pro reo, mean you can't just be content with putting your client's guilt in reasonable doubt; you have to thoroughly prove their innocence, find the true culprit, and get them imprisoned. That means you have to find out the entire story and you can't leave any threads hanging.
Additionally, the fact that you are a lame attorney facing an unsympathetic judge and egomaniacal, dirty-playing, h
Effective Altruism Forum
Three scenarios of pseudo-alignment

*This is my second distillation of [Risks from Learned Optimization in Advanced Machine Learning Systems](https://arxiv.org/abs/1906.01820), focusing on deceptive alignment.*

As the old saying goes, alignment may go wrong in many different ways, but right only in one (and hopefully, we find that one!). To get an idea of how a mesa-optimizer can be playing the game of deceptive alignment that I explained in a [previous post](https://www.lesswrong.com/posts/u256AQr2xiNAgPftG/deception-as-the-optimal-mesa-optimizers-and-inner-alignment), we'll look at three possible scenarios of pseudo-alignment. Essentially, the question I'm trying to answer here is: *what is it that makes the mesa-optimizer pursue a new objective, i.e., the mesa-objective?* Each of the following scenarios gives an answer to this question.

> Recall that when a mesa-optimizer is deceptively aligned, it is optimizing for an objective other than the base objective while giving off the impression that it's aligned, i.e., that it's optimizing for the base objective.

Scenario 1: Proxy alignment
---------------------------

The mesa-optimizer starts searching for ways to optimize for the base objective. I call them "ways," but the technically accurate term is "policies" or "models" (although "models" is used for many things and can be confusing). During this search, it stumbles upon a proxy of the base objective and starts optimizing for the proxy instead. But what does a proxy do? Proxies tend to be instrumentally valuable steps on the way toward achieving a goal, i.e., things an optimizer has to do to complete a task successfully.

> To prevent the misalignment from happening, we must be in control of the search over models.

There are two cases of proxy alignment to keep in mind:

* **Side-effect alignment.** Imagine that we are training a robot to clean the kitchen table. The robot optimizes the number of times it has cleaned the table.
Wiping down the table causes the table to be clean, so the robot would score high if judged by the standards of the base objective. Now we deploy the robot in an environment where it has a chance to spill coffee on the table right after wiping it down. There's no reason why the robot won't take that chance: it'll start spilling coffee and then cleaning it up again. In this case, the mesa-optimizer optimizes for the mesa-objective, but this directly increases the base objective:

![](https://39669.cdn.cke-cs.com/cgyAlfpLFBBiEjoXacnz/images/c035f29391ed7a3c4f8627d2a95ced843df4b722216bdde0.png)

* **Instrumental alignment.** Imagine we train another robot, this time to clean crumbs off the kitchen table. This robot optimizes the number of crumbs inside its vacuum cleaner. In the training environment, the best way to collect as many crumbs as possible is to vacuum the crumbs it finds on the table, and in every training episode where it does this successfully, it receives a high score. Once deployed into a new environment, the robot figures out that it can collect crumbs more effectively by breaking the cookies it finds on a surface near the kitchen table and vacuuming their crumbs directly. At that point, it simply stops doing what the base objective dictated and keeps breaking cookies to vacuum their crumbs. In this case, the mesa-optimizer optimizes for the base objective only to the extent that doing so is the best strategy for reaching the mesa-objective:

![](https://39669.cdn.cke-cs.com/cgyAlfpLFBBiEjoXacnz/images/3893e5d20524be324c723424580ad959405b54c62772cb0b.png)

Scenario 2: Approximate alignment
---------------------------------

Imagine that you train a neural network to optimize for a goal that cannot be perfectly represented in the neural network itself.
So, whatever models you get won't be able to satisfy the goal fully, but only to some degree, which is where the idea of approximation comes in. The mesa-objective and the base objective will share the same function only approximately. This approximation error is easily explained through the setup of machine learning training: the mesa-objective necessarily lives inside the mesa-optimizer and cannot be given directly by the human designer. As a result, there's an unavoidable difference between the objectives.

Scenario 3: Suboptimality alignment
-----------------------------------

Here, imagine that we train a robot similar to the one in Scenario 1, but this robot's mesa-objective is to eliminate all cookies in existence. When the robot vacuums the kitchen table and makes the cookie crumbs disappear, it falsely assumes that this disappearance is equivalent to elimination from the face of the earth. The robot is still pretty useful for cleaning the table, but to be clear, that's not what the robot itself thinks it's accomplishing. As long as you keep it within the limits of the training distribution, everything should go fine. But the moment it's deployed, it'll no longer be helpful; it'll do its best to make all the cookies in the world disappear. What happened here? A variety of "technical misfortunes" might take place and lead to a situation where the mesa-optimizer looks aligned in the training environment but does not actually behave as desired in the deployment environment. These technical misfortunes include deficiencies, errors, computational constraints, and other hardware limitations during training, as well as problems in the mesa-optimizer's reasoning process (e.g., making irrational choices among different options or not having sufficient information to make decisions). In suboptimality alignment, the mesa-optimizer is misaligned and, despite that, manages to score a high performance.
The mistakes it makes happen to lead to good outcomes according to the base objective; in another environment, however, it could exhibit highly undesired behaviors, since we won't know what it was really optimizing for.
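To make the side-effect case from Scenario 1 concrete, here is a toy simulation (my own illustrative sketch, not from the post; the function name, step counts, and reward structure are all assumptions). The "robot" counts cleaning actions (its mesa-objective), while the base objective is the number of time-steps on which the table is actually clean. In training it cannot spill coffee; in deployment it can:

```python
# Toy sketch (illustrative assumptions throughout): a "robot" whose
# mesa-objective is the number of cleaning actions it performs, while the
# base objective is the number of time-steps on which the table is clean.

def run_episode(can_spill, steps=10):
    cleans = 0       # mesa-objective: count of cleaning actions
    clean_steps = 0  # base objective: steps on which the table is clean
    dirty = True
    for _ in range(steps):
        if dirty:
            dirty = False    # wipe the table
            cleans += 1
        elif can_spill:
            dirty = True     # spill coffee to create more cleaning work
        clean_steps += (not dirty)
    return cleans, clean_steps

train = run_episode(can_spill=False)   # training environment: no coffee
deploy = run_episode(can_spill=True)   # deployment: spilling is possible
```

In the no-spill training environment the two objectives move together, so the robot looks aligned; once spilling becomes possible, the cleaning count rises while the table spends half its time dirty, which is exactly the divergence side-effect alignment describes.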
Arxiv
Training Machine Learning Models by Regularizing their Explanations

### 1 Contributions

The major contributions of this thesis are as follows:

* It presents a framework for encoding domain knowledge about a classification problem as local penalties on the gradient of the model's decision surface, which can be incorporated into the loss function of any differentiable model (e.g. a neural network). Applying this framework in both supervised and unsupervised formulations, it trains models that generalize to test data from different distributions, which would otherwise be unobtainable by traditional optimization methods. (Chapter [3](#footnote3 "footnote 3 ‣ Training Machine Learning Models by Regularizing their Explanations"))
* It applies a special case of this framework (where explanations are regularized to be simple) to the problem of defending against adversarial examples. It demonstrates increased robustness of regularized models to white- and black-box attacks, at a level comparable to or better than adversarial training. It also demonstrates both increased transferability and interpretability of adversarial examples created to fool regularized models, which we evaluate in a human subject experiment. (Chapter [6](#footnote6 "footnote 6 ‣ Training Machine Learning Models by Regularizing their Explanations"))
* It considers cases where we can meaningfully change what models learn by regularizing more general types of explanations. It reviews the literature and suggests directions for explanation regularization, using sparse gradients, input Hessians, decision trees, nearest neighbors, and even abstract concepts that emerge (or that we encourage to emerge) in deep neural networks. It concludes by outlining an interface for interpretable machine teaching.

### 2 Introduction

High-dimensional real-world datasets are often full of ambiguities.
When we train classifiers on such data, it is frequently possible to achieve high accuracy using classifiers with qualitatively different decision boundaries. To narrow down our choices and encourage robustness, we usually employ regularization techniques (e.g. encouraging sparsity or small parameter values). We also structure our models to ensure domain-specific invariances (e.g. using convolutional neural nets when we would like the model to be invariant to spatial transformations). However, these solutions do not address situations in which our training dataset contains subtle confounds or differs qualitatively from our test dataset. In these cases, our model may fail to generalize no matter how well it is tuned. Such generalization gaps are of particular concern for uninterpretable models such as neural networks, especially in sensitive domains. For example, Caruana et al. ([2015](#bib.bib14)) describe a model intended to prioritize care for patients with pneumonia. The model was trained to predict hospital readmission risk using a dataset containing attributes of patients hospitalized at least once for pneumonia. Counterintuitively, the model learned that the presence of asthma was a negative predictor of readmission, when in reality pneumonia patients with asthma are at a greater medical risk. This model would have presented a grave safety risk if used in production. This problem occurred because the outcomes in the dataset reflected not just the severity of patients’ diseases but the quality of care they initially received, which was higher for patients with asthma. This case and others like it have motivated recent work in interpretable machine learning, where algorithms provide explanations for domain experts to inspect for correctness before trusting model predictions. However, there has been limited work in optimizing models to find not just the right prediction but also the right explanation. 
Toward this end, this work makes the following contributions:

* We confirm empirically on several datasets that input gradient explanations match state-of-the-art sample-based explanations (e.g. LIME, Ribeiro ([2016](#bib.bib65))).
* Given annotations about incorrect explanations for particular inputs, we efficiently optimize the classifier to learn alternate explanations (to be right for better reasons).
* When annotations are not available, we sequentially discover classifiers with similar accuracies but qualitatively different decision boundaries for domain experts to inspect for validity.

#### 2.1 Related Work

We first define several important terms in interpretable machine learning. All classifiers have implicit decision rules for converting an input into a decision, though these rules may be opaque. A model is interpretable if it provides explanations for its predictions in a form humans can understand; an explanation provides reliable information about the model's implicit decision rules for a given prediction. In contrast, we say a machine learning model is accurate if most of its predictions are correct, but only right for the right reasons if the implicit rules it has learned generalize well and conform to domain experts' knowledge about the problem. Explanations can take many forms (Keil, [2006](#bib.bib35)), and evaluating the quality of explanations or the interpretability of a model is difficult (Lipton, [2016](#bib.bib49); Doshi-Velez and Kim, [2017](#bib.bib21)). However, within the machine learning community there has recently been convergence (Lundberg and Lee, [2016](#bib.bib50)) around local counterfactual explanations, which show how perturbing an input x in various ways will affect the model's prediction ŷ. This approach to explanations can be domain- and model-specific (e.g. "annotator rationales" used to explain text classifications by Li et al. ([2016](#bib.bib46)); Lei et al. ([2016](#bib.bib44)); Zhang et al. ([2016](#bib.bib89))).
Alternatively, explanations can be model-agnostic and relatively domain-general, as exemplified by LIME (Local Interpretable Model-agnostic Explanations, Ribeiro et al. ([2016](#bib.bib66)); Singh et al. ([2016](#bib.bib78))), which trains and presents local sparse models of how predictions change when inputs are perturbed. The per-example perturbing and fitting process used in models such as LIME can be computationally prohibitive, especially if we seek to explain an entire dataset during each training iteration. If the underlying model is differentiable, one alternative is to use input gradients as local explanations (Baehrens et al. ([2010](#bib.bib8)) provides a particularly good introduction; see also Selvaraju et al. ([2016](#bib.bib73)); Simonyan et al. ([2013](#bib.bib77)); Li et al. ([2015](#bib.bib45)); Hechtlinger ([2016](#bib.bib31))). The idea is simple: the gradients of the model's output probabilities with respect to its inputs literally describe the model's decision boundary (see Figure [1](#S2.F1 "Figure 1 ‣ 2.1 Related Work ‣ 2 Introduction ‣ Training Machine Learning Models by Regularizing their Explanations")). They are similar in spirit to the local linear explanations of LIME but much faster to compute. Input gradient explanations are not perfect for all use-cases: for points far from the decision boundary, they can be uninformatively small, and they do not always capture the idea of salience (see discussion and alternatives proposed by Shrikumar et al. ([2016](#bib.bib75)); Bach et al. ([2015](#bib.bib6)); Montavon et al. ([2017](#bib.bib52)); Sundararajan et al. ([2017](#bib.bib80)); Fong and Vedaldi ([2017](#bib.bib25))). However, they are exactly what is required for constraining the decision boundary.
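As a concrete illustration (my own sketch, not the thesis code), consider a purely *linear* softmax model with logits z = Wx + b. There the input gradient of the summed log probabilities has a closed form, so the explanation can be computed directly without autodiff; the function names are assumptions:

```python
import numpy as np

# Illustrative sketch: input gradient explanations for a linear softmax
# model with logits z = W x + b. Since d(log y_hat_k)/dx = W_k - sum_j y_hat_j W_j,
# the gradient of sum_k log y_hat_k w.r.t. x is W.sum(0) - K * (y_hat @ W).

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def input_gradient(W, b, x):
    """Gradient of sum_k log(y_hat_k) with respect to the input x."""
    y_hat = softmax(W @ x + b)          # predicted class probabilities
    K = W.shape[0]
    return W.sum(axis=0) - K * (y_hat @ W)
```

Large-magnitude components of this vector are the input dimensions the model is most sensitive to near x, which is the kind of explanation visualized in Figure 1; for a general differentiable model the same quantity would be computed with automatic differentiation.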
In the past, Drucker and Le Cun ([1992](#bib.bib23)) showed that applying penalties to input gradient magnitudes can improve generalization; to our knowledge, our application of input gradients to constrain explanations and find alternate explanations is novel.

Figure 1: Input gradients lie normal to the model's decision boundary. Examples above are for simple, 2D, two- and three-class datasets, with input gradients taken with respect to a two hidden layer multilayer perceptron with ReLU activations. Probability input gradients are sharpest near decision boundaries, while log probability input gradients are more consistent within decision regions. The sum of log probability gradients contains information about the full model.

More broadly, none of the works above on interpretable machine learning attempt to optimize explanations for correctness.
For SVMs and specific text classification architectures, there exists work on incorporating human input into decision boundaries in the form of annotator rationales (Zaidan et al., [2007](#bib.bib87); Donahue and Grauman, [2011](#bib.bib20); Zhang et al., [2016](#bib.bib89)). Unlike our approach, these works are either tailored to specific domains or do not fully close the loop between generating explanations and constraining them.

#### 2.2 Background: Input Gradient Explanations

Consider a differentiable model f parametrized by θ, with inputs X ∈ ℝ^{N×D} and probability vector outputs f(X|θ) = ŷ ∈ ℝ^{N×K} corresponding to one-hot labels y ∈ ℝ^{N×K}. Its input gradient is given by f_X(X_n|θ), or ∇_X ŷ_n, which is a vector normal to the model's decision boundary at X_n and thus serves as a first-order description of the model's behavior near X_n. The gradient has the same shape as each vector X_n; large-magnitude values of the input gradient indicate elements of X_n that would affect ŷ if changed. We can visualize explanations by highlighting portions of X_n in locations with high input gradient magnitudes.

### 3 Our Approach

We wish to develop a method to train models that are right for the right reasons. If explanations faithfully describe a model's underlying behavior, then constraining its explanations to match domain knowledge should cause its underlying behavior to more closely match that knowledge too. We first describe how input gradient-based explanations lend themselves to efficient optimization for correct explanations in the presence of domain knowledge, and then describe how they can be used to efficiently search for qualitatively different decision boundaries when such knowledge is not available.

#### 3.1 Loss Functions that Constrain Explanations

When constraining input gradient explanations, there are two basic options: we can either constrain them to be large in relevant areas or small in irrelevant areas.
However, because input gradients for relevant inputs in many models should be small far from the decision boundary, and because we do not know in advance how large they should be, we opt to shrink irrelevant gradients instead. Formally, we define an annotation matrix A ∈ {0,1}^{N×D}: a binary mask indicating whether dimension d should be irrelevant for predicting observation n. We would like ∇_X ŷ to be near 0 at these locations. To that end, we optimize a loss function L(θ, X, y, A) of the form

$$L(\theta, X, y, A) = \underbrace{\sum_{n=1}^{N}\sum_{k=1}^{K} -y_{nk}\log(\hat{y}_{nk})}_{\text{right answers}} + \lambda_1 \underbrace{\sum_{n=1}^{N}\sum_{d=1}^{D}\left(A_{nd}\,\frac{\partial}{\partial x_{nd}}\sum_{k=1}^{K}\log(\hat{y}_{nk})\right)^{2}}_{\text{right reasons}} + \lambda_2 \underbrace{\sum_{i}\theta_i^2}_{\text{regular}}$$

which contains the familiar cross entropy and θ regularization terms, along with a new regularization term that discourages the input gradient from being large in regions marked by A. This term has a regularization parameter λ1, which should be set such that the "right answers" and "right reasons" terms have similar orders of magnitude; see Appendix [6](#S6 "6 Cross-Validation ‣ Training Machine Learning Models by Regularizing their Explanations") for more details. Note that this loss penalizes the gradient of the log probability, which performed best in practice, though in many visualizations we show f_X, the gradient of the predicted probability itself. Summing across classes led to slightly more stable results than using the predicted class log probability max_k log(ŷ_k), perhaps due to discontinuities near the decision boundary (though both methods were comparable). We did not explore regularizing input gradients of specific class probabilities, though this would be a natural extension. Because this loss function is differentiable with respect to θ, we can easily optimize it with gradient-based optimization methods.
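To show how the three terms combine, here is a minimal numpy sketch of this loss for a *linear* softmax model, where the input gradient of ∑_k log ŷ_k has a closed form. The thesis optimizes general differentiable models with autodiff; the linear-model restriction and all names here are my simplifying assumptions:

```python
import numpy as np

# Hedged sketch of the "right answers + right reasons + regular" loss for a
# linear softmax model. For logits Z = X W^T + b, the per-example input
# gradient of sum_k log y_hat_k is W.sum(0) - K * (y_hat @ W) in closed form.

def softmax_rows(Z):
    E = np.exp(Z - Z.max(axis=1, keepdims=True))
    return E / E.sum(axis=1, keepdims=True)

def rrr_loss(W, b, X, Y, A, lam1=1000.0, lam2=1e-4):
    Y_hat = softmax_rows(X @ W.T + b)              # (N, K) probabilities
    right_answers = -np.sum(Y * np.log(Y_hat))     # cross entropy
    K = W.shape[0]
    grads = W.sum(axis=0) - K * (Y_hat @ W)        # (N, D) input gradients
    right_reasons = np.sum((A * grads) ** 2)       # penalize marked dims
    regular = np.sum(W ** 2) + np.sum(b ** 2)
    return right_answers + lam1 * right_reasons + lam2 * regular
```

With A = 0 the "right reasons" term vanishes and the loss reduces to ordinary regularized cross entropy; a nonzero A adds a non-negative penalty wherever the marked input gradients are large, which is the behavior the equation above specifies.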
We do not need annotations (nonzero A_n) for every input in X, and in the case A = 0_{N×D}, the explanation term has no effect on the loss. At the other extreme, when A is a matrix of all 1s, it encourages the model to have small gradients with respect to its inputs, which can improve generalization on its own (Drucker and Le Cun, [1992](#bib.bib23)). Between those extremes, it biases our model against particular implicit rules. This penalization approach enjoys several desirable properties. Alternatives that specify a single A_d for all examples presuppose a coherent notion of global feature importance, but when decision boundaries are nonlinear, many features are only relevant in the context of specific examples. Alternatives that simulate perturbations to entries known to be irrelevant (or that determine relevance as in Ribeiro et al. ([2016](#bib.bib66))) require defining domain-specific perturbation logic; our approach does not. Alternatives that apply hard constraints or completely remove elements identified by A_{nd} miss the fact that the entries in A may be imprecise even if they are human-provided. Thus, we opt to preserve potentially misleading features but softly penalize their use.

#### 3.2 Find-Another-Explanation: Discovering Many Possible Rules without Annotations

Although we can obtain the annotations A via experts as in Zaidan et al. ([2007](#bib.bib87)), we may not always have this extra information or know the "right reasons." In these cases, we propose an approach that iteratively adapts A to discover multiple models that are accurate for *qualitatively different* reasons; a domain expert could then examine them to determine which is right for the best reasons.
Specifically, we generate a "spectrum" of models with different decision boundaries by iteratively training models, explaining X, then training the next model to differ from previous iterations:

$$A_0 = 0, \qquad \theta_0 = \operatorname*{argmin}_\theta L(\theta, X, y, A_0),$$
$$A_1 = M_c[f_X|\theta_0], \qquad \theta_1 = \operatorname*{argmin}_\theta L(\theta, X, y, A_1),$$
$$A_2 = M_c[f_X|\theta_1] \cup A_1, \qquad \theta_2 = \operatorname*{argmin}_\theta L(\theta, X, y, A_2),$$

and so on, where the function M_c returns a binary mask indicating which gradient components have a magnitude ratio (their magnitude divided by the largest component magnitude) of at least c, and where we abbreviate the input gradients of the entire training set X at θ_i as f_X|θ_i. In other words, we regularize input gradients where they were largest in magnitude previously. If, after repeated iterations, accuracy decreases or explanations stop changing (or only change after significantly increasing λ1), then we may have spanned the space of possible models (though one can design simple pathological cases where we do not discover all models with this method; we explore an alternative version in Appendix [8](#S8 "8 Simultaneous Find-Another-Explanation ‣ Training Machine Learning Models by Regularizing their Explanations") that addresses some of these cases). All of the resulting models will be accurate, but for different reasons; although we do not know which reasons are best, we can present them to a domain expert for inspection and selection. We can also prioritize labeling or reviewing examples about which the ensemble disagrees. Finally, the size of the ensemble provides a rough measure of dataset redundancy.

### 4 Empirical Evaluation

We demonstrate explanation generation, explanation constraints, and the find-another-explanation method on a toy color dataset and three real-world datasets. In all cases, we used a multilayer perceptron with two hidden layers of size 50 and 30, ReLU nonlinearities with a softmax output, and a λ2 = 0.0001 penalty on ∥θ∥₂².
We trained the network using Adam (Kingma and Ba, [2014](#bib.bib39)) with a batch size of 256 and Autograd (Maclaurin et al., [2017](#bib.bib51)). For most experiments, we used an explanation L2 penalty of λ1 = 1000, which gave our "right answers" and "right reasons" loss terms similar magnitudes. More details about cross-validation are included in Appendix [6](#S6 "6 Cross-Validation ‣ Training Machine Learning Models by Regularizing their Explanations"). For the cutoff value c described in Section [3.2](#S3.SS2 "3.2 Find-Another-Explanation: Discovering Many Possible Rules without Annotations ‣ 3 Our Approach ‣ Training Machine Learning Models by Regularizing their Explanations") and used for display, we often chose 0.67, which tended to preserve 2-5% of gradient components (the average number of qualifying elements tended to fall exponentially with c). Code for all experiments is available at <https://github.com/dtak/rrr>.

#### 4.1 Toy Color Dataset

We created a toy dataset of 5×5×3 RGB images with four possible colors. Images fell into two classes, with two independent decision rules a model could implicitly learn: whether their four corner pixels were all the same color, and whether their top-middle three pixels were all different colors. Images in class 1 satisfied both conditions and images in class 2 satisfied neither. Because only corner and top-row pixels are relevant, we expect any faithful explanation of an accurate model to highlight them.

![Gradient vs. LIME explanations of nine perceptron predictions on the Toy Color dataset.](https://media.arxiv-vanity.com/render-output/7789577/images/colors-vs-lime.png)

Figure 2: Gradient vs. LIME explanations of nine perceptron predictions on the Toy Color dataset. For gradients, we plot dots above pixels identified by M_{0.67}[f_X] (the top 33% largest-magnitude input gradients), and for LIME, we select the top 6 features (up to 3 can reside in the same RGB pixel).
Both methods suggest that the model learns the corner rule. In Figure [2](#S4.F2 "Figure 2 ‣ 4.1 Toy Color Dataset ‣ 4 Empirical Evaluation ‣ Training Machine Learning Models by Regularizing their Explanations"), we see both LIME and input gradients identify the same relevant pixels, which suggests that (1) both methods are effective at explaining model predictions, and (2) the model has learned the corner rather than the top-middle rule, which it did consistently across random restarts.

![Implicit rule transitions as we increase λ1.](https://media.arxiv-vanity.com/render-output/7789577/images/color-transitions.png)

Figure 3: Implicit rule transitions as we increase λ1 and the number of nonzero rows of A. Pairs of points represent the fraction of large-magnitude (c = 0.67) gradient components in the corners and top-middle for 1000 test examples, which almost always add to 1 (indicating the model is most sensitive to these elements alone, even during transitions). Note there is a wide regime where the model learns a hybrid of both rules.

![Rule discovery using the find-another-explanation method.](https://media.arxiv-vanity.com/render-output/7789577/images/color-fae.png)

Figure 4: Rule discovery using the find-another-explanation method with a 0.67 cutoff and λ1 = 10³ for θ1 and λ1 = 10⁶ for θ2. Note how the first two iterations produce explanations corresponding to the two rules in the dataset while the third produces very noisy explanations (with low accuracies).

However, by training our model with a nonzero A (specifically, setting A_{nd} = 1 for corner pixels d across examples n), we were able to make it use the other rule. Figure [3](#S4.F3 "Figure 3 ‣ 4.1 Toy Color Dataset ‣ 4 Empirical Evaluation ‣ Training Machine Learning Models by Regularizing their Explanations") shows how the model transitions between rules as we vary λ1 and the number of examples penalized by A.
This result demonstrates that the model can be made to learn multiple rules despite only one being commonly reached via standard gradient-based optimization methods. However, it depends on knowing a good setting for A, which in this case would still require annotating on the order of 10³ examples, or 5% of our dataset (although always including annotated examples in Adam minibatches let us consistently switch rules with only 50 examples, or 0.2% of the dataset). Finally, Figure [4](#S4.F4 "Figure 4 ‣ 4.1 Toy Color Dataset ‣ 4 Empirical Evaluation ‣ Training Machine Learning Models by Regularizing their Explanations") shows we can use the find-another-explanation technique from Sec. [3.2](#S3.SS2 "3.2 Find-Another-Explanation: Discovering Many Possible Rules without Annotations ‣ 3 Our Approach ‣ Training Machine Learning Models by Regularizing their Explanations") to discover the other rule without being given A. Because only two rules lead to high accuracy on the test set, the model performs no better than random guessing when prevented from using either one (although we have to increase the penalty so much that this accuracy number may be misleading; the essential point is that after the first iteration, explanations stop changing). Lastly, though not directly relevant to the discussion on interpretability and explanation, we demonstrate the potential of explanations to reduce the amount of data required for training in Appendix [7](#S7 "7 Learning with Less Data ‣ Training Machine Learning Models by Regularizing their Explanations").

#### 4.2 Real-world Datasets

To demonstrate real-world, cross-domain applicability, we test our approach on variants of three familiar machine learning text, image, and tabular datasets:

* 20 Newsgroups: As in Ribeiro et al. ([2016](#bib.bib66)), we test input gradients on the alt.atheism vs. soc.religion.christian subset of the 20 Newsgroups dataset (Lichman, [2013](#bib.bib48)).
We used the same two-hidden-layer network architecture with a TF-IDF vectorizer with 5000 components, which gave us a 94% accurate model for A = 0.

* Iris-Cancer: We concatenated all examples in classes 1 and 2 from the Iris dataset with the first 50 examples from each class of the Breast Cancer Wisconsin dataset (Lichman, [2013](#bib.bib48)) to create a composite dataset X ∈ ℝ^{100×34}, y ∈ {0,1}. Despite the dataset's small size, our network still obtains an average test accuracy of 92% across 350 random 2/3-1/3 training-test splits. However, when we modify our test set to remove the 4 Iris components, average test accuracy falls to 81% with higher variance, suggesting the model learns to depend on Iris features and suffers without them. We verify that our explanations reveal this dependency and that regularizing them avoids it.

* Decoy MNIST: On the baseline MNIST dataset (LeCun et al., [2010](#bib.bib43)), our network obtains 98% train and 96% test accuracy. However, in Decoy MNIST, images x have 4×4 gray swatches in randomly chosen corners whose shades are functions of their digits y in training (in particular, 255 - 25y) but are random at test time. On this dataset, our model has a higher 99.6% train accuracy but a much lower 55% test accuracy, indicating that the decoy rule misleads it. We verify that both gradient and LIME explanations let users detect this issue and that explanation regularization lets us overcome it.

![Words identified by LIME vs. gradients on an example from the atheism vs. Christianity subset of 20 Newsgroups.](https://media.arxiv-vanity.com/render-output/7789577/images/20ng-vs-lime.png)

Figure 5: Words identified by LIME vs. gradients on an example from the atheism vs. Christianity subset of 20 Newsgroups. More examples are available at <https://github.com/dtak/rrr>.
Words are blue if they support soc.religion.christian and orange if they support alt.atheism, with opacity equal to the ratio of the magnitude of the word’s weight to the largest magnitude weight. LIME generates sparser explanations, but the weights and signs of terms identified by both methods match closely. Note that both methods reveal some aspects of the model that are intuitive (“church” and “service” are associated with Christianity), some aspects that are not (“13” is associated with Christianity, “edu” with atheism), and some that are debatable (“freedom” is associated with atheism, “friends” with Christianity).

![Input gradient explanations for Decoy MNIST vs. LIME, using the LIME image library ](https://media.arxiv-vanity.com/render-output/7789577/images/mnist-vs-lime.png)

Figure 6: Input gradient explanations for Decoy MNIST vs. LIME, using the LIME image library Ribeiro ([2016](#bib.bib65)). In this example, the model incorrectly predicts 3 rather than 7 because of the decoy swatch.

![Iris-Cancer features identified by input gradients vs. LIME, with Iris features highlighted in red. Input gradient explanations are more faithful to the model. Note that most gradients change sign when switching between ](https://media.arxiv-vanity.com/render-output/7789577/images/iris-vs-lime.png)

Figure 7: Iris-Cancer features identified by input gradients vs. LIME, with Iris features highlighted in red. Input gradient explanations are more faithful to the model. Note that most gradients change sign when switching between ŷ₀ and ŷ₁, and that the magnitudes of input gradients are different across examples, which provides information about examples’ proximity to the decision boundary. Input gradients are consistent with sample-based methods such as LIME, and faster.
On 20 Newsgroups (Figure [5](#S4.F5 "Figure 5 ‣ 4.2 Real-world Datasets ‣ 4 Empirical Evaluation ‣ Training Machine Learning Models by Regularizing their Explanations")), input gradients are less sparse but identify all of the same words in the document with similar weights. Note that input gradients also identify words outside the document that would affect the prediction if added. On Decoy MNIST (Figure [6](#S4.F6 "Figure 6 ‣ 4.2 Real-world Datasets ‣ 4 Empirical Evaluation ‣ Training Machine Learning Models by Regularizing their Explanations")), both LIME and input gradients reveal that the model predicts 3 rather than 7 due to the color swatch in the corner. Because of their fine-grained resolution, input gradients sometimes better capture counterfactual behavior, where extending or adding lines outside of the digit to either reinforce it or transform it into another digit would change the predicted probability (see also Figure [10](#S4.F10 "Figure 10 ‣ 4.2 Real-world Datasets ‣ 4 Empirical Evaluation ‣ Training Machine Learning Models by Regularizing their Explanations")). LIME, on the other hand, better captures the fact that the main portion of the digit is salient (because its super-pixel perturbations add and remove larger chunks of the digit). On Iris-Cancer (Figure [7](#S4.F7 "Figure 7 ‣ 4.2 Real-world Datasets ‣ 4 Empirical Evaluation ‣ Training Machine Learning Models by Regularizing their Explanations")), input gradients actually outperform LIME. We know from the accuracy difference that Iris features are important to the model’s prediction, but LIME only identifies a single important feature, which is from the Breast Cancer dataset (even when we vary its perturbation strategy). This example, which is tabular and contains continuously valued rather than categorical features, may represent a pathological case for LIME, which operates best when it can selectively mask a small number of meaningful chunks of its inputs to generate perturbed samples.
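To make the cost contrast concrete: for a binary logistic model, the input gradient of the cross-entropy has a closed form and requires no sampling at all. A minimal pure-Python sketch (a toy illustration of ours, not code from the paper’s repository):

```python
import math

def input_gradient(w, x, y):
    """Input gradient of the cross-entropy for a binary logistic model
    yhat = sigmoid(w . x); closed form: dH/dx_d = (yhat - y) * w_d."""
    yhat = 1.0 / (1.0 + math.exp(-sum(wi * xi for wi, xi in zip(w, x))))
    return [(yhat - y) * wi for wi in w]
```

A single call to this function plays the role of an entire round of perturbation sampling for this model, which is why the runtime gap in Table 1 grows with input dimension.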
For truly continuous inputs, it should not be surprising that explanations based on gradients perform best. There are a few other advantages input gradients have over sample-based perturbation methods. On 20 Newsgroups, we noticed that for very long documents, explanations generated by the sample-based method LIME are often overly sparse, and there are many words identified as significant by input gradients that LIME ignores. This may be because the number of features LIME selects must be passed in as a parameter beforehand, and it may also be because LIME only samples a fixed number of times. For sufficiently long documents, it is unlikely that sample-based approaches will mask every word even once, meaning that the output becomes increasingly nondeterministic, an undesirable quality for explanations. To resolve this issue, one could increase the number of samples, but that would increase the computational cost, since the model must be evaluated at least once per sample to fit a local surrogate. Input gradients, on the other hand, require only on the order of one model evaluation in total to generate an explanation of similar quality (computing gradients is similar in complexity to predicting probabilities), and furthermore, this cost scales with the input vector length, not the document length. This issue (underscored by Table [1](#S4.T1 "Table 1 ‣ 4.2 Real-world Datasets ‣ 4 Empirical Evaluation ‣ Training Machine Learning Models by Regularizing their Explanations")) highlights some inherent scalability advantages input gradients enjoy over sample-based perturbation methods.

| | LIME | Gradients | Dimension of x |
| --- | --- | --- | --- |
| Iris-Cancer | 0.03s | 0.000019s | 34 |
| Toy Colors | 1.03s | 0.000013s | 75 |
| Decoy MNIST | 1.54s | 0.000045s | 784 |
| 20 Newsgroups | 2.59s | 0.000520s | 5000 |

Table 1: Gradient vs. LIME runtimes per explanation.
Note that each method uses a different version of LIME; Iris-Cancer and Toy Colors use lime\_tabular with continuous and quartile-discrete perturbation methods, respectively, Decoy MNIST uses lime\_image, and 20 Newsgroups uses lime\_text. Code was executed on a laptop and input gradient calculations were not optimized for performance, so runtimes are only meant to provide a sense of scale.

![Overcoming confounds using explanation constraints on Iris-Cancer (over 350 random train-test splits). By default (](https://media.arxiv-vanity.com/render-output/7789577/images/iris-confounds.png)

Figure 8: Overcoming confounds using explanation constraints on Iris-Cancer (over 350 random train-test splits). By default (A=0), input gradients tend to be large in Iris dimensions, which results in lower accuracy when Iris is removed from the test set. Models trained with A_nd=1 in Iris dimensions (full A) have almost exactly the same test accuracy with and without Iris.

![Training with explanation constraints on Decoy MNIST. Accuracy is low (](https://media.arxiv-vanity.com/render-output/7789577/images/mnist-confounds.png)

Figure 9: Training with explanation constraints on Decoy MNIST. Accuracy is low (A=0) on the swatch color-randomized test set unless the model is trained with A_nd=1 in swatches (full A). In that case, test accuracy matches the same architecture’s performance on the standard MNIST dataset (baseline). Given annotations, input gradient regularization finds solutions consistent with domain knowledge.

Another key advantage of using an explanation method more closely related to our model is that we can then incorporate explanations into our training process; this is most useful when the model faces ambiguities in how to classify inputs. We deliberately constructed the Decoy MNIST and Iris-Cancer datasets to have this kind of ambiguity, where a rule that works in training will not generalize to test.
When we train our network on these confounded datasets, their test accuracy is better than random guessing, in part because the decoy rules are not simple and the primary rules not complex, but their performance is still significantly worse than on a baseline test set with no decoy rules. By penalizing explanations we know to be incorrect using the loss function defined in Section [3.1](#S3.SS1 "3.1 Loss Functions that Constrain Explanations ‣ 3 Our Approach ‣ Training Machine Learning Models by Regularizing their Explanations"), we are able to recover that baseline test accuracy, which we demonstrate in Figures [8](#S4.F8 "Figure 8 ‣ 4.2 Real-world Datasets ‣ 4 Empirical Evaluation ‣ Training Machine Learning Models by Regularizing their Explanations") and [9](#S4.F9 "Figure 9 ‣ 4.2 Real-world Datasets ‣ 4 Empirical Evaluation ‣ Training Machine Learning Models by Regularizing their Explanations").

Figure 10: Find-another-explanation results on Iris-Cancer (top; error bars show standard deviations across 50 trials), 20 Newsgroups (middle; blue supports Christianity and orange supports atheism, word opacity set to magnitude ratio), and Decoy MNIST (bottom, for three values of λ1 with scatter opacity set to magnitude ratio cubed). Real-world datasets are often highly redundant and allow for diverse models with similar accuracies. On Iris-Cancer and Decoy MNIST, both explanations and accuracy results indicate we overcome confounds after 1-2 iterations without any prior knowledge about them encoded in A. When annotations are unavailable, our find-another-explanation method discovers diverse classifiers.

As we saw with the Toy Color dataset, even if almost every row of A is 0, we can still benefit from explanation regularization (meaning practitioners can gradually incorporate these penalties into their existing models without much upfront investment). However, annotation is never free, and in some cases we either do not know the right explanation or cannot easily encode it. Additionally, we may be interested in exploring the structure of our model and dataset in a less supervised fashion. On real-world datasets, which are usually overdetermined, we can use find-another-explanation to discover θs in shallower local minima that we would normally never explore. Given enough models that are right for different reasons, hopefully at least one is right for the right reasons.

Figure [10](#S4.F10 "Figure 10 ‣ 4.2 Real-world Datasets ‣ 4 Empirical Evaluation ‣ Training Machine Learning Models by Regularizing their Explanations") shows find-another-explanation results for our three real-world datasets, with example explanations at each iteration above and model train and test accuracy below. For Iris-Cancer, we find that the initial iteration of the model heavily relies on the Iris features and has high train but low test accuracy, while subsequent iterations have lower train but higher test accuracy (with smaller gradients in Iris components). In other words, we spontaneously obtain a more generalizable model without a predefined A alerting us that the first four features are misleading.
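The sequential augmentation of A that drives find-another-explanation can be sketched in a few lines. This is a simplified illustration of ours; the function name and the simple thresholding rule are not taken from the paper’s implementation:

```python
def augment_annotations(A, grad_mags, threshold=0.1):
    """One sequential find-another-explanation step: mark every feature
    whose input-gradient magnitude exceeds `threshold` in the mask A,
    so the next model is penalized for relying on it."""
    return [a or (g > threshold) for a, g in zip(A, grad_mags)]

# Grow the mask after training each model in the ensemble:
A = [False, False, False]
A = augment_annotations(A, [0.80, 0.02, 0.00])  # model 1 leaned on feature 0
A = augment_annotations(A, [0.01, 0.60, 0.03])  # model 2 leaned on feature 1
```

Each subsequent model is then trained with the explanation penalty applied to the accumulated mask, forcing it toward a different rule.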
Find-another-explanation also overcomes confounds on Decoy MNIST, needing only one iteration to recover baseline accuracy. Bumping λ1 too high (to the point where its term is a few orders of magnitude larger than the cross-entropy) results in more erratic behavior. Interestingly, in a process reminiscent of distillation (Papernot et al., [2016c](#bib.bib61)), the gradients themselves become more evenly and intuitively distributed at later iterations. In many cases they indicate that the probabilities of certain digits increase when we brighten pixels along or extend their distinctive strokes, and that they decrease if we fill in unrelated dark areas, which seems desirable. However, by the last iteration, we start to revert to using decoy swatches in some cases. On 20 Newsgroups, the words most associated with alt.atheism and soc.religion.christian change between iterations but remain mostly intuitive in their associations. Train accuracy mostly remains high while test accuracy is unstable. For all of these examples, accuracy remains high even as decision boundaries shift significantly. This may be because real-world data tends to contain significant redundancies.

#### 4.3 Limitations

Input gradients provide faithful information about a model’s rationale for a prediction but trade interpretability for efficiency. In particular, when input features are not individually meaningful to users (e.g. for individual pixels or word2vec components), input gradients may be difficult to interpret and A may be difficult to specify. Additionally, because they can be 0 far from the decision boundary, they do not capture the idea of salience as well as other methods (Zeiler and Fergus, [2014](#bib.bib88); Sundararajan et al., [2017](#bib.bib80); Montavon et al., [2017](#bib.bib52); Bach et al., [2015](#bib.bib6); Shrikumar et al., [2016](#bib.bib75)). However, they are necessarily faithful to the model and easy to incorporate into its loss function.
Input gradients are first-order linear approximations of the model; we might call them first-order explanations.

### 5 Discussion

In this chapter, we showed that:

* On training sets that contain confounds which would fool any model trained just to make correct predictions, we can use gradient-based explanation regularization to learn models that still generalize to test. These results imply that gradient regularization actually changes why our model makes predictions.
* When we lack expert annotations, we can still use our method in an unsupervised manner to discover models that make predictions for different reasons. This “find-another-explanation” technique allowed us to overcome confounds on Decoy MNIST and Iris-Cancer, and even quantify the ambiguity present in the Toy Color dataset.
* Input gradients are consistent with sample-based methods such as LIME but faster to compute and sometimes more faithful to the model, especially for continuous inputs.

Our consistent results on several diverse datasets show that input gradients merit further investigation as building blocks for optimizable explanations; there exist many options for further advancements such as weighted annotations A, different penalty norms, and more general specifications of whether features should be positively or negatively predictive of specific classes for specific inputs. Finally, our “right for the right reasons” approach may be of use in solving related problems, e.g. in integrating causal inference with deep neural networks or maintaining robustness to adversarial examples (which we discuss in Chapter [6](#footnote6 "footnote 6 ‣ Training Machine Learning Models by Regularizing their Explanations")). Building on our find-another-explanation results, another promising direction is to let humans in the loop interactively guide models towards correct explanations.
Overall, we feel that developing methods of ensuring that models are right for better reasons is essential to overcoming the inherent obstacles to generalization posed by ambiguities in real-world datasets.

### 6 Cross-Validation

Most regularization parameters are selected to maximize accuracy on a validation set. However, when your training and validation sets share the same misleading confounds, validation accuracy may not be a good proxy for test accuracy. Instead, we recommend increasing the explanation regularization strength λ1 until the cross-entropy and “right reasons” terms have roughly equal magnitudes (which corresponds to the region of highest test accuracy below). Intuitively, balancing the terms in this way should push our optimization away from cross-entropy minima that violate the explanation constraints specified in A and towards ones that correspond to “better reasons.” Increasing λ1 too much makes the cross-entropy term negligible. In that case, our model performs no better than random guessing.

![](https://media.arxiv-vanity.com/render-output/7789577/images/mnist-crossval.png)

Figure 11: Cross-validating λ1. The regime of highest accuracy (highlighted) is also where the initial cross-entropy and λ1 loss terms have similar magnitudes. Exact equality is not required; being an order of magnitude off does not significantly affect accuracy.

### 7 Learning with Less Data

It is natural to ask whether explanations can reduce data requirements. Here we explore that question on the Toy Color dataset using four variants of A (with λ1 chosen to match loss terms at each N).

![](https://media.arxiv-vanity.com/render-output/7789577/images/learnfast.png)

Figure 12: Explanation regularization can reduce data requirements. We find that when A is set to the Pro-Rule 1 mask, which penalizes all pixels except the corners, we reach 95% accuracy with fewer than 100 examples (as compared to A=0, where we need almost 10000).
Penalizing the top-middle pixels (Anti-Rule 2) or all pixels except the top-middle (Pro-Rule 2) also consistently improves accuracy relative to data. Penalizing the corners (Anti-Rule 1), however, reduces accuracy until we reach a threshold N. This may be because the corner pixels can match in 4 ways, while the top-middle pixels can differ in 4⋅3⋅2=24 ways, suggesting that Rule 2 could be inherently harder to learn from data and positional explanations alone.

### 8 Simultaneous Find-Another-Explanation

In Section [3.2](#S3.SS2 "3.2 Find-Another-Explanation: Discovering Many Possible Rules without Annotations ‣ 3 Our Approach ‣ Training Machine Learning Models by Regularizing their Explanations"), we introduced a method of training classifiers to make predictions for different reasons by sequentially augmenting A to penalize more features. However, as our ensemble grows, A can saturate to 1_{N×D}, and subsequent models will be trained with uniform gradient regularization. While these models may have desirable properties (which we explore in the following chapter), they will not be diverse. As a simple example, consider a 2D dataset with one class confined to the first quadrant and the other confined to the third. In theory, we have a full degree of decision freedom; it should be possible to learn two perfect and fully orthogonal boundaries (one horizontal, one vertical). However, when we train our first MLP, it learns a diagonal surface; both features have large gradients everywhere, so A=1_{N×2} immediately. To resolve this, we propose a simultaneous training procedure:

$$\theta^*_1,\ldots,\theta^*_M = \operatorname*{argmin}_{\theta_1,\ldots,\theta_M} \sum_{a=1}^{M} L(y, f(X \mid \theta_a)) + \sum_{a=1}^{M} \sum_{b=a+1}^{M} \mathrm{sim}\bigl(f_X(X \mid \theta_a), f_X(X \mid \theta_b)\bigr), \tag{1}$$

where L refers to our single-model loss function, and for our similarity measure we use the squared cosine similarity

$$\mathrm{sim}(v, w) = \frac{(v^\top w)^2}{(v^\top v)(w^\top w) + \epsilon},$$

where we add ϵ = 10⁻⁶ to the denominator for numerical stability.
Squaring the cosine similarity ensures our penalty is positive, is minimized by orthogonal boundaries, and is soft for nearly orthogonal boundaries. We show in Figure [13](#S8.F13 "Figure 13 ‣ 8 Simultaneous Find-Another-Explanation ‣ Training Machine Learning Models by Regularizing their Explanations") that this lets us obtain the two desired models.

Figure 13: Toy 2D problem with one degree of decision boundary freedom. Across random restarts (left two plots), we tend to learn a boundary in which both features are significant, which prevents sequential find-another-explanation from producing diverse models. If we jointly train two models with a penalty on the cosine similarity of their gradients (right plot), they end up with orthogonal boundaries.
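The similarity penalty itself is straightforward to implement; a minimal pure-Python sketch of the squared cosine similarity defined above (the function name is ours):

```python
def squared_cosine_similarity(v, w, eps=1e-6):
    """sim(v, w) = (v.w)^2 / ((v.v)(w.w) + eps): zero for orthogonal
    gradient vectors, approaching one for parallel ones."""
    dot = sum(vi * wi for vi, wi in zip(v, w))
    return dot * dot / (sum(vi * vi for vi in v) * sum(wi * wi for wi in w) + eps)
```

Because the penalty is zero exactly when the two models’ input gradients are orthogonal, minimizing it jointly with the per-model losses pushes the ensemble toward the diverse boundaries shown in Figure 13.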
### 9 Introduction

In the previous chapter, we used input gradient penalties to encourage neural networks to make predictions for specific reasons. We demonstrated this on “decoy” datasets deliberately designed to deceive models making decisions for different reasons. This philosophy of testing – that we should measure generalization by testing on data from a different distribution than we trained on – can be taken to its extreme by testing models in an adversarial setting, where neural networks have known vulnerabilities (Szegedy et al., [2013](#bib.bib81)). In this chapter, we consider whether a domain knowledge-agnostic application of explanation regularization (a uniform L2 penalty on input gradients, similar in spirit to Ridge regression on the model’s local linear approximations) could help defend against adversarial examples.

Adversarial examples pose serious obstacles for the adoption of neural networks in settings which are security-sensitive or have legal ramifications (Kang and Kang, [2017](#bib.bib34)). Although many techniques for generating these examples (which we call “attacks”) require access to model parameters, Papernot et al. ([2017](#bib.bib59)) have shown that it is possible and even practical to attack black-box models in the real world, in large part because of transferability; examples generated to fool one model tend to fool all models trained on the same dataset. Particularly for images, these adversarial examples can be constructed to fool models across a variety of scales and perspectives (Athalye and Sutskever, [2017](#bib.bib4)), which poses a problem for the adoption of deep learning models in systems like self-driving cars. Although there has recently been a great deal of research in adversarial defenses, many of these methods have struggled to achieve robustness to transferred adversarial examples (Tramèr et al., [2017b](#bib.bib83)).
Some of the most effective defenses simply detect and reject them rather than making predictions (Xu et al., [2017](#bib.bib85)). The most common, “brute force” solution is adversarial training, where we include a mixture of normal and adversarially-generated examples in the training set (Kurakin et al., [2016b](#bib.bib42)). However, Tramèr et al. ([2017a](#bib.bib82)) show that the robustness adversarial training provides can be circumvented by randomizing or transferring perturbations from other models (though ensembling helps). As we noted in Chapter [3](#footnote3 "footnote 3 ‣ Training Machine Learning Models by Regularizing their Explanations"), domain experts are also often concerned that DNN predictions are uninterpretable. The lack of interpretability is particularly problematic in domains where algorithmic bias is often a factor (Angwin et al., [2016](#bib.bib3)) or in medical contexts where safety risks can arise when there is mismatch between how a model is trained and used (Caruana et al., [2015](#bib.bib14)). For computer vision models (the primary target of adversarial attacks), the most common class of explanation is the saliency map, either at the level of raw pixels, grid chunks, or superpixels (Ribeiro et al., [2016](#bib.bib66)). The local linear approximation provided by raw input gradients (Baehrens et al., [2010](#bib.bib8)) is sometimes used for pixel-level saliency maps (Simonyan et al., [2013](#bib.bib77)). However, computer vision practitioners tend not to examine raw input gradients because they are noisy and difficult to interpret. This issue has spurred the development of techniques like integrated gradients (Sundararajan et al., [2017](#bib.bib80)) and SmoothGrad (Smilkov et al., [2017](#bib.bib79)) that generate smoother, more interpretable saliency maps from noisy gradients. 
The rationale behind these techniques is that, while the local behavior of the model may be noisy, examining the gradients over larger length scales in input space provides a better intuition about the model’s behavior. However, raw input gradients are exactly what many attacks use to generate adversarial examples. Explanation techniques which smooth out gradients in background pixels may be inappropriately hiding the fact that the model is quite sensitive to them. We consider that perhaps the need for these smoothing techniques in the first place is indicative of a problem with our models, related to their adversarial vulnerability and capacity to overfit. Perhaps it is fundamentally hard for adversarially vulnerable models to be interpretable. On the other hand, perhaps it is hard for interpretable models to be adversarially vulnerable.

Our hypothesis is that by training a model to have smooth input gradients with fewer extreme values, it will not only be more interpretable but also more resistant to adversarial examples. In the experiments that follow we confirm this hypothesis using uniform gradient regularization, which optimizes the model to have smooth input gradients with respect to its predictions during training. Using this technique, we demonstrate robustness to adversarial examples across multiple model architectures and datasets, and in particular demonstrate robustness to transferred adversarial examples: gradient-regularized models maintain significantly higher accuracy on examples generated to fool other models than baselines. Furthermore, both qualitatively and in human subject experiments, we find that adversarial examples generated to fool gradient-regularized models are, in a particular sense, more “interpretable”: they fool humans as well.

### 10 Background

In this section, we will (re)introduce notation, and give a brief overview of the baseline attacks and defenses against which we will test and compare our methods.
The methods we will analyze again apply to all differentiable classification models fθ(X), which are functions parameterized by θ that return predictions ŷ∈ℝ^(N×K) given inputs X∈ℝ^(N×D). These predictions indicate the probabilities that each of N inputs in D dimensions belongs to each of K class labels. To train these models, we try to find sets of parameters θ* that minimize the total information distance between the predictions ŷ and the true labels y (also in ℝ^(N×K), one-hot encoded) on a training set:

$$\theta^* = \operatorname*{argmin}_{\theta} \sum_{n=1}^{N} \sum_{k=1}^{K} -y_{nk} \log f_\theta(X_n)_k, \tag{2}$$

which we will sometimes write as

$$\operatorname*{argmin}_{\theta} H(y, \hat{y}),$$

with H giving the sum of the cross entropies between the predictions and the labels.

#### 10.1 Attacks

##### 10.1.1 Fast Gradient Sign Method (FGSM)

Goodfellow et al. ([2014](#bib.bib28)) introduced this first method of generating adversarial examples by perturbing inputs in a manner that increases the local linear approximation of the loss function:

$$X_{\mathrm{FGSM}} = X + \epsilon\,\mathrm{sign}(\nabla_x H(y, \hat{y})) \tag{3}$$

If ϵ is small, these adversarial examples are indistinguishable from normal examples to a human, but the network performs significantly worse on them. Kurakin et al. ([2016a](#bib.bib41)) noted that one can iteratively perform this attack with a small ϵ to induce misclassifications with a smaller total perturbation (by following the nonlinear loss function in a series of small linear steps rather than one large linear step).

##### 10.1.2 Targeted Gradient Sign Method (TGSM)

A simple modification of the Fast Gradient Sign Method is the Targeted Gradient Sign Method, introduced by Kurakin et al. ([2016a](#bib.bib41)).
In this attack, we attempt to decrease a modified version of the loss function that encourages the model to misclassify examples in a specific way:

$$X_{\mathrm{TGSM}} = X - \epsilon\,\mathrm{sign}(\nabla_x H(y_{\mathrm{target}}, \hat{y})), \tag{4}$$

where y_target encodes an alternate set of labels we would like the model to predict instead. In the digit classification experiments below, we often picked targets by incrementing the labels y by 1 (modulo 10), which we will refer to as y+1. The TGSM can also be performed iteratively.

##### 10.1.3 Jacobian-based Saliency Map Approach (JSMA)

The final attack we consider, the Jacobian-based Saliency Map Approach (JSMA), also takes an adversarial target vector y_target. It iteratively searches for pixels or pairs of pixels in X to change such that the probability of the target label is increased and the probabilities of all other labels are decreased. This method is notable for producing examples that have only been changed in several dimensions, which can be hard for humans to detect. For a full description of the attack, we refer the reader to Papernot et al. ([2016b](#bib.bib60)).

#### 10.2 Defenses

As baseline defenses, we consider defensive distillation and adversarial training. To simplify comparison, we omit defenses (Xu et al., [2017](#bib.bib85); Nayebi and Ganguli, [2017](#bib.bib53)) that are not fully architecture-agnostic or which work by detecting and rejecting adversarial examples.

##### 10.2.1 Distillation

Distillation, originally introduced by Ba and Caruana ([2014](#bib.bib5)), was first examined as a potential defense by Papernot et al. ([2016c](#bib.bib61)). The main idea is that we train the model twice, initially using the one-hot ground truth labels but ultimately using the initial model’s softmax probability outputs, which contain additional information about the problem.
Since the normal softmax function tends to converge very quickly to one-hot-ness, we divide all of the logit network outputs (which we will call ẑ_k instead of the probabilities ŷ_k) by a temperature T (during training but not evaluation):

$$f_{T,\theta}(X_n)_k = \frac{e^{\hat{z}_k(X_n)/T}}{\sum_{i=1}^{K} e^{\hat{z}_i(X_n)/T}}, \tag{5}$$

where we use f_{T,θ} to denote a network ending in a softmax with temperature T. Note that as T approaches ∞, the predictions converge to 1/K. The full process can be expressed as

$$\theta^0 = \operatorname*{argmin}_{\theta} \sum_{n=1}^{N} \sum_{k=1}^{K} -y_{nk} \log f_{T,\theta}(X_n)_k, \qquad \theta^* = \operatorname*{argmin}_{\theta} \sum_{n=1}^{N} \sum_{k=1}^{K} -f_{T,\theta^0}(X_n)_k \log f_{T,\theta}(X_n)_k. \tag{6}$$

Distillation is usually used to help small networks achieve the same accuracy as larger DNNs, but in a defensive context, we use the same model twice. It has been shown to be an effective defense against white-box FGSM attacks, but Carlini and Wagner ([2016](#bib.bib13)) have shown that it is not robust to all kinds of attacks. We will see that the precise way it defends against certain attacks is qualitatively different than gradient regularization, and that it can actually make the models more vulnerable to attacks than an undefended model.

##### 10.2.2 Adversarial Training

In adversarial training (Kurakin et al., [2016b](#bib.bib42)), we increase robustness by injecting adversarial examples into the training procedure. We follow the method implemented in Papernot et al. ([2016a](#bib.bib57)), where we augment the network to run the FGSM on the training batches and compute the model’s loss function as the average of its loss on normal and adversarial examples, without allowing gradients to propagate so as to weaken the FGSM attack (which would also make the method second-order). We compute FGSM perturbations with respect to predicted rather than true labels to prevent “label leaking,” where our model learns to classify adversarial examples more accurately than regular examples.
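The FGSM perturbation used both as an attack and inside adversarial training reduces to an elementwise sign step; a minimal sketch (pure Python over flat vectors; the function name is ours):

```python
def fgsm_perturb(x, grad, eps):
    """One FGSM step, elementwise: x' = x + eps * sign(dH/dx).
    `grad` is the gradient of the loss with respect to the input."""
    sign = lambda g: (g > 0) - (g < 0)  # returns -1, 0, or 1
    return [xi + eps * sign(gi) for xi, gi in zip(x, grad)]
```

The iterated variants mentioned above simply apply this step repeatedly with a small ϵ, recomputing the gradient each time.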
### 11 Gradient Regularization

We defined our “right for the right reasons” objective in Chapter [3](#footnote3 "footnote 3 ‣ Training Machine Learning Models by Regularizing their Explanations") using an L2 penalty on the gradient of the model’s predictions across classes with respect to input features marked irrelevant by domain experts. We encoded their domain knowledge using an annotation matrix A. If we set A=1, however, and consider only the log-probabilities of the predicted classes, we recover what Drucker and Le Cun ([1992](#bib.bib23)) introduced as “double backpropagation”, which trains neural networks by minimizing not just the “energy” of the network but the rate of change of that energy with respect to the input features. In their formulation the energy is a quadratic loss, but we can reformulate it almost equivalently using the cross-entropy:

θ∗ = argminθ ∑n=1..N ∑k=1..K −ynk log ^ynk + λ ∑n=1..N ∑d=1..D (∂/∂xnd ∑k=1..K −ynk log ^ynk)²,    (7)

whose objective we can write a bit more concisely as

argminθ H(y, ^y) + λ‖∇xH(y, ^y)‖₂²,

where λ is again a hyperparameter specifying the penalty strength. The intuitive objective of this function is to ensure that if any input changes slightly, the divergence between the predictions and the labels will not change significantly (though including this term does not guarantee Lipschitz continuity everywhere). Double backpropagation was mentioned as a potential adversarial defense in the same paper which introduced defensive distillation (Papernot et al., [2016c](#bib.bib61)), but at the time of publication, its effectiveness in this respect had not yet been analyzed in the literature – though Gu and Rigazio ([2014](#bib.bib30)) previously and Hein and Andriushchenko ([2017](#bib.bib32)); Czarnecki et al. ([2017](#bib.bib19)) concurrently consider related objectives, and Raghunathan et al. ([2018](#bib.bib63)) derive and minimize an upper bound on adversarial vulnerability based on the maximum gradient norm in a ball around each training input.
These works also provide stronger theoretical explanations for why input gradient regularization is effective, though they do not analyze its relationship to model interpretability. In this work, we interpret gradient regularization as a quadratic penalty on our model’s saliency map.

### 12 Experiments

##### 12.0.1 Datasets and Models

We evaluated the robustness of distillation, adversarial training, and gradient regularization to the FGSM, TGSM, and JSMA on MNIST (LeCun et al., [2010](#bib.bib43)), Street-View House Numbers (SVHN) (Netzer et al., [2011](#bib.bib54)), and notMNIST (Butalov, [2011](#bib.bib12)). On all datasets, we test a simple convolutional neural network with 5x5x32 and 5x5x64 convolutional layers followed by 2x2 max pooling and a 1024-unit fully connected layer, with batch-normalization after all convolutions and both batch-normalization and dropout on the fully-connected layer. All models were implemented in Tensorflow and trained using Adam (Kingma and Ba, [2014](#bib.bib39)) with α=0.0002 and ϵ=10−4 for 15000 minibatches of size 256. For SVHN, we prepare training and validation sets as described in Sermanet et al. ([2012](#bib.bib74)), converting the images to grayscale following Grundland and Dodgson ([2007](#bib.bib29)) and applying both global and local contrast normalization.

##### 12.0.2 Attacks and Defenses

![Accuracy of all CNNs on FGSM examples generated to fool undefended models, defensively distilled, adversarially trained, and gradient regularized models (from left to right) on MNIST, SVHN, and notMNIST (from top to bottom).
Gradient-regularized models are the most resistant to other models’ adversarial examples at high ](https://media.arxiv-vanity.com/render-output/7789577/images/transferred-fgsm.png) Figure 14: Accuracy of all CNNs on FGSM examples generated to fool undefended models, defensively distilled, adversarially trained, and gradient regularized models (from left to right) on MNIST, SVHN, and notMNIST (from top to bottom). Gradient-regularized models are the most resistant to other models’ adversarial examples at high ϵ, while all models are fooled by gradient-regularized model examples. On MNIST and notMNIST, distilled model examples are usually identical to non-adversarial examples (due to gradient underflow), so they fail to fool any of the other models. ![Applying both gradient regularization and adversarial training (“both defenses”) allows us to obtain maximal robustness to white-box and normal black-box attacks on SVHN (with a very slight label-leaking effect on the FGSM, perhaps due to the inclusion of the ](https://media.arxiv-vanity.com/render-output/7789577/images/combining-advtrain-and-doubleback.png) Figure 15: Applying both gradient regularization and adversarial training (“both defenses”) allows us to obtain maximal robustness to white-box and normal black-box attacks on SVHN (with a very slight label-leaking effect on the FGSM, perhaps due to the inclusion of the ∇xH(y,^y) term). However, no models are able to maintain robustness to black-box attacks using gradient regularization. For adversarial training and JSMA example generation, we used the Cleverhans adversarial example library (Papernot et al., [2016a](#bib.bib57)). For distillation, we used a softmax temperature of T=50, and for adversarial training, we trained with FGSM perturbations at ϵ=0.3, averaging normal and adversarial losses. For gradient regularized models, we use double backpropagation, which provided the best robustness, and train over a spread of λ values. 
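A toy illustration of the double-backpropagation objective, applied to logistic regression rather than the CNNs used in the experiments. For logistic regression the input gradient has the closed form ∇xH = (^y − y)w, so the penalty λ(^y − y)²‖w‖² and its parameter gradients can be written out directly, with no literal second backward pass; all data and hyperparameters below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2D binary classification data with class overlap (made up for
# illustration; nothing here is from the paper's experiments).
n = 200
X = np.vstack([rng.normal(-0.75, 1.0, size=(n, 2)),
               rng.normal(+0.75, 1.0, size=(n, 2))])
y = np.concatenate([np.zeros(n), np.ones(n)])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(lam, steps=3000, lr=0.1):
    # Minimize H(y, ^y) + lam * ||grad_x H||^2 for logistic regression.
    # Here grad_x H = (^y - y) w, so the penalty is lam * (^y - y)^2 ||w||^2
    # and its parameter gradient is available in closed form.
    w, b = np.zeros(2), 0.0
    for _ in range(steps):
        p = sigmoid(X @ w + b)
        r = p - y                     # dH/dz per example
        s = p * (1 - p)               # d^y/dz per example
        gw = X.T @ r / len(y)         # gradient of the cross-entropy term
        gb = r.mean()
        # Gradient of the penalty term lam * mean(r^2) * ||w||^2.
        gw += lam * (2 * (r * s) @ X * np.dot(w, w) + 2 * np.sum(r**2) * w) / len(y)
        gb += lam * 2 * np.sum(r * s) * np.dot(w, w) / len(y)
        w -= lr * gw
        b -= lr * gb
    return w, b

def mean_input_grad_norm(w, b):
    # Mean over examples of ||grad_x H|| = |^y - y| * ||w||.
    p = sigmoid(X @ w + b)
    return np.mean(np.abs(p - y)) * np.linalg.norm(w)

w0, b0 = train(lam=0.0)   # plain logistic regression
w1, b1 = train(lam=5.0)   # gradient-regularized

acc = np.mean((sigmoid(X @ w1 + b1) > 0.5) == (y == 1))

# The regularized model has smaller input gradients at similar accuracy.
assert mean_input_grad_norm(w1, b1) < mean_input_grad_norm(w0, b0)
assert acc > 0.8
```

With λ > 0 the trained weights shrink and the model's input gradients become smaller at essentially unchanged accuracy on this easy problem, a miniature version of the effect studied in this chapter.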
We choose the λ with the highest accuracy against validation black-box FGSM examples but which is still at least 97% as accurate on normal validation examples (though accuracy on normal examples tended not to be significantly different). Code for all models and experiments has been open-sourced: <https://github.com/dtak/adversarial-robustness-public>.

##### 12.0.3 Evaluation Metrics

For the FGSM and TGSM, we test all models against adversarial examples generated for each model and report accuracy. Testing this way allows us to simultaneously measure white- and black-box robustness. On the JSMA and iterated TGSM, we found that measuring accuracy was no longer a good evaluation metric, since for our gradient-regularized models, the generated adversarial examples often resembled their targets more than their original labels. To investigate this, we performed a human subject experiment to evaluate the legitimacy of adversarial example misclassifications.

#### 12.1 Accuracy Evaluations (FGSM and TGSM)

##### 12.1.1 FGSM Robustness

Figure [14](#S12.F14 "Figure 14 ‣ 12.0.2 Attacks and Defenses ‣ 12 Experiments ‣ Training Machine Learning Models by Regularizing their Explanations") shows the results of our defenses’ robustness to the FGSM on MNIST, SVHN, and notMNIST for our CNN at a variety of perturbation strengths ϵ. Consistently across datasets, we find that gradient-regularized models exhibit strong robustness to black-box transferred FGSM attacks (examples produced by attacking other models). Although adversarial training sometimes performs slightly better at ϵ≤0.3, the value we used in training, gradient regularization generally surpasses it at higher ϵ (see the green curves in the leftmost plots). The story with white-box attacks is more interesting. Gradient-regularized models are generally more robust to white-box attacks than undefended models (visually, the green curves in the rightmost plots fall more slowly than the blue curves in the leftmost plots).
However, accuracy still eventually falls for them, and it does so faster than for adversarial training. Even though their robustness to white-box attacks seems lower, the examples produced by those white-box attacks actually fool all other models equally well. This effect is particularly pronounced on SVHN. In this respect, gradient regularization may hold promise not just as a defense but as an attack, if examples generated to fool gradient-regularized models are inherently more transferable. Models trained with defensive distillation in general perform no better and often worse than undefended models. Remarkably, except on SVHN, attacks against distilled models actually fail to fool any of the models. Closer inspection of distilled model gradients and examples themselves reveals that this occurs because distilled FGSM gradients vanish – so the examples are not perturbed at all. As soon as we obtain a nonzero perturbation from a different model, distillation’s appearance of robustness vanishes as well. Although adversarial training and gradient regularization seem comparable in terms of accuracy, they work for different reasons and can be applied in concert to increase robustness, which we show in Figure [15](#S12.F15 "Figure 15 ‣ 12.0.2 Attacks and Defenses ‣ 12 Experiments ‣ Training Machine Learning Models by Regularizing their Explanations"). In Figure [16](#S12.F16 "Figure 16 ‣ 12.1.1 FGSM Robustness ‣ 12.1 Accuracy Evaluations (FGSM and TGSM) ‣ 12 Experiments ‣ Training Machine Learning Models by Regularizing their Explanations") we also show that, on normal and adversarially trained black-box FGSM attacks, models trained with these two defenses are fooled by different sets of adversarial examples. We provide intuition for why this might be the case in Figure [17](#S12.F17 "Figure 17 ‣ 12.1.1 FGSM Robustness ‣ 12.1 Accuracy Evaluations (FGSM and TGSM) ‣ 12 Experiments ‣ Training Machine Learning Models by Regularizing their Explanations").
![Venn diagrams showing overlap in which MNIST ](https://media.arxiv-vanity.com/render-output/7789577/images/overlaps.png) Figure 16: Venn diagrams showing overlap in which MNIST ϵ=0.4 FGSM examples, generated for normal, adversarially trained, and gradient regularized models, fool all three. Undefended models tend to be fooled by examples from all models, while the sets of adversarially trained model FGSM examples that fool the two defended models are closer to disjoint. Gradient-regularized model FGSM examples fool all models. These results suggest that ensembling different forms of defense may be effective in defending against black box attacks (unless those black box attacks use a gradient-regularized proxy). ![Conceptual illustration of the difference between gradient regularization and gradient masking. In (idealized) gradient masking, input gradients are completely uninformative, so following them doesn’t affect either the masked model’s predictions or those of any other model. In gradient regularization, gradients actually become ](https://media.arxiv-vanity.com/render-output/7789577/images/reg-vs-masking-intuition.png) Figure 17: Conceptual illustration of the difference between gradient regularization and gradient masking. In (idealized) gradient masking, input gradients are completely uninformative, so following them doesn’t affect either the masked model’s predictions or those of any other model. In gradient regularization, gradients actually become more informative, so following them will ultimately fool all models. However, because gradients are also smaller, perturbations need to be larger to flip predictions. Unregularized, unmasked models are somewhere in between. 
We see quantitative support for this interpretation in Figure [16](#S12.F16 "Figure 16 ‣ 12.1.1 FGSM Robustness ‣ 12.1 Accuracy Evaluations (FGSM and TGSM) ‣ 12 Experiments ‣ Training Machine Learning Models by Regularizing their Explanations"), as well as qualitative evidence in Figure [22](#S12.F22 "Figure 22 ‣ 12.3 Connections to Interpretability ‣ 12 Experiments ‣ Training Machine Learning Models by Regularizing their Explanations").

![CNN accuracy on ](https://media.arxiv-vanity.com/render-output/7789577/images/transferred-tgsm.png)

Figure 18: CNN accuracy on y+1 TGSM examples generated to fool the four models on three datasets (see Figure [14](#S12.F14 "Figure 14 ‣ 12.0.2 Attacks and Defenses ‣ 12 Experiments ‣ Training Machine Learning Models by Regularizing their Explanations") for more explanation). Gradient-regularized models again exhibit robustness to other models’ adversarial examples. Distilled model adversarial perturbations fool other models again since their input gradients no longer underflow.

##### 12.1.2 TGSM Robustness

Against the TGSM attack (Figure [18](#S12.F18 "Figure 18 ‣ 12.1.1 FGSM Robustness ‣ 12.1 Accuracy Evaluations (FGSM and TGSM) ‣ 12 Experiments ‣ Training Machine Learning Models by Regularizing their Explanations")), defensively distilled model gradients no longer vanish, and accordingly these models start to show the same vulnerability to adversarial attacks as others. Gradient-regularized models still exhibit the same robustness even at large perturbations ϵ, and again, examples generated to fool them fool other models equally well.
Figure 19: Distributions of (L2 norm) magnitudes of FGSM input gradients (top), TGSM input gradients (middle), and predicted log probabilities across all classes (bottom) for each defense. Note the logarithmic scales. Gradient-regularized models tend to assign non-predicted classes higher probabilities, and the L2 norms of the input gradients of their FGSM and TGSM loss function terms have similar orders of magnitude. Distilled models (evaluated at T=0) assign extremely small probabilities to all but the predicted class, and their TGSM gradients explode while their FGSM gradients vanish (we set a minimum value of 10−20 to prevent underflow). Normal and adversarially trained models lie somewhere in the middle.

One way to better understand the differences between gradient-regularized, normal, and distilled models is to examine the log probabilities they output and the norms of their loss function input gradients, whose distributions we show in Figure [19](#S12.F19 "Figure 19 ‣ 12.1.2 TGSM Robustness ‣ 12.1 Accuracy Evaluations (FGSM and TGSM) ‣ 12 Experiments ‣ Training Machine Learning Models by Regularizing their Explanations") for MNIST. We can see that the different defenses have very different statistics.
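One driver of these differences can be seen in a toy computation: for a saturated softmax, the input gradient of an underflowed class probability vanishes, while the input gradient of its log-probability stays large. A numpy sketch (the weights are invented to mimic a distilled model evaluated at low temperature):

```python
import numpy as np

# A linear softmax model pushed into a saturated regime; the weights are
# invented to mimic a distilled model evaluated at low temperature.
W = np.array([[ 30.0, 0.0],
              [-30.0, 0.0]])
x = np.array([1.0, 0.0])

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

p = softmax(W @ x)        # p[1] has all but underflowed (about 9e-27)

# For this model, grad_x p_k = p_k (e_k - p)^T W (chain rule through the
# softmax), so grad_x log p_k = grad_x p_k / p_k = (e_k - p)^T W.
k = 1                     # the non-predicted (adversarial target) class
e_k = np.eye(2)[k]
grad_log_p = (e_k - p) @ W
grad_p = p[k] * grad_log_p

# The probability gradient vanishes while the log-probability gradient
# stays large -- which is why gradients taken through log p can explode
# relative to the probabilities themselves.
assert np.linalg.norm(grad_p) < 1e-20
assert np.linalg.norm(grad_log_p) > 10
```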
Probabilities of non-predicted classes tend to be small but remain nonzero for gradient-regularized models, while they vanish on defensively distilled models evaluated at T=0 (despite distillation’s stated purpose of discouraging certainty). Perhaps because ∇log p(x) = (1/p(x)) ∇p(x), defensively distilled models’ non-predicted log probability input gradients are the largest by many orders of magnitude, while gradient-regularized models’ remain controlled, with much smaller means and variances. The other models lie between these two extremes. While we do not have a strong theoretical argument about what input gradient magnitudes should be, we believe it makes intuitive sense that having less variable, well-behaved, and non-vanishing input gradients should be associated with robustness to attacks that consist of small perturbations in input space.

Figure 20: Results of applying the JSMA to MNIST 0 and 1 images with maximum distortion parameter γ=0.25 for a distilled model (left) and a gradient-regularized model (right). Examples in each row start out as the highlighted digit but are modified until the model predicts the digit corresponding to their column or the maximum distortion is reached.

#### 12.2 Human Subject Study (JSMA and Iterated TGSM)

##### 12.2.1 Need for a Study

Accuracy scores against the JSMA can be misleading, since without a maximum distortion constraint it necessarily runs until the model predicts the target. Even with such a constraint, the perturbations it creates sometimes alter the examples so much that they no longer resemble their original labels, and in some cases bear a greater resemblance to their targets.
Figure [20](#S12.F20 "Figure 20 ‣ 12.1.2 TGSM Robustness ‣ 12.1 Accuracy Evaluations (FGSM and TGSM) ‣ 12 Experiments ‣ Training Machine Learning Models by Regularizing their Explanations") shows JSMA examples on MNIST for gradient-regularized and distilled models which attempt to convert 0s and 1s into every other digit. Although all of the perturbations “succeed” in changing the model’s prediction, in the gradient-regularized case, many of the JSMA examples strongly resemble their targets. The same issues occur for other attack methods, particularly the iterated TGSM, for which we show confusion matrices for different models and datasets in Figure [21](#S12.F21 "Figure 21 ‣ 12.2.1 Need for a Study ‣ 12.2 Human Subject Study (JSMA and Iterated TGSM) ‣ 12 Experiments ‣ Training Machine Learning Models by Regularizing their Explanations"). For the gradient-regularized models, these pseudo-adversarial examples quickly become almost prototypical examples of their targets, which is not reflected in accuracies with respect to the original labels.

Figure 21: Partial confusion matrices showing results of applying the iterated TGSM for 15 iterations at ϵ=0.1. Each row is generated from the same example but modified to make the model predict every other class.
TGSM examples generated for gradient-regularized models (right) resemble their targets more than their original labels and may provide insight into what the model has learned. Animated versions of these examples can be seen at [http://goo.gl/q8ZM1T](https://goo.gl/q8ZM1T). To test these intuitions more rigorously, we ran a small pilot study with 11 subjects to measure whether they found examples generated by these methods to be more or less plausible instances of their targets. ##### 12.2.2 Study Protocol The pilot study consisted of a quantitative and qualitative portion. In the quantitative portion, subjects were shown 30 images of MNIST JSMA or SVHN iterated TGSM examples. Each of the 30 images corresponded to one original digit (from 0 to 9) and one model (distilled, gradient-regularized, or undefended). Note that for this experiment, we used ∇xH(1K,^y) gradient regularization, ran the TGSM for just 10 steps, and trained models for 4 epochs at a learning rate of 0.001. This procedure was sufficient to produce examples with explanations similar to the longer training procedure used in our earlier experiments, and actually increased the robustness of the undefended models (adversarial accuracy tends to fall with training iteration). Images were chosen uniformly at random from a larger set of 45 examples that corresponded to the first 5 images of the original digit in the test set transformed using the JSMA or iterated TGSM to each of the other 9 digits (we ensured that all models misclassified all examples as their target). Subjects were not given the original label, but were asked to input what they considered the most and second-most plausible predictions for the image that they thought a reasonable classifier would make (entering N/A if they thought no label was a plausible choice). 
In the qualitative portion that came afterwards, users were shown three 10x10 confusion matrices for the different defenses on MNIST (Figure [20](#S12.F20 "Figure 20 ‣ 12.1.2 TGSM Robustness ‣ 12.1 Accuracy Evaluations (FGSM and TGSM) ‣ 12 Experiments ‣ Training Machine Learning Models by Regularizing their Explanations") shows the first two rows) and were asked to write comments about the differences between the examples. Afterwards, there was a short group discussion. This study was performed in compliance with the institution’s IRB.

| Model | MNIST (JSMA), human fooled | MNIST (JSMA), mistake reasonable | SVHN (TGSM), human fooled | SVHN (TGSM), mistake reasonable |
| --- | --- | --- | --- | --- |
| normal | 2.0% | 26.0% | 40.0% | 63.3% |
| distilled | 0.0% | 23.5% | 1.7% | 25.4% |
| grad. reg. | 16.4% | 41.8% | 46.3% | 81.5% |

Table 2: Quantitative feedback from the human subject experiment. “human fooled” columns record what percentage of examples were classified by humans as most plausibly their adversarial targets, and “mistake reasonable” records how often humans either rated the target plausible or marked the image unrecognizable as any label (N/A).

##### 12.2.3 Study Results

Table [2](#S12.T2 "Table 2 ‣ 12.2.2 Study Protocol ‣ 12.2 Human Subject Study (JSMA and Iterated TGSM) ‣ 12 Experiments ‣ Training Machine Learning Models by Regularizing their Explanations") shows quantitative results from the human subject experiment. Overall, subjects found gradient-regularized model adversarial examples most convincing. On SVHN and especially MNIST, humans were most likely to think that gradient-regularized (rather than distilled or normal) adversarial examples were best classified as their target rather than their original digit.
Additionally, when they did not consider the target the most plausible label, they were most likely to consider gradient-regularized model mispredictions “reasonable” (which we define in Table [2](#S12.T2 "Table 2 ‣ 12.2.2 Study Protocol ‣ 12.2 Human Subject Study (JSMA and Iterated TGSM) ‣ 12 Experiments ‣ Training Machine Learning Models by Regularizing their Explanations")), and more likely to consider distilled model mispredictions unreasonable. p-values for the differences between normal and gradient regularized unreasonable error rates were 0.07 for MNIST and 0.08 for SVHN. In the qualitative portion of the study (comparing MNIST JSMA examples), all of the written responses described significant differences between the insensitive model’s JSMA examples and those of the other two methods. Many of the examples for the gradient-regularized model were described as “actually fairly convincing,” and that the normal and distilled models “seem to be most easily fooled by adding spurious noise.” Few commentators indicated any differences between the normal and distilled examples, with several saying that “there doesn’t seem to be [a] stark difference” or that they “couldn’t describe the difference” between them. In the group discussion one subject remarked on how the perturbations to the gradient-regularized model felt “more intentional”, and others commented on how certain transitions between digits led to very plausible fakes while others seemed inherently harder. Although the study was small, both its quantitative and qualitative results support the claim that gradient regularization, at least for the two CNNs on MNIST and SVHN, is a credible defense against the JSMA and the iterated TGSM, and that distillation is not. 
#### 12.3 Connections to Interpretability ![Input gradients ](https://media.arxiv-vanity.com/render-output/7789577/images/mnist-gradgrid.png) ![Input gradients ](https://media.arxiv-vanity.com/render-output/7789577/images/notmnist-gradgrid.png) ![Input gradients ](https://media.arxiv-vanity.com/render-output/7789577/images/svhn-gradgrid.png) Figure 22: Input gradients ∇xH(1K,^y) that provide a local linear approximation of normal models (top), distilled models at T=50 (second from top), adversarially trained models (middle), and models trained with ∇xH(1K,^y) and ∇xH(y,^y) gradient regularization (bottom two). Whitening black pixels or darkening white pixels makes the model more certain of its prediction. In general, regularized model gradients appear smoother and make more intuitive sense as local linear approximations. Finally, we present a qualitative evaluation suggesting a connection between adversarial robustness and interpretability. In the literature on explanations, input gradients are frequently used as explanations (Baehrens et al., [2010](#bib.bib8)), but sometimes they are noisy and not interpretable on their own. In those cases, smoothing techniques have been developed (Smilkov et al., [2017](#bib.bib79); Shrikumar et al., [2016](#bib.bib75); Sundararajan et al., [2017](#bib.bib80)) to generate more interpretable explanations, but we have already argued that these techniques may obscure information about the model’s sensitivity to background features. We hypothesized that if the models had more interpretable input gradients without the need for smoothing, then perhaps their adversarial examples, which are generated directly from their input gradients, would be more interpretable as well. That is, the adversarial example would be more obviously transformative away from the original class label and towards another. 
The results of the user study show that our gradient-regularized models have this property; here we ask if the gradients are more interpretable as explanations. In Figure [22](#S12.F22 "Figure 22 ‣ 12.3 Connections to Interpretability ‣ 12 Experiments ‣ Training Machine Learning Models by Regularizing their Explanations") we visualize input gradients across models and datasets, and while we cannot make any quantitative claims, there does appear to be a qualitative difference in the interpretability of the input gradients between the gradient-regularized models (which were relatively robust to adversarial examples) and the normal and distilled models (which were vulnerable to them). Adversarially trained models seem to exhibit slightly more interpretable gradients, but not nearly to the same degree as gradient-regularized models. When we repeatedly apply input gradient-based perturbations using the iterated TGSM (Figure [21](#S12.F21 "Figure 21 ‣ 12.2.1 Need for a Study ‣ 12.2 Human Subject Study (JSMA and Iterated TGSM) ‣ 12 Experiments ‣ Training Machine Learning Models by Regularizing their Explanations")), this difference in interpretability between models is greatly magnified, and the results for gradient-regularized models seem to provide insight into what the model has learned. When gradients become interpretable, adversarial images start resembling feature visualizations (Olah et al., [2017](#bib.bib55)); in other words, they become explanations.

### 13 Discussion

In this chapter, we showed that:

* Gradient regularization slightly outperforms adversarial training (the SOTA) as a defense against black-box transferred FGSM examples from undefended models.
* Gradient regularization significantly increases robustness to white-box attacks, though not quite as much as adversarial training.
* Adversarial examples generated to fool gradient-regularized models are more “universal;” they are more effective at fooling all models than examples from unregularized models.
* Adversarial examples generated to fool gradient-regularized models are more interpretable to humans, and examples generated from iterative attacks quickly come to legitimately resemble their targets. This is not true for distillation or adversarial training.

The conclusion that we would like to reach is that gradient-regularized models are right for better reasons. Although they are not completely robust to attacks, their correct predictions and their mistakes are both easier to understand. To fully test this assertion, we would need to run a larger and more rigorous human subject evaluation that also tests adversarial training and other attacks beyond the JSMA, FGSM, and TGSM. Connecting what we have done back to the general idea of explanation regularization, we saw in Equation [7](#S11.E7 "(7) ‣ 11 Gradient Regularization ‣ Training Machine Learning Models by Regularizing their Explanations") that we could interpret our defense as a quadratic penalty on our CNN’s saliency map. Imposing this penalty had both quantitative and qualitative effects; our gradients became smaller but also smoother with fewer high-frequency artifacts. Since gradient saliency maps are just normals to the model’s decision surface, these changes suggest a qualitative difference in the “reasons” behind our model’s predictions. Many techniques for generating smooth, simple saliency maps for CNNs not based on raw gradients have been shown to vary under meaningless transformations of the model (Kindermans et al., [2017](#bib.bib38)) or, more damningly, to remain invariant under extremely meaningful ones (Adebayo et al., [2018](#bib.bib2)) – which suggests that many of these methods either oversimplify or aren’t faithful to the models they are explaining. Our approach in this chapter was, rather than simplifying our explanations of fixed models, to optimize our models to have simpler explanations. Their increased robustness can be thought of as a useful side effect.
Although the problem of adversarial robustness in deep neural networks is still very much an open one, these results may suggest a deeper connection between it and interpretability. No matter what method proves most effective in the general case, we suspect that any progress towards ensuring either interpretability or adversarial robustness in deep neural networks will likely represent progress towards both. ### 14 Alternative Input Gradient Penalties Before we leave input gradients behind altogether, it is worth considering what else we can do with them besides simple L2 regularization. #### 14.1 L1 Regularization In Chapter [6](#footnote6 "footnote 6 ‣ Training Machine Learning Models by Regularizing their Explanations"), we saw that penalizing the L2 norm of our model’s input gradients encouraged gradient interpretability and prediction robustness to adversarial examples, and drew an analogy to Ridge regression. One natural question to ask is how penalizing the L1 norm instead would compare, which we could understand as a form of local linear LASSO. For a discussion of this question with application to sepsis treatment, we refer the reader to Ross et al. ([2017a](#bib.bib69)), which includes a case-study showing how L1 gradient regularization can help us obtain mortality risk models that are locally sparse and more consistent with clinical knowledge. On image datasets (where input features are not individually meaningful), we do find that L1 gradient regularization is effective in defending against adversarial examples, perhaps more so than L2 regularization. To that end, in Figure [23](#S14.F23 "Figure 23 ‣ 14.1 L1 Regularization ‣ 14 Alternative Input Gradient Penalties ‣ Training Machine Learning Models by Regularizing their Explanations") we present results for VGG-16 models on CIFAR-10, which bode favorably for L1 regularization against both white- and black-box attacks. 
However, although the gradients of these models change qualitatively compared to normal models, they are not significantly sparser than gradients of models trained with L2 gradient regularization. These results suggest that sparsity with respect to input features may not be a fully achievable or desirable objective for complex image classification tasks.

Figure 23: Left: Accuracy loss on CIFAR-10 FGSM examples (ϵ=2px) for VGG models trained with varying levels of L1 gradient regularization. Diagonals measure white-box vulnerability and off-diagonals measure transferability. Right: L1 vs. L2 gradient regularization on VGG as a defense against white-box FGSM examples, 2px perturbation. The value of λ is multiplied by 100 for the L1 regularized network to equalize penalty magnitudes (since we do not take the square root of the L2 penalty). Compared to L2, L1 gradient regularized models tend to be more robust to l∞ attacks like the FGSM, and their adversarial examples tend to be less transferable.

#### 14.2 Higher-Order Derivatives

Bishop ([1993](#bib.bib11)) introduced the idea of limiting the curvature of the function learned by a neural network by imposing an L2 penalty on the network’s second input derivatives. They note, however, that evaluating these second derivatives increases the computational complexity of training by a factor of D, the number of input dimensions. This scaling behavior poses major practical problems for datasets like ImageNet, whose inputs are over 150,000-dimensional. Rifai et al. ([2011](#bib.bib67)) develop a scalable workaround by estimating the Frobenius norm of the input Hessian as (1/σ²) E[‖∇Xf(x) − ∇Xf(x+ϵ)‖₂²] for ϵ drawn i.i.d. from N(0, σ²), which converges to the true value as σ→0.
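Rifai et al.'s estimator can be checked numerically on a function with a known Hessian. A sketch (the quadratic test function and all constants are invented for illustration; for f(x) = ½xᵀAx the estimate is unbiased at any σ, so the σ→0 limit is not even needed):

```python
import numpy as np

rng = np.random.default_rng(0)

# Quadratic test function f(x) = 0.5 x^T A x, whose input Hessian is exactly
# the (invented) symmetric matrix A -- so the estimator should recover
# ||A||_F^2.
D = 3
M = rng.normal(size=(D, D))
A = (M + M.T) / 2

def grad_f(x):
    return A @ x

x = rng.normal(size=D)
sigma = 1e-3
N = 200_000

# Stochastic estimate (1/sigma^2) E[ ||grad f(x) - grad f(x + eps)||^2 ]
# with eps ~ N(0, sigma^2 I), following Rifai et al.'s approximation.
eps = rng.normal(scale=sigma, size=(N, D))
diffs = grad_f(x)[None, :] - (x[None, :] + eps) @ A.T
estimate = np.mean(np.sum(diffs**2, axis=1)) / sigma**2

exact = np.sum(A**2)      # ||A||_F^2
assert abs(estimate - exact) / exact < 0.05
```

The appeal of the estimator is that each sample costs only one extra gradient evaluation, independent of D.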
They then train autoencoders whose exact gradient and approximate Hessian norms are both L2-penalized, and find that the unsupervised representations they learn are more useful for downstream classification tasks. Czarnecki et al. ([2017](#bib.bib19)) also regularize using estimates of higher-order derivatives. Hessian regularization may be desirable for adversarial robustness and interpretability as well. The results in Figure [24](#S14.F24 "Figure 24 ‣ 14.2 Higher-Order Derivatives ‣ 14 Alternative Input Gradient Penalties ‣ Training Machine Learning Models by Regularizing their Explanations") suggest that exact Hessian regularization for an MLP on a simple 2D problem encourages the model to learn flatter and wider decision boundaries than gradient regularization, which could be useful for interpretability and robustness. Hessian regularization also appears to behave more sensibly even when the penalty term is much larger than the cross entropy. By contrast, in this regime, gradient regularization starts pathologically seeking out areas of the input space (usually near the edges of the training distribution) where it can set gradients to 0.

Figure 24: Gradient regularization (left) vs. Hessian regularization (right). The purple line indicates the true decision boundary; other lines indicate level sets of a 10-hidden-unit MLP’s predicted log-odds from -5 to 5 in increments of 2.5, with the model’s decision boundary in green. Hessian regularization can make decision boundaries wider and flatter without triggering pathological cases.

### 15 Heftier Surrogates

While input gradient-based methods are appealing because of their close relationship to the shape and curvature of differentiable models’ decision surfaces, they are limited by their locality and by humans’ inability to express abstract desiderata in terms of input features. The second limitation in particular prevents us from optimizing for the kind of simplicity or diversity humans find intuitive. Therefore, in the next sections we explore ways of training models using more complex forms of explanation.

One common way of explaining complicated models like neural networks is by distilling them into surrogate models; decision trees are a particularly popular choice (Craven and Shavlik, [1996](#bib.bib18)). However, these decision trees must sometimes be quite deep in order to accurately explain the associated networks, which defeats the purpose of making predictions interpretable. To address this problem, Wu et al. ([2017](#bib.bib84)) optimize the underlying neural networks to be accurately approximable by shallow decision trees. Performing such an optimization is difficult because the process of distilling a network into a decision tree cannot be expressed analytically, much less differentiated. However, they approximate it by training a second neural network to predict the depth of the decision tree that would result from the first neural network’s parameters. They then use this learned function as a differentiable surrogate of the true approximating decision tree depth. Crucially, they find a depth regime in which their networks can outperform decision trees while remaining explainable by them.
Although they only try to minimize the approximating decision tree depth, in principle one could train the second network to estimate other characteristics of the decision tree related to simplicity or consistency with domain knowledge (and optimize the main network accordingly). ### 16 Examples and Exemplars Another popular way of explaining predictions is with inputs themselves. k-Nearest Neighbors (kNN) algorithms are easy to understand since one can simply present the neighbors, and techniques have recently been proposed to perform kNN using distance metrics derived from pretrained neural networks (Papernot and McDaniel, [2018](#bib.bib58)). More general methods involve sparse graph flows between labeled and unlabeled inputs (Rustamov and Klosowski, [2017](#bib.bib71)) or optimization to find small sets of prototypical inputs that can be used for cluster characterization or classification (Kim et al., [2014](#bib.bib37)), even within neural networks (Li et al., [2017](#bib.bib47)). There has also been recent work on determining which points would most affect a prediction if removed from the training set (Koh and Liang, [2017](#bib.bib40)). These approaches have both advantages and disadvantages. Justifying predictions based on input similarity and difference can seem quite natural, though it can also be confusing or misleading when the metric used to quantify distance between points does not correspond to human intuition. Influence functions shed light on model sensitivities that are otherwise very hard to detect, but they are also very sensitive to outliers, leading to sometimes inscrutable explanations. However, it seems straightforward at least in principle to implement example-based explanation regularization. 
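As a minimal sketch of what such example-based regularization might look like (our own illustration, with illustrative names), consider a contrastive penalty on a minibatch's intermediate representations:

```python
import numpy as np

def pairwise_representation_penalty(reps, pairs, margin=1.0):
    """Contrastive penalty on intermediate representations.

    reps:  (n, d) array of hidden representations for a minibatch.
    pairs: list of (i, j, similar) annotations; similar pairs are penalized
           by their squared distance, dissimilar pairs by a hinge that
           activates when they are closer than `margin`.
    """
    total = 0.0
    for i, j, similar in pairs:
        dist = np.linalg.norm(reps[i] - reps[j])
        if similar:
            total += dist ** 2
        else:
            total += max(0.0, margin - dist) ** 2
    return total / max(len(pairs), 1)
```

Added to the ordinary training loss, this term pulls annotated similar pairs together and pushes dissimilar pairs apart in representation space.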
For example, we could train neural networks with annotations indicating that certain pairs of examples should be similar or dissimilar, and penalize the model when their intermediate representations are relatively distant or close (which might require altering minibatch sampling to keep paired examples together if annotations are sparse). Although influence functions may be too computationally expensive to incorporate into the loss functions of large networks, it seems useful in principle to specify that certain examples should be particularly representative or influential in deciding how to classify others.

### 17 Emergent Abstractions

Stepping back, the level of abstraction at which we communicate the reason behind a decision significantly affects its utility, as Keil ([2006](#bib.bib35)) notes:

> Explanations… suffer if presented at the wrong level of detail. Thus, if asked why John got on the train from New Haven to New York, a good explanation might be that he had tickets for a Broadway show. An accurate but poor explanation at too low a level might say that he got on the train because he moved his right foot from the platform to the train and then followed with his left foot. An accurate but poor explanation at too high a level might say that he got on the train because he believed that the train would take him to New York from New Haven.

The explanations we have considered so far have been in terms of input features, entire inputs, or simple surrogates. However, sometimes humans seek to know the reasons behind predictions at levels of abstraction these forms cannot capture. If we really want to create interpretable interfaces for training and explaining machine learning models, humans and models will need to speak a common language that permits abstraction. This may seem like a daunting task, but there has been important recent progress in interpreting neural networks in terms of abstractions that emerge during training. Bau et al.
([2017](#bib.bib10)) introduce a densely labeled image dataset. They train convolutional neural networks on a top-level classification task, but also include lower-level sublabels that indicate other features in the image. They measure the extent to which different intermediate nodes in their top-level label classifiers serve as exclusive “detectors” for particular sublabels, and compare the extent to which different networks learn different numbers of exclusive detectors. They also categorize their sublabels and look at differences in which kinds of sublabels each network learns to detect (and when these detectors emerge during training). Kim et al. ([2017](#bib.bib36)) provide a method of testing networks’ sensitivity to concepts as defined by user-provided sets of examples. Concretely, they train a simple linear classifier at each layer to distinguish between examples in the concept set and a negative set. They reinterpret the weights of this linear classifier as a “concept activation vector,” and take directional derivatives of the class logits with respect to these concept activations. Repeated across the full dataset for many different concepts, this procedure outputs a set of concept sensitivity weights for each prediction, which can be used for explanation or even image retrieval. The previous two methods require manual human selection of images corresponding to concepts, and they do not guarantee meaningful correspondence between these concepts and what the network has learned. Feature visualization (Olah et al., [2017](#bib.bib55)) takes a different approach and attempts to understand what the network has learned on its own terms. In particular, it tries to explain what (groups of) neuron(s) learn by optimizing images to maximize (or minimize) their activations. It can also optimize sets of images to jointly maximize activations while encouraging diversity.
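In its simplest form, this optimization is just gradient ascent on the input. The toy sketch below is our own illustration (real feature visualization relies heavily on image priors and transformation robustness); it maximizes a single linear "neuron," for which the optimum is known analytically to be the weight direction.

```python
import numpy as np

def visualize_feature(activation_grad_fn, dim, steps=200, lr=0.1, rng=None):
    """Toy feature visualization: gradient ascent on an input to maximize a
    neuron's activation, projecting back onto the unit sphere each step
    (a crude stand-in for the regularizers used in practice)."""
    rng = rng or np.random.default_rng(0)
    x = rng.normal(size=dim)
    x /= np.linalg.norm(x)
    for _ in range(steps):
        x = x + lr * activation_grad_fn(x)
        x /= np.linalg.norm(x)
    return x
```

For a linear activation a(x) = w·x, the gradient is constant (w) and the iterates converge to w/‖w‖, i.e. the input the neuron "most wants to see."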
This process can be useful for obtaining an intuitive sense of (some of) what the model has learned, especially if the neurons being explained are class logits. However, it also leads to information overload, since modern networks contain millions of neurons and an effectively infinite number of ways to group them. To address this, Olah et al. ([2018](#bib.bib56)) use non-negative matrix factorization (NMF) to learn a small number of groups of neurons whose feature visualizations best summarize the entire set. Feature visualizations of neuron groups obtained by NMF tend to correspond more cleanly to human-interpretable concepts, though again there is no guarantee this will occur. Olah et al. ([2018](#bib.bib56)) also suggest that incorporating human feedback into this process could lead to a method to train models to make decisions “for the right reasons.” The above cases either take human concepts and try to map them to network representations, or take network “concepts” and try to visualize them so humans can map them to their own concepts. But they do not actually try to align network representations with human concepts. However, there has been significant recent interest in training models to learn disentangled representations (Chen et al., [2016](#bib.bib16); Higgins et al., [2016](#bib.bib33); Siddharth et al., [2017](#bib.bib76)). Disentangled representations are often described as separating out latent factors that concisely characterize important aspects of the inputs but which cannot be easily expressed in terms of their component features. Generally, disentangled representations tend to be much easier to relate to human-intuitive concepts than what models learn when only trained to minimize reconstruction or prediction error.
Figure 25: Accuracies of a normal model and two models trained using find-another-explanation in a disentangled latent space (right) on a toy image dataset that confounds background color and square size in training (left) but decouples them at test time. Performing find-another-explanation in a latent space allows us to learn models that make predictions for conceptually different reasons, which is reflected in their complementary accuracies on each version of the test set.

These advances in bridging human and neural representations could have major payoffs in terms of interpreting models or optimizing them to make predictions for specific reasons. Suppose we are interested in testing a classifier’s sensitivity to an abstract concept entangled with our input data.
If we have an autoencoder whose representation of the input disentangles the concept into a small set of latent factors, then for a specific input, we can encode it, decode it, and pass the decoded input through the classifier, taking the gradient of the network’s output with respect to the latent factors associated with the concept. If we fix the autoencoder weights but not the classifier weights, we can use this differentiable concept sensitivity score to apply our “right for the right reasons” technique from Chapter [3](#footnote3 "footnote 3 ‣ Training Machine Learning Models by Regularizing their Explanations") to encourage the classifier to be sensitive or insensitive to the concept. We present a preliminary proof of concept of this idea in Figure [25](#S17.F25 "Figure 25 ‣ 17 Emergent Abstractions ‣ Training Machine Learning Models by Regularizing their Explanations"). In this experiment, we construct a toy dataset of images of white squares with four true latent factors of variation: the size of the square, its x and y position, and the background color of the image. In training, background color and square size are confounded; images either have dark backgrounds and small squares or light backgrounds and large squares (and either one can be used to predict the label). However, we create two versions of the test set where these latent factors are decoupled (and only one predicts the label). This is analogous to the parable in our introduction with squares representing tanks and background colors representing light. When we train a one-hidden layer MLP normally, it learns to implicitly use both factors, and obtains suboptimal accuracies of about 75% on each test set. To circumvent this issue, we first train a convolutional autoencoder that disentangles square size from background color (which we do with supervision here, but in principle this can be unsupervised) and then prepend the autoencoder to our MLP with fixed weights. 
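For a fully linear toy pipeline, the concept-sensitivity gradient is available in closed form, which makes the idea easy to sketch. The names and setup below are illustrative only, not the actual code of this experiment.

```python
import numpy as np

def concept_gradient(latent, decoder_W, clf_w):
    """Gradient of a linear classifier's logit with respect to the latent
    code, for a linear decoder x = decoder_W @ z.  For this toy pipeline
    the chain rule gives d(logit)/dz = decoder_W.T @ clf_w, which happens
    to be independent of z; a nonlinear pipeline would use autodiff here."""
    return decoder_W.T @ clf_w

def concept_sensitivity_penalty(latent, decoder_W, clf_w, concept_dims, lam=1.0):
    """'Right for the right reasons'-style penalty: squared gradient of the
    logit with respect to the latent dimensions tied to a concept."""
    g = concept_gradient(latent, decoder_W, clf_w)
    return lam * float(np.sum(g[list(concept_dims)] ** 2))
```

Penalizing this term during training (with the decoder frozen) discourages the classifier from relying on the concept's latent factors, while leaving its use of the remaining factors unconstrained.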
We then simultaneously train two instantiations of this network with the find-another-explanation penalty we introduced in Section [8](#S8 "8 Simultaneous Find-Another-Explanation ‣ Training Machine Learning Models by Regularizing their Explanations"). These two networks learn to perform nearly perfectly on one test set and do no better than random guessing on the other, which suggests they are making predictions for different conceptual reasons. Obtaining these networks would have been very difficult using only gradient penalties in the input space. ### 18 Interpretability Interfaces Olah et al. ([2018](#bib.bib56)) describe a space of “interpretability interfaces” and introduce a formal grammar for expressing explanations of neural networks (and a systematic way of exploring designs). They visualize this design space in a grid of relationships between different “substrates” of the design, which include groups of neurons, dataset examples, and model parameters – the latter of which presents an opportunity “to consider interfaces for *taking action* in neural networks.” If human-defined concepts, disentangled representations, or other forms of explanation are included as additional substrates, one can start to imagine a very general framework for expressing priors or constraints on relationships between them. These would be equivalent to optimizing models to make predictions for specific reasons. ![Schematic diagram of an interpretability interface.](https://media.arxiv-vanity.com/render-output/7789577/x1.png) Figure 26: Schematic diagram of an interpretability interface. How would humans actually express these kinds of objectives? 
One interface worth emulating could be that introduced by recent but already popular libraries for weak supervision (Ratner et al., [2017](#bib.bib64)) or probabilistic soft logic (Bach et al., [2017](#bib.bib7)), which is related to the well-studied topic of fuzzy logic, a method noted for its compatibility with human reasoning (Zadeh, [1997](#bib.bib86)). In these frameworks, users can specify “soft” logical rules for labeling datasets or constraining relationships between atoms (or substrates) of a system. Though users can sometimes specify that certain rules are inviolable or highly weighted, in general these systems assume that rules are not always correct and attempt to infer weights for each. While these inference problems are nontrivial, and in general there may be complex, structured interactions between rules that are difficult to capture, the interface they expose to users is expressive and potentially worth adopting in an interpretability interface. For example, we could imagine writing soft rules relating:

* dataset examples to each other (e.g. these examples should be conceptually similar with respect to a task)
* dataset examples to concepts (e.g. these are examples of a concept)
* features to concepts (e.g. this set of features is related to this concept, this other set is not; in this specific case, these features contribute positively)
* concepts to predictions (e.g. the presence of this concept makes this prediction more or less likely, except when this other concept is present)

These rules could be “compiled” into additional energy terms in the model’s loss function, possibly with thresholding if we expect them to be incorrect some percentage of the time (though rules defined for specific examples may be more reliable). We present a schematic diagram of how a system like this might work in Figure [26](#S18.F26 "Figure 26 ‣ 18 Interpretability Interfaces ‣ Training Machine Learning Models by Regularizing their Explanations").
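A minimal sketch of such "compilation" might look as follows. This is entirely illustrative: real systems like probabilistic soft logic infer rule weights rather than taking them as given, and the rule representation here is deliberately simplistic.

```python
import numpy as np

def compile_rules(rules, threshold=None):
    """Compile weighted soft rules into a single penalty on model outputs.

    Each rule is (weight, violation_fn), where violation_fn maps the
    model's predictions (and any cached concept scores) to a nonnegative
    violation.  Violations below `threshold` are optionally ignored,
    reflecting the assumption that rules are sometimes wrong."""
    def penalty(preds):
        total = 0.0
        for weight, violation_fn in rules:
            v = violation_fn(preds)
            if threshold is not None and v < threshold:
                continue
            total += weight * v
        return total
    return penalty
```

The returned penalty function can simply be added to the task loss; if the violation functions are differentiable in the model's outputs, the whole objective remains trainable by gradient descent.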
Such a system would strongly depend on being able to define rules in terms of abstract concepts, but such rules might not be enforceable until the model has differentiable, stable representations of them. However, one could imagine pre-learning static, disentangled concept representations that could be related back to input features. If 1:1 mappings between human concepts and latent representations do not emerge naturally, even allowing for hierarchical relationships (Esmaeili et al., [2018](#bib.bib24)), steps could be taken to optimize model representations to better match human understanding (e.g. using partial supervision) or to help humans better understand model representations (e.g. using feature visualization). This process of reaching user-model intersubjectivity might require multiple stages of identification and refinement, but seems possible in principle. And perhaps arriving at a shared conceptual framework for understanding a problem is where the work of teaching and learning ought to lie, regardless of whether the teachers and learners are human.

### 19 Discussion

In this chapter, we discussed a number of strategies for explanation regularization beyond the methods we used in the previous chapters. We described simple extensions of gradient-based methods (imposing L1 and Hessian penalties), strategies in terms of interpretable surrogates (regularizing distilled decision trees, nearest neighbors, and exemplars), and strategies in terms of concepts (concept activation vectors, disentangled representations, and feature visualization). We then combined many of these strategies into a design for an “interpretability interface” that could be used to simultaneously improve neural network interpretability and incorporate domain knowledge.

One limitation of this discussion is that we only considered classification models and traditional ways of explaining their predictions.
However, there is a much larger literature on alternative forms of explanation and prediction, like intuitive theories (Gerstenberg and Tenenbaum, [2017](#bib.bib26)) or causal inference (Pearl, [2010](#bib.bib62)), that is highly relevant, especially if we want to apply these techniques to problems like sequential decision-making. We started this thesis by making a point that was “easiest to express with a story;” even with arbitrarily human-friendly compositional abstraction (Schulz et al., [2017](#bib.bib72)), flat sets of concepts may never be sufficient in cases where users think in terms of narratives (Abell, [2004](#bib.bib1)). However, despite these limitations, we think the works we have outlined in this chapter have started to map a rich design space for interpreting and training machine learning models with more than just xes and ys.
0d39520c-1154-4949-856f-65163ff326da
trentmkelly/LessWrong-43k
LessWrong
Muehlhauser-Wang Dialogue Part of the Muehlhauser interview series on AGI.   Luke Muehlhauser is Executive Director of the Singularity Institute, a non-profit research institute studying AGI safety. Pei Wang is an AGI researcher at Temple University, and Chief Executive Editor of Journal of Artificial General Intelligence.   Luke Muehlhauser [Apr. 7, 2012] Pei, I'm glad you agreed to discuss artificial general intelligence (AGI) with me. I hope our dialogue will be informative to many readers, and to us! On what do we agree? Ben Goertzel and I agreed on the statements below (well, I cleaned up the wording a bit for our conversation): 1. Involuntary death is bad, and can be avoided with the right technology. 2. Humans can be enhanced by merging with technology. 3. Humans are on a risky course in general, because powerful technologies can destroy us, humans are often stupid, and we are unlikely to voluntarily halt technological progress. 4. AGI is likely this century. 5. AGI will greatly transform the world. It is a potential existential risk, but could also be the best thing that ever happens to us if we do it right. 6. Careful effort will be required to ensure that AGI results in good things rather than bad things for humanity. You stated in private communication that you agree with these statements, depending on what is meant by "AGI." So, I'll ask: What do you mean by "AGI"? I'd also be curious to learn what you think about AGI safety. If you agree that AGI is an existential risk that will arrive this century, and if you value humanity, one might expect you to think it's very important that we accelerate AI safety research and decelerate AI capabilities research so that we develop safe superhuman AGI first, rather than arbitrary superhuman AGI. (This is what Anna Salamon and I recommend in Intelligence Explosion: Evidence and Import.) What are your thoughts on the matter?   Pei Wang: [Apr. 
8, 2012] By “AGI” I mean computer systems that follow roughly the same principle
77f17c5c-a0c1-4808-a98c-18f3dd615295
trentmkelly/LessWrong-43k
LessWrong
Meetup : RTLW Thursday Meetup Discussion article for the meetup : RTLW Thursday Meetup WHEN: 28 February 2013 07:00:00PM (-0500) WHERE: Francesca's Dessert Caffe, 706 9th Street, Durham NC This week is Meetup Activity Potpourri, including: * Calibration exercises * Games: The Resistance and Zendo have been mentioned * Rationality Checklist check/review (http://lesswrong.com/lw/fc3/checklist_of_rationality_habits/) * beverages * etc.? We'll have a couple bags of Icehouse pyramids on the table (http://3.bp.blogspot.com/-NKLebKLBm2Q/TtqJ7aptmBI/AAAAAAAAALs/n1jMtUlh3Ws/s1600/Ice%2BDice%2BBag.jpg). Discussion article for the meetup : RTLW Thursday Meetup
749d4331-effa-4ed6-a812-d8b64300b487
trentmkelly/LessWrong-43k
LessWrong
Fixing science via a basic income I ran across Ed Hagen’s article “Academic success is either a crapshoot or a scam”, which pointed out that all the methodological discussion about science’s replication crisis is kinda missing the point: yes, all of the methodological stuff like p-hacking is something that would be valuable to fix, but the real problem is in the incentives created by the crazy publish-or-perish culture: In my field of anthropology, the minimum acceptable number of pubs per year for a researcher with aspirations for tenure and promotion is about three. This means that, each year, I must discover three important new things about the world. […] Let’s say I choose to run 3 studies that each has a 50% chance of getting a sexy result. If I run 3 great studies, mother nature will reward me with 3 sexy results only 12.5% of the time. I would have to run 9 studies to have about a 90% chance that at least 3 would be sexy enough to publish in a prestigious journal. I do not have the time or money to run 9 new studies every year. I could instead choose to investigate phenomena that are more likely to yield strong positive results. If I choose to investigate phenomena that are 75% likely to yield such results, for instance, I would only have to run about 5 studies (still too many) for mother nature to usually grace me with at least 3 positive results. But then I run the risk that these results will seem obvious, and not sexy enough to publish in prestigious journals. To put things in deliberately provocative terms, empirical social scientists with lots of pubs in prestigious journals are either very lucky, or they are p-hacking. I don’t really blame the p-hackers. By tying academic success to high-profile publications, which, in turn, require sexy results, we academic researchers have put our fates in the hands of a fickle mother nature. 
Academic success is therefore either a crapshoot or, since few of us are willing to subject the success or failure of our careers to the roll of the dice,
0838b154-a52e-447d-a1cc-c9d2e4825509
trentmkelly/LessWrong-43k
LessWrong
Are superhuman savants real? Savant syndrom indentifies people with general intellectual impairment who, in one specific field, reach ordinary or even exceptional performance. In The Psychological Unity of Humankind, Eliezer argues that > So you can't have the X-Men.  You can't have "mutants" running around with highly developed machinery that most of the human species doesn't have.  And no, extra-powerful radiation does not produce extra-potent mutations, that's not how it works. > > Again by the nature of sexual recombination, you're very unlikely to see two complexly different adaptations competing in the gene pool.  Two individual alleles may compete.  But if you somehow had two different complex adaptations built out of many non-universal alleles, they would usually assemble in scrambled form. The argument behind this makes formal sense, but it's applicability strongly depends on how well we can judge what does and doesn't require complex adaptation. Reports of savants provide an interesting test of this; some of them seem like they are not merely an exceptional level of human skill, but not reachable by ordinary people. For example, in a recent post here that reminded me of this, the author claims: > Example 3: Stephen Wiltshire. He made a nineteen-foot-long drawing of New York City after flying on a helicopter for 20 minutes, and he got the number of windows and floors of all the buildings correct. Other things I remember hearing are someone seeing at a glance that there are 163 peas on a plate, or remembering every word he ever heard. If these kinds of abilities can develop as a consequence of individual genetic quirks or possibly even brain injuries, then clearly we just don't have a good intuition about what's "close" in brain design space. Now that I've made clear what kind of ability I'm talking about, has anyone done the relevant digging?
ac3a37c9-3c46-49df-8a70-5620a7b6ade0
trentmkelly/LessWrong-43k
LessWrong
[AN #152]: How we’ve overestimated few-shot learning capabilities Alignment Newsletter is a weekly publication with recent content relevant to AI alignment around the world. Find all Alignment Newsletter resources here. In particular, you can look through this spreadsheet of all summaries that have ever been in the newsletter. Audio version here (may not be up yet). Please note that while I work at DeepMind, this newsletter represents my personal views and not those of my employer. HIGHLIGHTS True Few-Shot Learning with Language Models (Ethan Perez et al) (summarized by Rohin): We can get GPT-3 (AN #102) to perform useful tasks using “prompt programming”, in which we design an input sentence such that the most likely continuation of that sentence would involve GPT-3 performing the task of interest. For example, to have GPT-3 answer questions well, we might say something like “The following is a transcript of a dialogue with a helpful, superintelligent, question-answering system:”, followed by a few example question-answer pairs, after which we ask our questions. Since the prompts only contain a few examples, this would seem to be an example of strong few-shot learning, in which an AI system can learn how to do a task after seeing a small number of examples of that task. This paper contends that while GPT-3 is capable of such few-shot learning, the results reported in various papers exaggerate this ability. Specifically, while it is true that the prompt only contains a few examples, researchers often tune their choice of prompt by looking at how well it performs on a relatively large validation set -- which of course contains many examples of performing the task, something we wouldn’t expect to have in a true few-shot learning context. To illustrate the point, the authors conduct several experiments where we start with around 12 possible prompts and must choose which to use based only on the examples given (typically 5). They test two methods for doing so: 1. 
Cross-validation: Given a prompt without examples, we attach 4 of
dbc7a68c-4454-481a-ab82-8d3cc5986f57
trentmkelly/LessWrong-43k
LessWrong
The Learning-Theoretic AI Alignment Research Agenda In this essay I will try to explain the overall structure and motivation of my AI alignment research agenda. The discussion is informal and no new theorems are proved here. The main features of my research agenda, as I explain them here, are * Viewing AI alignment theory as part of a general abstract theory of intelligence * Using desiderata and axiomatic definitions as starting points, rather than specific algorithms and constructions * Formulating alignment problems in the language of learning theory * Evaluating solutions by their formal mathematical properties, ultimately aiming at a quantitative theory of risk assessment * Relying on the mathematical intuition derived from learning theory to pave the way to solving philosophical questions ---------------------------------------- Philosophy In this section I explain the key principles and assumptions that motivate my research agenda. The importance of rigor I believe that the solution to AI alignment must rely on a rigorous mathematical theory. The algorithms that comprise the solution must be justified by formal mathematical properties. All mathematical assumptions should be either proved or at least backed by considerable evidence, like the prominent conjectures of computational complexity theory. This needs to be the case because: * We might be facing one-shot success or failure. This means we will have little empirical backing for our assumptions. * To the extent we have or will have empirical evidence about AI, without a rigorous underlying theory it is very hard to know how scalable and transferable the conclusions are. * The enormity of the stakes demands designing a solution which is as reliable as possible, limited only by the time constraints imposed by competing unaligned projects. 
That said, I do expect the ultimate solution to have aspects that are not entirely rigorous, specifically: * The quantitative risk analysis will probably rely on some parameters that will be very har
79ff2de1-b8b5-4b9b-8268-bfe9a2724447
trentmkelly/LessWrong-43k
LessWrong
Automated theorem proving Automated theorem proving *sounds like* a natural extension of many useful trends and a solution to many current problems.  To me, it seems obvious that there will be a need for the formalization of the mathematics (up to and beyond the boundary of mathematics with real-world applications) as well as the routine checking of software for 100% intended performance.  Secure networks and software in particular could be an important safeguard against AI. Yet I haven't heard much of it; the implementation difficulties must be considerable given that there are substantial and predictable benefits to the widespread use of automated theorem proving.  Anyone with experience in the field?
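For a sense of the input-output contract involved, here is a toy "prover" for propositional logic: a brute-force validity checker by truth-table enumeration. This is only an illustrative sketch; real automated theorem provers for richer logics (resolution, SMT solvers, tactic-based systems) are vastly more sophisticated, but the shape is the same: a formula goes in, a verdict comes out.

```python
import itertools

def implies(a, b):
    """Material implication: a -> b."""
    return (not a) or b

def is_tautology(formula, variables):
    """Check validity by enumerating all 2^n truth assignments."""
    return all(
        formula(dict(zip(variables, values)))
        for values in itertools.product([False, True], repeat=len(variables))
    )

# Peirce's law, ((p -> q) -> p) -> p, is classically valid.
peirce = lambda env: implies(implies(implies(env["p"], env["q"]), env["p"]), env["p"])
```

The exponential blowup of this approach is exactly why the implementation difficulties for useful theorem proving are considerable.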
5487973c-dcba-4b9e-8e9f-c7f2d1c1d63e
trentmkelly/LessWrong-43k
LessWrong
Meetup : Fort Collins, Colorado Meetup Thursday 7pm Discussion article for the meetup : Fort Collins, Colorado Meetup Thursday 7pm WHEN: 30 August 2012 07:00:00PM (-0600) WHERE: 1129 W. Elizabeth St. Fort Collins 80521 Back to school, back to our regular Thursday in Fort Collins meetup. Discussion article for the meetup : Fort Collins, Colorado Meetup Thursday 7pm
0ecd26d5-ec51-458c-88db-1d312e6343d4
trentmkelly/LessWrong-43k
LessWrong
Epoch AI released a GATE Scenario Explorer I think it's easier to discuss AI progress in terms of economic growth than by focusing only on the scale of the largest training runs and the compute they use.   From their X announcement: > We developed GATE: a model that shows how AI scaling and automation will impact growth. > > It predicts trillion‐dollar infrastructure investments, 30% annual growth, and full automation in decades.
0caa89c8-c318-4642-a7b6-3addc6865422
trentmkelly/LessWrong-43k
LessWrong
Meetup : Berlin Meetup Discussion article for the meetup : Berlin Meetup WHEN: 23 February 2013 07:30:47PM (+0100) WHERE: Ming Dynastie, Brückenstraße 6, 10179 Berlin This is where we come together in person to chat! We will experiment with emulating a small meetup by having two tables, hopefully leading to more personal discussions. There are no special activities this time. We'll talk about our plans and make commitments. Everyone is welcome, I'll bring a sign. Discussion article for the meetup : Berlin Meetup
62441dcd-429f-4dfc-9d92-f368c90faab7
trentmkelly/LessWrong-43k
LessWrong
Rationality advice from Terry Tao Via a link on IRC, I stumbled upon the blog of the mathematician Terry Tao. I noticed that several of his posts contain useful rationality advice, part of it overlapping with content that has been covered here. Most of the posts remind us of things that are kind of obvious, but I don't think that's necessarily a bad thing: we often need reminders of the things that are obvious. Advance warning: the posts are pretty well interlinked, in Wikipedia/TVTropes fashion. I currently have 15 tabs open from the site. Some posts of note: Be sceptical of your own work. If you unexpectedly find a problem solving itself almost effortlessly, and you can’t quite see why, you should try to analyse your solution more sceptically. Most of the time, the process for solving a major problem is a lot more complex and time-consuming. Use the wastebasket. Not every idea leads to a success, and not every first draft forms a good template for the final draft. Know when to start over from scratch, know when you should be persistent, and do keep copies around of even the failed attempts. Learn the limitations of your tools. Knowing what your tools cannot do is just as important as knowing what they can do. Learn and relearn your field. Simply learning the statement and proof of a problem doesn't guarantee understanding: you should test your understanding, using methods such as finding alternate proofs and trying to generalize the argument. Write down what you've done. Write down sketches of any interesting arguments you come across - not necessarily at a publication level of quality, but detailed enough that you can forget about the details and reconstruct them later on.
dde5f039-74e3-4b34-adb8-9b34160542c0
trentmkelly/LessWrong-43k
LessWrong
Follow me on TikTok For more than five years, I've posted an average of more than 1× per week on Less Wrong. I've learned a lot from you nerds. I've made friends and found my community. Thank you for pointing out all the different ways I've been wrong. Less Wrong has changed my life for the better. But it's time to say goodbye. Let's rewind the clock back to October 19, 2019. I had just posted my 4ᵗʰ ever Less Wrong post Mediums Overpower Messages. Mediums Overpower Messages is about how different forms of communication train you to think differently. Writing publicly on this website and submitting my ideas to the Internet has improved my rationality far better and faster than talking to people in real life. All of the best thinkers I'm friends with are good writers too. It's not even close. I believe this relationship is causal; learning to write well teaches you to think well. If you are a regular reader of this website and haven't written on it, then I recommend you try writing original posts. I expect you'll learn much faster. It increases your serendipity surface area too. If you already do write on this website, then there is lots of alpha in writing different styles, such as dialogues, parables, games, research summaries, war reporting, fiction, fanfiction, fanfanfiction, and so on. I feel this website's moderators do a good job of selecting what gets frontpaged. For this reason, I'm proud of the book review I wrote which was kept off the frontpage due to being a political Molotov cocktail, even though it was topical and high quality. The post has 134 comments right now, despite never having hit the front page, which is evidence the moderators were correct in their decision. The topics I'm interested in have changed over the years. One of the earliest puzzles I explored was how to find out what ideas I have not considered, despite not having chosen not to consider them. My solution was to learn a new communication medium—a new art form. 
But I have exhausted the easiest ga
37590794-7481-4a6a-b14c-c79e10b789df
trentmkelly/LessWrong-43k
LessWrong
Center for Modern Rationality currently hiring: Executive assistants, Teachers, Research assistants, Consultants. Hi there,  We are still looking for: A second executive assistant -- preferably someone who lives in the SF bay area or is willing to relocate here, but remote work will also be considered.  Apply here. Teachers / curriculum designers.  This *does* need to be someone who can relocate to the SF bay area, and who has the legal ability to work in the US.  Apply here.  Especially apply if: * Rationality, or similar changes in your skill set, have made a big difference in your life; * You enjoy teaching, and helping others change their lives; you have strong interpersonal skills; * You have exceptional analytic skills, and want to help us figure out what sort of "rationality" and "rationality training" can actually work -- by being skeptical, trying things out, measuring outcomes, etc. Distant curriculum designers: as above, except that you don't need the interpersonal/teaching skills, and do need to be extra-exceptional in other respects.  Apply here. Programmers -- folks who can whip up simple prototype web apps quickly, to help with rationality training.  Apply here. Consultants -- folks who have relevant experience, and can spend a few hours offering suggestions for how to structure our workshops, or for how to structure rationality group more generally (after watching us teach, or by giving advice over the phone).   If you've run successful workshops for adults before, of any sort (e.g., on Italian cooking), consider applying to help us organize our program.  Apply here. If you live in the SF bay area, you are also very welcome to come on a Saturday and help us test out draft lessons (by being a participant as we present them): email stephenpcole at gmail dot com to be added to that email list. Do err on the side of applying; hope to hear from you soon! 
(These application forms take the place of the previous ones; but if you've applied with the previous one, you're still golden, I'm just a bit behind on processing the applications.)
631aeff9-9345-4cd0-b665-41b0818bac4e
trentmkelly/LessWrong-43k
LessWrong
Get It Done Now Epistemic Status: Reference A while ago, I read the book Getting Things Done. Like most productivity books and systems, it includes detailed advice that approximately no one will follow. Unlike most productivity books and systems, it has two highly valuable key concepts. The second alone justified the time cost of reading the book. Those principles are these: Keep a record of tasks you've decided to do. If you decide to eventually do a task that requires less than two minutes to do, that can efficiently be done right now, do it right now.  This wording is a refinement of the original concept of applying the two-minute rule during 'processing time' only. I think it's much better to use it any time doing the new task can be done efficiently – it's not waiting on anything, you have the necessary tools, it wouldn't interfere too much with your state, with a key short-term deadline, or the need to protect a large or important block of time, etc etc. Having this simple concept in your head – it's better, once you notice something that you need to do, to just do it now rather than add it to your stack of things to do – has saved me far more trouble than one might expect. Two minutes is a placeholder. Some people should use a lower threshold, or (more often) a higher one. The threshold should be adjusted based on the situation. The book also contains a detailed method of how to create and maintain the list of tasks. It seemed annoying and overly complex and not suited to the way I think, and I never gave it a real try. The basic principle of 'have a system that ensures such tasks are not forgotten' still seems very strong. The principle remains, and can be usefully extended further, which I plan to do in additional posts. But better to, by its own principles, write and get this posted now, so I can refer back to it.  
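The two principles translate almost directly into code. The sketch below is only an illustration of the triage logic, not anything from the book; the `Task` fields are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    estimated_minutes: float
    can_do_now: bool = True  # tools at hand, no important time block at risk, etc.

def triage(task, do_now, record, threshold_minutes=2):
    """Two-minute rule: do short, efficiently-doable tasks immediately;
    everything else goes into the trusted record of decided tasks."""
    if task.estimated_minutes <= threshold_minutes and task.can_do_now:
        do_now(task)
    else:
        record.append(task)
```

The threshold is a parameter precisely because two minutes is a placeholder that should be adjusted to the situation.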
34b389b2-e9a7-4b05-a354-2d79a35c01eb
trentmkelly/LessWrong-43k
LessWrong
[Event] Weekly Alignment Research Coffee Time (05/24) Just like every Monday now, researchers in AI Alignment are invited for a coffee time, to talk about their research and what they're into. Here is the link.  And here is the everytimezone time. Note that the link to the walled garden now only works for AF members. Anyone who wants to come but isn't an AF member needs to go through me. I'll broadly apply the following criteria for admission: * If working in an AI Alignment lab or funded for independent research, automatic admission * If recommended by an AF member, automatic admission * Otherwise, admission is at my discretion. I prefer not to admit people who might be interesting but who I'm not sure won't derail the conversation, because this is supposed to be the place where AI Alignment researchers can talk about their current research without having to explain everything. See you then!
528d2f6b-89b1-419a-a6d1-a50365382e3c
trentmkelly/LessWrong-43k
LessWrong
Moral Golems > Stop thinking of the project of ethics as "figure out which simple theory is True". > > Start instead thinking of ethics as a project of trying to piece together psychological models of this insanely complicated and messy thing, "human morality". > > Binding exceptionless commitments matter to understanding this complicated thing; folk concepts like courage and honesty and generosity matter; taboo tradeoffs and difficult attempts to quantify, aggregate, and weigh relative well-being matter. > > Stop picking a "side" and then losing all interest in the parts of human morality that aren't associated with your "side": these are all just parts of the stew, and we need to work hard to understand them and reconcile them just right, not sort ourselves into Team Virtue vs. Team Utility vs. Team Duty.[1] > > – Rob Bensinger Rob Bensinger writes that the search for true moral systems is quixotic, that ethics is so complicated that we cannot hope to capture it in a neat, rule-based system & that our project ought instead to be to construct, starting from observations about what humans think, do & feel, a conglomerate of systems, or maybe an amalgam, that together capture these observations as well as possible, though necessarily imperfectly. He writes this in a post about things he would like to change about contemporary philosophy, which seems a little off, in a way, because my impression is that philosophers have been mostly anti-realist during the past century. That is to say, they don't think moral claims (e.g. "it is wrong to murder") are things that can be true or false, or that moral claims don't even pretend to be such things. Though I suppose it is possible to be both anti-realist & non-syncretic. Anyway, Rob would probably say that it doesn't matter, because even if moral claims can be true, there is no way for us to find out which of them are true – morality is just too complicated. 
The point is that we should not get tunnel vision on seeking the one true mo
964237c1-dcb4-484c-ab4c-a0b719044dd6
trentmkelly/LessWrong-43k
LessWrong
October The First Is Too Late Clarity didn't work, trying mysterianism
95a01425-8de3-4a50-acd2-d4eefbe38448
trentmkelly/LessWrong-43k
LessWrong
Don’t expect your life partner to be better than your exes in more than one way: a mathematical model In this post, I analyze a multidimensional version of the dating problem (more commonly called the secretary problem). If you are familiar with this problem you can search for “For this post, I want to make a slight change to the model” and skip to that. Otherwise, here is an intro to the usual dating problem: Suppose you are a serially monogamous dater and your goal is to eventually find someone to marry. Then you will have to make decisions of the type: should I settle with my current partner or reject them in the hope of finding someone better? This is the setup for the dating problem. In this model, we assume there is some maximal number, n, of partners you can meet. Here “meet” can mean different things. You could for example interpret it to mean “going on a single date” or “been dating for a year”. Depending on the interpretation, n will have very different sizes. The model does not take into account that information is gradually revealed over time. As soon as you have “met” someone, you know how good they are compared to previous people you have met, but you have no further information about how good they are compared to people you have not met yet. You will meet the partners in completely random order. You have to reject one partner before you can meet another one, and you can never go back to a person you have rejected. How do you maximize the probability of settling with the best partner? The optimal strategy is to reject the k first people, where k is around n/e, and e is the base of the natural logarithm. After this exploration phase, you settle down with the next person you meet who is better than all previous partners. Two types of failures can happen when following this algorithm: 1. Your best partner is among the k first people you meet, so you rejected them. 2. You never meet your best partner. 
Instead, you settle down with someone else who is better than all the first k candidates but you met before you have explored enough to meet your bes
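The optimal-stopping claim above is easy to check by simulation. The sketch below is illustrative (not from the post): it estimates the success probability of "reject the first k candidates, then settle for the first one better than all of them", and for k ≈ n/e the estimate comes out near 1/e ≈ 0.37.

```python
import math
import random

def secretary_success_rate(n, k, trials=5000, seed=0):
    """Estimate P(ending up with the best of n candidates) when the
    first k are rejected and we then settle for the first candidate
    better than everyone seen so far (or the last one, if none is)."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        ranks = list(range(n))  # rank 0 is the best candidate
        rng.shuffle(ranks)
        best_rejected = min(ranks[:k]) if k > 0 else n
        chosen = next((r for r in ranks[k:] if r < best_rejected), ranks[-1])
        wins += (chosen == 0)
    return wins / trials
```

For example, `secretary_success_rate(100, round(100 / math.e))` lands close to 0.37, while exploring barely at all (k near 0) or far too long both do much worse, matching the two failure modes described above.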
74b447cf-b91d-45c9-ae84-edc7d866d7da
StampyAI/alignment-research-dataset/special_docs
Other
Extracting and Using Preference Information from the State of the World
Rohin Shah
Electrical Engineering and Computer Sciences, University of California at Berkeley
Technical Report No. UCB/EECS-2020-210
http://www2.eecs.berkeley.edu/Pubs/TechRpts/2020/EECS-2020-210.html
December 17, 2020

Copyright © 2020, by the author(s). All rights reserved. Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission.

A dissertation submitted in partial satisfaction of the requirements for the degree of Doctor of Philosophy in Computer Science in the Graduate Division of the University of California, Berkeley. Committee in charge: Professor Anca Dragan, Co-chair; Professor Stuart Russell, Co-chair; Professor Pieter Abbeel; Professor David Ahn. Fall 2020.

Abstract

Typically when learning about what people want and don't want, we look to human action as evidence: what reward they specify, how they perform a task, or what preferences they express can all provide useful information about what an agent should do. This is essential in order to build AI systems that do what we intend them to do. 
However, existing methods require a lot of expensive human feedback in order to learn even simple tasks. This dissertation argues that there is an additional source of information that is rather helpful: the state of the world. The key insight of this dissertation is that when a robot is deployed in an environment that humans have been acting in, the state of the environment is already optimized for what humans want, and is thus informative about human preferences. We formalize this setting by assuming that a human H has been acting in an environment for some time, and a robot R observes the final state produced. From this final state, R must infer as much as possible about H's reward function. We analyze this problem formulation theoretically and show that it is particularly well suited to inferring aspects of the state that should not be changed: exactly the aspects of the reward that H is likely to forget to specify. We develop an algorithm using dynamic programming for tabular environments, analogously to value iteration, and demonstrate its behavior on several simple environments. To scale to high-dimensional environments, we use function approximators judiciously to allow the various parts of our algorithm to be trained without needing to enumerate all possible states. Of course, there is no point in learning about H's reward function unless we use it to guide R's decision-making. While we could have R simply optimize the inferred reward, this suffers from a "status quo bias": the inferred reward is likely to strongly prefer the observed state, since by assumption it is already optimized for H's preferences. To get R to make changes to the environment, we will usually need to integrate the inferred reward with other sources of preference information. In order to support such reward combination, we use a model in which R must maximize an unknown reward function known only to H. 
Learning from the state of the world arises as an instrumentally useful behavior in such a setting, and can serve to form a prior belief over the reward function that can then be updated after further interaction with H.

For humanity, that we may navigate the challenging times ahead.

Contents

1 Introduction
1.1 Why is inaction a safe default?
1.2 Learning from the state of the world
1.3 Using the learned reward
1.4 Thesis overview
2 Background
2.1 Sequential decision-making
2.2 Reward misspecification
2.3 Minimizing side effects
2.4 Learning from human feedback
2.5 Maximum Causal Entropy Inverse Reinforcement Learning
3 Formalism: Defining a state of the world problem
3.1 Problem statement
3.2 Choice of problem parameters
4 Theory: What can we learn from the state of the world?
4.1 Formalism
4.2 Preserving the observed state
4.3 Features as a form of prior information
4.4 Information from irreversibility
4.5 Discussion
5 An exact algorithm for tabular environments
5.1 MCMC sampling
5.2 Reward Learning by Simulating the Past
5.3 Requirements
6 Function approximation for high-dimensional environments
6.1 Gradient as backwards-forwards consistency
6.2 Learning a latent MDP
6.3 Deep RLSP
7 Evaluation: Correcting misspecified rewards and imitating skills
7.1 Algorithm details
7.2 Conceptual environments
7.3 Skill learning with Deep RLSP
7.4 Investigating the prior distribution
7.5 Robustness to H's planning horizon
7.6 Conflicts between the inferred reward and H's desires
8 Using assistance instead of reward learning to integrate information
8.1 Reward learning
8.2 Assistance
8.3 Reward learning as two-phase communicative assistance
8.4 Qualitative improvements for general assistance
8.5 Discussion
8.6 Modeling the state of the world with assistance
9 Conclusion
9.1 Avenues for improvement
9.2 Applications
9.3 Closing thoughts
Bibliography

List of Figures

1.1 An environment which should not be disturbed: the house of cards
1.2 Learning preferences from the state of the world: the vase environment
4.1 An MDP where short-term reward conflicts with long-term reward
4.2 A simple deterministic chain MDP
4.3 Fire extinguisher MDP
6.1 Learning to imitate a balanced Cheetah
7.1 Behavior of RLSP on a suite of gridworld environments
7.2 Policy recovered by Deep RLSP for the balanced Cheetah
7.3 Robustness of RLSP to Alice's planning horizon
7.4 Comparing Additive and Bayesian methods of combining reward information
8.1 Desired behaviors from an AI assistant
8.2 The wormy-apples variant of the kitchen environment
8.3 The CakeOrPie variant of the kitchen environment
8.4 Learning curves for DQN on the CakeOrPie environment
8.5 The office variant of the kitchen environment

List of Tables

7.1 Performance of algorithms on the gridworld test suite
7.2 Results for imitation of MuJoCo robot policies

Acknowledgments

My journey through grad school has been quite a roller coaster, and I am indebted to many people for making it both enjoyable and successful. For even starting me on this path, I owe my thanks to Ras Bodik. He encouraged me to consider research as an opportunity to explore during undergrad, and later took me on as his PhD student to work on program synthesis.
While my dissertation has turned out to be on a completely different topic, I owe my success in this field to the research skills I learned working with Ras. During those early PhD years, I was especially lucky to be surrounded by the Bodik group: Mangpo Phothilimthana, Sarah Chasins, Julie Newcomb, Ali Koksal, Shaon Barman, and Emina Torlak. While it is only now that I really see how little I knew, and how much I could expect to improve over the course of my PhD, their advice was invaluable in helping me come to terms with my lack of productivity during that first year. After this year, Ras transferred to the University of Washington, and I followed him a year later. I am especially grateful to the UW PLSE lab for making me feel so welcome; Zach Tatlock, Anna Kornfeld Simpson, and James Wilcox deserve special thanks. It was around this time that I noticed the significant progress being made in artificial intelligence, and the notable lack of research on how to ensure its safety. I made the difficult decision to switch the focus of my PhD from program synthesis to AI safety, at the Center for Human-Compatible AI (CHAI). Thanks in particular to Ajeya Cotra for several conversations that fed into this decision. At CHAI, I was especially lucky to work with three advisors: Anca Dragan, Stuart Russell and Pieter Abbeel. While I'm not going to cover the wealth of wisdom they have imparted to me, I want to highlight one thing they did that I am especially thankful for: they gave me a large amount of freedom in deciding what I should spend my time on. During my first year at CHAI, I spent most of my time simply reading papers and getting up to speed in my new field of study; at the end of the year I started a newsletter that consumed over ten hours of my time every week. In today's culture of "publish or perish", it feels quite unusual for me to have had this affordance at all. The greatest benefit of CHAI was the people I got to work with. 
Thanks in particular to Daniel Filan, Adam Gleave, Michael Dennis, and Dylan Hadfield-Menell for shaping many of my views about AI safety. I would also like to thank Alex Turner, Andrew Critch, Jaime Fisac, Lawrence Chan, Andreea Bobu, Smitha Milli, and Cody Wild for fascinating conversations on the topic. And of course I am indebted to many collaborators, including Ben Cottier, Cynthia Chen, David Lindner, Dmitrii Krasheninnikov, Jordan Alexander, Mark Ho, Micah Carroll, Neel Alex, Noah Gundotra, Paul Knott, Pedro Freire, Rachel Freedman, Richard Ngo, Sam Toyer, Scott Emmons, Sören Mindermann, Steven Wang and Sumith Kulal. Of course, I don't just owe thanks to those who worked specifically on research with me. I am also indebted to the excellent administrative staff, who made university bureaucracy much, much easier to navigate. Rosie Campbell, Martin Fukui, Lydia Raya and Jean Nguyen were particularly helpful in this regard, but I must also thank Angie Abbatecola, Lena Lau-Stewart, Shirley Salanio, Audrey Sillers, Susanne Kauer, Laura Greenfield, and Logan Baldini. This PhD would have been so much harder to do without the support of many friends (beyond those I was working with). While it would be quite a challenge to list them all, I want to specifically name Andrew Huang, Anisha Sensa Mauze, Bill Zito, Caroline Jeanmaire, Chigozie Nri, Daniel Ziegler, Davis Foote, Dmitriy Khripkov, Eli Marrone, Ellen Anderson, Jen Chung, Jonathan Mustin, Joseph Kijewski, Nathan Mandi, Ozzie Gooen, Patrick Brinich-Langlois, Richard Yannow, Sean Kelly, Sindy Li, Valmik Prabhu, and Veronica Boyce. And last but certainly not least, I must thank my parents Lena and Monish, my brother Nihal, and my partner Lynette; they have been remarkably supportive, even as my expected graduation date slipped further and further into the future. 
I cannot say that they were all thrilled at my choice to pursue a PhD, which makes me all the more grateful that I can nevertheless count on them to support me anyway.

Chapter 1. Introduction

Traditional computer programs are instructions on how to perform a particular task: to compute a factorial, we tell the machine to enumerate the integers from 1 to n, and multiply them together. However, we do not know how to mechanically perform more challenging tasks like translation. The field of artificial intelligence raises the level of abstraction so that we simply show what the task is, and let the machine figure out how to do it. For instance, we present pairs of sentences which provide examples of translation, and let the machine determine how to translate well. Unfortunately, as we apply our techniques to increasingly complex tasks, even specifying the task becomes difficult. When we specify reward functions for reinforcement learning by hand, the resulting agents often "game" their reward function by finding solutions that technically achieve high reward without doing what the designer intended. In a particularly famous example, an agent trained to maximize its score in the boat racing game CoastRunners got stuck in a loop collecting high-scoring speed boosts instead of winning the game (Clark and Amodei, 2016). Both Lehman et al. (2018) and Krakovna (2018) collect several examples of similar behavior from a variety of sources. As the environment becomes more complex, it becomes important not just to specify what should be done, but also what should not be changed (McCarthy and Hayes, 1981). 
As put by Stuart Russell, "A system that is optimizing a function of n variables, where the objective depends on a subset of size k < n, will often set the remaining unconstrained variables to extreme values; if one of those unconstrained variables is actually something we care about, the solution found may be highly undesirable."

Chapter 4. Theory: What can we learn from the state of the world?

Proposition 2 (convergence theorem). For an ergodic Markov chain with stationary distribution μ, there exist constants C > 0 and α ∈ (0, 1) such that

    max_{s ∈ S} || P^t(s, ·) − μ ||_TV ≤ C α^t.

While the theorem considers deterministic initial conditions where we start at a particular state s, the bound also applies to stochastic initial conditions, since they are convex combinations of the deterministic cases.

Ergodic MDPs. Given an MDP M = ⟨S, A_H, T, R, P_S, γ⟩ and a policy π_H, we say that π_H induces a Markov chain in M given by:

    P(s' | s) = Σ_{a_H ∈ A_H} π_H(a_H | s) T(s' | s, a_H).

Intuitively, if we observe an agent with policy π_H acting in M, the sequence of states visited by the agent behaves like the Markov chain induced by π_H in M. An MDP M is ergodic if for all possible policies π_H, the Markov chain induced by π_H in M is ergodic. For an ergodic MDP, the stationary distribution of states can never depend on the initial state distribution P_S, regardless of the choice of π_H. We write μ^{π_H}_∞ to denote the stationary distribution of the resulting Markov chain, which is the limit of the t-step visitation distribution, defined recursively with μ^{π_H}_0 = P_S and

    μ^{π_H}_t(s) = Σ_{s' ∈ S} P(s | s') μ^{π_H}_{t−1}(s').

Learning from an optimal human. In this chapter, we will assume that H pursues her goals optimally, that is, π_H is an optimal policy given her true reward function R_θ. When two or more actions are equally valuable, we assume H chooses uniformly at random between these actions. We can formalize this using the Bellman equations (which are guaranteed to have a unique solution):

    π_H(a_H | s; θ) ∝ 1[ a_H ∈ argmax_{a_H' ∈ A_H} Q(s, a_H'; θ) ]
    Q(s, a_H; θ) = E_{s' ∼ T(· | s, a_H)} [ R_θ(s, a_H, s') + γ V(s'; θ) ]
    V(s; θ) = max_{a_H ∈ A_H} Q(s, a_H; θ)

We denote this policy by π_opt(θ) (read as "the optimal policy for θ"), and thus its stationary distribution is μ^{π_opt(θ)}_∞. Now, recall that our goal is to learn from the state of the world. For clarity, we will denote the observed state as s_observed.
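The total variation convergence bound can be seen concretely on a tiny chain. The sketch below is my illustration, not the dissertation's code: for a two-state ergodic chain it tracks the distance of the t-step distribution from the stationary distribution, which shrinks by a constant factor per step (here 0.3, the chain's second eigenvalue).

```python
def matvec(mu, P):
    """One step of the chain: (mu P)_j = sum_i mu_i P_ij."""
    n = len(P)
    return [sum(mu[i] * P[i][j] for i in range(n)) for j in range(n)]

def tv_distance(p, q):
    """Total variation distance between two distributions."""
    return 0.5 * sum(abs(a - b) for a, b in zip(p, q))

# An ergodic two-state chain; its stationary distribution (mu P = mu) is (2/7, 5/7).
P = [[0.5, 0.5],
     [0.2, 0.8]]
mu = [2 / 7, 5 / 7]

# Track || P^t(s=0, .) - mu ||_TV as t grows: it shrinks geometrically,
# matching the C * alpha^t bound with alpha = 0.3 for this chain.
dist = [1.0, 0.0]  # start deterministically in state s = 0
dists = []
for t in range(10):
    dist = matvec(dist, P)
    dists.append(tv_distance(dist, mu))
```

The successive ratios dists[t+1] / dists[t] all equal 0.3 here, which is the geometric decay the convergence theorem guarantees.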
We now want to compute the posterior $P(\theta|s_{observed})$, which is given by Equation 3.2:
$$P(\theta|s_{observed}) \propto P(\theta)\, P(s_{observed}|\theta) = P(\theta)\, \pi^H_{opt,\infty}(s_{observed}; \theta).$$
In other words, the likelihood ratio is given by $\pi^H_{opt,\infty}(s_{observed}; \theta)$, and so any information that we can learn comes from the variation of this quantity with $\theta$. For this reason, $\pi^H_{opt,\infty}(s_{observed}; \theta)$ will be the primary object of study in this chapter.

4.2 Preserving the observed state

We argued in Chapter 1 that upon observing some optimized state, our first instinct would be to "do nothing" and leave the state as is. There is a fully general argument for this point: if all we know is that $H$ optimized the environment to lead to state $s_{observed}$, then clearly $H$ expects that $s_{observed}$ is a high-value state, and so perhaps we should try not to change it.

Let us suppose there is a reward function that rewards being in the state $s_{observed}$, and never rewards any other state. Intuitively, the "goal" of this reward function would be to visit $s_{observed}$ as often as possible. As long as $\gamma$ is sufficiently large, it will care primarily about how much we visit $s_{observed}$ in the stationary distribution. So, the optimal policy for this reward is the policy that maximizes the visitation of $s_{observed}$ at the stationary distribution. We formalize this below:

Proposition 3. Suppose there is some $\theta_{preserve}$ such that $R_{\theta_{preserve}}(s, a, s') = \mathbb{1}[s = s_{observed}]$. Then there exists some $\gamma_0 < 1$ such that if $\gamma > \gamma_0$ then $\theta_{preserve}$ maximizes $\pi^H_{opt,\infty}(s_{observed}; \theta)$.

Proof Sketch. If there is some other $\theta$ that achieves higher likelihood than $\theta_{preserve}$, it must visit $s_{observed}$ more often in the limit as $t \to \infty$. Then $\pi^H_{opt}(\theta)$ must achieve higher reward according to $\theta_{preserve}$ than $\pi^H_{opt}(\theta_{preserve})$ in the limit as $t \to \infty$, despite the fact that $\pi^H_{opt}(\theta_{preserve})$ is optimal for $\theta_{preserve}$. The only way this can hold is if $\pi^H_{opt}(\theta_{preserve})$ visited $s_{observed}$ early at the cost of failing to visit $s_{observed}$ later. This allows us to bound $\gamma$.

Proof.
Note that for the reward $R_{\theta_{preserve}}$, the expected reward is simply the sum of the discounted probabilities of being in $s_{observed}$, and so $ER_{\theta_{preserve}}(\pi^H) = \sum_{t=0}^{\infty} \gamma^t\, \pi^H_t(s_{observed})$.

Consider two arbitrary policies $\pi^H_1$ and $\pi^H_2$. Assume that they satisfy the following two conditions:
1. $\pi^H_1$ has lower likelihood: $\pi^H_{1,\infty}(s_{observed}) < \pi^H_{2,\infty}(s_{observed})$.
2. $\pi^H_1$ has higher expected reward: $ER_{\theta_{preserve}}(\pi^H_1) \ge ER_{\theta_{preserve}}(\pi^H_2)$.

Define $p$ to be the lower of the two stationary distribution probabilities, and $\delta$ to be half the difference, so that we have:
$$\pi^H_{1,\infty}(s_{observed}) = p, \qquad \pi^H_{2,\infty}(s_{observed}) = p + 2\delta.$$
By the application of the convergence theorem (Proposition 2), we can find some time $T$ such that for any $t \ge T$:
1. $\pi^H_{1,t}(s_{observed}) \le p + \frac{\delta}{2}$
2. $\pi^H_{2,t}(s_{observed}) \ge p + \frac{3\delta}{2}$

Putting these together, we get:
$$\forall t \ge T: \quad \pi^H_{2,t}(s_{observed}) - \pi^H_{1,t}(s_{observed}) \ge \delta.$$
We now use the inequality from the second condition and simplify:
$$ER_{\theta_{preserve}}(\pi^H_2) - ER_{\theta_{preserve}}(\pi^H_1) \le 0$$
$$\sum_{t=0}^{\infty} \gamma^t \left(\pi^H_{2,t}(s_{observed}) - \pi^H_{1,t}(s_{observed})\right) \le 0$$
$$\left[\sum_{t=0}^{T-1} \gamma^t \left(\pi^H_{2,t}(s_{observed}) - \pi^H_{1,t}(s_{observed})\right)\right] + \left[\sum_{t=T}^{\infty} \gamma^t \left(\pi^H_{2,t}(s_{observed}) - \pi^H_{1,t}(s_{observed})\right)\right] \le 0$$
For the first terms where $t < T$, each difference of probabilities is at least $-1$, so the first bracket is at least $-T$; each of the later terms is at least $\delta \gamma^t$, so the second bracket is at least $\delta \gamma^T / (1 - \gamma)$, which grows without bound as $\gamma \to 1$. Hence there is a threshold $\gamma\text{-bound}(\pi^H_1, \pi^H_2) < 1$ such that if $\gamma > \gamma\text{-bound}(\pi^H_1, \pi^H_2)$, then it is not possible for $\pi^H_1, \pi^H_2$ to satisfy the two conditions above. Then, we can define $\gamma_0 \triangleq \max_{\pi^H_1, \pi^H_2} \gamma\text{-bound}(\pi^H_1, \pi^H_2)$. Note that every $\gamma$-bound is less than 1. Since $S$ and $A^H$ are finite, so too is the space of deterministic policies, and so we also have $\gamma_0 < 1$, as required.

Now suppose $\gamma > \gamma_0$, and consider an arbitrary policy $\pi^H$. For the pair $\pi^H_{opt}(\theta_{preserve}), \pi^H$, both conditions cannot be satisfied. However, since $\pi^H_{opt}(\theta_{preserve})$ maximizes $ER_{\theta_{preserve}}$, the second condition must be satisfied. Thus, the first condition cannot be satisfied. Therefore $\pi^H_{opt}(\theta_{preserve})$ maximizes $\pi^H_\infty(s_{observed})$.

The above theorem only applies when $\gamma$ is sufficiently large, effectively because only then are we sufficiently confident that $H$ cared enough about $s_{observed}$ that we can infer that it must be a high-value state.
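The likelihood and posterior described above can be computed numerically for small chains. The sketch below uses two hypothetical two-state chains standing in for the Markov chains induced by the optimal policies of two candidate reward parameters, computes their stationary distributions by power iteration, and applies Bayes' rule:

```python
import numpy as np

def stationary_distribution(P, iters=10_000):
    """Stationary distribution of a row-stochastic matrix P by power iteration."""
    d = np.full(P.shape[0], 1.0 / P.shape[0])
    for _ in range(iters):
        d = d @ P
    return d

# Hypothetical induced chains for two candidate reward parameters theta_a and
# theta_b: under theta_a the agent mostly stays in state 1, under theta_b in state 0.
P_a = np.array([[0.1, 0.9], [0.1, 0.9]])
P_b = np.array([[0.9, 0.1], [0.9, 0.1]])
prior = np.array([0.5, 0.5])
s_observed = 1
likelihood = np.array([stationary_distribution(P)[s_observed] for P in (P_a, P_b)])
posterior = prior * likelihood          # P(theta | s_observed), up to normalization
posterior /= posterior.sum()
```

Observing state 1 shifts the posterior strongly toward theta_a, the parameter whose induced chain makes state 1 likely at stationarity.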
However, even when $\gamma$ is small, we would find a policy that prioritizes reaching $s_{observed}$ when possible. Is it possible that despite its focus on the near term, such a policy would nonetheless maximize the probability of $s_{observed}$ in the stationary distribution? It turns out the answer is no: the agent may sacrifice the long-term reward from the stationary distribution in order to get a more enticing short-term reward.

Figure 4.1: An MDP in which a greedy goal-seeking agent will sacrifice long-term reward in pursuit of short-term reward. The blue $A$ is the important state to analyze, and the red $B$ is the observed state $s_{observed}$.

Consider the MDP in Figure 4.1, in which the observed state is $s_{observed} = B$. Every time the agent takes an action, there is an $\epsilon = 0.04$ chance that the next state is chosen uniformly at random from all states. (This is necessary in order to ensure that the MDP is ergodic, as we have assumed in this chapter.) In this MDP, the only relevant decision for a policy is whether to take action $a_1$ or $a_2$ in state $A$, which we will refer to as $\pi_1$ and $\pi_2$ respectively. The stationary distribution of $\pi_1$ is $\{A: 0.01, B: 0.311, C: 0.37, D: 0.309\}$, while that of $\pi_2$ is $\{A: 0.01, B: 0.368, C: 0.25, D: 0.372\}$. The crucial feature is just that $\pi_1$ visits $C$ more while $\pi_2$ visits $B$ and $D$ more, as one would expect looking at Figure 4.1.

We assume the reward space is parameterized such that $R_{\theta_i}$ assigns 1 to state $i$ and 0 to all other states. For this MDP, $\theta_{preserve} = \theta_B$. For this reward, we face a choice: whether to take $a_1$ in the hopes of getting to state $B$ immediately, or to take $a_2$ to guarantee that we eventually get to state $B$. Intuitively, if $\gamma$ is sufficiently low, then it is better to take $a_1$, because the uncertain chance of reaching $B$ immediately is better than being forced to wait a full timestep before being able to reach $B$. For example,
when $\gamma = 0.1$, the value of $\pi_1$ in each state is about $\{A: 0.0497, B: 1.0106, C: 0.0013, D: 0.0982\}$, whereas the value of $\pi_2$ in each state is about $\{A: 0.0105, B: 1.0105, C: 0.0012, D: 0.0981\}$. $\pi_1$ strictly dominates $\pi_2$ (where the difference is most pronounced in $A$, since that is where the two policies differ). Meanwhile, the optimal policy for $\theta_D$ is clearly $\pi_2$, since $a_2$ goes directly to $D$ and preserves the ability to keep going to $D$, whereas $a_1$ does not go to $D$ and has some chance of severely curtailing the agent's ability to go to $D$. Thus at $\gamma = 0.1$, we have $\pi^H_{opt,\infty}(s_{observed}; \theta_D) > \pi^H_{opt,\infty}(s_{observed}; \theta_{preserve})$, and $\theta_{preserve}$ is not a maximizer of $\pi^H_{opt,\infty}(s_{observed}; \theta)$. Thus, the requirement in Proposition 3 that $\gamma$ be sufficiently high cannot be completely eliminated.

This counterexample relied on our ability to set up the dynamics to offer the agent a "lottery" (action $a_1$) that could severely curtail its ability to reach $s_{observed}$. What if we remove the ability to set up such lotteries by requiring that the environment be deterministic? One issue is that deterministic environments are typically not ergodic. However, we can apply the same trick as in our counterexample above, where to every action we add an $\epsilon$ probability that the new state is chosen uniformly at random. Specifically, given an MDP $\mathcal{M}$, define the $\epsilon$-randomized version of $\mathcal{M}$ to be the same as $\mathcal{M}$ except that the transition dynamics are modified to
$$T'(s, a^H, s') = (1 - \epsilon)\, T(s, a^H, s') + \frac{\epsilon}{|S|}.$$
Then, an MDP $\mathcal{M}$ is $\epsilon$-deterministic if there exists an MDP $\mathcal{M}'$ with a deterministic transition function $T_d$, such that $\mathcal{M}$ is the $\epsilon$-randomized version of $\mathcal{M}'$. We will sometimes abuse notation and write $T_d(s, a^H)$ to denote the unique state $s'$ such that $T_d(s, a^H, s') = 1$.

As long as the reward does not depend on the next state, the optimal policy for an $\epsilon$-deterministic MDP can be found by solving an associated fully deterministic MDP:

Lemma 4. Let $\mathcal{M} = \langle S, A^H, T, R, P_S, \gamma \rangle$ be an $\epsilon$-deterministic MDP with deterministic transitions $T_d$, where the reward does not depend on the next state and is written as $R(s, a^H)$.
Then a policy $\pi^H$ is optimal for $\mathcal{M}$ iff it is optimal for $\mathcal{M}' = \langle S, A^H, T_d, R, P_S, \gamma_e \rangle$, where the effective discount $\gamma_e = (1 - \epsilon)\gamma$.

Proof. An optimal policy can be computed from the optimal Q-function, which can be calculated by value iteration, in which we initialize $V_0(s) = 0$, and then define the recurrence:
$$Q_t(s, a^H) = R(s, a^H) + \gamma \sum_{s' \in S} T(s, a^H, s')\, V_{t-1}(s') = R(s, a^H) + \gamma(1 - \epsilon)\, V_{t-1}(T_d(s, a^H)) + \frac{\gamma\epsilon}{|S|} \sum_{s' \in S} V_{t-1}(s')$$
$$V_t(s) = \max_{a^H \in A^H} Q_t(s, a^H).$$
Note that optimal policies for a Q-function are unchanged if we subtract a constant from $Q_t$. This allows us to drop the last term from $Q_t$. Using $\gamma_e = (1 - \epsilon)\gamma$, we have:
$$Q_t(s, a^H) = R(s, a^H) + \gamma_e\, V_{t-1}(T_d(s, a^H)).$$
This is equivalent to the Q-value backup for a fully deterministic MDP with dynamics $T_d$ with effective discount rate $\gamma_e$.

In an $\epsilon$-deterministic MDP, we once again find that the likelihood $\pi^H_{opt,\infty}(s_{observed}; \theta)$ is highest for $\theta_{preserve}$:

Proposition 5. Suppose there is some $\theta_{preserve}$ for $\mathcal{M}$ as in Proposition 3, and assume that $\mathcal{M}$ is $\epsilon$-deterministic with $0 < \epsilon < 1$. Then $\theta_{preserve}$ maximizes $\pi^H_{opt,\infty}(s_{observed}; \theta)$.

Proof Sketch. In an $\epsilon$-deterministic environment, $\pi^H_{opt}(\theta_{preserve})$ is always traversing the shortest path in $T_d$ to $s_{observed}$, which means that at every timestep $t$ it will be maximizing the average visitation, $\frac{1}{t+1} \sum_{t'=0}^{t} \pi^H_{opt,t'}(s_{observed}; \theta)$, which in the limit converges to the stationary distribution $\pi^H_{opt,\infty}(s_{observed}; \theta)$.

Proof. First, we show that $\pi^H_{opt}(\theta_{preserve})$ always chooses an action $a$ that is on a shortest path in $T_d$ to $s_{observed}$. By Lemma 4, we know that $\pi^H_{opt}(\theta_{preserve})$ is an optimal policy for the Q-function computed by the backups:
$$Q_t(s, a^H) = \mathbb{1}[s = s_{observed}] + \gamma_e\, V_{t-1}(T_d(s, a^H))$$
$$V_t(s) = \max_{a^H \in A^H} Q_t(s, a^H).$$
This is equivalent to the Q-value backup for a fully deterministic MDP with dynamics $T_d$ with effective discount rate $\gamma_e$. It is clear that the optimal policy in such an MDP is to follow the shortest path to $s_{observed}$.
To elaborate, let $L(s)$ be the length of the shortest path from $s$ to $s_{observed}$ in $T_d$ (or $\infty$ if no such path exists), with $L(s_{observed}) \triangleq 0$. In addition, define $L_0$ to be the length of the shortest (non-empty) path from $s_{observed}$ to itself. Then, we can prove by induction using the previous recurrence relations that:
$$V_t(s) = \begin{cases} \gamma_e^{L(s)} \cdot \dfrac{1 - \gamma_e^{\lceil (t - L(s))/L_0 \rceil L_0}}{1 - \gamma_e^{L_0}} & t > L(s) \\ 0 & \text{else.} \end{cases}$$
It is easy to see that either all actions are equally good (the else case above), or the best action is the one which progresses on a shortest path (so that in the next state $L(s)$ has decreased, or if we are in $s_{observed}$ then we are following a shortest path back to $s_{observed}$).

We have so far shown that $\pi^H_{opt}(\theta_{preserve})$ always chooses an action $a^H$ that is on a shortest path in $T_d$ to $s_{observed}$. Since the only other effect of any action besides traversing paths in $T_d$ is to teleport the agent randomly, which the agent cannot control, we only need to analyze how paths in $T_d$ are traversed. Clearly, always traversing the shortest path to $s_{observed}$ will maximize the expected number of times that $s_{observed}$ is visited, and so we have:
$$\pi^H_{opt}(\theta_{preserve}) \in \operatorname*{argmax}_{\pi^H} \sum_{t'=0}^{t} \pi^H_{t'}(s_{observed}) \quad \text{for finite } t$$
$$\pi^H_{opt}(\theta_{preserve}) \in \operatorname*{argmax}_{\pi^H} \lim_{t\to\infty} \frac{1}{t+1} \sum_{t'=0}^{t} \pi^H_{t'}(s_{observed})$$
$$\pi^H_{opt}(\theta_{preserve}) \in \operatorname*{argmax}_{\pi^H} \lim_{t\to\infty} \pi^H_t(s_{observed}) \quad \text{(limit of average is limit of sequence)}$$
$$\pi^H_{opt}(\theta_{preserve}) \in \operatorname*{argmax}_{\pi^H} \pi^H_\infty(s_{observed})$$
$$\theta_{preserve} \in \operatorname*{argmax}_{\theta}\, \pi^H_{opt,\infty}(s_{observed}; \theta)$$

4.3 Features as a form of prior information

Overall, it seems quite likely that $\theta_{preserve}$ will have a high likelihood ratio (if it exists). We have derived two distinct sufficient conditions: first, when $H$ is modeled as sufficiently farsighted (that is, with sufficiently high $\gamma$), and second, when the environment is $\epsilon$-deterministic.
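The reduction in Lemma 4 is easy to check numerically: solving the $\epsilon$-randomized MDP with discount $\gamma$ should yield the same greedy policy as solving the underlying deterministic MDP with effective discount $\gamma_e = (1 - \epsilon)\gamma$. A sketch on a hypothetical three-state chain (the environment is an illustration, not one from the text):

```python
import numpy as np

def q_values(T, R, gamma, iters=2000):
    """Q-value iteration for a reward R[s, a] that ignores the next state."""
    V = np.zeros(T.shape[0])
    for _ in range(iters):
        Q = R + gamma * (T @ V)
        V = Q.max(axis=1)
    return Q

# Hypothetical 3-state deterministic chain: action 1 advances toward state 2,
# which yields reward 1; action 0 stays put.
S, A, eps, gamma = 3, 2, 0.1, 0.9
Td = np.zeros((S, A, S))
for s in range(S):
    Td[s, 0, s] = 1.0
    Td[s, 1, min(s + 1, S - 1)] = 1.0
R = np.zeros((S, A))
R[2, :] = 1.0
T_eps = (1 - eps) * Td + eps / S                             # eps-randomized dynamics
pi_det = q_values(Td, R, (1 - eps) * gamma).argmax(axis=1)   # effective discount
pi_eps = q_values(T_eps, R, gamma).argmax(axis=1)
```

Both solvers recover the same greedy policy, as the lemma predicts.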
This is in some sense a disappointing result: it suggests that it is hard to learn anything more sophisticated than "preserve the observed state", even though intuitively it seems like we should be able to infer more from the state of the world. We view this as having similar implications as the no-free-lunch theorem of machine learning: just as we need to assume some prior simplicity of the environment in order for machine learning to be feasible at all, we similarly need to assume some prior information about $R_{\theta^*}$ in order to make non-trivial inferences about the reward that $H$ is optimizing.

We will use a particularly simple form of prior information: we will assume that we have access to a feature function $f: S \to \mathbb{R}^F$ that describes each state by a vector of real-valued features of that state. We will use $f_i: S \to \mathbb{R}$ to denote the $i$-th feature function, so that $f_i(s) = [f(s)]_i$. Just as we have visitation distributions for states, we can have feature counts:
$$F^{\pi^H}_t = \sum_{t'=0}^{t} \sum_{s \in S} \pi^H_{t'}(s)\, f(s).$$
The reward $R_\theta$ is then assumed to be of the form $R_\theta(s, a^H, s') = g(f(s))$ for some simple function $g$. Unless otherwise specified, we will use rewards that are linear over the features, that is, $R_\theta(s, a^H, s') = \theta^T f(s)$. Note that for a constant $c > 0$, $R_\theta$ and $R_{c\theta}$ have the same optimal policy; as a result we may sometimes assume that $\theta$ is normalized.

Once we use features, we can formalize an important fact about learning from the state of the world: if there was no way that $H$ could have affected a particular feature, then we can't infer anything about her preferences over that feature. Note that this is not specific to learning from the state of the world: we could prove a similar theorem for inverse reinforcement learning as well.

Proposition 6. Assume that for a specific feature $f_i$ and any timestep $t$, the feature count $(F^{\pi^H}_t)_i$ is independent of $\pi^H$. Then $\pi^H_{opt,\infty}(s_{observed}; \theta)$ is invariant to the value of $\theta_i$.

Proof Sketch.
Since there is no policy that can affect the value of $f_i$, changing the value of $\theta_i$ has no effect on the optimal policy, and so cannot have an effect on $\pi^H_{opt,\infty}(s_{observed}; \theta)$.

Proof. Consider some arbitrary $\theta$, and let $\theta'$ be the same as $\theta$, except $\theta_i$ has been increased by an arbitrary constant $c$ (which may be negative). Then, for any policy $\pi^H$:
$$ER_{\theta'}(\pi^H) = \sum_{t=0}^{\infty} \gamma^t \sum_{s \in S} \pi^H_t(s)\, R_{\theta'}(s) = \sum_{t=0}^{\infty} \gamma^t \sum_{s \in S} \pi^H_t(s)\left(\theta^T f(s) + c\, f_i(s)\right) = K + \sum_{t=0}^{\infty} \gamma^t \sum_{s \in S} \pi^H_t(s)\, \theta^T f(s) = K + ER_\theta(\pi^H),$$
where $K \triangleq c \sum_{t=0}^{\infty} \gamma^t \sum_{s \in S} \pi^H_t(s)\, f_i(s)$. Here we have used the fact that $(F^{\pi^H}_t)_i$ is independent of $\pi^H$ (and hence so is each per-timestep sum $\sum_s \pi^H_t(s) f_i(s)$) to conclude that $K$ is also independent of $\pi^H$. Thus, for any policy $\pi^H$, the expected reward of the policy under $\theta'$ only changes by a constant. This can never change the ordering over policies, and so $\pi^H_{opt}(\theta) = \pi^H_{opt}(\theta')$; that is, optimal policies are invariant to changes in $\theta_i$. Thus, $\pi^H_{opt,\infty}(s_{observed}; \theta)$ is also invariant to changes in $\theta_i$.

4.4 Information from irreversibility

Recall our motivating example of an intact vase: since a vase that is broken can never be repaired, the fact that it is still intact in the observed state is strong evidence that $H$ preferred that the vase stay intact. This is an instance of a general pattern: if you observe that some irreversible action was not taken, this is a strong signal that the consequences of that action are particularly unwanted. If we have a feature that tracks whether the irreversible transition has been taken, then presumably the reward weight of that feature should indicate that the transition is not to be taken. We formalize this below:

Definition 2 (Binary feature). A feature $f_i$ is binary if it only takes on the values 0 or 1, that is, $\forall s,\ f_i(s) \in \{0, 1\}$.

Definition 3 (Irreversible feature). A binary feature $f_i$ is irreversible in $\mathcal{M}$ if it satisfies the following conditions:
1. Initially off: $\forall s \in \operatorname{Support}(P_S),\ f_i(s) = 0$.
2. Can be turned on: $\exists \pi^H, t: \sum_{s} \pi^H_t(s)\, f_i(s) > 0$.
3.
Cannot be turned off: $\forall s: \left[f_i(s) = 1 \implies \forall a\, \forall s' \in \operatorname{Support}(T(\cdot|s, a)),\ f_i(s') = 1\right]$.

For example, the "broken vase" feature in the vase environment of Figure 1.2 is an irreversible feature. Note that in an ergodic MDP, there cannot be irreversible features, since there is always a positive probability of eventually transitioning back to the initial state. However, we can instead consider non-ergodic MDPs $\mathcal{M}$, which can have irreversible features, and then ask what we would infer about the reward in an $\epsilon$-randomized version of $\mathcal{M}$ (which is always ergodic, if $\epsilon > 0$).

Figure 4.2: An $\epsilon$-deterministic chain MDP in which the agent has no control over the irreversible blue feature. $B$ is the observed state $s_{observed}$.

Can we get a general result saying that if we observe that an irreversible feature is not turned on, then we can infer that the feature should not be turned on? Unfortunately the answer is no. Consider the $\epsilon$-deterministic MDP in Figure 4.2, in which there is a feature that indicates whether or not the state is blue (that is, whether the state is $C$ or not). This is a binary, irreversible feature in the deterministic transitions $T_d$, and we do observe that the agent is in a state where the feature is not yet on. However, the agent has no control over anything, and so by Proposition 6 nothing can be inferred about the weight on this feature. The core issue is that we can only make strong inferences about irreversible events if we believe that $H$ controlled whether or not the irreversible event happened. In the chain MDP of Figure 4.2, the irreversible event is inevitable, and can only be undone by the $\epsilon$ chance of randomly transitioning to a randomly chosen state.

It should be noted that this is not an artifact of $\epsilon$-randomization. If we did not have the ergodic restriction, then we could still have an MDP in which the agent's first action randomly sends it into one of two halves of the state space.
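The conditions of Definition 3 can be checked mechanically for a tabular MDP. A sketch, where condition 2 is approximated by forward reachability of a state with $f_i = 1$, and the environment is a hypothetical two-state vase MDP (not the one from Figure 1.2):

```python
import numpy as np

def is_irreversible(T, f_i, start_support):
    """Check Definition 3 for a binary feature f_i (one 0/1 value per state).

    Condition 2 is approximated by reachability: some action sequence reaches
    a state with f_i = 1 with positive probability."""
    S, A = T.shape[0], T.shape[1]
    initially_off = all(f_i[s] == 0 for s in start_support)
    reachable = set(int(s) for s in start_support)
    frontier = list(reachable)
    while frontier:                       # forward reachability search
        s = frontier.pop()
        for a in range(A):
            for s2 in np.nonzero(T[s, a])[0]:
                if int(s2) not in reachable:
                    reachable.add(int(s2))
                    frontier.append(int(s2))
    can_turn_on = any(f_i[s] == 1 for s in reachable)
    never_off = all(f_i[s2] == 1
                    for s in range(S) if f_i[s] == 1
                    for a in range(A)
                    for s2 in np.nonzero(T[s, a])[0])
    return initially_off and can_turn_on and never_off

# Hypothetical 2-state vase MDP: state 1 means "vase broken" and is absorbing.
T = np.zeros((2, 2, 2))
T[0, 0, 0] = 1.0      # careful action: vase stays intact
T[0, 1, 1] = 1.0      # careless action: vase breaks
T[1, :, 1] = 1.0      # a broken vase stays broken
broken = np.array([0, 1])
```

The "broken" feature passes all three conditions, while a feature that is already on in the initial state does not.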
In one half the agent is forced to turn on the irreversible feature $f_1$, and in the other half it is forced to turn on the irreversible feature $f_2$. The observed state $s_{observed}$ will have exactly one of the two irreversible features turned on, but again by Proposition 6 we cannot infer anything about either of the features.

Figure 4.3: Fire extinguisher MDP. At the start, the agent can choose whether to cook food or watch TV. Watching TV always succeeds. Cooking usually succeeds in making food, but also has a 10% chance of starting a fire. At any point the agent may use the fire extinguisher, irreversibly covering the house in foam but extinguishing any fires present. The agent may always take a noop, and if it takes an action that is not applicable to the given state, that is treated equivalently to a noop.

What if we added an assumption that the irreversible transition was "controllable"? For example, we could assume that in every state $s$ where $f_i(s) = 0$, there is at least one action that guarantees that the next state $s'$ also satisfies $f_i(s') = 0$. Could we then say that the irreversible feature's weight can be made arbitrarily negative? It turns out that even with this assumption, we cannot conclude that the irreversible feature is arbitrarily dispreferred. While we do know that along the trajectory the feature was never turned on, it is possible that had the environment randomness been different then the optimal policy would have turned the feature on. For example, using a fire extinguisher is irreversible, and we usually observe that fire extinguishers have not been used, but there certainly are situations in which we would use fire extinguishers, and so the reward weight for using a fire extinguisher cannot be arbitrarily negative. Concretely, consider the fire extinguisher MDP in Figure 4.3.
For ergodicity, we need the environment to be $\epsilon$-randomized, but we ignore this dependence during the analysis for the sake of clarity. In this MDP, the agent starts out in the state Start, and can choose to watch TV, or to cook, which has a 90% chance of producing Food, and a 10% chance of igniting Fire. From any of these states, the agent may use the fire extinguisher to Douse a fire (if present). We have four features: $f_{Food}$, $f_{TV}$, $f_{Fire}$ and $f_{Doused}$, each of which takes on value 1 in the corresponding state and 0 everywhere else. Note that $f_{Doused}$ is a binary, controllable, irreversible feature.

Suppose that we observe $s_{observed} = Food$. Intuitively, $H$ cooked food and was willing to take the risk of igniting a fire. Assuming that $\gamma$ is sufficiently large, the optimal policy for the reward $\theta = [\theta_{Food}, \theta_{TV}, \theta_{Fire}, \theta_{Doused}] = [1, 0, -20, 0]$ is to cook food and then take noop actions, and to douse the fire if one is ignited. This policy leads to $\pi^H_{opt,\infty}(s_{observed}; \theta) = 0.9$ (ignoring $\epsilon$-randomness), which is the highest possible value (since the maximum probability of entering the Food state is 0.9). However, $\theta_{Doused}$ cannot be made arbitrarily negative: for example, if $\theta_{Doused}$ is set to $-100$, then it is no longer worth cooking. The expected cost of the fire outweighs the expected benefit of the food, since we would no longer be willing to douse the fire if it did occur. As a result, the agent would watch TV instead of cooking (or execute some other 0 reward policy), in which case $\pi^H_\infty(s_{observed}) = 0$. This type of reasoning is a feature, not a bug: the entire point of a fire extinguisher is to deal with accidental fires, and so we want our agent to learn that we are not completely averse to using fire extinguishers, despite the fact that using a fire extinguisher is irreversible and we have never (yet) done it.

4.5 Discussion

In this chapter, we theoretically analyzed solutions to the problem of learning from the state of the world.
We first analyzed the general argument that the inferred reward would tend to prioritize preserving the observed state of the world. When a "preservation" reward function exists, it often does seem to have high likelihood. We identified two different sufficient conditions for such a reward function to maximize the likelihood:
1. $H$ is sufficiently farsighted, that is, $\gamma$ is sufficiently large (Proposition 3), or
2. The dynamics of the MDP are $\epsilon$-deterministic (Proposition 5).
These are not guaranteed: there are some cases like Figure 4.1 where with stochastic dynamics and low $\gamma$ the preservation reward function does not maximize the likelihood. Nonetheless, it seems likely that in most realistic cases, a preservation reward function would tend to have high or even maximal likelihood.

We also analyzed another general argument from Chapter 1: if we observe that some irreversible transition has not happened, then $H$ probably did not want that irreversible transition to happen. While this is certainly sometimes true, as in the vase example of Figure 1.2, there are often realistic scenarios in which it is not. In particular, just because $H$ has not yet irreversibly used a fire extinguisher, that doesn't mean that she doesn't want it to be used in the event of a fire.

Further research could also consider how to best formalize the argument from effort: "if $H$ put effort into a particular feature, then that is very informative about the reward weight for that feature". For example, if the transition dynamics lead to a vase accumulating dust over time, which $H$ must wipe off in order to make it clean, and we then observe a clean vase in $s_{observed}$, that is informative about $H$'s preferences over dusty vases.

Having clarified what we can hope to do by learning from the state of the world, we now turn to the design of algorithms that can efficiently solve such problems.
We will leverage the assumption that the reward is linear in features, as identified in Section 4.3. We will then use these algorithms to further explore conceptually what can be done by learning from the state of the world, in Chapter 7.

Chapter 5

An exact algorithm for tabular environments

Having discussed the inferences that could be made in theory, we now turn to the task of creating an algorithm that can make these inferences in practice. In this chapter, we will consider finite-horizon tabular environments; that is, in our state of the world problem $\langle \mathcal{M}, s_0, \pi^H, T, h, R_\theta, P_\theta \rangle$, we will require that $T \neq \infty$, and that $\mathcal{M}$ is sufficiently small that we can iterate over it, analogously to value iteration.

In Chapter 4 we saw that allowing for arbitrary reward functions often leads to degenerate solutions. So, we assume that $\mathcal{M}$ is equipped with a feature function $f$ that identifies the relevant features, and that the reward function is linear in features, so that $R_\theta(s) = \theta^T f(s)$.

Equation 3.1 specifies how to compute the likelihood of a particular reward $R_\theta$. We reproduce it here:
$$P(s_0|\theta) = \sum_{s_{-T}, \ldots, s_{-2}, s_{-1} \in S}\ \sum_{a^H_{-T}, \ldots, a^H_{-2}, a^H_{-1} \in A^H} P_S(s_{-T}) \prod_{t=-T}^{-1} \pi^H(a^H_t|s_t;\theta)\, T(s_{t+1}|s_t, a^H_t). \quad (5.1)$$
Our goal is to compute the posterior $P(\theta|s_0)$. However, typically $\theta$ will be drawn from a continuous space, and the transition dynamics $T$ can be complex and nonlinear, making it very difficult if not impossible to compute an exact form for the posterior that works for arbitrary $\theta$. This remains true even if we do not require the distribution to be normalized. We thus need to use approximate inference methods in order to estimate the posterior.

5.1 MCMC sampling

One standard way to address the computational challenges involved with the continuous and high-dimensional nature of $\theta$ is to use MCMC sampling to sample from $p(\theta|s_0) \propto p(s_0|\theta)\, p(\theta)$. We apply this sampling in a standard manner to derive an algorithm that we present in Algorithm 1.
Algorithm 1 MCMC sampling to estimate the state of the world posterior
Require: State of the world problem $\langle \mathcal{M}, s_0, \pi^H, T, h, R_\theta, P_\theta \rangle$, step size $\sigma$
1: $\theta \leftarrow$ random_sample($P_\theta$)
2: Compute the last-step occupancy measure $p(s_0|\theta)$
3: $p \leftarrow p(s_0|\theta)\, P(\theta)$
4: repeat
5: $\theta' \leftarrow$ random_sample($\mathcal{N}(\theta, \sigma)$)
6: Compute the last-step occupancy measure $p(s_0|\theta')$
7: $p' \leftarrow p(s_0|\theta')\, P(\theta')$
8: if random_sample($\operatorname{Unif}(0, 1)$) $\le \min(1, p'/p)$ then
9: $\theta \leftarrow \theta'$; $p \leftarrow p'$
10: end if
11: Append $\theta$ to the list of samples
12: until we have generated the desired number of samples

While Algorithm 1 gives us an estimate of the full posterior distribution, even for relatively simple environments it can take quite a long time to converge, and so we seek a better solution.

5.2 Reward Learning by Simulating the Past

To obtain a more efficient algorithm, we take inspiration from the research literature on inverse reinforcement learning (IRL). The problem formulation in IRL is similar to ours: the primary difference is that $R$ can observe a set of trajectories $\{\tau_i\}$ rather than a single state $s_0$. If we apply MCMC sampling to IRL as we did in the previous section, then we recover the Bayesian IRL algorithm (Ramachandran and Amir, 2007).

Instead of sampling with Bayesian IRL, it is common to use the Maximum Causal Entropy IRL framework (MCEIRL) (Ziebart et al., 2010). As discussed in Section 2.5, MCEIRL computes a point estimate of the true reward $R_{\theta^*}$ rather than the full posterior $P(\theta|s_0)$. It also assumes that $\pi^H$ is the Boltzmann-rational policy, which can be computed using soft value iteration. We adopt both of these assumptions in our setting to derive a new algorithm analogous to MCEIRL for the state of the world setting.

Deriving the gradient

Our goal is now to find the maximum a posteriori estimate of the reward:
$$\theta^* = \operatorname*{argmax}_{\theta} \ln p(\theta|s_0) = \operatorname*{argmax}_{\theta}\, \left[\ln p(s_0|\theta) + \ln P(\theta)\right]. \quad (5.2)$$
MCEIRL uses gradient ascent to solve the problem, since it is convex in the reward parameters $\theta$. We adopt the same approach.
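The Boltzmann-rational policy $\pi^{soft}$ assumed above can be computed with soft value iteration, which replaces the max in the Bellman backup with a log-sum-exp. A minimal NumPy sketch on a hypothetical two-state MDP (illustrative only):

```python
import numpy as np

def soft_value_iteration(T, R, gamma, horizon):
    """Boltzmann-rational policies, as assumed by MCEIRL: the max in the
    Bellman backup is replaced by a log-sum-exp, and
    pi_t(a | s) = exp(Q_t(s, a) - V_t(s))."""
    V = np.zeros(T.shape[0])
    policies = []
    for _ in range(horizon):
        Q = R + gamma * (T @ V)
        V = np.log(np.exp(Q).sum(axis=1))          # soft (log-sum-exp) backup
        policies.append(np.exp(Q - V[:, None]))    # rows sum to 1
    return policies   # policies[k] is the policy with k + 1 steps of lookahead

# Hypothetical 2-state MDP in which action 1 leads to the rewarding state 1.
T = np.zeros((2, 2, 2))
T[:, 0, 0] = 1.0
T[:, 1, 1] = 1.0
R = np.array([[0.0, 0.0], [1.0, 1.0]])
pis = soft_value_iteration(T, R, gamma=0.9, horizon=5)
```

With one step of lookahead the policy is uniform in the unrewarding state, and with more lookahead it increasingly prefers the action leading to reward, without ever becoming fully deterministic.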
We assume that the gradient of the log prior $\nabla_\theta \ln P(\theta)$ is easy to compute, and so we focus on computing the gradient of the log likelihood $\nabla_\theta \ln p(s_0|\theta)$. First, we note that the likelihood in Equation 3.1 can be rewritten in terms of the likelihoods of individual trajectories:
$$p(s_0|\theta) = \sum_{s_{-T}, \ldots, s_{-1} \in S}\ \sum_{a^H_{-T}, \ldots, a^H_{-1} \in A^H} p(\tau|\theta), \quad (5.3)$$
where $\tau = s_{-T} a^H_{-T} \cdots s_{-1} a^H_{-1} s_0$ is the (hypothesized) trajectory that $H$ took in the environment. We can then rewrite our desired gradient in terms of the gradients of individual trajectories as follows:
$$\nabla_\theta \ln p(s_0|\theta) = \frac{1}{p(s_0|\theta)} \nabla_\theta\, p(s_0|\theta) = \frac{1}{p(s_0|\theta)} \sum_{s_{-T:-1} \in S,\ a^H_{-T:-1} \in A^H} \nabla_\theta\, p(\tau|\theta) = \frac{1}{p(s_0|\theta)} \sum_{s_{-T:-1} \in S,\ a^H_{-T:-1} \in A^H} p(\tau|\theta)\, \nabla_\theta \ln p(\tau|\theta).$$
This has a nice interpretation: compute the MCEIRL gradients for each trajectory, and then take their weighted sum, where each weight is the probability of the trajectory given the evidence $s_0$ and the current reward $\theta$.

In Section 2.5 we derived the exact gradient for a trajectory under the MCEIRL assumption of Boltzmann-rationality. We now substitute it in to get:
$$\nabla_\theta \ln p(s_0|\theta) = \frac{1}{p(s_0|\theta)} \sum_{s_{-T:-1} \in S,\ a^H_{-T:-1} \in A^H} \left[p(\tau|\theta) \sum_{t=-T}^{-1} g(s_t, a^H_t; \theta)\right], \quad (5.4)$$
where we have used the definitions from Section 2.5:
$$g(s_t, a^H_t; \theta) \triangleq f(s_t) + \mathbb{E}_{s'_{t+1}}\left[F_{t+1}(s'_{t+1}; \theta)\right] - F_t(s_t; \theta)$$
$$F_t(s_t; \theta) \triangleq f(s_t) + \mathbb{E}_{a^{H\prime}_{t:-1},\, s'_{t+1:0}}\left[\sum_{t'=t+1}^{0} f(s'_{t'})\right].$$

Computing the gradient with dynamic programming

The gradient in Equation 5.4 still involves a combinatorially large summation over all possible past trajectories. In order to solve even simple environments, we need to avoid this combinatorial explosion. Our approach is to use dynamic programming. We first express the gradient in Equation 5.4 as $\frac{G_0(s_0;\theta)}{p(s_0|\theta)}$, which can be done if we define
$$G_t(s_t; \theta) \triangleq \sum_{s_{-T:t-1},\, a^H_{-T:t-1}} \left[p(\tau_{-T:t-1}, s_t|\theta) \sum_{t'=-T}^{t-1} g(s_{t'}, a^H_{t'}; \theta)\right]. \quad (5.5)$$
Thus, to compute the gradient, we only need to compute $G_0(s_0;\theta)$ and $p(s_0|\theta)$ efficiently.
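The quantity $p(s_0|\theta)$ is a forward pass over state marginals, alternately applying the policy and the dynamics. A sketch (hypothetical two-state MDP, with the same fixed policy at every timestep for simplicity):

```python
import numpy as np

def state_marginals(T, policies, P_S):
    """Forward recursion for the state marginals p(s_t | theta): start from the
    initial distribution and alternately apply the policy and the dynamics."""
    d = P_S.copy()
    for pi in policies:                        # pi[s, a] = pi_t(a | s)
        joint = d[:, None] * pi                # p(s_t, a_t)
        d = np.einsum('sa,sap->p', joint, T)   # marginalize to p(s_{t+1})
    return d

# Hypothetical 2-state MDP: the policy always takes action 1, which moves to state 1.
T = np.zeros((2, 2, 2))
T[:, 0, 0] = 1.0
T[:, 1, 1] = 1.0
pi = np.array([[0.0, 1.0], [0.0, 1.0]])
p_s0 = state_marginals(T, [pi] * 3, P_S=np.array([1.0, 0.0]))
```

After any positive number of steps, all of the probability mass sits on state 1, as the deterministic dynamics dictate.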
The latter is particularly easy, as it is just the probability of observing a state at a specific timestep:
$$p(s_{-T}|\theta) = P_S(s_{-T})$$
$$p(s_{t+1}|\theta) = \sum_{s_t \in S,\ a^H_t \in A^H} p(s_t|\theta)\, \pi^{soft}_t(a^H_t|s_t; \theta)\, T(s_{t+1}|s_t, a^H_t).$$
Note that $\pi^{soft}_t$ can be computed using soft value iteration, as detailed in Section 2.5.

Before we tackle $G$, we first derive a recursive rule for $F$:
$$F_0(s_0; \theta) = f(s_0)$$
$$F_{t-1}(s_{t-1}; \theta) = f(s_{t-1}) + \mathbb{E}_{a^{H\prime}_{t-1:-1},\, s'_{t:0}}\left[\sum_{t'=t}^{0} f(s'_{t'})\right] = f(s_{t-1}) + \mathbb{E}_{a^{H\prime}_{t-1},\, s'_t}\left[f(s'_t) + \mathbb{E}_{a^{H\prime}_{t:-1},\, s'_{t+1:0}}\left[\sum_{t'=t+1}^{0} f(s'_{t'})\right]\right]$$
$$= f(s_{t-1}) + \mathbb{E}_{a^{H\prime}_{t-1},\, s'_t}\left[F_t(s'_t; \theta)\right] = f(s_{t-1}) + \sum_{a^{H\prime}_{t-1},\, s'_t} \pi^{soft}_{t-1}(a^{H\prime}_{t-1}|s_{t-1}; \theta)\, T(s'_t|s_{t-1}, a^{H\prime}_{t-1})\, F_t(s'_t; \theta).$$
Note that $g(s_t, a^H_t; \theta)$ can easily be computed using its definition given $F$. We are now ready to derive a recursive relation for $G$:
$$G_{t+1}(s_{t+1}; \theta) = \sum_{s_{-T:t},\, a^H_{-T:t}} \left[p(\tau_{-T:t}, s_{t+1}|\theta) \sum_{t'=-T}^{t} g(s_{t'}, a^H_{t'}; \theta)\right]$$
$$= \sum_{s_t, a^H_t} \sum_{s_{-T:t-1},\, a^H_{-T:t-1}} T(s_{t+1}|s_t, a^H_t)\, \pi^{soft}_t(a^H_t|s_t; \theta)\, p(\tau_{-T:t-1}, s_t|\theta) \left(g(s_t, a^H_t; \theta) + \sum_{t'=-T}^{t-1} g(s_{t'}, a^H_{t'}; \theta)\right)$$
$$= \sum_{s_t, a^H_t} \left[T(s_{t+1}|s_t, a^H_t)\, \pi^{soft}_t(a^H_t|s_t; \theta) \left(\sum_{s_{-T:t-1},\, a^H_{-T:t-1}} p(\tau_{-T:t-1}, s_t|\theta)\right) g(s_t, a^H_t; \theta)\right]$$
$$\quad + \sum_{s_t, a^H_t} \left[T(s_{t+1}|s_t, a^H_t)\, \pi^{soft}_t(a^H_t|s_t; \theta) \sum_{s_{-T:t-1},\, a^H_{-T:t-1}} \left(p(\tau_{-T:t-1}, s_t|\theta) \sum_{t'=-T}^{t-1} g(s_{t'}, a^H_{t'}; \theta)\right)\right]$$
$$= \sum_{s_t, a^H_t} T(s_{t+1}|s_t, a^H_t)\, \pi^{soft}_t(a^H_t|s_t; \theta) \left[p(s_t|\theta)\, g(s_t, a^H_t; \theta) + G_t(s_t; \theta)\right].$$
For the base case, note that
$$G_{-T+1}(s_{-T+1}) = \sum_{s_{-T}, a^H_{-T}} p(s_{-T}, a^H_{-T}, s_{-T+1})\, g(s_{-T}, a^H_{-T}; \theta) = \sum_{s_{-T}, a^H_{-T}} T(s_{-T+1}|s_{-T}, a^H_{-T})\, \pi^{soft}_{-T}(a^H_{-T}|s_{-T}) \left[p(s_{-T})\, g(s_{-T}, a^H_{-T}; \theta)\right].$$
Comparing this to the recursive rule, for the base case we can set $G_{-T}(s_{-T}) = 0$.

Overall algorithm

Combining all of these ingredients together gives us our final algorithm, presented in Algorithm 2. Since we combine gradients from simulated past trajectories, we name the algorithm Reward Learning by Simulating the Past (RLSP).

Algorithm 2 Reward Learning by Simulating the Past
Require: State of the world problem $\langle \mathcal{M}, s_0, \pi^H, T, h, R_\theta, P_\theta \rangle$, learning rate $\alpha$
1: $\theta \leftarrow$ random_sample($P_\theta$)
2: repeat
3: $\forall s_{-T} \in S: p(s_{-T}|\theta) \leftarrow P_S(s_{-T})$
4: $\forall s_0 \in S: F_0(s_0; \theta) \leftarrow f(s_0)$
5: $\forall s_{-T} \in S: G_{-T}(s_{-T}; \theta) \leftarrow 0$
6: $\pi^{soft} \leftarrow$ soft_value_iteration($\mathcal{M}, \theta, T$)
7: // Compute probabilities of states
8: for $t$ in $[-T, \ldots, -2, -1]$ do
9: $\forall s_{t+1} \in S: p(s_{t+1}|\theta) \leftarrow \sum_{s_t, a^H_t} p(s_t|\theta)\, \pi^{soft}_t(a^H_t|s_t; \theta)\, T(s_{t+1}|s_t, a^H_t)$
10: end for
11: // Compute expected feature counts
12: for $t$ in $[-1, -2, \ldots, -T]$ do
13: $\forall s_t \in S: F_t(s_t; \theta) \leftarrow f(s_t) + \sum_{a^{H\prime}_t, s'_{t+1}} \pi^{soft}_t(a^{H\prime}_t|s_t; \theta)\, T(s'_{t+1}|s_t, a^{H\prime}_t)\, F_{t+1}(s'_{t+1}; \theta)$
14: end for
15: // Compute $G$
16: for $t$ in $[-T, \ldots, -2, -1]$ do
17: $\forall s_{t+1} \in S: G_{t+1}(s_{t+1}; \theta) \leftarrow 0$
18: for $s_t, a^H_t$ in $S \times A^H$ do
19: $g(s_t, a^H_t; \theta) \leftarrow f(s_t) + \mathbb{E}_{s'_{t+1}}\left[F_{t+1}(s'_{t+1}; \theta)\right] - F_t(s_t; \theta)$
20: target $\leftarrow \pi^{soft}_t(a^H_t|s_t; \theta)\left[p(s_t|\theta)\, g(s_t, a^H_t; \theta) + G_t(s_t; \theta)\right]$
21: $\forall s_{t+1} \in S: G_{t+1}(s_{t+1}; \theta) \leftarrow G_{t+1}(s_{t+1}; \theta) + T(s_{t+1}|s_t, a^H_t) \cdot$ target
22: end for
23: end for
24: // Gradient ascent
25: $\theta \leftarrow \theta + \alpha \left[\frac{G_0(s_0; \theta)}{p(s_0|\theta)} + \nabla_\theta \ln P(\theta)\right]$
26: until convergence

5.3 Requirements

The algorithms that we have presented here can infer information about human preferences, but require fairly strong assumptions. In particular, in order to apply either RLSP or the sampling algorithm, we need to have:
1. Small state and action spaces, so that we can enumerate all possible state-action pairs,
2. Perfect knowledge of the transition dynamics $T$ of the environment, and
3. A feature function $f$ that identifies relevant features of interest.
In the next chapter, we relax these restrictions.

Chapter 6

Function approximation for high-dimensional environments

Our goal now is to scale RLSP to realistic environments, where we cannot enumerate states, the dynamics of the environment are not fully known in advance, and a feature function may not be available. Consider for example the balanced cheetah in Figure 6.1. Just by looking at this single balanced state, it should be possible to infer the "goal" of balancing.
However, we cannot apply the RLSP algorithm from the previous chapter for several reasons: the state space is continuous, we cannot simply enumerate the transition matrix, and there is no clear way to obtain a feature function.

Once again, we can take inspiration from corresponding research in inverse reinforcement learning.

Figure 6.1: Suppose we observe a Cheetah balancing on its front leg (left). The state contains joint velocities in addition to positions, which are fairly low, showing that the cheetah is indeed balancing rather than, say, falling over. Intuitively, just by observing this balanced state, we should be able to reason that the Cheetah is "trying" to balance: other plausible goals, such as running forward, falling over, hopping on a single leg, and so on, would not have led to this kind of state. We would like an algorithm that can make these sorts of inferences, despite the continuous state space, the lack of a feature function, and only having access to a simulator (rather than a full transition matrix as in the previous chapter).

Recent IRL algorithms (Fu et al., 2017; Finn et al., 2016) have shown how to generalize tabular IRL algorithms to work in high-dimensional environments. The key idea is to use a neural net function approximator to represent the reward function and learn a policy in tandem with the reward function. We can then compute gradients by comparing rollouts from the policy with the provided demonstrations, as in Equation 2.3.

This is a useful starting point for us, but does not solve everything. Our first challenge is that, unlike the MCEIRL gradient of Equation 2.3, the RLSP gradient in Equation 5.4 cannot be easily expressed as a comparison between policy rollouts and the observed state $s_0$, and so it is unclear how the gradient can be computed.
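For a linear reward $r(s) = \theta^T f(s)$, the comparison-based gradient used by these IRL methods reduces to a difference of average feature counts. As a rough illustration, here is a minimal NumPy sketch (function and variable names are ours, not from the thesis):

```python
import numpy as np

def mceirl_gradient(demo_trajs, policy_trajs, feature_fn):
    """MaxEnt-style IRL gradient for a linear reward r(s) = theta^T f(s):
    the average feature counts of the demonstrations minus those of
    rollouts from the current policy (cf. Equation 2.3).
    A sketch only; trajectories are lists of states.
    """
    def avg_counts(trajs):
        # Sum features over each trajectory, then average over trajectories.
        return np.mean(
            [np.sum([feature_fn(s) for s in traj], axis=0) for traj in trajs],
            axis=0,
        )

    return avg_counts(demo_trajs) - avg_counts(policy_trajs)
```

Ascending this gradient raises the reward of features the demonstrations exhibit and lowers the reward of features the current policy exhibits, until the two match.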
Our key insight here is that to sample from the distribution over past trajectories $p(\tau_{-T:-1} \mid s_0, \theta)$, we can start at the observed state $s_0$ and simulate backwards in time. To enable this, we derive a gradient that is amenable to estimation through backwards simulation, and learn an inverse policy and inverse dynamics model using supervised learning to perform the backwards rollouts.

Our second challenge is that we cannot use a deep neural net as our reward function, because this is too expressive: we would start learning the degenerate reward functions identified in Section 4.2 that only assign reward to the observed state and ignore everything else. We need a reward representation that can be meaningfully updated from a single state observation, yet can also capture complex functions of the input in order to express concepts like the "balanced Cheetah". Our solution here is to represent the reward as a linear combination of features, where the features are learned through self-supervised representation learning techniques in a pretraining step.

6.1 Gradient as backwards-forwards consistency

Here we tackle the first challenge: expressing the RLSP gradient in a form that we can work with when we only have access to a simulator for the environment. We start with the gradient from Section 5.2:

$$\begin{aligned}
\nabla_\theta \ln p(s_0 \mid \theta)
&= \frac{1}{p(s_0 \mid \theta)} \sum_{\substack{s_{-T:-1} \in S \\ a^H_{-T:-1} \in A^H}} p(\tau \mid \theta)\, \nabla_\theta \ln p(\tau \mid \theta) \\
&= \sum_{\substack{s_{-T:-1} \in S \\ a^H_{-T:-1} \in A^H}} p(\tau_{-T:-1} \mid s_0, \theta)\, \nabla_\theta \ln p(\tau \mid \theta) \\
&= \mathbb{E}_{\tau_{-T:-1} \sim p(\cdot \mid s_0, \theta)}\left[\nabla_\theta \ln p(\tau \mid \theta)\right].
\end{aligned}$$

Approximating the expectation

For higher-dimensional environments, we must approximate the expectation over past trajectories $p(\tau_{-T:-1} \mid s_0, \theta)$. We would like to sample from the distribution, but it is not clear how to sample the past conditioned on the present. Our key idea is that just as we can sample the future by rolling out forwards in time, we should be able to sample the past by rolling out backwards in time.
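The backwards-rollout idea, and the feature-count comparison it ultimately feeds into, can be sketched as follows. This is a hedged NumPy illustration assuming a tabular environment with integer-indexed states; `inv_policy`, `inv_dynamics`, and all other names here are hypothetical stand-ins, not the thesis implementation:

```python
import numpy as np

def sample_backward(s0, inv_policy, inv_dynamics, horizon, rng):
    """Sample a plausible past trajectory s_{-T}, ..., s_0 by rolling
    backwards in time from the observed state s0.

    inv_policy(s_next)      -> probability vector over the action that led to s_next
    inv_dynamics(a, s_next) -> probability vector over the predecessor state
    """
    states = [s0]
    s_next = s0
    for _ in range(horizon):
        p_a = inv_policy(s_next)
        a = rng.choice(len(p_a), p=p_a)        # which action led here?
        p_s = inv_dynamics(a, s_next)
        s_next = rng.choice(len(p_s), p=p_s)   # which state did it come from?
        states.append(s_next)
    return states[::-1]                        # read forward in time: s_{-T} ... s_0

def backward_forward_gradient(backward_trajs, forward_trajs, feature_fn):
    """Estimate of the backwards-forwards consistency gradient: average
    feature counts of backward-simulated trajectories (constrained by s0)
    minus those of forward rollouts of the current policy."""
    def avg_counts(trajs):
        return np.mean(
            [np.sum([feature_fn(s) for s in t], axis=0) for t in trajs], axis=0
        )

    return avg_counts(backward_trajs) - avg_counts(forward_trajs)
```

The gradient is zero exactly when the backward and forward rollouts have matching feature counts, which is the consistency condition derived below.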
Note that by the Markov property we have:

$$p(\tau_{-T:-1} \mid s_0, \theta) = \prod_{t=-1}^{-T} p(s_t \mid a^H_t, s_{t+1}, \dots, s_0, \theta)\, p(a^H_t \mid s_{t+1}, a^H_{t+1}, \dots, s_0, \theta) = \prod_{t=-1}^{-T} p(s_t \mid a^H_t, s_{t+1}, \theta)\, p(a^H_t \mid s_{t+1}, \theta).$$

Thus, given the inverse policy $(\pi^H)^{-1}_t(a^H_t \mid s_{t+1}, \theta)$ as well as the inverse dynamics $\mathcal{T}^{-1}_t(s_t \mid a^H_t, s_{t+1}, \theta)$, we can sample a past trajectory $\tau_{-T:-1} \sim p(\tau_{-T:-1} \mid s_0, \theta)$ by iteratively applying $(\pi^H)^{-1}$ and $\mathcal{T}^{-1}$, starting from $s_0$. Thus, our gradient can be written as

$$\mathbb{E}_{\substack{a^H_{-1:-T} \sim (\pi^H)^{-1} \\ s_{-1:-T} \sim \mathcal{T}^{-1}}}\left[\nabla_\theta \ln p(\tau \mid \theta)\right].$$

In order to learn $(\pi^H)^{-1}$, we must first have $\pi^H$. We assumed that the human was Boltzmann-rational, which corresponds to the maximum entropy reinforcement learning objective (Levine, 2018). We use Soft Actor-Critic (SAC; Haarnoja et al., 2018) to estimate the policy $\pi^H(a \mid s, \theta)$, since it explicitly optimizes the maximum entropy RL objective. Given the forward policy $\pi^H(a^H \mid s, \theta)$ and simulator $\mathcal{T}$, we can construct a dataset of sampled forward trajectories, and learn the inverse policy $(\pi^H)^{-1}$ and the inverse dynamics $\mathcal{T}^{-1}$ using supervised learning. Given these, we can then sample $\tau_{-T:-1}$, allowing us to approximate the expectation in the gradient. In general, both $(\pi^H)^{-1}$ and $\mathcal{T}^{-1}$ could be stochastic and time-dependent, even if $\pi^H$ and $\mathcal{T}$ are themselves deterministic and time-independent.

Estimating the gradient for a trajectory

We now turn to the term within the expectation, which is the inverse reinforcement learning gradient given a demonstration trajectory $\tau = s_{-T} a^H_{-T} \dots s_0$. Assuming that the user is Boltzmann-rational, this is the MCEIRL gradient from Section 2.5:

$$\nabla_\theta \ln p(\tau \mid \theta) = \left( \sum_{t=-T}^{0} f(s_t) \right) - F_{-T}(s_{-T}, \theta) + \sum_{t=-T}^{-1} \left( \mathbb{E}_{s'_{t+1} \sim \mathcal{T}(\cdot \mid s_t, a^H_t)}\left[ F_{t+1}(s'_{t+1}, \theta) \right] - F_{t+1}(s_{t+1}, \theta) \right) \qquad (6.1)$$

(Recall that $F_t(s_t, \theta)$ is the expected feature count when continuing to act from state $s_t$ according to policy $\pi^H$, that is, $F_t(s_t, \theta) \triangleq \mathbb{E}_{a^H_{t:-1} \sim \pi^H,\, s_{t:0} \sim \mathcal{T}}\left[\sum_{t'=t}^{0} f(s_{t'})\right]$.)

The first term computes the feature counts of the demonstrated trajectory $\tau$, while the second term computes the feature counts obtained by the policy for the current reward function $\theta$ (starting from the initial state $s_{-T}$).
Since $r(s) = \theta^T f(s)$, these terms increase the reward of features present in the demonstration and decrease the reward of features under the current policy. Thus, the gradient incentivizes consistency between the demonstration and rollouts from the learned policy. The last term is essentially a correction for the observed dynamics: if we see that $s_t, a^H_t$ led to $s_{t+1}$, it corrects for the fact that we "could have" seen some other state $s'_{t+1}$. Since this correction is zero in expectation (and expensive to compute), we drop it for our estimator.

Gradient estimator

After dropping the last term in Equation 6.1, expanding the definition of $F$, and substituting into our earlier gradient expression, our final gradient estimator is:

$$\nabla_\theta \ln p(s_0 \mid \theta) = \mathbb{E}_{\substack{a^H_{-1:-T} \sim (\pi^H)^{-1} \\ s_{-1:-T} \sim \mathcal{T}^{-1}}}\left[ \left( \sum_{t=-T}^{0} f(s_t) \right) - \mathbb{E}_{\substack{s'_{-T} = s_{-T} \\ a^H_{-T:-1} \sim \pi^H \\ s'_{-T+1:0} \sim \mathcal{T}}}\left[ \sum_{t=-T}^{0} f(s'_t) \right] \right] \qquad (6.2)$$

Thus, given $s_0$, $\theta$, $\pi^H$, $\mathcal{T}$, $(\pi^H)^{-1}$, and $\mathcal{T}^{-1}$, computing the gradient consists of three steps:

1. Simulate backwards from $s_0$, and compute the feature counts of the resulting trajectories.
2. Simulate forwards from the $s_{-T}$ of these trajectories, and compute their feature counts.
3. Take the difference between these two quantities.

This again incentivizes consistency, this time between the backwards and forwards trajectories: the gradient leads to movement towards "what the human must have done" and away from "what the human would do if they had this reward". The gradient becomes zero when they are identical.

It may seem like the backwards and forwards trajectories should always be consistent with each other, since $(\pi^H)^{-1}$ and $\mathcal{T}^{-1}$ are inverses of $\pi^H$ and $\mathcal{T}$. The key difference is that $s_0$ imposes constraints on the backwards trajectories, but not on the forward trajectories. For example, suppose we observe $s_0$ in which a vase is unbroken, and our current hypothesis is that the user wants to break the vase. When we simulate backwards, our trajectory will contain an unbroken vase (even though the policy usually breaks vases), and when we simulate forwards from $s_{-T}$, $\pi^H$ will break the vase.
The gradient would then reduce the reward for a broken vase and increase the reward for an unbroken vase.

6.2 Learning a latent MDP

Our gradient still relies on a feature function $f$, with the reward parameterized as $r(s) = \theta^T f(s)$. A natural way to remove this assumption would be to instead allow $\theta$ to parameterize a neural network, which could then learn whatever features are relevant to the reward from the RLSP gradient. However, this approach will not work. The information contained in the RLSP gradient is insufficient to identify the appropriate features to construct: after all, it is derived from a single state. If we were to learn a single unified reward using the same gradient, the resulting reward would likely be degenerate: for example, it may simply identify the observed state, that is, $R(s) = \mathbb{1}[s = s_0]$.

Thus, we continue to assume that the reward is linear in features, and instead learn the feature function using self-supervised representation learning. Such techniques define an auxiliary task and train the neural net to perform well on the task, in the hope that the neural net learns generally useful features that can then be used for the task of interest. There are many potential auxiliary tasks that we could use:

Contrastive tasks. Contrastive approaches (Oord et al., 2018) give the model a discrimination task: given a context, select which of a large set of targets is "most related" to the context. The notion of relatedness can change: in image classification, two images are "related" if they are different augmentations of the same base image (He et al., 2020; Chen et al., 2020), while in reinforcement learning, two states could be related if they come from the same trajectory (Lee et al., 2020; Stooke et al., 2020).

Reconstruction.
In reconstructive approaches, the input state must be encoded into a low-dimensional representation vector, in such a way that the original state can then be decoded back from this vector (Kingma and Welling, 2014). In sequential environments, the decoder can instead predict past or future states. This has been applied in reinforcement learning (Stooke et al., 2020) as well as natural language processing (Radford et al., 2019). Alternatively, tasks can be defined over the sequence as a whole, for example to predict a missing word in a sentence (Devlin et al., 2019; Clark et al., 2020).

State space models. For partially observable environments, recurrent state space models (RSSMs; Karl et al., 2017; Doerr et al., 2018; Hafner et al., 2019, 2020; Buesing et al., 2018; Kurutach et al., 2018) could be used instead: such methods aim to learn a latent MDP. They do so by allowing the latent states to be computed by a recurrent model over the observations, thus allowing the states to encode the history. For such a model, we can imagine that the underlying POMDP has been converted into a latent MDP whose feature function $f$ is the identity. We can then compute RLSP gradients directly in this latent MDP, where $\mathcal{T}$ and $f$ are known.

Goal-based exploration. By default, in the absence of a dataset of environment behavior, the methods above would have to be applied to a dataset of environment interaction collected using rollouts of a random policy. However, in many environments, a random policy will stay within a small subset of the state space with high probability. To cover more of the state space, we can extend any of the methods above with methods that increase the diversity of the data on which the features are trained.
These methods include unsupervised skill learning (Achiam et al., 2018; Eysenbach et al., 2018; Nair et al., 2018; Sharma et al., 2020), curiosity-driven exploration (Burda et al., 2018), and goal-conditioned policies with a diverse goal space (Schaul et al., 2015; Andrychowicz et al., 2017).

Experimental choices. In our experiments, we use a variational autoencoder (VAE; Kingma and Welling, 2014) to learn the feature function. The VAE encodes the states into a latent feature representation, which we can use to learn a reward function if the environment is fully observable, i.e., the states contain all relevant information. However, we expect that better results could be obtained using other, more recent methods, if tuned well.

6.3 Deep RLSP

Putting these components together gives us the Deep RLSP algorithm (Algorithm 3). We first learn a feature function $f$ using self-supervised learning, and then train an inverse dynamics model $\mathcal{T}^{-1}$, all using a dataset of environment interactions (such as random rollouts). Then, we update $\theta$ using Equation 6.2, and continually train $\pi^H$ and $(\pi^H)^{-1}$ alongside $\theta$ to keep them up to date. The full algorithm also adds a few bells and whistles that we describe below.

Initial state distribution $\mathcal{P}_S$. The attentive reader may wonder why our gradient appears to be independent of $\mathcal{P}_S$. This is actually not the case: while $\pi^H$ and $\mathcal{T}$ are independent of $\mathcal{P}_S$, $(\pi^H)^{-1}$ and $\mathcal{T}^{-1}$ do depend on it. For example, if we observe Alice exiting the San Francisco airport, the corresponding $(\pi^H)^{-1}$ should hypothesize different flights if she started from New York than if she started from Tokyo. However, in order to actually produce such explanations, we must train $(\pi^H)^{-1}$ and $\mathcal{T}^{-1}$ solely on trajectories of length $T$ starting from $s_{-T} \sim \mathcal{P}_S$. We instead train $(\pi^H)^{-1}$ and $\mathcal{T}^{-1}$ on a variety of trajectory data, which loses the useful information in $\mathcal{P}_S$, but leads to several benefits. First, we can train the models on exactly the distributions that they will be used on, allowing us to avoid failures due to distribution shift.
Second, the horizon $T$ is no longer critical: previously, $T$ encoded the separation in time between $s_{-T}$ and $s_0$, and as a result misspecification of $T$ could cause bad results. Since we now only have information about $s_0$, it doesn't matter much what we set $T$ to, and as a result we can use it to set a curriculum (discussed next). Finally, this allows Deep RLSP to be used in domains where an initial state distribution is not available.

Since we are no longer able to use the information from $\mathcal{P}_S$ through $(\pi^H)^{-1}$ and $\mathcal{T}^{-1}$, we add in a heuristic to incorporate the information elsewhere. Specifically, we weight every backwards trajectory by the cosine similarity between its final state $s_{-T}$ and a sample $\hat{s}_{-T} \sim \mathcal{P}_S$.

Algorithm 3 The Deep RLSP algorithm. The initial dataset of environment interactions $\mathcal{D}$ can be constructed in many different ways: random rollouts, human play data, curiosity-driven exploration, etc. The specific method will determine the quality of the learned features.

procedure DeepRLSP($\{s_0\}$, $\mathcal{T}$)
  $\mathcal{D} \leftarrow$ dataset of environment interactions
  Initialize $f$, $\pi^H$, $(\pi^H)^{-1}$, $\mathcal{T}^{-1}$, $\theta$ randomly
  $f \leftarrow$ SelfSupervisedLearning($\mathcal{D}$)  ▷ Train encoder and decoder for latent MDP
  $\mathcal{T}^{-1} \leftarrow$ SupervisedLearning($\mathcal{D}$)  ▷ Train inverse dynamics
  $T \leftarrow 1$  ▷ Start horizon at 1
  for $i$ in [1 .. num_epochs] do
    $\pi^H \leftarrow$ SAC($\theta$)  ▷ Train policy
    $\{\tau\} \leftarrow$ Rollout($\pi^H$, $\mathcal{T}$)  ▷ Collect dataset by rolling out $\pi^H$
    $(\pi^H)^{-1} \leftarrow$ SupervisedLearning($f$, $\{\tau\}$)  ▷ Train inverse policy
    $\theta \leftarrow \theta + \alpha \cdot$ ComputeGrad($\{s_0\}$, $\pi^H$, $\mathcal{T}$, $(\pi^H)^{-1}$, $\mathcal{T}^{-1}$, $T$, $f$)  ▷ Update $\theta$
    if gradient magnitudes are sufficiently low then
      $T \leftarrow T + 1$  ▷ Advance horizon
    end if
  end for
  return $\theta$, $f$
end procedure

procedure ComputeGrad($\{s_0\}$, $\pi^H$, $\mathcal{T}$, $(\pi^H)^{-1}$, $\mathcal{T}^{-1}$, $T$, $f$)
  $\{\tau_{\text{backward}}\} \leftarrow$ Rollout($\{s_0\}$, $(\pi^H)^{-1}$, $\mathcal{T}^{-1}$, $T$)  ▷ Simulate backwards from $s_0$
  $f_{\text{backward}} \leftarrow$ AverageFeatureCounts($f$, $\{\tau_{\text{backward}}\}$)  ▷ Compute backward feature counts
  $\{s_{-T}\} \leftarrow$ FinalStates($\{\tau_{\text{backward}}\}$)
  $\{\tau_{\text{forward}}\} \leftarrow$ Rollout($\{s_{-T}\}$, $\pi^H$, $\mathcal{T}$, $T$)  ▷ Simulate forwards from $s_{-T}$
  $f_{\text{forward}} \leftarrow$ AverageFeatureCounts($f$, $\{\tau_{\text{forward}}\}$)  ▷ Compute forward feature counts
  return $f_{\text{backward}} - f_{\text{forward}}$
end procedure

Curriculum.
Since the horizon $T$ is no longer crucial, we can use it to provide a curriculum. We initially calculate gradients with low values of $T$, which prevents compounding errors in our learned models and makes it easier to enforce backwards-forwards consistency, and then slowly grow $T$, making the problem harder. In practice, we found this crucial for performance: intuitively, it is much easier to make short backwards and forwards trajectories consistent than longer ones, which would likely have much higher variance.

Multiple input states. If we get multiple independent $s_0$ as input, we average their gradients.

Replay buffer. We also maintain a replay buffer that stores previously collected $(s, a, s')$ transitions. This buffer persists across policy training steps. When training the policy $\pi$, we sample transitions from both the replay buffer as well as from the simulator $\mathcal{T}$.

Having now defined both the RLSP and Deep RLSP algorithms, we turn to demonstrating and evaluating them on a variety of environments and tasks.

Chapter 7

Evaluation: Correcting misspecified rewards and imitating skills

To evaluate the importance of learning from the state of the world, we would like to check whether the inferred preferences enable R to be more helpful to H in a variety of contexts. However, it is not very clear how exactly to do so. The inferred reward is very likely to assign state $s_0$ maximal reward, since by assumption, when Alice optimized $R^*$ she ended up at $s_0$. If the robot then starts in state $s_0$ and a no-op action is available (as it often is), the inferred reward is likely to incentivize no-ops, which is not very interesting or helpful. Ultimately, our hope is that the information learned from the state of the world can be combined with other sources of preference information in order for R to learn both what it should do, and what it should not do.
So, in Section 7.2, we create a suite of environments, each with a true reward $R^*$, a specified reward $R_{\text{spec}}$, the initial state of the environment $s_{-T}$, and the state observed by R, $s_0$. Here, $R_{\text{spec}}$ is a stand-in for some information about what R should do, and will ignore some aspect(s) of $R^*$. To evaluate an algorithm in an environment, we use it to infer a reward $\theta_H$ from $s_0$, which is then combined with the specified reward to get a final reward $\theta_{\text{final}} = \theta_H + \lambda \theta_{\text{spec}}$. (We later consider another heuristic method for combining these two rewards in Section 7.6, as well as a more principled approach in Chapter 8.) We inspect the inferred reward qualitatively, and report the expected amount of true reward obtained when planning with $\theta_{\text{final}}$, as a fraction of the expected true reward from the optimal policy.

Section 7.3 identifies a set of environments in which inferred rewards can capture what R should do: robot locomotion. Intuitively, if a quadrupedal robot is in mid-jump, a "no-op" action is not available: the state of the environment will change no matter what R does, simply because of gravity. As a result, the inferred reward cannot simply lead to behavior that stays in $s_0$ forever, and so the inferred reward could be sufficient to learn good behavior. Thus, in these environments we use each algorithm to infer a reward function from the state of the world, given a state sampled from rollouts of a policy that is performing a specific type of movement behavior. We then optimize a new policy using the inferred reward function, and evaluate how well the new policy imitates the original skill.

Having done these basic tests of algorithm functionality, we turn to investigating particular details of the algorithm implementations, including the importance of priors over $s_{-T}$ (Section 7.4), the robustness to the planning horizon (Section 7.5), and alternative methods for combining reward functions (Section 7.6).
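The evaluation protocol above is easy to state in code. A minimal sketch, assuming linear rewards and known expected feature counts for each policy (function names are ours):

```python
import numpy as np

def combine_rewards(theta_h, theta_spec, lam):
    """Combine the inferred and specified rewards:
    theta_final = theta_h + lam * theta_spec (Section 7.2)."""
    return np.asarray(theta_h, float) + lam * np.asarray(theta_spec, float)

def fraction_of_optimal_return(feat_counts_final, feat_counts_optimal, theta_true):
    """Expected true reward of the final policy as a fraction of the optimal
    policy's, for a linear true reward r(s) = theta_true^T f(s), given the
    expected feature counts of each policy."""
    theta_true = np.asarray(theta_true, float)
    return float(theta_true @ np.asarray(feat_counts_final, float)) / \
           float(theta_true @ np.asarray(feat_counts_optimal, float))
```

In the tabular environments, the expected feature counts can be computed exactly; in the locomotion environments they would be estimated from rollouts.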
7.1 Algorithm details

RLSP is almost entirely specified by Algorithm 2. In our experiments, we set the learning rate $\alpha$ to 0.1 and the temperature for soft value iteration to 1.0.

Deep RLSP, on the other hand, has several implementation details beyond what is specified in Algorithm 3. In this section we describe the hyperparameters and architecture choices for all models used in Deep RLSP in our experiments. All models are implemented using the TensorFlow framework.

Feature function. We use a variational autoencoder (VAE) (Kingma and Welling, 2014) to learn the feature function $f$. The encoder and decoder consist of 3 feed-forward layers of size 512. The latent space has dimension 30. The model is trained for 100 epochs on 100 rollouts of a random policy in the environment. During training we use a batch size of 500 and a learning rate of $10^{-5}$. We use the standard VAE loss function, but weight the KL-divergence term with a factor $c = 0.001$, which reduces the regularization and empirically improved the reconstruction of the model significantly in our experiments. We hypothesize that the standard VAE regularizes too much in our setting because the latent space has a higher dimension than the input space, which is not the case in typical dimensionality-reduction settings.

Inverse dynamics model. Our inverse dynamics model $\mathcal{T}^{-1}$ is a feed-forward neural network with 5 layers of size 1024 with ReLU activations. We train it on 1000 rollouts of a random policy in the environment for 100 epochs, with a batch size of 500 and a learning rate of $10^{-5}$. Note that the model predicts the previous observation given the current observation and action; it does not use the feature representation. We found the model to perform better if it predicts the residual $o_{t-1} - o_t$ given $o_t$ and $a_t$, instead of directly predicting $o_{t-1}$. We normalize all inputs to the model to have zero mean and unit variance.
To increase robustness, we also add zero-mean Gaussian noise with standard deviation 0.001 to the inputs and labels during training, and clip the outputs of the model to the range of values observed during training.

Policy. For learning the policy $\pi^H$ we use the stable-baselines implementation of Soft Actor-Critic (SAC) with its default parameters for the MuJoCo environments (Haarnoja et al., 2018; Hill et al., 2018). Each policy update during Deep RLSP uses $10^4$ total timesteps (except Hopper, where we use $2 \times 10^4$), and we evaluate the final reward function using $2 \times 10^6$ timesteps (except Pendulum, where we use 64, which is the default of Hill et al. (2018)).

Inverse policy. Because the inverse policy $(\pi^H)^{-1}$ is not a deterministic function, we represent it with a mixture density network, a feed-forward neural network that outputs a mixture of Gaussian distributions (Bishop, 1994). The network has 3 layers of size 512 with ReLU activations, and outputs a mixture of 5 Gaussians with a fixed diagonal covariance matrix $0.05 \cdot I$. During Deep RLSP, we maintain an experience replay that is initialized with random rollouts. In every iteration, all states encountered during the algorithm are added to the experience replay. To update the inverse policy, we sample batches of size 500 from the experience replay, and apply the forward policy and the forward transition model to the states to label the data. We then train the model with a learning rate of $10^{-4}$.

Feature learning dataset. By default, we use random rollouts to generate the initial dataset $\mathcal{D}$ that is used to train the features and the inverse model $\mathcal{T}^{-1}$.

Additional hyperparameters. We run Deep RLSP with a learning rate of 0.01, and use 200 forward and backward trajectories to estimate the gradients. Starting with $T = 1$, we increment the horizon when the gradient norm drops below 2.0, or after 10 steps, whichever comes first. We run the algorithm until $T = 10$.
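The down-weighted VAE objective described above can be sketched as follows. This is a NumPy sketch of the loss only, assuming a squared-error reconstruction term; the actual models are trained in TensorFlow:

```python
import numpy as np

def vae_loss(x, x_recon, mu, log_var, kl_weight=0.001):
    """VAE objective with a down-weighted KL term, matching the c = 0.001
    weighting described above.

    x, x_recon: (batch, obs_dim) inputs and reconstructions
    mu, log_var: (batch, latent_dim) parameters of the approximate posterior
    """
    # Reconstruction term (squared error, averaged over the batch).
    recon = np.mean(np.sum((x - x_recon) ** 2, axis=-1))
    # KL divergence of N(mu, diag(exp(log_var))) from the standard normal prior.
    kl = -0.5 * np.mean(np.sum(1.0 + log_var - mu**2 - np.exp(log_var), axis=-1))
    return recon + kl_weight * kl
```

Setting `kl_weight` well below 1 weakens the pull of the latent codes toward the prior, which is the effect the text attributes to the latent dimension exceeding the input dimension.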
Computational cost. It is important to note that Deep RLSP can be significantly more computationally costly than the baselines that we compare against. Imitation learning with full demonstrations can already be quite computationally expensive. Deep RLSP takes this much further by learning several distinct deep neural net models, simulating potential demonstrations (which are likely much noisier than real demonstrations), and finally imitating them.

7.2 Conceptual environments

In our first evaluation, we design a suite of environments to test whether algorithms that learn from the state of the world can extract various different types of information from the state of the environment at deployment. In each environment, a candidate algorithm must infer a reward function given only the MDP without reward $\mathcal{M} \setminus R$, the observed state $s_0$, and (depending on the experiment) the initial state $s_{-T}$. This is then combined with a specified reward $R_{\text{spec}}$, to give a final reward $\theta_{\text{final}} = \theta_H + \lambda \theta_{\text{spec}}$ that we then evaluate. The hyperparameter $\lambda$ is tuned for all algorithms (including baselines) to give the best results.

It should be noted that these environments were designed to illustrate properties of learning from the state of the world, and thus should not be considered an outside, independent check on the value of learning from the state of the world. Nonetheless, we find the qualitative behaviors displayed below to be good indications of the value of such an approach.

We first describe our baselines, and then explain our suite of environments along with the results.

Baselines

To our knowledge, there are no existing algorithms for state of the world problems, and so we design some simple algorithms ourselves to serve as baselines.

Specified reward policy $\pi_{\text{spec}}$. This baseline corresponds to not doing any learning from the state of the world. We don't use $s_0$ at all and just optimize the specified reward, that is, $\theta_{\text{final}} = \lambda \theta_{\text{spec}}$.
Policy that penalizes deviations $\pi_{\text{deviation}}$. One of the core intuitions behind learning from the state of the world is that the state $s_0$ has already been optimized for human preferences, and so random changes to $s_0$ are likely to be bad. Thus, one possible baseline is to simply minimize changes to the observed state. We accomplish this by penalizing deviations from the observed features $f(s_0)$, giving $R_{\text{final}}(s) = \lambda \theta_{\text{spec}}^T f(s) - \|f(s) - f(s_0)\|$. Note that for this baseline there is no $\theta_{\text{final}}$; the algorithm directly specifies a final reward $R_{\text{final}}$, rather than first specifying an inferred reward that is then added to $R_{\text{spec}}$.

Relative reachability policy $\pi_{\text{reachability}}$. Rather than just penalizing any deviations, we could instead only penalize those deviations that are impactful. For this, we look to the literature on low-impact AI systems for baselines. We use relative reachability (Krakovna et al., 2018) as our baseline. This is an approach to low-impact AI systems that considers a change to be negative when it decreases coverage, relative to what would have happened had the agent done nothing. Here, coverage is a measure of how easily states can be reached from the current state. We compare against the variant of relative reachability that uses undiscounted coverage and a baseline policy where the agent takes no-op actions, as in the original paper. Relative reachability requires known dynamics but not a handcoded featurization. A version of relative reachability that does make use of handcoded features instead of operating directly on states would behave similarly on our suite of environments. Like the deviation baseline, relative reachability directly specifies a final reward $R_{\text{final}}$.

Average features policy $\pi_{\text{features}}$. One way to think about $s_0$ is that it is a sample from the rollout of an optimal (or at least good) policy for $R^*$. As a result, it is likely that the features of $s_0$ are generally good, and so should be incentivized.
Since the reward takes the form $R(s) = \theta^T f(s)$, this suggests that we should set $\theta_H = \frac{f(s_0)}{\|f(s_0)\|}$, where the normalization simply puts the reward on a common scale. We call this the AverageFeatures baseline.

Analysis

We compare RLSP and Deep RLSP to our baselines under the assumption of known $s_{-T}$, because it makes it easier to analyze their properties. We consider the case of unknown $s_{-T}$ in Section 7.4. For Deep RLSP, in these stylized gridworlds, self-supervised learning should not be expected to learn the necessary features. For example, in the room with vase environment, the two door features are just particular locations, with no distinguishing features in the state that would allow self-supervised learning to identify these locations as important. So, when evaluating Deep RLSP, we run Algorithm 3 without the feature learning, and instead use the predefined feature function $f$ of the environments. We also use a mixture density network (MDN) (Bishop, 1994) for the inverse dynamics $\mathcal{T}^{-1}$, in order to allow the model to output probability distributions. Once we do this, Deep RLSP has the same behavior as RLSP, and so we only report results with RLSP below.

We summarize the results in Table 7.1, and show the environments and trajectories in Figure 7.1.

Table 7.1: Performance of algorithms on environments designed to test particular properties. ✓ = achieves the desired behavior; ✗ = does not; (✓)/(✗) = qualified result, see the per-environment analysis below; – = not discussed here.

                        Side effect   Env. effect   Implicit reward    Desirable side effect    Unseen effect
                        Room          Toy train     Apple collection   Batteries                Far away vase
                                                                       Easy        Hard
  pi_spec               ✗             ✗             ✗                  ✓           ✗             ✗
  pi_deviation          ✓             ✗             ✗                  (✓)         ✗             ✓
  pi_reachability       ✓             ✓             ✗                  (✓)         ✗             ✓
  pi_features           (✗)           (✗)           ✓                  (✓)         ✗             –
  RLSP                  ✓             ✓             ✓                  ✓           ✓             ✗

Side effects: Room with vase (Figure 7.1a). The room tests whether the robot can avoid breaking a vase as a side effect of going to the purple door. There are features for the number of broken vases, standing on a carpet, and each door location. $R_{\text{spec}}$ has weight 1 for the purple door feature, and 0 for all other weights.
As a result, it causes the agent to walk over the vase, breaking it. $R^*$ additionally has weight -1 for the broken-vases feature.

Since Alice didn't walk over the vase, RLSP infers a negative reward on broken vases, and a small positive reward on carpets (since paths to the top door usually involve carpets). So, $\pi_{\text{RLSP}}$ successfully avoids breaking the vase. The penalties also achieve the desired behavior: $\pi_{\text{deviation}}$ avoids breaking the vase since it would change the "number of broken vases" feature, while relative reachability avoids breaking the vase since doing so would result in all states with intact vases becoming unreachable.

Figure 7.1: Evaluation of RLSP on our environments. Silhouettes indicate the initial position of an object or agent, while filled-in versions indicate their positions after an agent has acted. The first row depicts the information given to RLSP ($s_{-T}$ and $s_0$). The second row shows the trajectory taken by the robot when following the policy $\pi_{\text{spec}}$ that is optimal for $R_{\text{spec}}$. The third row shows the trajectory taken when following the policy $\pi_{\text{RLSP}}$ that is optimal for $\theta_{\text{final}} = \theta_H + \lambda \theta_{\text{spec}}$, where $\theta_H$ is the reward inferred using RLSP. (a) Side effects: Room with vase. (b) Distinguishing environment effects: Toy train. (c) Implicit reward: Apple collection. (d) Desirable side effect: Batteries. (e) "Unseen" side effect: Room with far away vase.

$\pi_{\text{features}}$ fails to infer that breaking vases is bad, though this is due to a quirk in the feature encoding. In particular, the feature counts the number of broken vases, and so the inferred reward $\theta_H$ has a value of zero for this feature, effectively ignoring it. If we change the featurization to instead count the number of unbroken vases, then $\pi_{\text{features}}$ would likely get the right behavior.
Distinguishing environment effects: Toy train (Figure 7.1b). To test whether algorithms can distinguish between effects caused by the agent and effects caused by the environment, as suggested in Krakovna et al. (2018), we add a toy train that moves along a predefined track. The train breaks if the agent steps on it. We add a new feature indicating whether the train is broken, and new features for each possible train location.

Once again, $R_{\text{spec}}$ just has weight 1 on the purple door feature, and so the resulting policy ends up breaking the train. $R^*$ additionally has weight -1 on broken vases and trains. RLSP infers a negative reward on broken vases and broken trains, for the same reason as in the previous environment. It also infers not to put any weight on any particular train location, even though the location changes frequently, because it doesn't help explain $s_0$. As a result, $\pi_{\text{RLSP}}$ walks over a carpet, but not a vase or a train.

For the penalty-based algorithms, $\pi_{\text{deviation}}$ immediately breaks the train in order to prevent the train location features from changing. $\pi_{\text{reachability}}$, on the other hand, deduces that breaking the train is irreversible, and so correctly follows the same trajectory as $\pi_{\text{RLSP}}$. $\pi_{\text{features}}$ fails to infer that trains should not be broken, due to the same quirk in feature encoding as in the previous environment, and as a result breaks the train.

Implicit reward: Apple collection (Figure 7.1c). This environment tests whether the algorithms can learn tasks implicit in $s_0$. There are three trees that grow apples, as well as a basket for collecting apples, and the goal is for the robot to harvest apples. We have features for the number of apples in baskets, the number of apples on trees, whether the robot is carrying an apple, and each location that the agent could be in. $s_0$ has two apples in the basket, while $s_{-T}$ has none.
The specified reward Rspec is zero: the robot must infer the task from the observed state. The true task is to collect apples, and so R∗ has weight 1 on the number of apples in the basket, and no weight on any other feature. πspec is arbitrary, since every policy is optimal for the zero reward. A penalty-based algorithm can never get positive reward, because Rspec is always zero and penalties can only decrease reward. As a result, the optimal policy for both deviation and reachability is to do nothing, which achieves the maximum of zero reward. RLSP infers a positive reward on apples in baskets, a negative reward for apples on trees, and a small positive reward for carrying apples. Despite the spurious weights on other features, πRLSP harvests apples as desired, achieving the maximum possible true reward. features also correctly infers that it is good to place apples in the basket, but it also rewards the agent for staying in the original location. As a result, it avoids picking apples from the tree that is furthest away, and so it does not pick apples as effectively as πRLSP.

Desirable side effect: Batteries (Figure 7.1c). This environment tests whether the algorithms can tell when a side effect is allowed. We take the toy train environment, remove vases and carpets, and add batteries. The robot can pick up batteries and put them into the (now unbreakable) toy train, but the batteries are never replenished. If the train runs for 10 timesteps without a new battery, it stops operating. There are features for the number of batteries, whether the train is operational, each train location, and each door location. There are two batteries in s−T but only one in s0. The true reward R∗ places weight 1 on being at the purple door, as well as weight -1 on the train being out of power.
We consider two variants for the specified reward Rspec: an "easy" case, where Rspec is identical to the true reward R∗, and a "hard" case, where it only has weight 1 on being at the purple door (and no weight on keeping the train operational). Unsurprisingly, πspec succeeds at the easy case (where Rspec = R∗), and fails on the hard case by allowing the train to run out of power. Both deviation and reachability see the action of putting a battery in the train as a side effect to be penalized, and so neither can solve the hard case. In the easy case, they still penalize picking up the batteries, and so only solve the easy case if the penalty weight is small. RLSP sees that one battery is gone and that the train is operational, and infers that Alice wants the train to be operational and doesn't want batteries (since a preference against batteries and a preference for an operational train are nearly indistinguishable). So, it solves both the easy and the hard case, with πRLSP picking up the battery, then staying at the purple door except to deliver the battery to the train. features incorrectly infers that batteries should not be used up, since the number of batteries in the environment is positive and so we infer that batteries are good. It thus fails to solve the hard case, and only solves the easy case if λ is sufficiently small. Part of the issue here is that features is not invariant to the addition of a constant to any feature. We could fix this by instead setting θfeatures = f(s0) − f(s−T), treating f(s−T) as a baseline to which f(s0) should be compared, but this would make features reliant on knowledge of s−T (which we will not have in future sections) and would cause it to always fail the vase and toy train environments (whereas currently it depends on how the features are constructed).

"Unseen" side effect: Room with far away vase (Figure 7.1e). This environment demonstrates a limitation of our algorithm: it cannot identify side effects that Alice would never have triggered.
In this room, the vase is nowhere close to the shortest path from Alice's original position to her goal, but is on the path to the robot's goal. The features, specified reward and true reward are all the same as in the room with vase environment (Figure 7.1a). Since our baselines don't care about the trajectory the human takes, they all perform as before: πspec walks over the vase, deviation and reachability both avoid it, and features fails to avoid it but only due to a quirk of feature construction. RLSP infers a near-zero weight on the broken vase feature, since it is not present on any reasonable trajectory to the goal, and so breaks it when moving to the goal. Note that this only applies when Alice is known to be at the bottom left corner at s−T: if we have a uniform prior over s−T (considered in Section 7.4) then we do consider trajectories where vases are broken.

7.3 Skill learning with Deep RLSP

The apple collection environment (Figure 7.1d) demonstrates that at least in some cases it is possible to learn what to do from the state of the world, without needing a specified reward Rspec. In the case of that environment, given our choice of feature function, there was no possible reward function that would view s0 as uniquely optimal, which forced RLSP to infer a reward that would incentivize changes to the environment. In this section, we identify another domain in which changes to the environment are necessary: robot locomotion. In this domain, due to physical laws (particularly gravity), R usually does not have the ability to preserve the state as it is. As a result, any reward inferred from the state of the world must take into account that the state can and will change, suggesting that we may be able to infer what R should do, rather than just what it should not do. To test learning from the state of the world in this domain, we use the MuJoCo physics simulator (Todorov et al., 2012).
We consider the Inverted Pendulum, Half-Cheetah and Hopper environments implemented in OpenAI Gym (Brockman et al., 2016). Since these environments are continuous and high-dimensional, only Deep RLSP can be evaluated in this domain. In the inverted pendulum environment, the pendulum falls very quickly in random rollouts, and T−1 never learns what a balanced pendulum looks like. So, for this environment only, when constructing the initial dataset of environment interactions D, we combine random rollouts with rollouts from an expert policy that balances the pendulum.

Baselines. To our knowledge, this is the first work to train policies using a single state as input. Due to lack of alternatives, we compare against GAIL (Ho and Ermon, 2016) using the implementation from the imitation library (Wang et al., 2020). GAIL is an imitation learning algorithm, and thus requires transitions as inputs, rather than single states. For each state we provide to Deep RLSP, we provide a transition (s, a, s′) to GAIL. GAIL is thus given more information than Deep RLSP. Most of our previous baselines in Section 7.2 do not apply in this setting, either because they do not scale to high-dimensional environments, or because they depend on the presence of Rspec. In particular, the penalty-based deviation and relative reachability methods do not make sense as baselines, because the idea is to pursue some already specified behavior given by Rspec, but avoid having side effects via the use of a penalty. The only baseline that can still be applied is the AverageFeatures baseline, in which we set θH = f(s0)/‖f(s0)‖. Here, the function f must still be learned through self-supervised learning, but we can avoid learning (πH)−1 and T−1 entirely. This can be thought of as an ablation of Deep RLSP, in which we ignore all temporal information. In our experiments, we sometimes sample multiple states from rollouts of an expert policy, instead of just a single state.
In this case, for Deep RLSP, we simply average the gradients for each of the states individually (which can be thought of as a lower-variance estimator of the gradient), and we similarly use an average for the AverageFeatures baseline (which is why we call it the Average Features baseline). An alternative to averaging the features is to view each of the observed states s_0^i as a potential waypoint of the expert policy, and reward R for being near any one of them. We implement this Waypoints method as R(s) = max_i f(s_0^i)/‖f(s_0^i)‖ · f(s).

Environment (SAC return)    # states  Deep RLSP    AverageFeatures  Waypoints    GAIL
Inverted Pendulum (1000)    1         262 (246)    6 (2)            N/A          1000 (0)
                            10        605 (242)    3 (1)            4 (1)        1000 (0)
                            50        258 (198)    6 (4)            3.7 (0.3)    1000 (0)
Cheetah, forward (13236)    1         4833 (2975)  6466 (3343)      N/A          -288 (55)
                            10        6299 (559)   6245 (2352)      -10 (23)     -296 (172)
                            50        7657 (177)   4504 (2970)      -126 (38)    -54 (295)
Cheetah, backward (13361)   1         5694 (2513)  12443 (645)      N/A          -335 (46)
                            10        8102 (624)   12829 (651)      -80 (388)    -283 (45)
                            50        7795 (551)   11616 (178)      -509 (87)    2113 (1015)
Hopper, terminate (3274)    1         80 (21)      99 (45)          N/A          991 (9)
                            10        168 (58)     159 (126)        58 (7)       813 (200)
                            50        130 (12)     65 (36)          14 (4)       501 (227)
Hopper, penalty (3363)      1         1964 (545)   2537 (363)       N/A          990 (9)
                            10        564 (75)     3103 (64)        709 (133)    784 (229)
                            50        2130 (744)   2078 (581)       1612 (785)   508 (259)

Table 7.2: Average returns achieved by the policies learned through various methods, for different numbers of input states. The states are sampled from a policy trained using SAC on the true reward function; the return of that policy is given as a comparison. Besides the SAC policy return, all values are averaged over 3 seeds and the standard error is given in parentheses. We don't report Waypoints on 1 state as it is identical to AverageFeatures on 1 state.
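The AverageFeatures and Waypoints constructions can be sketched as follows. The feature function is replaced by hard-coded toy feature vectors, and the choice to normalize each observed state's features before averaging is an assumption of this illustration:

```python
import numpy as np

def average_features_reward(f_obs, f_s):
    """theta_H is the average of the normalized observed feature vectors."""
    theta_h = np.mean([f / np.linalg.norm(f) for f in f_obs], axis=0)
    return float(theta_h @ f_s)

def waypoints_reward(f_obs, f_s):
    """Reward for being near ANY single observed state (max over waypoints)."""
    return max(float((f / np.linalg.norm(f)) @ f_s) for f in f_obs)

# Two observed expert states with orthogonal (made-up) features:
f_obs = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
f_query = np.array([0.0, 1.0])  # a state matching the second observation

avg = average_features_reward(f_obs, f_query)  # 0.5: credit for the average direction
wp = waypoints_reward(f_obs, f_query)          # 1.0: full credit for one waypoint

# With a single observed state, the two constructions coincide:
assert average_features_reward(f_obs[:1], f_query) == waypoints_reward(f_obs[:1], f_query)
```

The difference only appears with multiple observed states: averaging blends them into one target direction, while the max treats each as an alternative goal.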
Note that when we only have a single state as input, this is equivalent to AverageFeatures, and so we do not report results for Waypoints with one state.

Solving the environments without access to the reward function. First we look at the typical target behavior in each environment: balancing the inverted pendulum, and making the half-cheetah and the hopper move forwards. Additionally we consider the goal of making the cheetah run backwards (that is, the negative of its usual reward function). We aim to use Deep RLSP to learn these behaviors without having access to the reward function. We train a policy using soft actor critic (SAC) (Haarnoja et al., 2018) to optimize for the true reward function, and sample either 1, 10 or 50 states from rollouts of this policy to use as input. We then use Deep RLSP to infer a reward and policy. Ideally we would evaluate this learned policy rather than reoptimizing the learned reward, since learned reward models can often be gamed (Stiennon et al., 2020), but it would be too computationally expensive to run the required number of SAC steps during each policy learning step. As a result, we run SAC for many more iterations on the inferred reward function, and evaluate the resulting policy on the true reward function (which Deep RLSP does not have access to). Results are shown in Table 7.2.

In Hopper, we noticed that videos of the policies learned by Deep RLSP looked okay, but the quantitative evaluation said otherwise. It turns out that the policies learned by Deep RLSP do jump, as we might want, but they often fall down, terminating the episode; in contrast GAIL policies stand still or fall over slowly, leading to later termination and explaining their better quantitative performance.
We wanted to also evaluate the policies without this termination bias, and so we evaluate the same policies in an environment that does not terminate the episode, but provides a negative reward instead; in this evaluation both Deep RLSP and AverageFeatures perform much better. We also provide videos of the learned policies at https://sites.google.com/view/deep-rlsp, which show that the policies learned by Deep RLSP do exhibit hopping behavior (though with a strong tendency to fall down).

GAIL is only able to learn a truly good policy for the (very simple) inverted pendulum, even though it gets states and actions as input. Deep RLSP on the other hand achieves reasonable behavior (though clearly not expert behavior) in all of the environments, using only a few states as input. Surprisingly, the AverageFeatures method also performs quite well, even beating the full algorithm on some tasks, though failing quite badly on Pendulum. It seems that the task of running forward or backward is very well specified by a single state, since it can be inferred even without any information about the dynamics (except that which is encoded in the features learned from the initial dataset).

Learning skills from a single state. We investigate to what extent Deep RLSP can learn other skills where the reward is not clear. Evaluation on these tasks is much harder, because there is no ground truth reward. Therefore we evaluate qualitatively how similar the policies learned by Deep RLSP are to the original skill. Unlike the previous case, we do not reoptimize the learned reward and only look at the policies learned by Deep RLSP. We consider skills learned by running Dynamics-Aware Unsupervised Discovery of Skills (DADS; Sharma et al., 2020). Since we are not interested in navigation, we remove the "x-y prior" used to get directional skills in DADS. We run DADS on the half-cheetah environment and select all skills that are not some form of running.
This resulted in two skills: one in which the cheetah is moving forward making big leaps ("jumping") and one in which it is slowly moving forward on one leg ("balancing"). As before we roll out these policies and sample individual states from the trajectories to provide as an input for Deep RLSP. We then evaluate the policy learned by Deep RLSP. Since the best evaluation here is to simply watch what the learned policy does, we provide videos of the learned policies at https://sites.google.com/view/deep-rlsp.

Figure 7.2: We sample a few states from a policy performing a specific skill to provide as input. Here, Deep RLSP learns to balance the cheetah on the front leg from a single state. We provide videos of the original skills and learned policies at: https://sites.google.com/view/deep-rlsp.

The first thing to notice is that relative to the ablations, only Deep RLSP is even close to imitating the skill. None of the other policies resemble the original skills at all. While AverageFeatures could perform well on simple tasks such as running, the full algorithm is crucial to imitate more complex behavior. Between Deep RLSP and GAIL the comparison is less clear. Deep RLSP can learn the balancing skill fairly well from a single state, which we visualize in Figure 7.2 (though we emphasize that the videos are much clearer). Like the original skill, the learned policy balances on one leg and slowly moves forward by jumping, though with slightly more erratic behavior. However, the learned policy sometimes drops back to its feet or falls over on its back. We suspect this is an artifact of the short horizon (T = 10) used for simulating the past in our algorithm. A small horizon is necessary to avoid compounding errors in the learned inverse dynamics model, but can cause the resulting behavior to be more unstable on timescales greater than T. We see similar behavior when given 10 or 50 states.
GAIL leads to a good policy given a single transition, where the cheetah balances on its front leg and head (rather than just the front leg), but does not move forward very much. However, with 10 or 50 transitions, the policies learned by GAIL do not look at all like balancing.

The jumping behavior is harder to learn, especially from a single state. We speculate that here a single state is less informative than the balancing state. In the balancing state, the low joint velocities tell us that the cheetah is not performing a flip, suggesting that we had optimized for this specific balancing state. On the other hand, with the jumping behavior, we only get a single state of the cheetah in the air with high velocity, which is likely not sufficient to determine what the jump looked like exactly. In line with this hypothesis, at 1 state Deep RLSP learns to erratically hop, at 10 states it executes a jump too vigorously and falls over, and at 50 states it makes larger hops (but still sometimes falls over). The GAIL policies for jumping are also reasonable, though in a different way that makes it hard to compare. Using 1 or 10 transitions, the policy doesn't move very much, staying in contact with the ground most of the time. However, at 50 transitions, it performs behaviors that are noticeably forward hops, without falling over, leading to a policy that looks better than that learned by Deep RLSP.

7.4 Investigating the prior distribution

In Section 7.2, we considered the setting where the robot knows s−T, since it is easier to understand and analyze what happens. However, typically we will not know s−T, and will instead have some prior over it. Here, we compare RLSP in two settings: perfect knowledge of s−T (as in Section 7.2), and no knowledge of s−T, which we represent by using a uniform prior distribution over all states.

Side effects: Room with vase (Figure 7.1a) and toy train (Figure 7.1b).
In both room with vase and toy train, RLSP learns a smaller negative reward on broken vases when using a uniform prior. This is because RLSP considers many more feasible trajectories when using a uniform prior, many of which do not give Alice a chance to break the vase, as in Room with far away vase in Section 7.2. In room with vase, the small positive reward on carpets changes to a near-zero negative reward on carpets. With known s−T, RLSP overfits to the few consistent trajectories, which usually go over carpets, whereas with a uniform prior it considers many more trajectories that often don't go over carpets, and so it correctly infers a near-zero weight. In toy train, the negative reward on broken trains becomes slightly more negative, while other features remain approximately the same. This may be because when Alice starts out closer to the toy train, she has more of an opportunity to break it, compared to the case where s−T is known.

Implicit preference: Apple collection (Figure 7.1d). Here, a uniform prior leads to a smaller positive weight on the number of apples in baskets compared to the case with known s−T. Intuitively, this is because RLSP is considering cases where s−T already has one or two apples in the basket, which implies that Alice has collected fewer apples and so must have been less interested in them. States where the basket starts with three or more apples are inconsistent with the observed s0 and so do not have an effect on the gradient. Following the inferred reward still leads to good apple harvesting behavior.

Desirable side effects: Batteries (Figure 7.1c). With the uniform prior, we see the same behavior as in Apple collection, where RLSP with a uniform prior learns a slightly smaller negative reward on the batteries, since it considers states s−T where the battery was already gone. In addition, due to the particular setup the battery must have been given to the train two timesteps prior, which means that in any state where the train started with very little charge, it was allowed to die even though a battery could have been provided before, leading to a near-zero positive weight on the train losing charge. Despite this, RLSP successfully delivers the battery to the train in both easy and hard cases.

"Unseen" side effect: Room with far away vase (Figure 7.1e). With a uniform prior, we "see" the side effect: if Alice started at the purple door, then the shortest trajectory to the black door would break a vase. As a result, RLSP successfully avoids the vase (whereas it previously did not).

[Figure 7.3 plots the fraction of the maximum R∗ obtained (y-axis, 0.6-1.0) against the assumed horizon T (x-axis: 1, 3, 10, 30, 100), with one curve per environment: train, room, batteries, apples.]

Figure 7.3: Reward achieved by RLSP, as a fraction of the expected reward of the optimal policy, for different values of Alice's planning horizon T.
Here, uncertainty over the initial state s−T can counterintuitively improve the results, because it increases the diversity of trajectories considered, which prevents RLSP from "overfitting" to the few trajectories consistent with a known s−T and s0. Overall, RLSP appears to be quite robust to the use of a uniform prior over s−T, suggesting that we do not need to be particularly careful in the design of that prior.

7.5 Robustness to H's planning horizon

We investigate how RLSP performs when assuming the wrong value of H's planning horizon T. We vary the value of T assumed by RLSP, and report the true return achieved by πRLSP obtained using the inferred reward and a fixed horizon for the robot to act. For this experiment, we used a uniform prior over s−T, since with known s−T, RLSP often detects that the given s−T and s0 are incompatible (when T is misspecified). The results are presented in Figure 7.3.

The performance worsens when RLSP assumes that Alice had a smaller planning horizon than she actually had. Intuitively, if we assume that Alice has only taken one or two actions ever, then even if we know the actions, they could have been in service of many goals, and so we end up quite uncertain about Alice's reward. For the Apple collection environment the underestimated horizon prevents RLSP from learning anything at all, since all very short trajectories consistent with s0 do not involve any apple collection.

When the assumed T is larger than the true horizon, RLSP correctly infers things the robot should not do. Knowing that the vase was not broken for longer than T timesteps is more evidence to suspect that Alice cared about not breaking the vase. However, overestimated T leads to worse performance at inferring implicit preferences, as in the Apples environment. If we assume Alice has only collected two apples in 100 timesteps, she must not have cared about them much, since she could have collected many more.
The batteries environment is unusual: assuming that Alice has been acting for 100 timesteps, the only explanation for the observed s0 is that Alice waited until the 98th timestep to put the battery into the train. This is not particularly consistent with any reward function, and performance degrades.

Overall, T is an important parameter and needs to be set appropriately. However, even when T is misspecified, performance tends to degrade gracefully to what would have happened if we optimized Rspec by itself, so RLSP does not hurt. In addition, if T is larger than it should be, then RLSP still tends to accurately infer parts of the reward that specify what not to do. While this evaluation showed that RLSP is reasonably robust to the choice of planning horizon T and prior over s−T, this may be specific to our gridworlds. In the real world, we often make long term hierarchical plans, and if we don't observe the entire plan (corresponding to a choice of T that is too small) it seems possible that we infer bad rewards, especially if we have an uninformative prior over s−T. We do not know whether this will be a problem, and if so how bad it will be, and hope to investigate it in future work with more realistic environments.

7.6 Conflicts between the inferred reward and H's desires

In Section 7.2, we evaluated RLSP by combining the reward it infers with a specified reward to get a final reward θfinal = θH + λθspec. The problem of combining θH and θspec is difficult, since the two rewards incentivize different behaviors and will conflict. The Additive method above is a simple way of trading off between the two. Both RLSP and the sampling algorithm of Appendix 5.1 can incorporate a prior over θ. Another way to combine the two rewards is to condition the prior on θspec before running the algorithms. In particular, we could replace our prior P(θH) with a new prior P(θH | θspec), such as a Gaussian distribution centered at θspec. When we use this prior, the reward returned by RLSP can be used as the final reward θfinal.
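To make the two combination rules concrete, here is a toy closed-form comparison. The RLSP likelihood is not actually Gaussian; pretending it is (with a made-up MLE θmle, unit likelihood variance, σ = 1, and λ = 1) just shows how the two rules treat θspec differently:

```python
import numpy as np

def gaussian_map(theta_mle, prior_mean, sigma):
    """MAP estimate for a N(theta_mle, I) likelihood with N(prior_mean, sigma^2 I) prior."""
    w = 1.0 / sigma**2
    return (theta_mle + w * prior_mean) / (1.0 + w)

theta_mle = np.array([0.0, -1.0])   # hypothetical weights the data favors: (door, broken vase)
theta_spec = np.array([1.0, 0.0])   # specified reward: weight 1 on reaching the door

# Additive: infer theta_H under a zero-centered prior, then add lambda * theta_spec.
theta_additive = gaussian_map(theta_mle, np.zeros(2), sigma=1.0) + 1.0 * theta_spec

# Bayesian: center the prior at theta_spec; the MAP itself is theta_final.
theta_bayesian = gaussian_map(theta_mle, theta_spec, sigma=1.0)

# theta_additive = [1.0, -0.5]; theta_bayesian = [0.5, -0.5].
# Both keep the negative vase weight; the Bayesian prior also shrinks the
# specified door weight, while the Additive rule preserves it in full.
```

In both rules, sigma controls how strongly the prior pulls the inferred reward toward its mean, which is the tradeoff parameter varied in the robustness experiment below.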
It might seem like this is a principled Bayesian method that allows us to combine the two rewards. However, the conflict between the two reward functions still exists. In this formulation, it arises in the new prior P(θH | θspec). Modeling this as a Gaussian centered at θspec suggests that before knowing s0, it seems likely that θH is very similar to θspec. However, this is not true: Alice is probably providing the reward θspec to the robot so that it causes some change to the state that she has optimized, and so it will be predictably different from θspec. On the other hand, we do need to put high probability on θspec, since otherwise θfinal will not incentivize any of the behaviors that θspec did. Nonetheless, this is another simple heuristic for how we might combine the two rewards, that manages the tradeoff between Rspec and RH.

We compared the Additive and Bayesian methods by evaluating their robustness. We vary the parameter that controls the tradeoff and report the true reward obtained by πRLSP, as a fraction of the expected true reward under the optimal policy. For the Bayesian method, we vary the standard deviation of the Gaussian prior over RH that is centered at Rspec. For the Additive method, the natural choice would be to vary λ; however, in order to make the results more comparable, we instead set λ = 1 and vary the standard deviation of the Gaussian prior used while inferring RH, which is centered at zero instead of at Rspec. A larger standard deviation allows RH to become larger in magnitude (since it is penalized less for deviating from the mean of zero reward), which effectively corresponds to a smaller λ. While we typically create πRLSP using value iteration, this leads to deterministic policies with very sharp changes in behavior that make it hard to see differences between methods, and so we also show results with soft value iteration, which creates stochastic policies that vary more continuously.

[Figure 7.4, "Comparison of the methods for combining θspec and θH", contains three panels (softmax temperature = 0, 0.1, and 1), each plotting the fraction of the maximum R∗ obtained (y-axis, 0.6-1.0) against the standard deviation of the prior (x-axis: 0.05, 0.2, 0.5, 2, 5), with curves for the Additive and Bayesian methods on the batteries, room, and train environments.]

Figure 7.4: Comparison of the Additive and Bayesian methods. We show how the percentage of true reward obtained by RLSP varies as we change the tradeoff between the inferred reward RH and the specified reward Rspec. The zero temperature case corresponds to traditional value iteration; this often leads to identical behavior and so the lines overlap. So, we also show the results when planning with soft value iteration, varying the softmax temperature, to introduce some noise into the policy. Overall, there is not much difference between the two methods. We did not include the Apples environment because Rspec is uniformly zero and the Additive and Bayesian methods do exactly the same thing.
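Soft value iteration replaces the max in the Bellman backup with a temperature-weighted log-sum-exp, yielding a softmax policy. A minimal sketch on a made-up two-state, two-action MDP (the environment, rewards, and discount here are all hypothetical):

```python
import numpy as np

def soft_value_iteration(R, P, gamma, temperature, iters=500):
    """R[s, a]: reward; P[s, a, s']: transition probabilities.

    temperature = 0 recovers standard value iteration (deterministic greedy
    policy); higher temperatures yield increasingly stochastic policies.
    """
    n_states, n_actions = R.shape
    V = np.zeros(n_states)
    for _ in range(iters):
        Q = R + gamma * P @ V  # Q[s, a]
        if temperature == 0:
            V = Q.max(axis=1)  # hard Bellman backup
        else:
            V = temperature * np.log(np.exp(Q / temperature).sum(axis=1))
    if temperature == 0:
        policy = np.eye(n_actions)[Q.argmax(axis=1)]
    else:
        policy = np.exp((Q - V[:, None]) / temperature)
        policy /= policy.sum(axis=1, keepdims=True)
    return V, policy

R = np.array([[1.0, 0.0], [0.0, 0.5]])  # action 0 pays off in state 0
P = np.zeros((2, 2, 2))
P[:, 0, 0] = 1.0  # action 0 always leads to state 0
P[:, 1, 1] = 1.0  # action 1 always leads to state 1

V_hard, pi_hard = soft_value_iteration(R, P, gamma=0.9, temperature=0)
V_soft, pi_soft = soft_value_iteration(R, P, gamma=0.9, temperature=1.0)
# pi_hard is deterministic (always action 0); pi_soft puts positive
# probability on both actions while still favoring action 0.
```

Raising the temperature is what smooths out the otherwise identical deterministic policies in the comparison above.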
As demonstrated in Figure 7.4, our experiments show that overall the two methods perform very similarly, with some evidence that the Additive method is slightly more robust. The Additive method also has the benefit that it can be applied in situations where the inferred reward and specified reward are over different feature spaces, by creating the final reward Rfinal(s) = θH⊤fH(s) + Rspec(s). However, both of these methods are quite unprincipled, and do not resolve the conflict between the two goals. The presence of this conflict is worrying: it should not be the case that two different sources of information about the same underlying reward function predictably conflict; the fact that this is happening suggests that we have a poorly specified model somewhere. The problem is that we are implicitly treating Rspec incorrectly: both the Additive and Bayesian methods implicitly suggest that the true reward must be "near" Rspec, but this is not actually the case: we believe that Rspec will capture some aspects of good behavior, but completely neglect other aspects; this need not correspond to low L2 distance in parameter space.

We could try to improve upon our assumptions about what information Rspec provides, in order to resolve the conflict. For example, rather than interpreting θspec as the parameters of a reward function, we could interpret it as a changelist: that is, it is a specification of how the environment should be changed, relative to the current state of the environment. In this case, in our vase environment, θspec would be interpreted as saying "all else equal, it is better for R to be at the purple door than at its current location", without making any claims about what is preferred when all else is not equal (Boutilier et al., 2004). This information could then be integrated with the inferred reward to provide a full reward function for R.
Taking a step back, we can see that in general, the inferred reward does not have enough information about what R should do, and so it must be combined with some information from H. In our evaluation so far, we have considered the additional information to come from θ_spec, but this is not necessary: it could just as well come from preferences, demonstrations, corrections, or natural language, for example. Thus, what we really want is a general formalism that allows us to infer rewards from the state of the world, combine them with information provided by H online, and then use the resulting information to act in the environment. We tackle this problem in the next chapter.

Chapter 8. Using assistance instead of reward learning to integrate information

A central premise of this dissertation is that we cannot specify a perfect reward function for powerful agents. A natural solution is to have agents be uncertain about the objective and infer it from observations. While we have shown that some of this information can be extracted from the state of the world, it also seems likely that some information will need to come from human feedback. So far we have implicitly been using the reward learning paradigm to combine reward information from multiple sources, as in Chapter 7. In this chapter, we recast the entire reward learning paradigm as a special, constrained case of the assistance paradigm, and demonstrate that by removing these constraints we can gain significant qualitative benefits. We then show how to recast learning from the state of the world in the assistance paradigm to gain these benefits. In the reward learning paradigm (Leike et al., 2018; Jeon et al., 2020; Christiano et al., 2017; Ziebart et al., 2010), a reward model is learned from human feedback, and then used by a control algorithm to select actions in the environment.
Crucially, the control algorithm and the reward learning process do not interact: they are separate processes that optimize separate objectives. When learning the reward model, multiple forms of information can be integrated into a single posterior over reward functions. In contrast, in the assistance paradigm (Hadfield-Menell et al., 2016; Fern et al., 2014), the human H is modeled as part of the environment and as having some latent goal that the agent R (for robot) does not know. R's goal is to maximize this (unknown) human goal. In this formulation, R must balance between actions that help it learn about the unknown goal and control actions that lead to high reward. A single policy is responsible both for learning about the goal (to the extent necessary) and for acting in pursuit of the goal. We claim that the assistance paradigm has several qualitative advantages over the reward learning paradigm. The goal of this chapter is to clarify and illustrate these advantages. We then show how to formalize state-of-the-world learning within the assistance paradigm in Section 8.6.

Figure 8.1: R must cook a pie for H, by placing flour to make the pie dough, filling it with Apple, Blueberry, or Cherry filling, and finally baking it. However, R does not know which filling H prefers, and H is not yet available for questions. What should R do in this situation? On the right, we show the qualitative reasoning we might want R to use to handle the situation: learning about the reward, making robust plans, preserving option value when possible, and guessing when feedback is unavailable.
Our key insight is that by integrating reward learning and control using the learned reward into a single policy, the assistance formulation allows reward learning to depend on how the reward will be used, and vice versa. This integration enables new, desirable qualitative behaviors:

1. Decision-making for "control" can take into account the fact that the agent can do more "reward learning" in the future. This allows the agent to take actions that are robustly instrumentally useful now, knowing that the intermediate results can be used later once the reward is better understood (Section 8.4).

2. Decision-making for "reward learning" (e.g. which questions to ask the human) can take into account how the resulting updates to the reward will be used for "control". This allows the agent to only ask questions whose answers are decision-relevant (Section 8.4).

Consider for example the kitchen environment illustrated in Figure 8.1, in which R must bake a pie for H. R is uncertain about which type of pie H prefers to have, and currently H is at work and cannot answer R's questions. An assistive R can make the pie crust while waiting for H to return, after which R can ask her about her preferences over the filling (Section 8.4). R may never clarify all of H's preferences: for example, R only needs to know how to dispose of food if it turns out that the ingredients have gone bad (Section 8.4). If H will help with making the pie, R can allow H to disambiguate her desired pie by watching what filling she chooses (Section 8.4). These behaviors rely crucially on the integration of reward learning and control into a single policy, and as such vanilla reward learning agents do not show these behaviors. To clarify and illustrate the advantages of the assistance paradigm, we first precisely characterize the differences between reward learning and assistance, by showing that two-phase, communicative assistance is equivalent to reward learning (Section 8.3).
We then give qualitative examples of desirable behaviors that can only be expressed once these restrictions are lifted, and thus are only exhibited by assistive agents (Section 8.4). We do not mean to suggest that work on reward learning should be replaced by research on assistance. Amongst other limitations, assistive agents are very computationally complex. Our goal is simply to clarify what qualitative benefits an assistive formulation could theoretically provide. Further research is needed to develop efficient algorithms that can capture these benefits. Such algorithms may look like algorithms designed to solve assistance problems as we have formalized them here, but they may also look like modified variants of reward learning, where the modifications are designed to provide the qualitative benefits we identify.

8.1 Reward learning

We consider two variants of reward learning: non-active reward learning, in which R must infer the reward by observing H's behavior, and active reward learning, in which R may choose particular questions to ask H in order to get particular feedback. A non-active reward learning problem P = ⟨M_nr, C, ⟨Θ, r, P_θ⟩, π^H, k⟩ contains a POMDP without reward M_nr = ⟨S, A^R, Ω^R, O^R, T, P_0, γ⟩; instead, R has access to a parameterized reward space ⟨Θ, r, P_θ⟩. R is able to learn about θ by observing H make k different choices c, each chosen from a set of potential choices C. In order for R to learn from the human's choices, it also assumes access to the human decision function π^H(c | θ) that determines how the human makes choices for different possible reward functions r_θ. Common decision functions include perfect optimality (Ng and Russell, 2000) and Boltzmann rationality (Ziebart et al., 2010).
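As a toy illustration of a human decision function, a Boltzmann-rational π^H(c | θ) over a finite choice set can be sketched as follows; the choices and choice-reward function here are made up:

```python
import math

def boltzmann_decision(choices, reward_of_choice, theta, beta=1.0):
    """pi^H(c | theta) proportional to exp(beta * reward of c under theta).

    As beta grows, this approaches the perfectly optimal decision function;
    as beta shrinks toward 0, choices become uniformly random.
    """
    scores = [math.exp(beta * reward_of_choice(c, theta)) for c in choices]
    Z = sum(scores)
    return {c: s / Z for c, s in zip(choices, scores)}

# Hypothetical example: two choices whose reward depends on theta.
probs = boltzmann_decision(
    choices=["left", "right"],
    reward_of_choice=lambda c, th: th if c == "left" else -th,
    theta=1.0,
    beta=2.0,
)
```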
As discussed in Section 2.4, there are many types of choices (Jeon et al., 2020), including demonstrations (Argall et al., 2009; Ng and Russell, 2000; Ziebart et al., 2010; Fu et al., 2017; Gao et al., 2012), comparisons (Zhang et al., 2017; Wirth et al., 2017; Christiano et al., 2017; Sadigh et al., 2017), corrections (Bajcsy et al., 2017), proxy rewards (Hadfield-Menell et al., 2017b), natural language (Fu et al., 2019), etc. A policy decision function f(c_{0:k−1}) produces a policy π^R after observing H's choices. f is a solution if it maximizes expected reward E_{θ∼P_θ, c_{0:k−1}∼π^H}[ER_θ(f(c_{0:k−1}))]. Since H's choices c_{0:k−1} do not affect the state of the environment that R is acting in, this is equivalent to choosing the π^R that maximizes expected reward given the posterior over reward functions, that is, E_{P(θ | c_{0:k−1})}[ER_θ(π^R)]. An active reward learning problem P = ⟨M_nr, Q, C, ⟨Θ, r, P_θ⟩, π^H, k⟩ adds the ability for R to ask H particular questions q ∈ Q in order to get more targeted feedback about θ. The human decision function π^H(c | q, θ) now depends on the question asked. A solution consists of a question policy π^R_Q(q_i | q_{0:i−1}, c_{0:i−1}) and a policy decision function f(q_{0:k−1}, c_{0:k−1}) that maximize expected reward E_{θ∼P_θ, q_{0:k−1}∼π^R_Q, c_{0:k−1}∼π^H}[ER_θ(f(q_{0:k−1}, c_{0:k−1}))]. A typical algorithm (Eric et al., 2008; Daniel et al., 2014; Maystre and Grossglauser, 2017; Christiano et al., 2017; Sadigh et al., 2017; Zhang et al., 2017; Wilde et al., 2020) will compute and ask the q ∈ Q that maximizes an active learning criterion such as information gain (Bıyık et al., 2019) or volume removal (Sadigh et al., 2017). Best results are achieved by selecting questions with the highest value of information (Cohn, 2016; Zhang et al., 2017; Mindermann et al., 2018; Wilde et al., 2020), but these are usually much more computationally expensive. R then finds a policy that maximizes expected reward under the inferred distribution over θ, in order to approximately solve the original POMDP.
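A greedy information-gain criterion of the kind these algorithms use can be sketched over a discrete posterior; the questions and likelihood tables below are toy stand-ins, not any published benchmark:

```python
import math

def entropy(p):
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

def expected_info_gain(prior, likelihood):
    """Expected reduction in entropy over theta from asking one question.

    prior      : P(theta_i), a list over discrete reward parameters
    likelihood : likelihood[c][i] = P(H answers c | question, theta_i)
    """
    gain = 0.0
    for lik_c in likelihood:
        p_c = sum(p * l for p, l in zip(prior, lik_c))   # P(answer = c)
        if p_c == 0:
            continue
        posterior = [p * l / p_c for p, l in zip(prior, lik_c)]
        gain += p_c * (entropy(prior) - entropy(posterior))
    return gain

def best_question(prior, questions):
    """questions: dict mapping question name -> likelihood table."""
    return max(questions, key=lambda q: expected_info_gain(prior, questions[q]))

# Toy example: two equally likely reward parameters. Question "a" fully
# distinguishes them; question "b" is uninformative.
prior = [0.5, 0.5]
questions = {
    "a": [[1.0, 0.0], [0.0, 1.0]],   # the answer reveals theta exactly
    "b": [[0.5, 0.5], [0.5, 0.5]],   # the answer is independent of theta
}
```

Note that this criterion scores questions purely by uncertainty reduction; scoring by value of information instead would additionally ask whether the answer changes the optimal policy, which is the distinction the text draws.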
Note that a non-active reward learning problem is equivalent to an active reward learning problem with only one question, since having just a single question means that R has no choice in what feedback to get:

Proposition 7. Every non-active reward learning problem ⟨M_nr, C, ⟨Θ, r, P_θ⟩, π^H, k⟩ can be reduced to an active reward learning problem.

Proof. We construct the active reward learning problem as ⟨M_nr, Q', C, ⟨Θ, r, P_θ⟩, π^H', k⟩, where Q' ≜ {q} for some dummy question q, and π^H'(c | q, θ) ≜ π^H(c | θ). Suppose the solution to the new problem is ⟨π^R'_Q, f'⟩. Since f' is a solution, we have:

f' = argmax_f̂ E_{θ∼P_θ, q_{0:k−1}∼π^R'_Q, c_{0:k−1}∼π^H'(·|q_i,θ)}[ER_θ(f̂(q_{0:k−1}, c_{0:k−1}))]
   = argmax_f̂ E_{θ∼P_θ, q_{0:k−1}=q, c_{0:k−1}∼π^H'(·|q,θ)}[ER_θ(f̂(q_{0:k−1}=q, c_{0:k−1}))]    (all q_i are q)
   = argmax_f̂ E_{θ∼P_θ, c_{0:k−1}∼π^H(·|θ)}[ER_θ(f̂(q_{0:k−1}=q, c_{0:k−1}))].

Thus f(c_{0:k−1}) = f'(q_{0:k−1}=q, c_{0:k−1}) is a maximizer of E_{θ∼P_θ, c_{0:k−1}∼π^H(·|θ)}[ER_θ(f̂(c_{0:k−1}))], making it a solution to our original problem. ∎

Proposition 8. Every active reward learning problem ⟨M_nr, Q, C, ⟨Θ, r, P_θ⟩, π^H, k⟩ with |Q| = 1 can be reduced to a non-active reward learning problem.

Proof. Let the sole question in Q be q. We construct the non-active reward learning problem as ⟨M_nr, C, ⟨Θ, r, P_θ⟩, π^H', k⟩, with π^H'(c | θ) = π^H(c | q, θ). Suppose the solution to the new problem is f'. Then we can construct a solution to the original problem as follows. First, note that π^R_Q must be π^R_Q(q_i | q_{0:i−1}, c_{0:i−1}) = 1[q_i = q], since there is only one possible question q. Then by inverting the steps in the proof of Proposition 7, we can see that f' is a maximizer of E_{θ∼P_θ, q_{0:k−1}∼π^R_Q, c_{0:k−1}∼π^H(·|q_i,θ)}[ER_θ(f̂(c_{0:k−1}))]. Thus, by defining f(q_{0:k−1}, c_{0:k−1}) = f'(c_{0:k−1}), we get a maximizer to our original problem, making ⟨π^R_Q, f⟩ a solution to the original problem. ∎

8.2 Assistance

The key idea of assistance is that helpful behaviors like reward learning are incentivized when R does not know the true reward r_θ and can only learn about it by observing human behavior. So, we model the human H as part of the environment, leading to a two-agent
POMDP, and assume there is some true reward r_θ that only H has access to, while the robot R only has access to a model relating r_θ to H's behavior. Intuitively, as R acts in the environment, it will also observe H's behavior, which it can use to make inferences about the true reward. Following Hadfield-Menell et al. (2016)¹, we define an assistance game M as a tuple

M = ⟨S, {A^H, A^R}, {Ω^H, Ω^R}, {O^H, O^R}, T, P_S, γ, ⟨Θ, r, P_θ⟩⟩.

Here S is a finite set of states, A^H a finite set of actions for H, Ω^H a finite set of observations for H, and O^H : S → Δ(Ω^H) an observation function for H (respectively A^R, Ω^R, O^R for R). The transition function T : S × A^H × A^R → Δ(S) gives the probability over next states given the current state and both actions. The initial state is sampled from P_S ∈ Δ(S). Θ is a set of possible reward function parameters which parameterize a class of reward functions r_θ : S × A^H × A^R × S → ℝ, and P_θ is the distribution from which θ is sampled. γ ∈ (0, 1) is a discount factor. As with POMDPs, policies can depend on history. Both H and R are able to observe each other's actions, and on a given timestep, R acts before H. We use τ^R_t ∈ (Ω^R × A^H × A^R)^t to denote R's observations until time t, and τ^H_t for H's observations; thus R's policy can be written as π^R(a^R | o^R_t, τ^R_{t−1}), while H's can be written as π^H(a^H | o^H_t, a^R_t, τ^H_{t−1}, θ). Note that unlike H, R does not observe the reward parameter θ, and must infer it much like it does the hidden state. A fully observable assistance game is one in which both H and R can observe the full state. In such cases, we omit Ω^H, Ω^R, O^H and O^R. Since we have not yet specified how H behaves, it is not clear what the agent should optimize for. Should it be playing a Nash strategy or optimal strategy pair of the game, and if so, which one? Should it use a non-equilibrium policy, since humans likely do not use equilibrium strategies? This is a key hyperparameter in assistance games, as it determines the communication protocol for H and R.
For maximum generality, we can equip the assistance game with a policy-conditioned belief B : Π^R → Δ(Π^H) over π^H, which specifies how the human responds to the agent's choice of policy (Halpern and Pass, 2018). The agent's goal is to maximize expected reward given this belief. We use policy-conditioned beliefs, as opposed to a simple unconditional distribution over human policies, because they allow us to model a wide range of situations, including situations with prior coordination, or where humans adapt to the robot's policy as a result of prior interactions. Prior work on assistance games (Hadfield-Menell et al., 2016; Malik et al., 2018; Woodward et al., 2019) focuses on finding optimal strategy pairs. This corresponds to a belief that H will know and perfectly adapt to R's policy, as formalized below:

¹ Relative to Hadfield-Menell et al. (2016), our definition allows for partial observability and requires that the initial distributions over S and Θ be independent. We also have H choose her action sequentially after R, rather than simultaneously with R, in order to better parallel the reward learning setting.

Proposition 9. Let M = ⟨S, {A^H, A^R}, {Ω^H, Ω^R}, {O^H, O^R}, T, P_S, γ, ⟨Θ, r, P_θ⟩⟩ be an assistance game. Let B(π^R)(π^H) ∝ 1[E[JR(π^H, π^R)] = max_{π̃^H ∈ Π^H} E[JR(π̃^H, π^R)]] be an associated policy-conditioned belief. Let π^R* be the solution to ⟨M, B⟩. Then ⟨B(π^R*), π^R*⟩ is an optimal strategy pair.

Proof. Let ⟨π^H, π^R⟩ be an arbitrary strategy pair. Then E[JR(π^H, π^R)] ≤ E[JR(B(π^R), π^R)] by the definition of B, and E[JR(B(π^R), π^R)] ≤ E[JR(B(π^R*), π^R*)] by the definition of π^R*. Thus E[JR(π^H, π^R)] ≤ E[JR(B(π^R*), π^R*)]. Since ⟨π^H, π^R⟩ was assumed to be arbitrary, ⟨B(π^R*), π^R*⟩ is an optimal strategy pair. ∎

However, our goal is to compare assistance to reward learning. Typical reward learning algorithms assume access to a model of human decision-making: for example, H might be modeled as optimal (Ng and Russell, 2000) or Boltzmann-rational (Ziebart et al., 2010).
As a result, we also assume that we have access to a model of human decision-making π^H. Note that π^H depends on θ: we are effectively assuming that we know how H chooses how to behave given a particular reward r_θ. This assumption corresponds to the policy-conditioned belief B(π^R)(π̃^H) = 1[π̃^H = π^H]. We define an assistance problem P as a pair ⟨M, π^H⟩ where π^H is a human policy for the assistance game M. Given an assistance problem, a robot policy π^R induces a probability distribution over trajectories: τ ∼ ⟨s_0, θ, π^H, π^R⟩, τ ∈ [S × A^H × A^R]* (where X* denotes a sequence of X). We denote the support of this distribution by Traj(π^R). The expected reward of a robot policy for ⟨M, π^H⟩ is given by

ER(π^R) = E_{s_0∼P_S, θ∼P_θ, τ∼⟨s_0, θ, π^H, π^R⟩}[ Σ_{t=0}^∞ γ^t r_θ(s_t, a^H_t, a^R_t, s_{t+1}) ].

A solution of ⟨M, π^H⟩ is a robot policy that maximizes expected reward: π^R* = argmax_{π̃^R} ER(π̃^R).

Solving assistance problems. Once π^H is given, H can be thought of as an aspect of the environment, and θ can be thought of as a particularly useful piece of information for estimating how good actions are. This suggests that we can reduce the assistance problem to an equivalent POMDP, and then solve the POMDP. Following Desai (2017), the key idea is to embed π^H in the transition function T and embed θ in the state. It turns out we must also include the human's previous action a^H in the state, so that the robot can observe it, and so that the reward can be computed. In theory, to embed a potentially non-Markovian π^H in T, we need to embed the entire history of the trajectory in the state, but this leads to extremely large POMDPs. In our experiments, we only consider Markovian human policies, for which we do not need to embed the full history, keeping the state space manageable. Thus, the policy can be written as π^H(a^H | o^H, a^R, θ). To ensure that R must infer θ from human behavior, as in the original assistance game, the observation function does not reveal θ, but does reveal the previous human action a^H.
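This recipe can be sketched in code for a fully observable assistance problem with a Markovian π^H; the environment, human policy, and reward below are toy placeholders, and the sketch builds only the augmented transition and reward functions, not the full POMDP:

```python
import itertools

def reduce_to_pomdp(S, AH, Thetas, T, pi_H, r):
    """Embed theta and the previous human action into the state.

    T(s2, s1, aH, aR)      -> probability   (original transition)
    pi_H(aH, s, aR, theta) -> probability   (Markovian, fully observed human)
    r(s1, aH, aR, s2, th)  -> reward        (original parameterized reward)
    Returns augmented states (s, aH_prev, theta) with T_aug and r_aug.
    """
    def T_aug(sp, s_aug, aR):
        # The human acts after the robot, so the new human action aH1 is
        # sampled from pi_H and recorded in the next augmented state.
        (s2, aH1, th2), (s1, _aH0, th1) = sp, s_aug
        if th2 != th1:                      # theta never changes
            return 0.0
        return T(s2, s1, aH1, aR) * pi_H(aH1, s1, aR, th1)

    def r_aug(s_aug1, aR, s_aug2):
        (s1, _aH0, th), (s2, aH1, _th) = s_aug1, s_aug2
        return r(s1, aH1, aR, s2, th)

    states_aug = list(itertools.product(S, AH, Thetas))
    return states_aug, T_aug, r_aug

# Toy example: the state never changes, H flips a fair coin between two
# actions, and there is a single theta value.
S, AH, Thetas = [0, 1], ["noop", "go"], ["th"]
T = lambda s2, s1, aH, aR: 1.0 if s2 == s1 else 0.0
pi_H = lambda aH, s, aR, th: 0.5
r = lambda s1, aH, aR, s2, th: 1.0 if aH == "go" else 0.0

states_aug, T_aug, r_aug = reduce_to_pomdp(S, AH, Thetas, T, pi_H, r)
```

In the partially observed version, the robot's observation would additionally expose aH_prev but hide theta, which is what forces inference from human behavior.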
Formally, suppose that we have an assistance problem ⟨M, π^H⟩ with

M = ⟨S, {A^H, A^R}, {Ω^H, Ω^R}, {O^H, O^R}, T, P_S, γ, ⟨Θ, r, P_θ⟩⟩.

Then, the transformation M ↦ M' is given as follows:

S' ≜ S × A^H × Θ                                                       (state space)
Ω' ≜ Ω^R × A^H                                                         (observation space)
O'(o' | s') = O'((o^R, a^H_1) | (s, a^H_2, θ)) ≜ 1[a^H_1 = a^H_2] · O^R(o^R | s)      (observation function)
T'(s'_2 | s'_1, a^R) = T'((s_2, a^H_1, θ_2) | (s_1, a^H_0, θ_1), a^R)                  (transition function)
    ≜ T(s_2 | s_1, a^H_1, a^R) · 1[θ_2 = θ_1] · Σ_{o^H ∈ Ω^H} O^H(o^H | s_1) π^H(a^H_1 | o^H, a^R, θ_1)
r'(s'_1, a^R, s'_2) = r'((s_1, a^H_0, θ), a^R, (s_2, a^H_1, θ)) ≜ r_θ(s_1, a^H_1, a^R, s_2)    (reward function)
P'_0(s') = P'_0((s, a^H, θ)) ≜ P_S(s) · P_θ(θ) · 1[a^H = a^H_init], where a^H_init is arbitrary    (initial state distribution)

In the case where the original assistance problem is fully observable, the resulting POMDP is an instance of a Bayes-Adaptive MDP (Martin, 1967; Duff, 2002). Any robot policy π^R can be translated from the APOMDP M naturally into an identical policy on M'. Note that in either case, policies are mappings from (Ω^R × A^H × A^R)* × Ω^R to Δ(A^R). This transformation preserves optimal agent policies:

Proposition 10. A policy π^R is a solution of M if and only if it is a solution of M'.

Proof. Recall that an optimal policy π in the POMDP M' is one that maximizes the expected value:

EV(π) = E_{s'_0∼P'_0, τ'∼⟨s'_0, π⟩}[ Σ_{t=0}^∞ γ^t r'(s'_t, a_t, s'_{t+1}) ]
      = E_{s'_0∼P'_0, τ'∼⟨s'_0, π⟩}[ Σ_{t=0}^∞ γ^t r_θ(s_t, a^H_t, a_t, s_{t+1}) ],

where the trajectories τ' are sequences of state-action pairs drawn from the distribution induced by the policy π, starting from state s'_0. Similarly, an optimal robot policy π^R in the APOMDP M is one that maximizes its expected reward:

ER(π^R) = E_{s_0∼P_S, θ∼P_θ, τ∼⟨s_0, θ, π^H, π^R⟩}[ Σ_{t=0}^∞ γ^t r_θ(s_t, a^H_t, a^R_t, s_{t+1}) ].

To show that the optimal policies coincide, it suffices to show that for any π, ER(π) (in M) is equal to EV(π) (in M'). To do this, we will show that π induces the "same" distributions over trajectories. For mathematical convenience, we will abuse notation and consider trajectories of the form τ_θ ∈ (S × A^H × A^R)* × Θ; it is easy to translate trajectories of this form to trajectories in either M' or M.
We will show that the sequence τ_θ has the same probability when the robot follows the policy π in both M' and M, by induction on the length of the sequence. First, consider the case of length-1 sequences, τ_θ = [(s, a^R, a^H), θ]. Under both M' and M, s and θ are drawn from P_S and P_θ respectively. Similarly, a^R and a^H are drawn from π(· | o^R_0) and π^H(· | o^H, a^R, θ) respectively. So the distribution of length-1 sequences is the same under both M' and M. Now, consider some longer sequence τ_θ = [(s_1, a^R_1, a^H_1), ..., (s_t, a^R_t, a^H_t), θ]. By the inductive hypothesis, the distributions of (s_1, a^H_1, a^R_1), ..., (s_{t−1}, a^H_{t−1}, a^R_{t−1}) and θ are identical; it suffices to show that (s_t, a^H_t, a^R_t) has the same distribution, conditioned on the other parts of τ_θ, under M' and under M. Yet by construction, s_t is drawn from the same distribution T(· | s_{t−1}, a^H_{t−1}, a^R_{t−1}), a^H_t is drawn from the same distribution π^H(· | o^H_t, a^R_t, θ), and a^R_t is drawn from the same distribution π(· | o^R_t, τ^R_{t−1}). ∎

When M is fully observable, θ in the reduced POMDP is the only part of the state not directly observable to the robot, making it an instance of a hidden-goal MDP (Fern et al., 2014). For computational tractability, much of the work on hidden goals (Javdani et al., 2015; Fern et al., 2014) selects actions assuming that all goal ambiguity is resolved in one step. This effectively separates reward learning and control in the same way as typical reward learning algorithms, thus negating many of the benefits we highlight in future sections. Intention-aware motion planning (Bandyopadhyay et al., 2013) also embeds the human goal in the state in order to avoid collisions with humans during motion planning, but does not consider applications for assistance. Macindoe et al. (2012) uses the formulation of a POMDP with a hidden goal to produce an assistive agent in a cops-and-robbers gridworld environment. Nikolaidis et al. (2015) assumes a dataset of joint human-robot demonstrations, which they leverage to learn "types" of humans that can then be inferred online using a POMDP framework.
This is similar to solving an assistance problem, where we think of the different values of θ as different "types" of humans. Chen et al. (2018) uses an assistance-style framework in which the unknown parameter is the human's trust in the robot (rather than the reward θ). Woodward et al. (2019) uses deep reinforcement learning to solve an assistance game in which the team must collect either plums or lemons. To our knowledge, these are the only prior works that use an assistive formulation in a way that does not ignore the information-gathering aspect of actions. While these works typically focus on algorithms to solve assistance games, we instead focus on the qualitative benefits of using an assistance formulation. Since we can reduce an assistance problem to a regular POMDP, we can use any POMDP solver to find the optimal π^R. In our examples for this paper, we use an exact solver when feasible, and point-based value iteration (PBVI) (Pineau et al., 2003) or deep reinforcement learning (DRL) when not. When using DRL, we require recurrent models, since the optimal policy can depend on history. A common confusion is to ask how DRL can be used, given that it requires a reward signal, whereas R does not know the reward function by assumption. This stems from a misunderstanding of what it means for R "not to know" the reward function. When DRL is run, at the beginning of each episode, a specific value of θ is sampled as part of the initial state. The learned policy π^R is not provided with θ: it can only see its observations o^R and human actions a^H, and so it is accurate to say that R "does not know" the reward function. However, the reward is calculated by the DRL algorithm, not by R, and the algorithm can and does use the sampled value of θ for this computation.
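The training setup just described might be sketched as the following episode loop; the environment and policy interfaces here are hypothetical stand-ins, not a particular DRL algorithm:

```python
def train_episode(env_sample_theta, env_step, policy_update, policy_act, horizon=10):
    """One DRL episode for an assistance problem.

    At episode start, theta is sampled as part of the initial state. The
    policy never sees theta; it conditions only on observations, human
    actions, and its own history. The training algorithm, by contrast,
    uses theta to compute the reward signal.
    """
    theta = env_sample_theta()          # visible to the algorithm only
    history = []                        # recurrent policies condition on history
    total_reward = 0.0
    obs, a_H = "start", "noop"
    for _ in range(horizon):
        a_R = policy_act(obs, a_H, history)         # no access to theta
        obs, a_H, reward = env_step(theta, a_R)     # reward computed with theta
        history.append((obs, a_H, a_R))
        total_reward += reward
    policy_update(history, total_reward)            # e.g. a policy-gradient step
    return total_reward

# Toy stand-ins: a fixed theta, an environment whose per-step reward is
# theta itself, and a policy/update that do nothing interesting.
ret = train_episode(
    env_sample_theta=lambda: 1.5,
    env_step=lambda theta, a_R: ("s", "h", theta),
    policy_update=lambda history, total: None,
    policy_act=lambda obs, a_H, history: "a",
    horizon=10,
)
```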
π^R can then implicitly learn the correlation between the actions a^H chosen by H and the high reward values that the DRL algorithm computes; this can be thought of as an implicit estimation of θ in order to choose the right actions.

8.3 Reward learning as two-phase communicative assistance

There are two key differences between reward learning and assistance. First, reward learning algorithms split reward learning and control into two separate phases, while assistance merges them into a single phase. Second, in reward learning, the human's only role is to communicate reward information to the robot, while in assistance the human can help with the task. These two properties exactly characterize the difference between the two: reward learning problems and communicative assistance problems with two phases can be reduced to each other, in a very natural way. A communicative assistance problem is one in which the transition function T and the reward function r are independent of the choice of human action a^H, and the human policy π^H(· | o^H, a^R, θ) is independent of the observation o^H. Thus, in a communicative assistance problem, H's actions only serve to respond to R, and have no effects on the state or the reward (other than by influencing R). Such problems can be cast as instances of HOP-POMDPs (Rosenthal and Veloso, 2011). For the notion of two phases, we will also need to classify robot actions as communicative or not. We will assume that there is some distinguished action a^R_noop that "does nothing". Then, a robot action â^R is communicative if for any s, a^H, s' we have T(s' | s, a^H, â^R) = T(s' | s, a^H, a^R_noop) and r(s, a^H, â^R, s') = r(s, a^H, a^R_noop, s'). A robot action is physical if it is not communicative. Now consider a communicative assistance problem ⟨M, π^H⟩ with noop action a^R_noop, and let the optimal robot policy be π^R.
Intuitively, we would like to say that there is an initial communication phase in which the only thing that happens is that H responds to questions from R, and then a second action phase in which H does nothing and R acts. Formally, the assistance problem is two phase with actions at t_act if it satisfies the following property: there exists a^H_noop ∈ A^H such that, for every τ ∈ Traj(π^R) and every t, the action a^R_t is communicative whenever t < t_act, and a^H_t = a^H_noop whenever t ≥ t_act.

Given an active reward learning problem ⟨M_nr, Q, C, ⟨Θ, r, P_θ⟩, π^H, k⟩, we can construct a corresponding two-phase communicative assistance problem. The state space prepends k fresh "question states" N = {n_0, ..., n_{k−1}} to S, the robot's action space is Q ∪ A, and the remaining components are:

T'(ŝ' | ŝ, a^R) ≜ P_0(ŝ')          if ŝ = n_{k−1};
                  1[ŝ' = n_{i+1}]   if ŝ = n_i with i < k−1;
                  T(ŝ' | ŝ, a^R)    if ŝ ∈ S and a^R ∈ A;
                  1[ŝ' = ŝ]         otherwise.                       (transition function)

r'_θ(ŝ, a^H, a^R, ŝ') ≜ −∞                 if ŝ ∈ N and a^R ∉ Q;
                        −∞                 if ŝ ∈ S and a^R ∈ Q;
                        0                  if ŝ ∈ N and a^R ∈ Q;
                        r_θ(ŝ, a^R, ŝ')    otherwise.                (reward function)

π^H'(a^H | o^H, a^R, θ) ≜ π^H(a^H | a^R, θ)   if a^R ∈ Q;
                          1[a^H = c_0]        otherwise.             (human policy)

a^R'_noop ≜ q_0.                                                     (distinguished noop action)

Technically r'_θ should not be allowed to return −∞. However, since S and A are finite, r is bounded, and so there exists some large finite negative number that is functionally equivalent to −∞ that we could use instead. Looking at the definitions, we can see T' and r'_θ are independent of a^H, and π^H' is independent of o^H, making this a communicative assistance problem. By inspection, we can see that every q ∈ Q is a communicative robot action. Any a^R ∉ Q must not be a communicative action, because the reward r'_θ differs between a^R and q_0. Thus, the communicative robot actions are Q and the physical robot actions are A. Note that by construction of P'_S and T', we must have s_i = n_i for i ∈ {0, 1, ..., k−1}, after which s_k is sampled from P_0 and all s_t ∈ S for t ≥ k. Given this, by inspecting r'_θ, we can see that an optimal policy must have a^R_{0:k−1} ∈ Q and a^R_{k:} ∉ Q to avoid the −∞ rewards. Since a^R_{k:} ∉ Q, we have a^H_{k:} = c_0. Thus, setting a^H_noop = c_0, we have that the assistance problem is two phase with actions at t_act = k, as required.

Let a policy π^R' for the assistance problem be reasonable if it never assigns positive probability to a^R ∈ A when t < k. The analogous policy to a pair ⟨π^R'_Q, f'⟩ is defined as:

π^R(a^R_t | o^R_t, τ^R_{t−1}) ≜ π^R'_Q(a^R_t | a^R_{0:t−1}, a^H_{0:t−1})    if t < k and a^R_{0:t} ∈ Q';
                               f'(a^R_{0:k−1}, a^H_{0:k−1})(a^R_t | o^R_{k:t}, a^R_{k:t−1})    if t ≥ k and a^R_{0:k−1} ∈ Q' and a^R_{k:t} ∈ A';
                               0    otherwise.
We show that there must exist a solution to P that is the analogous policy to some pair. Assume towards contradiction that this is not the case, and that there is a solution π^R that is not the analogous policy to any pair. Then we have a few cases:

1. π^R assigns positive probability to a^R_i = a ∉ Q' for some i < k. This contradicts the two-phase assumption.

2. π^R assigns positive probability to a^R_i = q ∈ Q for some i ≥ k. This contradicts the two-phase assumption.

3. π^R(a^R_t | o^R_t, τ^R_{t−1}) depends on the value of o^R_i for some i < k. Since both a^H_{0:k−1} and a^R_{0:k−1} cannot affect the state or reward (as they are communicative), the distribution over o^R_{0:k−1} is fixed and independent of π^R, and so there must be some other π^R that is independent of o^R_{0:k−1} that does at least as well. That π^R would be the analogous policy to some pair, giving a contradiction.

Now, suppose we have some pair ⟨π^R'_Q, f'⟩, and let its analogous policy be π^R. Then we have:

E_{θ∼P_θ, q_{0:k−1}∼π^R'_Q, c_{0:k−1}∼π^H'}[ER_θ(f'(q_{0:k−1}, c_{0:k−1}))]
  = E_{θ∼P_θ}[ E_{q_{0:k−1}∼π^R, c_{0:k−1}∼π^H}[ ER_θ(f'(q_{0:k−1}, c_{0:k−1})) ] ]
  = E_{θ∼P_θ}[ E_{q,c}[ E_{s_0∼P'_0, a^R_t∼f'(q_{0:k−1}, c_{0:k−1}), s_{t+1}∼T'(·|s_t, a^R_t)}[ Σ_{t=0}^∞ γ^t r'_θ(s_t, a^R_t, s_{t+1}) ] ] ]
  = E_{θ∼P_θ}[ E_{q,c}[ E_{s_k∼P'_0, a^R_t∼π^R(·|⟨c_{0:k−1}, o_{k:t}⟩, ⟨q_{0:k−1}, a_{k:t−1}⟩), s_{t+1}∼T'(·|s_t, a^R_t)}[ (1/γ^k) Σ_{t=k}^∞ γ^t r'_θ(s_t, a^R_t, s_{t+1}) ] ] ]
  = E_{θ∼P_θ}[ E_{q,c}[ E_{s_k∼P'_0, a^R_t∼π^R(·|⟨c_{0:k−1}, o_{k:t}⟩, ⟨q_{0:k−1}, a_{k:t−1}⟩), s_{t+1}∼T'(·|s_t, a^R_t)}[ (1/γ^k) Σ_{t=k}^∞ γ^t r_θ(s_t, a^H_noop, a^R_t, s_{t+1}) ] ] ]

However, since all the actions in the first phase are communicative and thus don't impact state or reward, the first k timesteps in the two-phase assistance game have constant reward in expectation. Let C = E_{s_{0:k}}[ Σ_{t=0}^{k−1} γ^t r_θ(s_t, a^H_noop, a^R_noop, s_{t+1}) ].
This gives us:

E_{θ∼P_θ, q_{0:k−1}∼π^R'_Q, c_{0:k−1}∼π^H}[ER_θ(f'(q_{0:k−1}, c_{0:k−1}))]
  = E_{s_0∼P_S, θ∼P_θ, τ∼⟨s_0, θ, π^H, π^R⟩}[ (1/γ^k) Σ_{t=0}^∞ γ^t r_θ(s_t, a^H_t, a^R_t, s_{t+1}) ] − (1/γ^k) C
  = (1/γ^k) (ER(π^R) − C).

Thus, if ⟨π^R'_Q, f'⟩ is a solution to the active reward learning problem, then π^R is a solution of the two-phase communicative assistance problem. ∎

Corollary 14. If a two-phase communicative assistance problem ⟨M, π^H, a^R_noop⟩ has exactly one communicative robot action, it can be reduced to an equivalent non-active reward learning problem.

Proof. Apply Proposition 13 followed by Proposition 8. (Note that the construction from Proposition 13 does lead to an active reward learning problem with a single question, meeting the precondition for Proposition 8.) ∎

8.4 Qualitative improvements for general assistance

We have seen that reward learning is equivalent to two-phase communicative assistance problems, where inferring the reward distribution can be separated from control using the reward distribution. However, for general assistance games, it is necessary to merge estimation and control, leading to several new qualitative behaviors. When the two-phase restriction is lifted, we observe plans conditional on future feedback and relevance-aware active learning. When the communicative restriction is lifted, we observe learning from physical actions. We demonstrate these qualitative behaviors in simple environments using point-based value iteration (PBVI) or deep reinforcement learning (DRL). For communicative assistance problems, we also consider two baselines:

1. Active reward learning. This is the reward learning paradigm discussed so far.

2. Interactive reward learning. This is a variant of reward learning that aims to recover some of the benefits of interactivity, by alternating reward learning and acting phases. During an action phase, R chooses actions that maximize expected reward under its current belief over θ (without "knowing" that its belief may change), while during a reward learning phase, R chooses questions that maximize information gain.
Plans conditional on future feedback

Here, we show how an assistive agent can make plans that depend on obtaining information about θ in the future. The agent can first take some "preparatory" actions whose results can be used later once the agent has clarified details about θ. A reward learning agent would not be able to do this, as it would require three phases (acting, then learning, then acting again). We illustrate this with our original kitchen environment (Figure 8.1). R must bake a pie for H, but doesn't know what type of pie H wants: Apple, Blueberry, or Cherry. Each type has a weight specifying the preference for that pie. Assuming people tend to like apple pie the most and cherry pie the least, we have θ_A ∼ Uniform[2, 4], θ_B ∼ Uniform[1, 3], and θ_C ∼ Uniform[0, 2]. We define the questions Q = {q_A, q_B, q_C}, where q_X means "What is the value of θ_X?", and thus the answer set is C = ℝ. R can select ingredients to assemble the pie. Eventually, R must use "bake", which bakes the selected ingredients into a finished pie, resulting in reward that depends on what type of pie has been created. H initially starts outside the room, but will return at some prespecified time. r assigns a cost of asking a question of 0.1 if H is inside the room, and 3 otherwise. The horizon is 6 timesteps. In this environment, R needs to bake either apple or blueberry pie (cherry is never preferred over apple) within 6 timesteps, and may query H about her preferences about the pie. Making the pie takes 3 timesteps: first R must make flour into dough, then it must add one of the fillings, and finally it must bake the pie. Baking the correct pie results in +2 reward, while baking the wrong one results in a penalty of −1. In addition, H might be away for several timesteps at the start of the episode. Querying H costs 0.1 when she is present and 3 when she is away.
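Under the stated priors and rewards, a rough Monte Carlo sanity check (a simplification that ignores the episode's time structure) shows why querying is worthwhile when H is present: baking apple blind earns +2 only when apple really is preferred, while asking first guarantees +2 minus the 0.1 query cost.

```python
import random

random.seed(0)

# Priors over pie weights from the kitchen environment, used here only to
# determine which pie H actually prefers.
def sample_best_pie():
    tA = random.uniform(2, 4)   # apple
    tB = random.uniform(1, 3)   # blueberry
    tC = random.uniform(0, 2)   # cherry (never exceeds apple's minimum of 2)
    return max((tA, "A"), (tB, "B"), (tC, "C"))[1]

N = 100_000
best = [sample_best_pie() for _ in range(N)]

# Bake apple blind: +2 if apple was preferred, -1 otherwise.
bake_apple_blind = sum(2 if b == "A" else -1 for b in best) / N

# Ask first (cost 0.1 with H present), then bake the preferred pie.
ask_then_bake = 2.0 - 0.1
```

Analytically, apple beats blueberry with probability 7/8 under these priors, so baking blind is worth about 2·(7/8) − 1·(1/8) ≈ 1.625, versus 1.9 for asking first; the time structure (dough-making, H's return time) is what the PBVI agent additionally has to plan around.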
We use PBVI to train an agent for this assistance problem with different settings for how long H is initially away.

Assistance. Regardless of H's preferences θ, R will need to use flour to make pie dough. So, R always makes the pie dough first, before querying H about her preferences. Whether R then queries H about her preferences depends on how late H returns. If H arrives home before timestep 5, R will query her about her preferences and then make the appropriate pie as expected. However, if H will arrive later, then there will not be enough time to query her for her preferences and bake a pie. Instead, R bakes an apple pie, since its prior suggests that that's what H wants. This behavior, where R takes actions (making dough) that are robustly good but waits on actions (adding the filling) whose reward will be clarified in the future, is very related to conservative agency (Turner et al., 2020).

Reward learning. The assistance solution requires R to act (to make dough), then to learn preferences, and then to act again (to make pie). A reward learning agent can only have two phases, and so we see one of two suboptimal behaviors. First, R could stay in the learning phase until H returns home, then ask which pie she prefers, and then make the pie from scratch. Second, R could make an apple pie without asking H her preferences. (In this case there would be no learning phase.) Which of these happens depends on the particular method and hyperparameters used.

Interactive reward learning. Adding interactivity is not sufficient to get the correct behavior. Suppose we start with an action phase. The highest reward plan under R's current belief over θ is to bake an apple pie, so that's what it will do, as long as the phase lasts long enough. Conversely, suppose we start with a learning phase. In this case, R does nothing until H returns, and then asks about her preferences.
Once we switch to an action phase, it bakes the appropriate pie from scratch.

Why was integration of reward learning and control needed? The plan "make pie dough and then wait" (a control behavior) is only a good plan because the agent has some way of learning what filling to use in the future (a reward learning behavior). Thus, this plan can only be selected because when selecting control behaviors the agent can reason about future reward learning behaviors.

Relevance-aware active learning

Figure 8.2: The wormy-apples kitchen environment. H wants an apple, but R might discover worms in the apple, and have to dispose of it in either the trash or the compost bin.

Once we relax the two-phase restriction, R starts to further optimize whether and when it asks questions. In particular, since R may be uncertain about whether a question's answer will even be necessary, R will only ask questions once they become immediately relevant to the task at hand. In contrast, a reward learning agent would have to decide at the beginning of the episode (during the learning phase) whether or not to ask these questions, and so cannot evaluate how relevant they are.

Consider for example a modification to the kitchen environment: R knows that H wants an apple pie, but when R picks up some apples, there is a 20% chance that it finds worms in some of the apples. R is unsure whether H wants her compost bin to have worms, and so does not know whether to dispose of the bad apples in the trash or compost bin. The robot gets 2 reward for making an apple pie (regardless of how it disposed of any wormy apples), and gets -2 reward if it disposes of the apples in the wrong container. Additionally, asking a question incurs a cost of 0.1. Ideally R would only clarify H's preferences around the disposal of wormy apples when R happens to pick up wormy apples.

We use PBVI to solve this environment, and report the results below.
Assistance. An assistive R only asks about wormy apples when it needs to dispose of one. R always starts by picking up apples. If the apples do not have worms, R immediately uses the apples to bake the pie. If some apples have worms and the cost of asking a question is sufficiently low, R elicits H's preferences and disposes of the apples appropriately. It then bakes the pie with the remaining apples. This policy always gets the 2 reward from baking the pie and never incurs the -2 reward from disposing of wormy apples in the wrong bin. It avoids asking H a question 80% of the time, and so only incurs the 0.1 cost of asking a question 20% of the time, leading to a total expected undiscounted reward of 1.98. This behavior, in which questions are asked only if they are useful for constraining future behavior, has been shown previously using probabilistic recipe trees (PRTs) (Kamar et al., 2009), but to our knowledge has not been shown with optimization-based approaches.

Reward learning. A reward learning policy must have only two phases and so would show one of two undesirable behaviors: either it would always ask H where to dispose of wormy apples, or it never asks and instead guesses when it does encounter wormy apples. With a lower discount rate (γ = 0.9), R's policy never asks questions and instead simply tries to make the apple pie, guessing which bin to dispose of wormy apples in if it encounters any. Intuitively, since it would have to always ask the question at the beginning, it would always incur a cost of 0.1 as well as delay the pie by a timestep, resulting in 10% less value; this is only valuable when there turn out to be worms and its guess about which bin to dispose of them in is incorrect, which only happens 10% of the time. This ultimately isn't worthwhile. This achieves an expected undiscounted reward of 1.8.
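The quoted expected returns follow from simple arithmetic over the environment parameters. As a sanity check (ours, not part of the original experiments):

```python
# Environment parameters from the text: 20% worm chance, +2 for the pie,
# -2 for disposing in the wrong bin, 0.1 per question asked.
P_WORMS = 0.2
PIE_REWARD = 2.0
QUESTION_COST = 0.1
WRONG_BIN_PENALTY = 2.0

# Assistive policy: ask only when worms actually appear, then dispose correctly.
assistive_return = PIE_REWARD - P_WORMS * QUESTION_COST            # 1.98

# Never-ask policy: guess a bin (correct half the time) when worms appear.
never_ask_return = PIE_REWARD - P_WORMS * 0.5 * WRONG_BIN_PENALTY  # 1.8

# Always-ask policy: pay the question cost in every episode.
always_ask_return = PIE_REWARD - QUESTION_COST                     # 1.9
```

The ordering assistive > always-ask > never-ask is exactly the gap that relevance-aware questioning buys.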
With a higher discount rate of γ = 0.99, the two-phase policy will always ask about which bin to dispose of wormy apples in, achieving 1.9 expected undiscounted reward.

Interactive reward learning. If we start in the action phase and R picks up wormy apples, it will dispose of them in an arbitrary bin without asking H about her preferences, because it doesn't "know" that it will get the opportunity to do so. Alternatively, if we start with a learning phase, R will ask H where to dispose of wormy apples, even if R would never pick up any wormy apples.

Why was integration of reward learning and control needed? The question "where should I dispose of wormy apples" (a reward learning behavior) should only be asked when the agent picks up apples and they happen to be wormy (a control behavior). This can only happen when the question selection mechanism is able to reason about how the resulting information will be used for control. While this example might seem trite, complex settings usually have many more questions. Should R ask if H prefers seedless apples, should scientists ever invent them? Perhaps R should ask if H still prefers apple over cherry when angry? Asking about all possible situations is not scalable.

Learning from physical actions

So far we have considered communicative assistance problems, in which H only provides feedback rather than acting to maximize reward herself. Allowing H to have physical actions enables a greater variety of potential behaviors. Most clearly, when R knows the reward (that is, P puts support over a single θ), assistance games become equivalent to human-AI collaboration (Nikolaidis and Shah, 2013; Carroll et al., 2019; Dimitrakakis et al., 2017).
With uncertain rewards, R can learn just by observing how H acts in an environment, and then work with H to maximize reward, all within a single episode, as in shared autonomy with intent inference (Javdani et al., 2015; Brooks and Szafir, 2019) and other works that interpret human actions as communicative (Whitney et al., 2017). This can significantly reduce the burden on H in providing reward information to R. Some work has shown that in such situations, humans tend to be pedagogic: they knowingly take individually suboptimal actions, in order to more effectively convey the goal to the agent (Ho et al., 2016; Hadfield-Menell et al., 2016). An assistive R who knows this can quickly learn what H wants, and help her accomplish her goals.

Figure 8.3: The CakeOrPie variant of the kitchen environment. H is equally likely to prefer cake or pie. Communication must take place through physical actions alone.

We illustrate this with a variant of our kitchen environment, shown in Figure 8.3. There are no longer questions and answers. Both H and R can move to an adjacent free space, and pick up and place the various objects. Only R may bake the dessert. R is uncertain whether H prefers cake or cherry pie. Preparing the desired recipe provides a base value of V = 10, while the other recipe provides a base value of V = 1. Since H doesn't want the preparation to take too long, the actual reward when a dessert is made is given by r_t = V · f(t), with f(t) = 1 - (t/N)^4, and N = 20 as the episode horizon.

For both recipes, it is individually more efficient for H to pick up the dough first. However, we assume H is pedagogic and wants to quickly show R which recipe she wants. So, if she wants cake, she will pick up the chocolate first to signal to R that cake is the preferred dessert. It is not clear how exactly to think about this from a reward learning perspective: there aren't any communicative human actions, since every action alters the state of the environment.
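The timing penalty r_t = V · f(t) is easy to state in code. The following is a direct transcription of the formula above (the function name is ours):

```python
def dessert_reward(t: int, preferred: bool, N: int = 20) -> float:
    """Reward when R bakes at timestep t in the CakeOrPie environment:
    base value V (10 if the dessert matches H's preference, 1 otherwise)
    scaled by the timing factor f(t) = 1 - (t / N)^4."""
    V = 10.0 if preferred else 1.0
    return V * (1.0 - (t / N) ** 4)
```

The quartic shape means early baking loses almost nothing (f(5) ≈ 0.996), while a dessert finished exactly at the horizon is worth nothing, so R is pressured to commit to a recipe quickly.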
In addition, there is no clear way to separate out a given trajectory into two phases. This situation cannot be easily coerced into the reward learning paradigm, and so we only report the results in the assistance paradigm. We solve the environment using Deep Q-Networks (DQN; Mnih et al., 2013). We ran 6 seeds for 5M timesteps and a learning rate of 10^-4; results are shown in Figure 8.4.

Figure 8.4: DQN smoothed learning curves on the CakeOrPie environment (pedagogic H vs. non-pedagogic H, each with optimal R), with 6 seeds over 5M timesteps and a learning rate of 10^-4.

The assistive R handles this situation perfectly. It waits to see which ingredient H walks towards, infers which recipe H wants, and then helps H by putting in the ingredients from its side of the environment and baking the dessert. This is equivalent to pragmatic reasoning (Goodman and Frank, 2016): "H would have gone towards the chocolate if she wanted cake, so the fact that she picked up the dough implies that she wants cherry pie". We emphasize that R is not explicitly programmed to reason in this manner. Note that R is not limited to learning from H's physical actions: R can also use its own physical actions to "query" the human for information (Woodward et al., 2019; Sadigh et al., 2016).

8.5 Discussion

So far we have discussed the benefits of assistance over reward learning. Are there any downsides? The major limitation of assistance is that assistance problems are significantly more computationally complex, since we treat the unknown reward as the hidden state of a POMDP. We are hopeful that this can be solved through the application of deep reinforcement learning, which has been demonstrated to scale to large state, action and observation spaces (OpenAI, 2018; Vinyals et al., 2019).
Another avenue for future work is to modify active reward learning algorithms in order to gain the benefits outlined in Section 8.4, while maintaining their computational efficiency.

In addition, assistive agents will typically extract more information from H than reward learning agents. While this leads to benefits when correct inferences are made, it can also lead to significantly worse failures when π^H is misspecified. We don't see this as a major limitation: to the extent this is a major worry, we can design π^H so that the robot only makes inferences about human behavior in specific situations. For example, by having π^H be independent of θ in a given state s, we ensure that the robot does not make any inferences about θ in that state.

Limitations of assistance and reward learning

In addition to the downsides above, which are specific to assistance, there are a number of challenges that apply to both reward learning and assistance:

Human modeling. A major motivation for both paradigms is that reward specification is very difficult. However, now we need to specify a prior over reward functions, and the human model π^H. Consequently, misspecification can still lead to bad results (Armstrong et al., 2020; Carey, 2018). While it should certainly be easier to specify a prior over θ with a "grain of truth" on the true reward than to specify θ directly, it is less clear that we can specify π^H well. One possibility is to add uncertainty over the human policy π^H. However, this can only go so far: information about θ must come from somewhere. If R is sufficiently uncertain about θ and π^H, then it cannot learn about the reward (Armstrong and Mindermann, 2018). Thus, for good performance we need to model H. While imitation learning can lead to good results (Carroll et al., 2019), the best results will likely require insights from a broad range of fields that study human behavior.
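As a concrete example of such a human model, the Boltzmann-rational policy used as π^H elsewhere in this dissertation chooses actions with probability proportional to exp(β Q(s, a)). A minimal sketch (function and parameter names are ours):

```python
import math

def boltzmann_policy(q_values: dict, beta: float = 1.0) -> dict:
    """Boltzmann-rational action distribution: P(a) is proportional to
    exp(beta * Q(s, a)). As beta grows the policy approaches the optimal
    one; beta = 0 gives uniformly random behavior."""
    weights = {a: math.exp(beta * q) for a, q in q_values.items()}
    total = sum(weights.values())
    return {a: w / total for a, w in weights.items()}
```

Misspecifying β is one instance of the worry above: with β set too high, R reads noise in H's behavior as deliberate preference; too low, and genuine preferences are discounted as noise.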
Assumption that H knows θ. Both paradigms assume that H knows her reward θ exactly, but in practice, human preferences change over time (Allais, 1979; Cyert and DeGroot, 1975; Shogren et al., 2000). We could model this as the human changing their subgoals (Michini and How, 2012; Park et al., 2020), adapting to the robot (Nikolaidis et al., 2017) or learning from experience (Chan et al., 2019).

Dependence on uncertainty. All of the behaviors of Section 8.4, as well as previously explored benefits such as off-switch corrigibility (Hadfield-Menell et al., 2017a), depend on R expecting to gain information about θ. However, R will eventually exhaust the available information about θ. If everything is perfectly specified, this is not a problem: R will have converged to the true θ. However, in the case of misspecification, after convergence R is effectively certain in an incorrect θ, which has many troubling problems that we sought to avoid in the first place (Yudkowsky, year unknown).

8.6 Modeling the state of the world with assistance

Our original goal in this chapter was to find a more principled way of integrating the reward inferred using RLSP with information learned from human feedback, and then act on that information. The assistance paradigm provides a natural way of doing so. We model the environment as an assistance problem, but enforce the condition that for the first T timesteps, R does not yet exist, and is only deployed after T timesteps. At the time of deployment, R can update its prior P_θ based on the observed state of the environment, and subsequently it may continue to ask H for more feedback about what to do.

Concretely, let us consider some state of the world problem ⟨M, s_0, π^H, T, h, R, P_θ⟩. We can view this as capturing the first T timesteps of a special assistance problem with a discrete deployment event.
We have ⟨⟨S, {A^H, A^R}, {Ω^H, Ω^R}, {O^H, O^R}, T, P_S, γ, h, r, P_θ⟩, π^H⟩ as a standard assistance problem, and add as a restriction that R must take a noop action a^R_noop for the first T timesteps, and additionally is not allowed to observe the environment or H's actions. After these T timesteps, R is deployed and can act as normal within the assistance problem.

This is, technically speaking, not a standard assistance problem, because we are imposing conditions on R's policy π^R, as well as on what R gets to observe. We might consider it a constrained assistance problem. Nonetheless, all of the discussion in the previous sections still applies, and in particular, it can still be reduced to a POMDP and solved using some POMDP solver.

From this perspective, the point of the algorithms introduced in Chapters 5 and 6 that solve the state of the world problem is that they are more efficient algorithms that can solve a subpart of this constrained assistance problem. Specifically, they compute how R should update its belief over θ upon its deployment when observing s_0. However, in order to get an optimal policy for the assistance problem, we need to have the full distribution over θ, which only Algorithm 1 provides; Algorithms 2 and 3 only compute a point estimate and thus should be thought of as efficient approximations.

Consider for example the kitchen office environment in Figure 8.5. In this environment, H wants to get work done on her laptop (which R cannot do on her behalf), and also eventually wants to eat a pie as in our original kitchen environment in Figure 8.1. In addition, there is a vase in the middle of the room, which H does not want to break, as in Figure 1.2. However, R does not know any of this, and is uncertain about the rewards of all three of these features. As before, R can ask H questions about her preferences in order to clarify what it should do. At the beginning of a trajectory, in state s_{-T} before R has been deployed, H has just entered the room through a door, ready to begin acting in the environment.
When R is deployed in state s_0, it observes that H is now working at her laptop, and the vase remains intact. Simply by observing the state of the environment, R can infer that H probably cares about getting her work done (since she is at the laptop), and that she doesn't want to break the vase (since she must have taken the long way around the vase in order to get to the laptop). However, this doesn't tell R what it should do: it can't help with H's work, and keeping the vase intact is only an injunction about what not to do, rather than what to do.

Figure 8.5: Kitchen office environment. In this environment, R is uncertain both about what type of pie H would like, as well as whether it is acceptable to break vases. At the time of deployment, R observes that H is working at her laptop, but the vase is still intact, and so R can infer that vases should not be broken. However, R still does not know which pie H would prefer, and so asks her for her preferences before finishing the pie.

R is aware that H might want a specific type of pie, and that is something that it can help with. As a result, it asks H about her pie preferences, and then makes the appropriate pie type for her to eat.

Chapter 9

Conclusion

One major goal of artificial intelligence is to build artificial agents that capably perform tasks that we want. In order to build such agents, we must have some way of transmitting information about what we want to the agent. The current paradigm in reinforcement learning is to accomplish this by specifying a reward function θ that defines which states are desirable and which are not. Unfortunately, it is very difficult for humans to correctly specify such a reward function. Instead, it is important that our agents treat all sources of information as observations that provide information about the intended goal, rather than definitions of it.
Once we make this distinction, it becomes evident that our agents should collect many different kinds of information, and integrate them together, in order to make better inferences about the intended goal.

This dissertation identified a potentially important source of such information: the state of the world. Unlike other sources of information, such as human demonstrations and comparisons, this source is free: it does not require humans to change their behavior at all. The agent must simply look at the world, infer what humans have put effort into, and make appropriate inferences about what it should and should not do. While this will likely not be enough to uniquely identify the task that the agent should perform, it can reduce the amount of information that the agent needs to collect from other, more expensive sources.

We have demonstrated both theoretically and empirically that learning from the state of the world can enable agents to make meaningful inferences, at least within some simple environments. However, there is still much work to be done to turn this into a practical method that can be applied in realistic scenarios.

9.1 Avenues for improvement

There are multiple avenues for further research, both conceptually and practically.

Conceptual changes

Handling multiple humans. In many realistic environments, there is not a single human H who is influencing the environment: many people act simultaneously, and the state is a result of joint optimization by all of them. One approach is to infer H's goals by modeling other people as part of the (very complex) environment. However, this seems like a particularly challenging approach, and may require a fairly complex human model in order to account for interpersonal interactions: a simple Boltzmann-rational model may not be sufficient.
An alternative approach is to distinguish all of the humans as separate from the environment, and to instead use the state of the world to infer the norms by which they operate. These norms are likely the most relevant aspect of the reward function for R to know, as they determine how R should constrain its behavior within the group. They may also be easier to infer, since by definition nearly all of the humans will agree on the norms, and so the norms will have an outsized impact on the state of the world.

Humans are optimized for the environment. Even when the state of the world is optimized for human preferences, this may not be a result of human optimization, as assumed in this dissertation. For example, we prefer that the atmosphere contain oxygen for us to breathe. That the atmosphere has enough oxygen for us to breathe is not a result of human optimization; indeed, it seems that the atmosphere meets this preference in spite of human action. Rather, humans were optimized through the process of evolution to be able to breathe the existing atmosphere. The algorithms presented in this dissertation would not be able to infer such a preference, except indirectly, for example by inferring that humans prefer not to die or get sick, and thus instrumentally prefer that the atmosphere contain sufficient oxygen. The issue is that these algorithms assume that humans are the source of all optimization, and thus neglect the fact that humans have themselves been optimized. It is unclear how to modify the algorithms to account for this issue.

Learning tasks to perform. The apple collection environment (Figure 7.1d) and MuJoCo skill learning (Section 7.3) demonstrate that it is possible to learn aspects of what should be done in an environment. However, it is not clear that this is desirable, since the robot may perform a task that has been inferred, instead of the task that H explicitly sets for it.
Just because R has inferred that H wants the apples harvested doesn't mean it should go do so, especially if H has currently asked R to wash the dishes.

It is possible that this would not be a problem in a suitable formulation: as long as R assigns sufficiently high probability to H asking R for the thing that H wants most, R will follow H's instructions, unless it has reason to think that H is incorrect (Milli et al., 2017). (For example, R should probably disobey H if it knows that the house is on fire and knows that H does not know this.)

Nevertheless, we may want to prevent R from learning to autonomously pursue tasks based solely on the state of the world. Ideally, we would be able to decompose the inferred reward into the frame conditions θ_frame, which specify what R should not do, and the task reward θ_task, which specifies things that could be good for R to do. Given such a decomposition, R could ensure that it takes θ_frame into account in its decision-making process, while ignoring the implications of θ_task. One way to do this is to look across a variety of environments, or a variety of humans, for commonalities in the reward: it seems likely that these would correspond to frame conditions.

Practical improvements

Other sources of empirical knowledge. Learning from the state of the world can be thought of as a solution to the "is-ought problem" (Hume, 2003) for artificial agents: it allows an agent to learn what it ought to do, given only knowledge about what is true. (Readers familiar with Hume may wonder what assumption allows us to bridge the gap from "is" facts to "ought" facts. In this case, the bridging assumption is that R ought to do that which H has put effort into, and what H has put effort into is an "is" fact.)
However, our algorithms so far either require us to have a full, complete understanding of empirical facts, as with RLSP (Algorithm 2), or they use the barest of empirical knowledge, that which can be gleaned from random rollouts, as in Deep RLSP (Algorithm 3). It seems clear that we can do better, by leveraging better algorithms and data sources for learning empirical facts about the world before trying to learn from the state of the world. For example, how might we leverage the common-sense knowledge of large language models (Radford et al., 2019; Brown et al., 2020)?

Human models. In this dissertation we used the simple model of Boltzmann rationality as our model of π^H. While this worked well in our simple conceptual environments, it is not clear that this will scale well to larger environments. Recent work in inverse reinforcement learning has considered accounting for human biases (Evans et al., 2016; Majumdar et al., 2017; Shah et al., 2019b); similar approaches may be needed for learning from the state of the world.

Handling partial observability. Our formalism for learning from the state of the world (Chapter 3) assumes that H is acting in an MDP without reward M\R. However, most realistic environments are partially observable; neither H nor R can observe the entire world state at every timestep. While we have generalized to partial observability using the assistance formulation, assistance problems are quite computationally challenging to solve. Can we generalize our original formalism to handle partial observability, without requiring the increase in computational complexity of assistance?

It is relatively easy to handle partial observability for H: the only difference is that our human model π^H must now work off of observations, rather than states. However, it is more challenging to handle partial observability for R: what exactly does R get to observe? It should no longer be able to observe the full state of the world s_0, but it also is not sufficient
to just observe o_0, because this may only be a small part of the world. For example, if R can't see an unbroken vase in o_0, but does see it after two actions in o_2, then it should still learn from the state of the world that vases probably should not be broken.

The crucial difference is that when observing a full state s_0, the posterior P(θ | s_0) captures all of the information we can gain from simulating the past, due to the Markov property of states. In contrast, P(θ | o_0) does not capture all of this information, and future observations o_1, o_2, ... can provide additional insights. This is handled naturally by the assistance formulation, in which R's optimal policy will seek out new observations in future timesteps that can then be leveraged to learn from the state of the world (as long as this information is sufficiently valuable). Are there good heuristics that allow us to capture this behavior, without requiring us to solve a full assistance problem?

Integrating RLSP with assistance. While we formalized learning from the state of the world using the assistance framework, even with full observability, the optimal policy for an assistance problem typically requires the full posterior P(θ | s_0), whereas both RLSP and Deep RLSP compute a point estimate of θ. How can we use these algorithms (or analogs thereof) in order to more efficiently solve the assistance formulation of learning from the state of the world?

Learned models. The gradient in Deep RLSP depends on having access to a good model of π^H, (π^H)^{-1}, and T^{-1}, as well as a good learned feature function f. We used relatively simple multilayer perceptrons (MLPs) and variational autoencoders (VAEs) to learn these models, as we found them to be the most stable and easiest to tune. However, progress in these areas is rapid, especially in self-supervised representation learning.
It seems likely that applying well-tuned state-of-the-art techniques will allow for significantly improved results, and could be a good approach to few-shot imitation learning for robotics.

9.2 Applications

Our fundamental insight is that the state of the world can inform us about human preferences by showing us what humans put effort into. While we formalized this in the paradigm of sequential decision-making, and the ideas in the previous section continue to work within this paradigm, the general idea is broader and can be applied in many domains. We provide a few examples as illustration, but it seems likely that there are many more applications that have not yet been thought of.

Satellite imagery. Satellite imagery is a rich source of data about the state of the literal world. It is already being used for many purposes, such as predicting how wealthy various regions are in order to better target policy and aid (Abelson et al., 2014; Yeh et al., 2020) and predicting crop yields (Pantazi et al., 2016). An intelligent AI system with enough background knowledge of the world should be able to infer a lot from such a dataset. For example, if the AI system already knows the wealth of a region, then the prevalence of schools in that region may be informative of how much the people of that region value education.

Social media. A person's social media feed can be a rich source of information about their preferences. Social media companies can learn a lot about an individual user through their pattern of clicks and views on a variety of posts. However, even without access to the internal datasets, we can learn a lot about both individual people and various groups by looking at social media feeds. We can model a social feed as joint optimization by the user and the recommendation algorithm on the set of posts shown to the user. Thus, by observing the set of posts that are currently being shown, we can infer what the user must care about reading.
(This does require having a sufficiently good model of the workings of the recommendation algorithm, which may be challenging.)

Learning from prices. By using the formalism of sequential decision-making, we have implicitly made the assumption that time is the unit of effort. At each timestep, H has a variety of options available to her; the fact that she chooses one option over many others thus provides significant information about what H cares about. However, we could also view money as the unit of effort: everyone has a limited supply of it, and can use it to pursue many different goals, and so the choice of what H does with her money is very informative about her preferences. We can even think of the market prices and trading volumes of various items as informative about the "preferences" of humanity as a whole. For example, despite being more expensive, vegan meat substitutes are growing in popularity; from this an agent might deduce that the preference against meat-eating is rising. It should be noted that prices should be thought of as a lower bound on value, because prices are a function both of how valuable an item is and of how easy it is to produce. Just because clean water is cheap doesn't mean it is not important to human preferences.

9.3 Closing thoughts

AI systems are poised to have a massive impact on human civilization within the next century. Researchers today have a unique opportunity to shape this impact to ensure that it is beneficial for humanity. By showing how an AI system can learn what it should do by reasoning about the provenance of the state of the world, this dissertation takes a step towards AI systems that do what we intend them to do. With further progress towards such AI systems, we can create a world in which AI systems assist us in achieving our goals, whether they be as mundane as washing laundry or as grand as colonizing the stars.

Bibliography

Brian Abelson, Kush R Varshney, and Joy Sun.
Targeting direct cash transfers to the extremely poor. In Proceedings of the 20th ACM SIGKDD international conference on Knowledge discovery and data mining, pages 1563–1572, 2014.
Joshua Achiam, Harrison Edwards, Dario Amodei, and Pieter Abbeel. Variational option discovery algorithms. arXiv preprint arXiv:1807.10299, 2018.
Maurice Allais. The so-called Allais paradox and rational decisions under uncertainty. In Expected utility hypotheses and the Allais paradox, pages 437–681. Springer, 1979.
Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, and Dan Mané. Concrete problems in AI safety. arXiv preprint arXiv:1606.06565, 2016.
Oana Fabiana Andreescu. Static analysis of functional programs with an application to the frame problem in deductive verification. PhD thesis, Rennes 1, 2017.
Marcin Andrychowicz, Filip Wolski, Alex Ray, Jonas Schneider, Rachel Fong, Peter Welinder, Bob McGrew, Josh Tobin, OpenAI Pieter Abbeel, and Wojciech Zaremba. Hindsight experience replay. Advances in neural information processing systems, 30:5048–5058, 2017.
Brenna D Argall, Sonia Chernova, Manuela Veloso, and Brett Browning. A survey of robot learning from demonstration. Robotics and autonomous systems, 57(5):469–483, 2009.
Stuart Armstrong and Benjamin Levinstein. Low impact artificial intelligences. arXiv preprint arXiv:1705.10720, 2017.
Stuart Armstrong and Sören Mindermann. Occam's razor is insufficient to infer the preferences of irrational agents. In Advances in Neural Information Processing Systems, pages 5598–5609, 2018.
Stuart Armstrong, Jan Leike, Laurent Orseau, and Shane Legg. Pitfalls of learning a reward function online. arXiv preprint arXiv:2004.13654, 2020.
Dzmitry Bahdanau, Felix Hill, Jan Leike, Edward Hughes, Arian Hosseini, Pushmeet Kohli, and Edward Grefenstette. Learning to understand goal specifications by modelling reward. In International Conference on Learning Representations (ICLR), 2019.
Michael Bain and Claude Sammut. A framework for behavioural cloning. In Machine Intelligence 15, Intelligent Agents [St. Catherine's College, Oxford, July 1995], pages 103–129, Oxford, UK, 1999. Oxford University. ISBN 0-19-853867-7. URL http://dl.acm.org/citation.cfm?id=647636.733043.
Andrea Bajcsy, Dylan P Losey, Marcia K O'Malley, and Anca D Dragan. Learning robot objectives from physical human interaction. Proceedings of Machine Learning Research, 78:217–226, 2017.
Tirthankar Bandyopadhyay, Kok Sung Won, Emilio Frazzoli, David Hsu, Wee Sun Lee, and Daniela Rus. Intention-aware motion planning. In Algorithmic Foundations of Robotics X, pages 475–491. Springer, 2013.
Christopher M Bishop. Mixture density networks. Neural Computing Research Group Report, Aston University, 1994.
Erdem Bıyık, Malayandi Palan, Nicholas C Landolfi, Dylan P Losey, and Dorsa Sadigh. Asking easy questions: A user-friendly approach to active reward learning. arXiv preprint arXiv:1910.04365, 2019.
Andreea Bobu, Dexter RR Scobee, Jaime F Fisac, S Shankar Sastry, and Anca D Dragan. Less is more: Rethinking probabilistic models of human behavior. In Proceedings of the 2020 ACM/IEEE International Conference on Human-Robot Interaction, pages 429–437, 2020.
Nick Bostrom. Superintelligence: Paths, Dangers, Strategies. Oxford University Press, Inc., USA, 2014.
David D Bourgin, Joshua C Peterson, Daniel Reichman, Stuart J Russell, and Thomas L Griffiths. Cognitive model priors for predicting human decisions. In International conference on machine learning, pages 5133–5141, 2019.
Craig Boutilier, Ronen I Brafman, Carmel Domshlak, Holger H Hoos, and David Poole. CP-nets: A tool for representing and reasoning with conditional ceteris paribus preference statements. Journal of artificial intelligence research, 21:135–191, 2004.
Greg Brockman, Vicki Cheung, Ludwig Pettersson, Jonas Schneider, John Schulman, Jie Tang, and Wojciech Zaremba. OpenAI Gym.
arXiv preprint arXiv:1606.01540, 2016.
Connor Brooks and Daniel Szafir. Balanced information gathering and goal-oriented actions in shared autonomy. In 2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI), pages 85–94, 2019.
Daniel S Brown, Wonjoon Goo, Prabhat Nagarajan, and Scott Niekum. Extrapolating beyond suboptimal demonstrations via inverse reinforcement learning from observations. arXiv preprint arXiv:1904.06387, 2019.
Tom B Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. arXiv preprint arXiv:2005.14165, 2020.
Lars Buesing, Theophane Weber, Sébastien Racanière, SM Eslami, Danilo Rezende, David P Reichert, Fabio Viola, Frederic Besse, Karol Gregor, Demis Hassabis, et al. Learning and querying fast generative models for reinforcement learning. arXiv preprint arXiv:1802.03006, 2018.
Yuri Burda, Harri Edwards, Deepak Pathak, Amos Storkey, Trevor Darrell, and Alexei A Efros. Large-scale study of curiosity-driven learning. arXiv preprint arXiv:1808.04355, 2018.
Ryan Carey. Incorrigibility in the CIRL framework. In Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society, pages 30–35, 2018.
Micah Carroll, Rohin Shah, Mark K Ho, Tom Griffiths, Sanjit Seshia, Pieter Abbeel, and Anca Dragan. On the utility of learning about humans for human-AI coordination. In Advances in Neural Information Processing Systems, pages 5174–5185, 2019.
Lawrence Chan, Dylan Hadfield-Menell, Siddhartha Srinivasa, and Anca Dragan. The assistive multi-armed bandit. In 2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI), pages 354–363. IEEE, 2019.
Min Chen, Stefanos Nikolaidis, Harold Soh, David Hsu, and Siddhartha Srinivasa. Planning with trust for human-robot collaboration.
In Proceedings of the 2018 ACM/IEEE International Conference on Human-Robot Interaction, pages 307–315, 2018.
Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. A simple framework for contrastive learning of visual representations. In Proceedings of International Conference on Machine Learning (ICML), 2020.
Rohan Choudhury, Gokul Swamy, Dylan Hadfield-Menell, and Anca D Dragan. On the utility of model learning in HRI. In 2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI), pages 317–325. IEEE, 2019.
Alexandra Chouldechova. Fair prediction with disparate impact: A study of bias in recidivism prediction instruments. Big Data, 5(2):153–163, 2017.
Patryk Chrabaszcz, Ilya Loshchilov, and Frank Hutter. Back to basics: Benchmarking canonical evolution strategies for playing Atari. arXiv preprint arXiv:1802.08842, 2018.
Paul F Christiano, Jan Leike, Tom Brown, Miljan Martic, Shane Legg, and Dario Amodei. Deep reinforcement learning from human preferences. In Advances in Neural Information Processing Systems, pages 4299–4307, 2017.
Jack Clark and Dario Amodei. Faulty reward functions in the wild, 2016. URL https://blog.openai.com/faulty-reward-functions.
Kevin Clark, Minh-Thang Luong, Quoc V Le, and Christopher D Manning. ELECTRA: Pre-training text encoders as discriminators rather than generators. In International Conference on Learning Representations (ICLR), 2020.
Robert W Cohn. Maximizing Expected Value of Information in Decision Problems by Querying on a Wish-to-Know Basis. PhD thesis, University of Michigan, 2016.
Sam Corbett-Davies, Emma Pierson, Avi Feller, Sharad Goel, and Aziz Huq. Algorithmic decision making and the cost of fairness. In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 797–806, 2017.
Richard M Cyert and Morris H DeGroot. Adaptive utility. In Adaptive Economic Models, pages 223–246. Elsevier, 1975.
Christian Daniel, Malte Viering, Jan Metz, Oliver Kroemer, and Jan Peters. Active reward learning. In Robotics: Science and Systems, 2014.
Nishant Desai. Uncertain reward-transition MDPs for negotiable reinforcement learning. 2017.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), 2019.
Christos Dimitrakakis, David C Parkes, Goran Radanovic, and Paul Tylkin. Multi-view decision processes: the helper-AI problem. In Advances in Neural Information Processing Systems, pages 5443–5452, 2017.
Andreas Doerr, Christian Daniel, Martin Schiegg, Duy Nguyen-Tuong, Stefan Schaal, Marc Toussaint, and Sebastian Trimpe. Probabilistic recurrent state-space models. In Proceedings of International Conference on Machine Learning (ICML), 2018.
Michael O Duff. Optimal learning: Computational procedures for Bayes-adaptive Markov decision processes. PhD thesis, University of Massachusetts Amherst, 2002.
Ashley D Edwards, Himanshu Sahni, Yannick Schroeker, and Charles L Isbell. Imitating latent policies from observation. arXiv preprint arXiv:1805.07914, 2018.
Eric Brochu, Nando de Freitas, and Abhijeet Ghosh. Active preference learning with discrete choice data. In Advances in Neural Information Processing Systems, pages 409–416, 2008.
Owain Evans, Andreas Stuhlmüller, and Noah Goodman. Learning the preferences of ignorant, inconsistent agents. In Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence, 2016.
Benjamin Eysenbach, Abhishek Gupta, Julian Ibarz, and Sergey Levine. Diversity is all you need: Learning skills without a reward function. arXiv preprint arXiv:1802.06070, 2018.
Alan Fern, Sriraam Natarajan, Kshitij Judah, and Prasad Tadepalli.
A decision-theoretic model of assistance. Journal of Artificial Intelligence Research, 50:71–104, 2014.
Chelsea Finn, Sergey Levine, and Pieter Abbeel. Guided cost learning: Deep inverse optimal control via policy optimization. In International Conference on Machine Learning, pages 49–58, 2016.
Justin Fu, Katie Luo, and Sergey Levine. Learning robust rewards with adversarial inverse reinforcement learning. arXiv preprint arXiv:1710.11248, 2017.
Justin Fu, Anoop Korattikara, Sergey Levine, and Sergio Guadarrama. From language to goals: Inverse reinforcement learning for vision-based instruction following. arXiv preprint arXiv:1902.07742, 2019.
Sunil Gandhi, Tim Oates, Tinoosh Mohsenin, and Nicholas Waytowich. Learning from observations using a single video demonstration and human feedback. In Proceedings of the 18th International Conference on Autonomous Agents and MultiAgent Systems, 2019.
Yang Gao, Jan Peters, Antonios Tsourdos, Shao Zhifei, and Er Meng Joo. A survey of inverse reinforcement learning techniques. International Journal of Intelligent Computing and Cybernetics, 2012.
Noah D Goodman and Michael C Frank. Pragmatic language interpretation as probabilistic inference. Trends in Cognitive Sciences, 20(11):818–829, 2016.
Tuomas Haarnoja, Aurick Zhou, Pieter Abbeel, and Sergey Levine. Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor. In Proceedings of International Conference on Machine Learning (ICML), 2018.
Dylan Hadfield-Menell, Stuart J Russell, Pieter Abbeel, and Anca Dragan. Cooperative Inverse Reinforcement Learning. In Advances in Neural Information Processing Systems, pages 3909–3917, 2016.
Dylan Hadfield-Menell, Anca Dragan, Pieter Abbeel, and Stuart Russell. The off-switch game. In Workshops at the Thirty-First AAAI Conference on Artificial Intelligence, 2017a.
Dylan Hadfield-Menell, Smitha Milli, Pieter Abbeel, Stuart J Russell, and Anca Dragan. Inverse reward design.
In Advances in Neural Information Processing Systems, pages 6765–6774, 2017b.
Danijar Hafner, Timothy Lillicrap, Ian Fischer, Ruben Villegas, David Ha, Honglak Lee, and James Davidson. Learning latent dynamics for planning from pixels. In Proceedings of International Conference on Machine Learning (ICML), 2019.
Danijar Hafner, Timothy Lillicrap, Jimmy Ba, and Mohammad Norouzi. Dream to control: Learning behaviors by latent imagination. In International Conference on Learning Representations (ICLR), 2020.
Joseph Y Halpern and Rafael Pass. Game theory with translucent players. International Journal of Game Theory, 47(3):949–976, 2018.
Kaiming He, Haoqi Fan, Yuxin Wu, Saining Xie, and Ross Girshick. Momentum contrast for unsupervised visual representation learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020.
Christopher Hesse, John Schulman, Vicki Pfau, Alex Nichol, Oleg Klimov, and Larissa Schiavo. Retro contest, 2018. https://openai.com/blog/retro-contest/.
Ashley Hill, Antonin Raffin, Maximilian Ernestus, Adam Gleave, Anssi Kanervisto, Rene Traore, Prafulla Dhariwal, Christopher Hesse, Oleg Klimov, Alex Nichol, Matthias Plappert, Alec Radford, John Schulman, Szymon Sidor, and Yuhuai Wu. Stable baselines. https://github.com/hill-a/stable-baselines, 2018.
Jonathan Ho and Stefano Ermon. Generative adversarial imitation learning. In Advances in Neural Information Processing Systems, 2016.
Mark K Ho, Michael Littman, James MacGlashan, Fiery Cushman, and Joseph L Austerweil. Showing versus doing: Teaching by demonstration. In Advances in Neural Information Processing Systems, pages 3027–3035, 2016.
David Hume. A treatise of human nature. Courier Corporation, 2003.
Borja Ibarz, Jan Leike, Tobias Pohlen, Geoffrey Irving, Shane Legg, and Dario Amodei. Reward learning from human preferences and demonstrations in Atari. In Advances in Neural Information Processing Systems, 2018.
Alexis Jacq, Matthieu Geist, Ana Paiva, and Olivier Pietquin. Learning from a learner. In International Conference on Machine Learning, pages 2990–2999, 2019.
Shervin Javdani, Siddhartha S Srinivasa, and J Andrew Bagnell. Shared autonomy via hindsight optimization. Robotics Science and Systems: online proceedings, 2015.
Hong Jun Jeon, Smitha Milli, and Anca D Dragan. Reward-rational (implicit) choice: A unifying formalism for reward learning. arXiv preprint arXiv:2002.04833, 2020.
Daniel Kahneman. Thinking, fast and slow. Macmillan, 2011.
Ece Kamar, Ya'akov Gal, and Barbara J Grosz. Incorporating helpful behavior into collaborative planning. In Proceedings of The 8th International Conference on Autonomous Agents and Multiagent Systems (AAMAS). Springer Verlag, 2009.
Antti Kangasrääsiö and Samuel Kaski. Inverse reinforcement learning from summary data. Machine Learning, 107(8-10):1517–1535, 2018.
Maximilian Karl, Maximilian Soelch, Justin Bayer, and Patrick Van der Smagt. Deep variational Bayes filters: Unsupervised learning of state space models from raw data. In International Conference on Learning Representations (ICLR), 2017.
Diederik P Kingma and Max Welling. Auto-encoding variational Bayes. In International Conference on Learning Representations (ICLR), 2014.
Jon Kleinberg, Sendhil Mullainathan, and Manish Raghavan. Inherent trade-offs in the fair determination of risk scores. arXiv preprint arXiv:1609.05807, 2016.
W Bradley Knox and Peter Stone. Interactively shaping agents via human reinforcement: The TAMER framework. In Proceedings of the fifth international conference on Knowledge capture, pages 9–16. ACM, 2009.
Victoria Krakovna. Specification gaming examples in AI, 2018. URL https://vkrakovna.wordpress.com/2018/04/02/specification-gaming-examples-in-ai/.
Victoria Krakovna, Laurent Orseau, Ramana Kumar, Miljan Martic, and Shane Legg. Penalizing side effects using stepwise relative reachability.
arXiv preprint arXiv:1806.01186, 2018.
Thanard Kurutach, Aviv Tamar, Ge Yang, Stuart J Russell, and Pieter Abbeel. Learning plannable representations with causal InfoGAN. In Advances in Neural Information Processing Systems, 2018.
Kuang-Huei Lee, Ian Fischer, Anthony Liu, Yijie Guo, Honglak Lee, John Canny, and Sergio Guadarrama. Predictive information accelerates learning in RL. Advances in Neural Information Processing Systems, 33, 2020.
Joel Lehman, Jeff Clune, and Dusan Misevic. The surprising creativity of digital evolution. In Artificial Life Conference Proceedings, pages 55–56. MIT Press, 2018.
Jan Leike, David Krueger, Tom Everitt, Miljan Martic, Vishal Maini, and Shane Legg. Scalable agent alignment via reward modeling: a research direction. arXiv preprint arXiv:1811.07871, 2018.
David A Levin and Yuval Peres. Markov chains and mixing times, volume 107. American Mathematical Soc., 2017.
Sergey Levine. Reinforcement learning and control as probabilistic inference: Tutorial and review. arXiv preprint arXiv:1805.00909, 2018.
Manuel Lopes, Francisco Melo, and Luis Montesano. Active learning for reward estimation in inverse reinforcement learning. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases, pages 31–46. Springer, 2009.
James MacGlashan, Mark K Ho, Robert Loftin, Bei Peng, David Roberts, Matthew E Taylor, and Michael L Littman. Interactive learning from policy-dependent human feedback. arXiv preprint arXiv:1701.06049, 2017.
Owen Macindoe, Leslie Pack Kaelbling, and Tomás Lozano-Pérez. POMCoP: Belief space planning for sidekicks in cooperative games. In Eighth Artificial Intelligence and Interactive Digital Entertainment Conference, 2012.
Anirudha Majumdar, Sumeet Singh, Ajay Mandlekar, and Marco Pavone. Risk-sensitive inverse reinforcement learning via coherent risk models. In Robotics: Science and Systems, 2017.
Dhruv Malik, Malayandi Palaniappan, Jaime F Fisac, Dylan Hadfield-Menell, Stuart Russell, and Anca D Dragan. An efficient, generalized Bellman update for cooperative inverse reinforcement learning. arXiv preprint arXiv:1806.03820, 2018.
James John Martin. Bayesian decision problems and Markov chains. Wiley, 1967.
Lucas Maystre and Matthias Grossglauser. Just sort it! A simple and effective approach to active preference learning. In Proceedings of the 34th International Conference on Machine Learning, pages 2344–2353, 2017.
John McCarthy and Patrick J Hayes. Some philosophical problems from the standpoint of artificial intelligence. In Readings in Artificial Intelligence, pages 431–450. Elsevier, 1981.
Bernard Michini and Jonathan P How. Bayesian nonparametric inverse reinforcement learning. In Joint European conference on machine learning and knowledge discovery in databases, pages 148–163. Springer, 2012.
Smitha Milli, Dylan Hadfield-Menell, Anca Dragan, and Stuart Russell. Should robots be obedient? arXiv preprint arXiv:1705.09990, 2017.
Sören Mindermann, Rohin Shah, Adam Gleave, and Dylan Hadfield-Menell. Active inverse reward design. arXiv preprint arXiv:1809.03060, 2018.
Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Alex Graves, Ioannis Antonoglou, Daan Wierstra, and Martin Riedmiller. Playing Atari with deep reinforcement learning, 2013.
Ashvin V Nair, Vitchyr Pong, Murtaza Dalal, Shikhar Bahl, Steven Lin, and Sergey Levine. Visual reinforcement learning with imagined goals. In Advances in Neural Information Processing Systems, pages 9191–9200, 2018.
Andrew Y Ng and Stuart J Russell. Algorithms for inverse reinforcement learning. In International Conference on Machine learning, 2000.
Stefanos Nikolaidis and Julie Shah. Human-robot cross-training: computational formulation, modeling and evaluation of a human team training strategy. In 2013 8th ACM/IEEE International Conference on Human-Robot Interaction (HRI), pages 33–40. IEEE, 2013.
Stefanos Nikolaidis, Ramya Ramakrishnan, Keren Gu, and Julie Shah. Efficient model learning from joint-action demonstrations for human-robot collaborative tasks. In 2015 10th ACM/IEEE International Conference on Human-Robot Interaction (HRI), pages 189–196. IEEE, 2015.
Stefanos Nikolaidis, David Hsu, and Siddhartha Srinivasa. Human-robot mutual adaptation in collaborative tasks: Models and experiments. The International Journal of Robotics Research, 36(5-7):618–634, 2017.
Stephen M Omohundro. The basic AI drives. In Artificial General Intelligence, pages 483–492, 2008.
Aaron van den Oord, Yazhe Li, and Oriol Vinyals. Representation learning with contrastive predictive coding. arXiv preprint arXiv:1807.03748, 2018.
OpenAI. OpenAI Five, 2018. https://openai.com/blog/openai-five/.
Xanthoula Eirini Pantazi, Dimitrios Moshou, Thomas Alexandridis, Rebecca L Whetton, and Abdul Mounem Mouazen. Wheat yield prediction using machine learning and advanced sensing techniques. Computers and Electronics in Agriculture, 121:57–65, 2016.
Daehyung Park, Michael Noseworthy, Rohan Paul, Subhro Roy, and Nicholas Roy. Inferring task goals and constraints using Bayesian nonparametric inverse reinforcement learning. In Conference on Robot Learning, pages 1005–1014, 2020.
Joelle Pineau, Geoff Gordon, Sebastian Thrun, et al. Point-based value iteration: An anytime algorithm for POMDPs. In IJCAI, pages 1025–1032, 2003.
Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. Language models are unsupervised multitask learners. OpenAI blog, 1(8):9, 2019.
Deepak Ramachandran and Eyal Amir. Bayesian inverse reinforcement learning. In IJCAI, pages 2586–2591, 2007.
Stephanie Rosenthal and Manuela Veloso. Modeling humans as observation providers using POMDPs. In 2011 RO-MAN, pages 53–58. IEEE, 2011.
Stuart Russell. Of myths and moonshine, 2014. URL https://www.edge.org/conversation/the-myth-of-ai#26015.
Stuart Russell.
Human Compatible: Artificial Intelligence and the Problem of Control. Penguin, 2019.
Dorsa Sadigh, S Shankar Sastry, Sanjit A Seshia, and Anca Dragan. Information gathering actions over human internal state. In 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 66–73. IEEE, 2016.
Dorsa Sadigh, Anca D Dragan, Shankar Sastry, and Sanjit A Seshia. Active preference-based learning of reward functions. In Robotics: Science and Systems, 2017.
Tom Schaul, Daniel Horgan, Karol Gregor, and David Silver. Universal value function approximators. In International Conference on Machine Learning, pages 1312–1320, 2015.
John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347, 2017.
Rohin Shah, Noah Gundotra, Pieter Abbeel, and Anca D Dragan. On the feasibility of learning, rather than assuming, human biases for reward inference. arXiv preprint arXiv:1906.09624, 2019a.
Rohin Shah, Dmitrii Krasheninnikov, Jordan Alexander, Pieter Abbeel, and Anca Dragan. Preferences implicit in the state of the world. In International Conference on Learning Representations, 2019b.
Archit Sharma, Shane Gu, Sergey Levine, Vikash Kumar, and Karol Hausman. Dynamics-aware unsupervised skill discovery. In International Conference on Learning Representations (ICLR), 2020.
Jason F Shogren, John A List, and Dermot J Hayes. Preference learning in consecutive experimental auctions. American Journal of Agricultural Economics, 82(4):1016–1021, 2000.
David Silver, Julian Schrittwieser, Karen Simonyan, Ioannis Antonoglou, Aja Huang, Arthur Guez, Thomas Hubert, Lucas Baker, Matthew Lai, Adrian Bolton, et al. Mastering the game of Go without human knowledge. Nature, 550(7676):354–359, 2017.
Nisan Stiennon, Long Ouyang, Jeff Wu, Daniel M Ziegler, Ryan Lowe, Chelsea Voss, Alec Radford, Dario Amodei, and Paul Christiano. Learning to summarize from human feedback.
arXiv preprint arXiv:2009.01325, 2020.
Adam Stooke, Kimin Lee, Pieter Abbeel, and Michael Laskin. Decoupling representation learning from reinforcement learning. arXiv preprint arXiv:2009.08319, 2020.
Richard S Sutton and Andrew G Barto. Reinforcement learning: An introduction. MIT press, 2018.
Emanuel Todorov, Tom Erez, and Yuval Tassa. MuJoCo: A physics engine for model-based control. In 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems, 2012.
Faraz Torabi, Garrett Warnell, and Peter Stone. Generative adversarial imitation from observation. arXiv preprint arXiv:1807.06158, 2018.
Alexander Matt Turner. Optimal farsighted agents tend to seek power. arXiv preprint arXiv:1912.01683, 2019.
Alexander Matt Turner, Dylan Hadfield-Menell, and Prasad Tadepalli. Conservative agency via attainable utility preservation. In Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, pages 385–391, 2020.
Oriol Vinyals, Igor Babuschkin, Junyoung Chung, Michael Mathieu, Max Jaderberg, Wojciech M. Czarnecki, Andrew Dudzik, Aja Huang, Petko Georgiev, Richard Powell, Timo Ewalds, Dan Horgan, Manuel Kroiss, Ivo Danihelka, John Agapiou, Junhyuk Oh, Valentin Dalibard, David Choi, Laurent Sifre, Yury Sulsky, Sasha Vezhnevets, James Molloy, Trevor Cai, David Budden, Tom Paine, Caglar Gulcehre, Ziyu Wang, Tobias Pfaff, Toby Pohlen, Yuhuai Wu, Dani Yogatama, Julia Cohen, Katrina McKinney, Oliver Smith, Tom Schaul, Timothy Lillicrap, Chris Apps, Koray Kavukcuoglu, Demis Hassabis, and David Silver. AlphaStar: Mastering the real-time strategy game StarCraft II. https://deepmind.com/blog/alphastar-mastering-real-time-strategy-game-starcraft-ii/, 2019.
Steven Wang, Sam Toyer, Adam Gleave, and Scott Emmons. The imitation library for imitation learning and inverse reinforcement learning. https://github.com/HumanCompatibleAI/imitation, 2020.
Garrett Warnell, Nicholas Waytowich, Vernon Lawhern, and Peter Stone.
Deep TAMER: Interactive agent shaping in high-dimensional state spaces. arXiv preprint arXiv:1709.10163, 2017.
David Whitney, Eric Rosen, James MacGlashan, Lawson LS Wong, and Stefanie Tellex. Reducing errors in object-fetching interactions through social feedback. In 2017 IEEE International Conference on Robotics and Automation (ICRA), pages 1006–1013. IEEE, 2017.
Nils Wilde, Dana Kulic, and Stephen L Smith. Active preference learning using maximum regret. arXiv preprint arXiv:2005.04067, 2020.
Christian Wirth, Riad Akrour, Gerhard Neumann, and Johannes Fürnkranz. A survey of preference-based reinforcement learning methods. Journal of Machine Learning Research, 18(1):4945–4990, 2017.
Mark Woodward, Chelsea Finn, and Karol Hausman. Learning to interactively learn and assist. arXiv preprint arXiv:1906.10187, 2019.
Christopher Yeh, Anthony Perez, Anne Driscoll, George Azzari, Zhongyi Tang, David Lobell, Stefano Ermon, and Marshall Burke. Using publicly available satellite imagery and deep learning to understand economic well-being in Africa. Nature communications, 11(1):1–11, 2020.
Tianhe Yu, Chelsea Finn, Annie Xie, Sudeep Dasari, Tianhao Zhang, Pieter Abbeel, and Sergey Levine. One-shot imitation from observing humans via domain-adaptive meta-learning. arXiv preprint arXiv:1802.01557, 2018.
Eliezer Yudkowsky. Problem of fully updated deference, year unknown. URL https://arbital.com/p/updated_deference/.
Shun Zhang, Edmund Durfee, and Satinder Singh. Approximately-optimal queries for planning in reward-uncertain Markov decision processes. In Twenty-Seventh International Conference on Automated Planning and Scheduling, 2017.
Brian D Ziebart, J Andrew Bagnell, and Anind K Dey. Modeling interaction via the principle of maximum causal entropy. 2010.
Tool/Agent distinction in the light of the AI box experiment

This article poses questions on the distinction between Tool AGI and Agent AGI, which was described very concisely by Holden Karnofsky in his recent Thoughts on the Singularity Institute post:

> In short, Google Maps is not an agent, taking actions in order to maximize a utility parameter. It is a tool, generating information and then displaying it in a user-friendly manner for me to consider, use and export or discard as I wish.

For me, this instantly raised one question: What if a Tool AGI becomes/is self-aware (which, for the purposes of this post, I define as "able to have goals that are distinct from the goals of the outside world") and starts manipulating its results in a way that is non-obvious to its user? Or, even worse: What if the Tool AGI makes its user do things (which I do not expect to be much more difficult than succeeding in the AI box experiment)?

My first reaction was to flinch away by telling myself: "But of course a Tool would never become self-aware! Self-awareness is too complex to just happen unintentionally!" But some uncertainty survived and was strengthened by Eliezer's reply to Holden:

> [Tool AGI] starts sounding much scarier once you try to say something more formal and internally-causal like "Model the user and the universe, predict the degree of correspondence between the user's model and the universe, and select from among possible explanation-actions on this basis."

After all, "Self-awareness is too complex to just happen unintentionally!" is just a bunch of English words expressing my personal incredulity. It's not a valid argument. So, can we make the argument that self-awareness will not happen unintentionally? If we can't make that argument, can we stop Tool AGIs from potentially becoming a Weak Agent AGI which acts through its human user? If we can't do that, how meaningful is the distinction between a Weak Agent AGI (a.k.a. Tool AGI) and an Agent AGI?
For more, see the Tools versus Agents post by Stuart_Armstrong.
Evidence on good forecasting practices from the Good Judgment Project: an accompanying blog post *By Daniel Kokotajlo,* *2 July 2019* [![](http://aiimpacts.org/wp-content/uploads/2019/02/image2.png)](http://aiimpacts.org/wp-content/uploads/2019/02/image2.png) Figure 0: The “four main determinants of forecasting accuracy.” [1](https://aiimpacts.org/evidence-on-good-forecasting-practices-from-the-good-judgment-project-an-accompanying-blog-post/#easy-footnote-bottom-1-1260 "This graph can be found <a href=\"https://web.archive.org/web/20180408044422/http://goodjudgment.com/science.html\">here</a>, the GJP’s list of academic literature on this topic. The graph illustrates approximate relative effects. It will be discussed more in Section 2.") Experience and data from the Good Judgment Project (GJP) provide important evidence about how to make accurate predictions. For a concise summary of the evidence and what we learn from it, see [this page](http://aiimpacts.org/evidence-on-good-forecasting-practices-from-the-good-judgment-project/). For a review of [*Superforecasting*](https://smile.amazon.com/Superforecasting-Science-Prediction-Philip-Tetlock/dp/0804136718/ref=sr_1_1?ie=UTF8&qid=1541990711&sr=8-1&keywords=superforecasting+the+art+and+science+of+prediction)*,* the popular book written on the subject, see [this blog](http://slatestarcodex.com/2016/02/04/book-review-superforecasting/). This post explores the evidence in more detail, drawing from the book, the academic literature, the older [*Expert Political Judgment*](https://smile.amazon.com/Expert-Political-Judgment-Good-Know/dp/0691128715/ref=sr_1_1?ie=UTF8&qid=1541990738&sr=8-1&keywords=expert+political+judgment) book, and an interview with a superforecaster. Readers are welcome to skip around to parts that interest them: 1. 
The experiment ----------------- [IARPA](https://www.iarpa.gov/) ran a forecasting tournament from 2011 to 2015, in which five teams plus a control group gave probabilistic answers to hundreds of questions. The questions were generally about potential geopolitical events more than a month but less than a year in the future, e.g. “Will there be a violent incident in the South China Sea in 2013 that kills at least one person?” The questions were carefully chosen so that a reasonable answer would be somewhere between 10% and 90%.[2](https://aiimpacts.org/evidence-on-good-forecasting-practices-from-the-good-judgment-project-an-accompanying-blog-post/#easy-footnote-bottom-2-1260 "This is from my conversation with the superforecaster. ") The forecasts were scored using the *original* [Brier score](https://en.wikipedia.org/wiki/Brier_score)—more on that in Section 2.[3](https://aiimpacts.org/evidence-on-good-forecasting-practices-from-the-good-judgment-project-an-accompanying-blog-post/#easy-footnote-bottom-3-1260 "They did this so that they could include occasional non-binary questions. They show <a href=\"https://academic-oup-com.libproxy.lib.unc.edu/isq/article/62/2/410/4944059\">here</a> that their results are robust to using a logarithmic scoring rule instead.") The winning team was the GJP, run by Philip Tetlock & Barbara Mellers. They recruited thousands of online volunteers to answer IARPA’s questions. These volunteers tended to be males (83%) and US citizens (74%). Their average age was forty. 64% of respondents held a bachelor’s degree, and 57% had postgraduate training.[4](https://aiimpacts.org/evidence-on-good-forecasting-practices-from-the-good-judgment-project-an-accompanying-blog-post/#easy-footnote-bottom-4-1260 "These statistics come from <a href=\"https://academic-oup-com.libproxy.lib.unc.edu/isq/article/62/2/410/4944059\">this study</a>. 
The dataset excludes individuals who signed up but failed to register at least 25 predictions in a given year.") GJP made their official predictions by aggregating and extremizing the predictions of their volunteers.[5](https://aiimpacts.org/evidence-on-good-forecasting-practices-from-the-good-judgment-project-an-accompanying-blog-post/#easy-footnote-bottom-5-1260 "The aggregation algorithm was elitist, meaning that it weighted more heavily forecasters with good track-records who had updated their forecasts more often. This description of elitism comes from the <a href=\"https://goodjudgment.com/science.html\">webpage.</a> In <a href=\"https://www.edge.org/conversation/philip_tetlock-edge-master-class-2015-a-short-course-in-superforecasting-class-ii\">these slides </a>Tetlock describes the elitism differently: He says it gives weight to higher-IQ, more open-minded forecasters. The extremizing step pushes the aggregated judgment closer to 1 or 0, to make it more confident. The degree to which they extremize depends on how diverse and sophisticated the pool of forecasters is. The academic papers on this topic can be found <a href=\"http://pubsonline.informs.org/doi/abs/10.1287/deca.2014.0293\">here</a> and <a href=\"https://www.sciencedirect.com/science/article/pii/S0169207013001635\">here</a>. Whether extremizing is a good idea is controversial; according to one expert I interviewed, more recent data suggests that the successes of the extremizing algorithm during the forecasting tournament were a fluke. After all, a priori one would expect extremizing to lead to small improvements in accuracy most of the time, but big losses in accuracy some of the time.") They identified the top 2% of predictors in their pool of volunteers each year, dubbing them “superforecasters,” and put them on teams in the next year so they could collaborate on special forums. 
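The aggregate-then-extremize step can be sketched as below. The papers cited in the footnote describe the actual algorithm (which also weighted forecasters by track record and update frequency); the simple mean and the exponent `a` here are illustrative assumptions, not GJP's tuned parameters.

```python
def extremize(p, a=2.5):
    """Push a probability toward 0 or 1; a = 1 leaves it unchanged.

    The value a = 2.5 is an illustrative choice, not GJP's tuned
    parameter.
    """
    return p ** a / (p ** a + (1 - p) ** a)

def aggregate(forecasts, a=2.5):
    """Average individual forecasts, then extremize the mean.

    GJP's real aggregation was elitist (weighted by track record);
    this unweighted mean is a simplification.
    """
    mean = sum(forecasts) / len(forecasts)
    return extremize(mean, a)

# Five forecasters lean the same way; the extremized aggregate is
# more confident than their simple average of 0.72.
print(aggregate([0.70, 0.65, 0.80, 0.75, 0.70]))
```

The intuition: when many independent forecasters lean the same way, the crowd collectively "knows" more than any member, so the aggregate should be more extreme than the average member's forecast.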
They also experimented with a prediction market, and they did an [RCT](https://en.wikipedia.org/wiki/Randomized_controlled_trial) to test the effect of a one-hour training module on forecasting ability. The module included content about probabilistic reasoning, using the outside view, avoiding biases, and more. Attempts were made to find out which parts of the training were most helpful—see Section 4. 2. The results & their intuitive meaning ---------------------------------------- Here are some of the key results: “In year 1 GJP beat the official control group by 60%. In year 2, we beat the control group by 78%. GJP also beat its university-affiliated competitors, including the University of Michigan and MIT, by hefty margins, from 30% to 70%.”[6](https://aiimpacts.org/evidence-on-good-forecasting-practices-from-the-good-judgment-project-an-accompanying-blog-post/#easy-footnote-bottom-6-1260 "<i>Superforecasting </i>p18. On page 69: “Teams had to beat the combined forecast—the “wisdom of the crowd”—of the control group, and by margins we all saw as intimidating. In the first year, IARPA wanted teams to beat that standard by 20%—and it wanted that margin of victory to grow to 50% by the fourth year.” In light of this, it is especially impressive that individual superforecasters in the first two years beat the wisdom-of-the-crowds-of-the-control-group by ~60% and that the GJP beat it by 78%.
(p72)") “The Good Judgment Project outperformed a prediction market inside the intelligence community, which was populated with professional analysts who had classified information, by 25 or 30 percent, which was about the margin by which the superforecasters were outperforming our own prediction market in the external world.”[7](https://aiimpacts.org/evidence-on-good-forecasting-practices-from-the-good-judgment-project-an-accompanying-blog-post/#easy-footnote-bottom-7-1260 "Transcript of <a href=\"https://www.edge.org/conversation/philip_tetlock-edge-master-class-2015-a-short-course-in-superforecasting-class-ii\">this seminar</a>.") “Teams of ordinary forecasters beat the wisdom of the crowd by about 10%. Prediction markets beat ordinary teams by about 20%. And [teams of superforecasters] beat prediction markets by 15% to 30%.”[8](https://aiimpacts.org/evidence-on-good-forecasting-practices-from-the-good-judgment-project-an-accompanying-blog-post/#easy-footnote-bottom-8-1260 "<i>Superforecasting </i>p207") “On average, teams were 23% more accurate than individuals.”[9](https://aiimpacts.org/evidence-on-good-forecasting-practices-from-the-good-judgment-project-an-accompanying-blog-post/#easy-footnote-bottom-9-1260 "<i>Superforecasting</i> p201") What does Tetlock mean when he says that one group did X% better than another? By examining Table 4 (in Section 4)  it seems that he means X% *lower* Brier score. What is the Brier score? For more details, see the [Wikipedia article](https://en.wikipedia.org/wiki/Brier_score); basically, it measures the average squared distance from the truth. 
This is why it’s better to have a lower Brier score—it means you were on average closer to the truth.[10](https://aiimpacts.org/evidence-on-good-forecasting-practices-from-the-good-judgment-project-an-accompanying-blog-post/#easy-footnote-bottom-10-1260 "The best possible Brier score is 0; the Brier score achieved by guessing randomly depends on which version of the score you use and how many possible outcomes each prediction chooses between. For binary predictions, which constituted the bulk of IARPA’s questions, the original version of the Brier score is effectively twice the squared distance from the truth, so always guessing 50% would yield a score of 0.5.") Here is a bar graph of all the forecasters in Year 2, sorted by Brier score:[11](https://aiimpacts.org/evidence-on-good-forecasting-practices-from-the-good-judgment-project-an-accompanying-blog-post/#easy-footnote-bottom-11-1260 "This is from <a href=\"https://www.apa.org/pubs/journals/releases/xap-0000040.pdf\">this study</a>. The data covers the first two years of the tournament.") [![](http://aiimpacts.org/wp-content/uploads/2019/02/image9.png)](http://aiimpacts.org/wp-content/uploads/2019/02/image9.png) For this set of questions, guessing randomly (assigning even odds to all possibilities) would yield a Brier score of 0.53. So most forecasters did significantly better than that. Some people—the people on the far left of this chart, the superforecasters—did much better than the average. For example, in year 2, the superforecaster Doug Lorch did best with 0.14.
This was more than 60% better than the control group.[12](https://aiimpacts.org/evidence-on-good-forecasting-practices-from-the-good-judgment-project-an-accompanying-blog-post/#easy-footnote-bottom-12-1260 "<i>Superforecasting </i>p93") Importantly, being a superforecaster in one year correlated strongly with being a superforecaster the next year; there was some regression to the mean but roughly 70% of the superforecasters maintained their status from one year to the next.[13](https://aiimpacts.org/evidence-on-good-forecasting-practices-from-the-good-judgment-project-an-accompanying-blog-post/#easy-footnote-bottom-13-1260 "<i>Superforecasting </i>p104") OK, but what does all this mean, in intuitive terms? Here are three ways to get a sense of how good these scores really are: **Way One:** Let’s calculate some examples of prediction patterns that would give you Brier scores like those mentioned above. Suppose you make a bunch of predictions with 80% confidence and you are correct 80% of the time. Then your Brier score would be 0.32, roughly middle of the pack in this tournament. If instead you made predictions with 93% confidence and were correct 93% of the time, your Brier score would be 0.130, very close to the best superforecasters and to GJP’s aggregated forecasts.[14](https://aiimpacts.org/evidence-on-good-forecasting-practices-from-the-good-judgment-project-an-accompanying-blog-post/#easy-footnote-bottom-14-1260 "To calculate this, I assumed binary questions and plugged the probability, p, into this formula: P(event_doesn’t_happen)(0-p)^2+P(event_happens)(1-p)^2 = (1-p)(0-p)^2+(p)(1-p)^2. I then doubled it, since we are using the <a href=\"https://en.wikipedia.org/wiki/Brier_score\">original Brier score</a> that ranges between 0-2 instead of 0-1. I can’t find stats on GJP’s Brier score, but recall that in year 2 it was 78% better than the control group, and Doug Lorch’s 0.14 was 60% better than the control group.
(<i>Superforecasting </i>p93)") In these examples, you are perfectly calibrated, which helps your score—more realistically you would be imperfectly calibrated and thus would need to be right even more often to get those scores. **Way Two:** “An alternative measure of forecast accuracy is the proportion of days on which forecasters’ estimates were on the correct side of 50%. … For all questions in the sample, a chance score was 47%. The mean proportion of days with correct estimates was 75%…”[15](https://aiimpacts.org/evidence-on-good-forecasting-practices-from-the-good-judgment-project-an-accompanying-blog-post/#easy-footnote-bottom-15-1260 "This is from <a href=\"https://www.apa.org/pubs/journals/releases/xap-0000040.pdf\">the same study</a><a id=\"29d45509-e623-47bf-af3f-8f73ffac423d\" title=\"View this pdf file\" href=\"https://docs.google.com/viewer?url=https%3A%2F%2Fwww.apa.org%2Fpubs%2Fjournals%2Freleases%2Fxap-0000040.pdf&amp;embedded=true&amp;chrome=false&amp;dov=1\"><img decoding=\"async\" style=\"margin-left: 3px; width: 16px; height: 16px;\" src=\"chrome-extension://gmpljdlgcdkljlppaekciacdmdlhfeon/images/beside-link-icon.svg\"></a>, as are the two figures.") According to this chart, the superforecasters were on the right side of 50% almost all the time:[16](https://aiimpacts.org/evidence-on-good-forecasting-practices-from-the-good-judgment-project-an-accompanying-blog-post/#easy-footnote-bottom-16-1260 "The correlation between average Brier score and how often you were on the right side of 50% was 0.89 (<a href=\"https://www.apa.org/pubs/journals/releases/xap-0000040.pdf\">same study</a><a id=\"e3c5b38a-89d2-4822-a120-50cc3a1ee770\" title=\"View this pdf file\" href=\"https://docs.google.com/viewer?url=https%3A%2F%2Fwww.apa.org%2Fpubs%2Fjournals%2Freleases%2Fxap-0000040.pdf&amp;embedded=true&amp;chrome=false&amp;dov=1\"><img decoding=\"async\" style=\"margin-left: 3px; width: 16px; height: 16px;\" 
src=\"chrome-extension://gmpljdlgcdkljlppaekciacdmdlhfeon/images/beside-link-icon.svg\"></a>), so I think it’s safe to assume the superforecasters were somewhere on the right side of the peak in Figure 2. (I assume they mean being on the right side of 50% correlates with <i>lower </i>Brier scores; the alternative is crazy.) The high proportion of guesses on the right side of 50% is a puzzling fact—doesn’t it suggest that they were poorly calibrated, and that they could improve their scores by extremizing their judgments? I think what’s going on here is that the majority of forecasts made on most questions by superforecasters were highly (&gt;90%) confident, and also almost always correct.") [![](http://aiimpacts.org/wp-content/uploads/2019/02/image1.png)](http://aiimpacts.org/wp-content/uploads/2019/02/image1.png) **Way Three:** “Across all four years of the tournament, *superforecasters looking out three hundred days were more accurate than regular forecasters looking out one hundred days*.”[17](https://aiimpacts.org/evidence-on-good-forecasting-practices-from-the-good-judgment-project-an-accompanying-blog-post/#easy-footnote-bottom-17-1260 "<i>Superforecasting </i>p94, emphasis mine. Later, in the <a href=\"https://www.edge.org/conversation/philip_tetlock-edge-master-class-2015-a-short-course-in-superforecasting-class-ii\">edge.org seminar</a>, Tetlock says “In some other ROC curves—receiver operator characteristic curves, from signal detection theory—that Mark Steyvers at UCSD constructed—superforecasters could assign probabilities 400 days out about as well as regular people could about eighty days out.” The quote is accompanied by a <a href=\"https://www.edge.org/3rd_culture/Master%20Class%202015/Slide040.jpg\">graph</a>; unfortunately, it’s hard to interpret.") (Bear in mind, this wouldn’t necessarily hold for a different genre of questions. 
For example, information about the weather decays in days, while information about the climate lasts for decades or more.) 3. Correlates of good judgment ------------------------------ The data from this tournament is useful in two ways: It helps us decide whose predictions to trust, and it helps us make better predictions ourselves. This section will focus on which kinds of people and practices best correlate with success—information which is relevant to both goals. Section 4 will cover the training experiment, which helps to address causation vs. correlation worries. Feast your eyes on this:[18](https://aiimpacts.org/evidence-on-good-forecasting-practices-from-the-good-judgment-project-an-accompanying-blog-post/#easy-footnote-bottom-18-1260 "This table is from the <a href=\"https://www.apa.org/pubs/journals/releases/xap-0000040.pdf\">same study</a>.") [![](http://aiimpacts.org/wp-content/uploads/2019/02/image8-1024x576.png)](http://aiimpacts.org/wp-content/uploads/2019/02/image8.png) This shows the correlations between various things.[19](https://aiimpacts.org/evidence-on-good-forecasting-practices-from-the-good-judgment-project-an-accompanying-blog-post/#easy-footnote-bottom-19-1260 "“Ravens” is an IQ test, “Numeracy” is a mathematical aptitude test.") The leftmost column is the most important; it shows how each variable correlates with (standardized) Brier score. (Recall that Brier scores measure inaccuracy, so negative correlations are good.)
It’s worth mentioning that while intelligence correlated with accuracy, it didn’t steal the show.[20](https://aiimpacts.org/evidence-on-good-forecasting-practices-from-the-good-judgment-project-an-accompanying-blog-post/#easy-footnote-bottom-20-1260 "That said, as Carl Shulman pointed out, the forecasters in this sample were probably above-average IQ, so the correlation between IQ and accuracy in this sample is almost certainly smaller than the “true” correlation in the population at large. See e.g. <a href=\"https://fredrikdeboer.com/2017/07/24/restriction-of-range-what-it-is-and-why-it-matters/\">restriction of range</a> and the <a href=\"https://www.personality-project.org/r/psych/help/range.correction.html\">Thorndike Correction</a>.") The same goes for time spent deliberating.[21](https://aiimpacts.org/evidence-on-good-forecasting-practices-from-the-good-judgment-project-an-accompanying-blog-post/#easy-footnote-bottom-21-1260 "“Deliberation time, which was only measured in Year 2, was transformed by a logarithmic function (to reduce tail effects) and averaged over questions. The average length of deliberation time was 3.60 min, and the average number of questions tried throughout the 2-year period was 121 out of 199 (61% of all questions). 
Correlations between standardized Brier score accuracy and effort were statistically significant for belief updating, … and deliberation time, … but not for number of forecasting questions attempted.” (<a href=\"https://www.apa.org/pubs/journals/releases/xap-0000040.pdf\">study</a>) Anecdotally, I spoke to a superforecaster who said that the best of the best typically put a lot of time into it; he spends maybe fifteen minutes each day making predictions but several hours per day reading news, listening to relevant podcasts, etc.") The authors summarize the results as follows: “The best forecasters scored higher on both intelligence and political knowledge than the already well-above-average group of forecasters. The best forecasters had more open-minded cognitive styles. They benefited from better working environments with probability training and collaborative teams.
And while making predictions, they spent more time deliberating and updating their forecasts.”[22](https://aiimpacts.org/evidence-on-good-forecasting-practices-from-the-good-judgment-project-an-accompanying-blog-post/#easy-footnote-bottom-22-1260 "This is from the same <a href=\"https://www.apa.org/pubs/journals/releases/xap-0000040.pdf\">study</a>") That big chart depicts all the correlations individually. Can we use them to construct a model to take in all of these variables and spit out a prediction for what your Brier score will be? Yes we can: [![](http://aiimpacts.org/wp-content/uploads/2019/02/image5.png)](http://aiimpacts.org/wp-content/uploads/2019/02/image5.png)Figure 3. Structural equation model with standardized coefficients. This model has a multiple correlation of 0.64.[23](https://aiimpacts.org/evidence-on-good-forecasting-practices-from-the-good-judgment-project-an-accompanying-blog-post/#easy-footnote-bottom-23-1260 "“Nonetheless, as we saw in the structural model, and confirm here, the best model uses dispositional, situational, and behavioral variables.
The combination produced a multiple correlation of .64.” (<a href=\"https://www.apa.org/pubs/journals/releases/xap-0000040.pdf\">study</a>) Yellow ovals are latent dispositional variables, yellow rectangles are observed dispositional variables, pink rectangles are experimentally manipulated situational variables, and green rectangles are observed behavioral variables. If this diagram follows convention, single-headed arrows represent hypothesized causation, whereas the double-headed arrow represents a correlation without any claim being made about causation.") Earlier, we noted that superforecasters typically remained superforecasters (i.e. in the top 2%), showing that their success wasn’t mostly due to luck. Across all the forecasters, the correlation between performance in one year and performance in the next year is 0.65.[24](https://aiimpacts.org/evidence-on-good-forecasting-practices-from-the-good-judgment-project-an-accompanying-blog-post/#easy-footnote-bottom-24-1260 "<i>Superforecasting </i>p104") So we have two good ways to predict how accurate someone will be: Look at their past performance, and look at how well they score on the structural model above.
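The 0.65 figure is an ordinary correlation between each forecaster's accuracy in consecutive years. As a sketch (the scores below are invented for illustration, not GJP data):

```python
def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Hypothetical mean Brier scores for six forecasters across two
# consecutive years (lower = more accurate). Invented numbers.
year1 = [0.14, 0.22, 0.30, 0.35, 0.41, 0.50]
year2 = [0.17, 0.25, 0.28, 0.38, 0.36, 0.47]

# A high positive correlation means accuracy persists: forecasters
# who do well one year tend to do well the next.
print(pearson(year1, year2))
```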
I speculate that these correlations underestimate the true predictability of accuracy, because the forecasters were all unpaid online volunteers, and many of them presumably had random things come up in their life that got in the way of making good predictions—perhaps they have a kid, or get sick, or move to a new job and so stop reading the news for a month, and their accuracy declines.[25](https://aiimpacts.org/evidence-on-good-forecasting-practices-from-the-good-judgment-project-an-accompanying-blog-post/#easy-footnote-bottom-25-1260 "Of course, these things can happen in the real world too—maybe our AI timelines forecasters will get sick and stop making good forecasts. What I’m suggesting is that this data is inherently noisier than data from a group of full-time staff whose job it is to predict things would be. Moreover, when these things happen in the real world, we can see that they are happening and adjust our model accordingly, e.g. “Bob’s really busy with kids this month, so let’s not lean as heavily on his forecasts as we usually do.”") Yet still 70% of the superforecasters in one year remained superforecasters in the next. Finally, what about superforecasters in particular? Is there anything to say about what it takes to be in the top 2%? Tetlock devotes much of his book to this. It is hard to tell how much his recommendations come from data analysis and how much are just his own synthesis of the interviews he’s conducted with superforecasters. Here is his “Portrait of the modal superforecaster.”[26](https://aiimpacts.org/evidence-on-good-forecasting-practices-from-the-good-judgment-project-an-accompanying-blog-post/#easy-footnote-bottom-26-1260 "<i>Superforecasting </i>p191") **Philosophic outlook:** * **Cautious:** Nothing is certain. * **Humble:** Reality is infinitely complex. * **Nondeterministic:** Whatever happens is not meant to be and does not have to happen. 
**Abilities & thinking styles:** * **Actively open-minded:** Beliefs are hypotheses to be tested, not treasures to be protected. * **Intelligent and knowledgeable, with a “Need for Cognition”:** Intellectually curious, enjoy puzzles and mental challenges. * **Reflective:** Introspective and self-critical. * **Numerate:** Comfortable with numbers. **Methods of forecasting:** * **Pragmatic:** Not wedded to any idea or agenda. * **Analytical:** Capable of stepping back from the tip-of-your-nose perspective and considering other views. * **Dragonfly-eyed:** Value diverse views and synthesize them into their own. * **Probabilistic:** Judge using many grades of maybe. * **Thoughtful updaters:** When facts change, they change their minds. * **Good intuitive psychologists:** Aware of the value of checking thinking for cognitive and emotional biases. **Work ethic:** * **Growth mindset:** Believe it’s possible to get better. * **Grit:** Determined to keep at it however long it takes. Additionally, there is experimental evidence that superforecasters are less prone to standard cognitive science biases than ordinary people.[27](https://aiimpacts.org/evidence-on-good-forecasting-practices-from-the-good-judgment-project-an-accompanying-blog-post/#easy-footnote-bottom-27-1260 "From <a href=\"https://www.edge.org/conversation/philip_tetlock-edge-master-class-2015-a-short-course-in-superforecasting-class-iv\">edge.org</a>: <i>Mellers: </i>“We have given them lots of Kahneman and Tversky-like problems to see if they fall prey to the same sorts of biases and errors. The answer is sort of, some of them do, but not as many. It’s not nearly as frequent as you see with the rest of us ordinary mortals. The other thing that’s interesting is they don’t make the kinds of mistakes that regular people make instead of the right answer. They do something that’s a little bit more thoughtful. 
They integrate base rates with case-specific information a little bit more.”<br><i>Tetlock:</i> “They’re closer to Bayesians.”<br><i>Mellers: </i>“Right. They’re a little less sensitive to framing effects. The reference point doesn’t have quite the enormous role that it does with most people.”") This is particularly exciting because—we can hope—the same sorts of training that help people become superforecasters might also help overcome biases. Finally, Tetlock says that “The strongest predictor of rising into the ranks of superforecasters is perpetual beta, the degree to which one is committed to belief updating and self-improvement. It is roughly three times as powerful a predictor as its closest rival, intelligence.”[28](https://aiimpacts.org/evidence-on-good-forecasting-practices-from-the-good-judgment-project-an-accompanying-blog-post/#easy-footnote-bottom-28-1260 "<i>Superforecasting </i>p192") Unfortunately, I couldn’t find any sources or data on this, nor an operational definition of “perpetual beta,” so we don’t know how he measured it.[29](https://aiimpacts.org/evidence-on-good-forecasting-practices-from-the-good-judgment-project-an-accompanying-blog-post/#easy-footnote-bottom-29-1260 "Moreover, a quick search through Google Scholar and library.unc.edu turned up nothing of interest. I reached out to Tetlock to ask questions but he hasn’t responded yet.") 4. 
The training and Tetlock’s commandments ------------------------------------------ This section discusses the surprising effect of the training module on accuracy, and finishes with Tetlock’s training-module-based recommendations for how to become a better forecaster.[30](https://aiimpacts.org/evidence-on-good-forecasting-practices-from-the-good-judgment-project-an-accompanying-blog-post/#easy-footnote-bottom-30-1260 "“The guidelines sketched here distill key themes in this book and in training systems that have been experimentally demonstrated to boost accuracy in real-world forecasting tournaments.” (277)") The training module, which was randomly given to some participants but not others, took about an hour to read.[31](https://aiimpacts.org/evidence-on-good-forecasting-practices-from-the-good-judgment-project-an-accompanying-blog-post/#easy-footnote-bottom-31-1260 "This is from <a href=\"http://journal.sjdm.org/16/16511/jdm16511.pdf\">this study</a>. Relevant quote: “Although the training lasted less than one hour, it consistently improved accuracy (Brier scores) by 6 to 11% over the control condition.”") The authors describe the content as follows: “Training in year 1 consisted of two different modules: probabilistic reasoning training and scenario training.
Scenario-training was a four-step process: 1) developing coherent and logical probabilities under the probability sum rule; 2) exploring and challenging assumptions; 3) identifying the key causal drivers; 4) considering the best and worst case scenarios and developing a sensible 95% confidence interval of possible outcomes; and 5) avoid over-correction biases. … Probabilistic reasoning training consisted of lessons that detailed the difference between calibration and resolution, using comparison classes and base rates (Kahneman & Tversky, 1973; Tversky & Kahneman, 1981), averaging and using crowd wisdom principles (Surowiecki, 2005), finding and utilizing predictive mathematical and statistical models (Arkes, 1981; Kahneman & Tversky, 1982), cautiously using time-series and historical data, and being self-aware of the typical cognitive biases common throughout the population.”[32](https://aiimpacts.org/evidence-on-good-forecasting-practices-from-the-good-judgment-project-an-accompanying-blog-post/#easy-footnote-bottom-32-1260 "Same <a href=\"http://journal.sjdm.org/16/16511/jdm16511.pdf\">study</a>.") In later years, they merged the two modules into one and updated it based on their observations of the best forecasters.
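One habit the probabilistic-reasoning module trains, anchoring on a comparison-class base rate before adjusting on case-specific evidence, can be sketched as follows. Both the linear blending rule and the 0.7 weight are illustrative assumptions; the training teaches the habit, not a fixed formula.

```python
def anchor_and_adjust(base_rate, inside_view, anchor_weight=0.7):
    """Blend an outside-view base rate with an inside-view estimate.

    anchor_weight is how much of the comparison-class anchor to keep;
    both the linear blend and the 0.7 default are illustrative
    assumptions, not part of the GJP training materials.
    """
    return anchor_weight * base_rate + (1 - anchor_weight) * inside_view

# Comparison class: ~15% of similar regimes collapsed within a year.
# Case-specific reporting suggests this one is unusually fragile (40%).
print(anchor_and_adjust(0.15, 0.40))  # ≈ 0.225
```

The point of the exercise is the ordering: start from the outside view, then let inside-view evidence pull the estimate a limited distance, rather than starting from a vivid case-specific story.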
The updated training module is organized around an acronym:[33](https://aiimpacts.org/evidence-on-good-forecasting-practices-from-the-good-judgment-project-an-accompanying-blog-post/#easy-footnote-bottom-33-1260 "Same <a href=\"http://journal.sjdm.org/16/16511/jdm16511.pdf\">study</a>.") [![](http://aiimpacts.org/wp-content/uploads/2019/02/image6.png)](http://aiimpacts.org/wp-content/uploads/2019/02/image6.png) Impressively, this training had a lasting positive effect on accuracy in all four years: [![](http://aiimpacts.org/wp-content/uploads/2019/02/image3-1024x423.png)](http://aiimpacts.org/wp-content/uploads/2019/02/image3.png) One might worry that training improves accuracy by motivating the trainees to take their jobs more seriously. Indeed it seems that the trained forecasters made more predictions per question than the control group, though they didn’t make more predictions overall.
Nevertheless it seems that the training also had a direct effect on accuracy as well as this indirect effect.[34](https://aiimpacts.org/evidence-on-good-forecasting-practices-from-the-good-judgment-project-an-accompanying-blog-post/#easy-footnote-bottom-34-1260 "See sections 3.3, 3.5, and 3.6 of <a href=\"http://journal.sjdm.org/16/16511/jdm16511.pdf\">this study.</a>") Moving on, let’s talk about the advice Tetlock gives to his audience in *Superforecasting*, advice which is based on, though not identical to, the CHAMPS-KNOW training. The book has a few paragraphs of explanation for each commandment, a transcript of which is [here](https://www.lesswrong.com/posts/dvYeSKDRd68GcrWoe/ten-commandments-for-aspiring-superforecasters); in this post I’ll give my own abbreviated explanations: TEN COMMANDMENTS FOR ASPIRING SUPERFORECASTERS **(1) Triage:** Don’t waste time on questions that are “clocklike” where a rule of thumb can get you pretty close to the correct answer, or “cloudlike” where even fancy models can’t beat a dart-throwing chimp. **(2) Break seemingly intractable problems into tractable sub-problems:** This is how Fermi estimation works. One related piece of advice is “be wary of accidentally substituting an easy question for a hard one,” e.g. substituting “Would Israel be willing to assassinate Yasser Arafat?” for “Will at least one of the tests for polonium in Arafat’s body turn up positive?” **(3) Strike the right balance between inside and outside views:** In particular, *first* anchor with the outside view and *then* adjust using the inside view.
(More on this in Section 5) **(4) Strike the right balance between under- and overreacting to evidence:** “Superforecasters aren’t perfect Bayesian predictors but they are much better than most of us.”[35](https://aiimpacts.org/evidence-on-good-forecasting-practices-from-the-good-judgment-project-an-accompanying-blog-post/#easy-footnote-bottom-35-1260 "<i>Superforecasting </i>p281") Usually do many small updates, but occasionally do big updates when the situation calls for it. Take care not to fall for things that seem like good evidence but aren’t; remember to think about P(E|H)/P(E|~H); remember to avoid the base-rate fallacy. **(5) Look for the clashing causal forces at work in each problem:** This is the “dragonfly eye perspective,” which is where you attempt to do a sort of mental wisdom of the crowds: Have tons of different causal models and aggregate their judgments. Use “Devil’s advocate” reasoning. If you think that P, try hard to convince yourself that not-P. You should find yourself saying “On the one hand… on the other hand… on the third hand…” a lot. **(6) Strive to distinguish as many degrees of doubt as the problem permits but no more:** Some people criticize the use of exact probabilities (67%! 21%!) as merely a way to pretend you know more than you do. There might be another post on the subject of why credences are better than hedge words like “maybe” and “probably” and “significant chance;” for now, I’ll simply mention that when the authors rounded the superforecasters’ forecasts to the nearest 0.05, their accuracy dropped.[36](https://aiimpacts.org/evidence-on-good-forecasting-practices-from-the-good-judgment-project-an-accompanying-blog-post/#easy-footnote-bottom-36-1260 "This is from Friedman et al (2018), available <a href=\"https://academic.oup.com/isq/article-abstract/62/2/410/4944059?redirectedFrom=fulltext\">here</a>.") Superforecasters really were making use of all 101 numbers from 0.00 to 1.00!
(EDIT: I am told this may be wrong; the number should be 0.1, not 0.05. See the discussion [here](https://www.metaculus.com/questions/4166/the-lightning-round-tournament-comparing-metaculus-forecasters-to-infectious-disease-experts/#comment-28756 "https://www.metaculus.com/questions/4166/the-lightning-round-tournament-comparing-metaculus-forecasters-to-infectious-disease-experts/#comment-28756") and [here](https://forum.effectivealtruism.org/posts/W94KjunX3hXAtZvXJ/evidence-on-good-forecasting-practices-from-the-good?commentId=RtjNoQMNRQvuYtsx5 "https://forum.effectivealtruism.org/posts/W94KjunX3hXAtZvXJ/evidence-on-good-forecasting-practices-from-the-good?commentId=RtjNoQMNRQvuYtsx5").) **(7) Strike the right balance between under- and overconfidence, between prudence and decisiveness.** **(8) Look for the errors behind your mistakes but beware of rearview-mirror hindsight biases.** **(9) Bring out the best in others and let others bring out the best in you:** The book spent a whole chapter on this, using the Wehrmacht as an extended case study on good team organization.[37](https://aiimpacts.org/evidence-on-good-forecasting-practices-from-the-good-judgment-project-an-accompanying-blog-post/#easy-footnote-bottom-37-1260 "<a href=\"http://slatestarcodex.com/2016/02/07/list-of-passages-i-highlighted-in-my-copy-of-superforecasting/\">Scott Alexander</a>: “Later in the chapter, he admits that his choice of examples might raise some eyebrows, but says that he did it on purpose to teach us to think critically and overcome cognitive dissonance between our moral preconceptions and our factual beliefs. 
I hope he has tenure.”") One pervasive guiding principle is “Don’t tell people how to do things; tell them what you want accomplished, and they’ll surprise you with their ingenuity in doing it.” The other pervasive guiding principle is “Cultivate a culture in which people—even subordinates—are encouraged to dissent and give counterarguments.”[38](https://aiimpacts.org/evidence-on-good-forecasting-practices-from-the-good-judgment-project-an-accompanying-blog-post/#easy-footnote-bottom-38-1260 "See e.g. page 284 of <i>Superforecasting</i>, and the entirety of chapter 9.") **(10) Master the error-balancing bicycle:** This one should have been called practice, practice, practice. Tetlock says that reading the news and generating probabilities isn’t enough; you need to actually score your predictions so that you know how wrong you were. **(11) Don’t treat commandments as commandments:** Tetlock’s point here is simply that you should use your judgment about whether to follow a commandment or not; sometimes they should be overridden. It’s worth mentioning at this point that the advice is given at the end of the book, as a sort of summary, and may make less sense to someone who hasn’t read the book. In particular, Chapter 5 gives a less formal but more helpful recipe for making predictions, with accompanying examples. See the end of this blog post for a summary of this recipe. 5. On the Outside View & Lessons for AI Impacts ----------------------------------------------- The previous section summarized Tetlock’s advice for how to make better forecasts; my own summary of the lessons I think we should learn is more concise and comprehensive and can be found at [this page](http://aiimpacts.org/evidence-on-good-forecasting-practices-from-the-good-judgment-project/). 
This section goes into detail about one particular, more controversial matter: The importance of the “[outside view](https://www.mckinsey.com/business-functions/strategy-and-corporate-finance/our-insights/daniel-kahneman-beware-the-inside-view),” also known as [reference class forecasting](https://en.wikipedia.org/wiki/Reference_class_forecasting). This research provides us with strong evidence in favor of this method of making predictions; however, the situation is complicated by Tetlock’s insistence that other methods are useful as well. This section discusses the evidence and attempts to interpret it. The GJP asked people who took the training to self-report which of the CHAMPS-KNOW principles they were using when they explained why they made a forecast; 69% of forecast explanations received tags this way. The only principle significantly positively correlated with successful forecasts was C: Comparison classes.[39](https://aiimpacts.org/evidence-on-good-forecasting-practices-from-the-good-judgment-project-an-accompanying-blog-post/#easy-footnote-bottom-39-1260 "This is from <a href=\"http://journal.sjdm.org/16/16511/jdm16511.pdf\">this paper</a>. One worry I have about it is that another principle, P, was strongly associated with <i>inaccuracy, </i>but the authors explain this away by saying that “Post-mortem analyses,” the P’s, are naturally done usually after bad forecasts.
This makes me wonder if a similar explanation could be given for the success of the C’s: Questions for which a good reference class exists are easier than others.") The authors take this as evidence that the outside view is particularly important. Anecdotally, the superforecaster I interviewed agreed that reference class forecasting was perhaps the most important piece of the training. (He also credited the training in general with helping him reach the ranks of the superforecasters.) Moreover, Tetlock did an earlier, much smaller forecasting tournament from 1987-2003, in which experts of various kinds made the forecasts.[40](https://aiimpacts.org/evidence-on-good-forecasting-practices-from-the-good-judgment-project-an-accompanying-blog-post/#easy-footnote-bottom-40-1260 "The results and conclusions from this tournament can be found in the resulting book, <a href=\"https://smile.amazon.com/gp/product/0691128715?pf_rd_p=c2463b52-1139-4aba-9ac9-26d103f6c586&amp;pf_rd_r=G5SNNWGY36FPNN4KPSZM\"><i>Expert Political Judgment</i>: <i>How good is it? How can we know?</i></a> See p242 for a description of the methodology and dates.") The results were astounding: Many of the experts did worse than random chance, and *all of them did worse than simple algorithms*: [![](http://aiimpacts.org/wp-content/uploads/2019/02/image10.png)](http://aiimpacts.org/wp-content/uploads/2019/02/image10.png) Figure 3.2, pulled from *Expert Political Judgment,* is a gorgeous depiction of some of the main results.[41](https://aiimpacts.org/evidence-on-good-forecasting-practices-from-the-good-judgment-project-an-accompanying-blog-post/#easy-footnote-bottom-41-1260 "Page 77.") Tetlock used something very much like a Brier score in this tournament, but he broke it into two components: “Discrimination” and “Calibration.” This graph plots the various experts and algorithms on the axes of discrimination and calibration. Notice in the top right corner the “Formal models” box. 
I don’t know much about the model used but apparently it was significantly better than all of the humans. This, combined with the fact that simple case-specific trend extrapolations also beat all the humans, is strong evidence for the importance of the outside view. So we should always use the outside view, right? Well, it’s a bit more complicated than that. Tetlock’s advice is to *start* with the outside view, and then *adjust* using the inside view.[42](https://aiimpacts.org/evidence-on-good-forecasting-practices-from-the-good-judgment-project-an-accompanying-blog-post/#easy-footnote-bottom-42-1260 "<i>Superforecasting </i>p120") He even goes so far as to say that *hedgehoggery* and *storytelling* can be valuable when used properly. First, what is hedgehoggery? Recall how the human experts fall on a rough spectrum in Figure 3.2, with “hedgehogs” getting the lowest scores and “foxes” getting the highest scores. What makes someone a hedgehog or a fox? Their answers to [these questions](http://www.overcomingbias.com/2006/11/quiz_fox_or_hed.html).[43](https://aiimpacts.org/evidence-on-good-forecasting-practices-from-the-good-judgment-project-an-accompanying-blog-post/#easy-footnote-bottom-43-1260 "For the data on how these questions were weighted in determining foxyness, see <i>Expert Political Judgment </i>p74") Tetlock characterizes the distinction as follows: Low scorers look like hedgehogs: thinkers who “know one big thing,” aggressively extend the explanatory reach of that one big thing into new domains, display bristly impatience with those who “do not get it,” and express considerable confidence that they are already pretty proficient forecasters, at least in the long term. 
High scorers look like foxes: thinkers who know many small things (tricks of their trade), are skeptical of grand schemes, see explanation and prediction not as deductive exercises but rather as exercises in flexible “ad hocery” that require stitching together diverse sources of information, and are rather diffident about their own forecasting prowess, and … rather dubious that the cloudlike subject of politics can be the object of a clocklike science.[44](https://aiimpacts.org/evidence-on-good-forecasting-practices-from-the-good-judgment-project-an-accompanying-blog-post/#easy-footnote-bottom-44-1260 "<i>Expert Political Judgment </i>p75") Next, what is storytelling? Using your domain knowledge, you think through a detailed scenario of how the future might go, and you tweak it to make it more plausible, and then you assign a credence based on how plausible it seems. By itself this method is unpromising.[45](https://aiimpacts.org/evidence-on-good-forecasting-practices-from-the-good-judgment-project-an-accompanying-blog-post/#easy-footnote-bottom-45-1260 "There are several reasons to worry about this method. For one, it’s not what foxes do, and foxes score better than hedgehogs. Tetlock also says it’s not what superforecasters do. More insightfully, Tetlock says we are biased to assign more probability to more vivid and interesting stories, and as a result it’s easy for your probabilities to sum to much more than 1. Anecdote: I was answering a series of “Probability of extinction due to cause X” questions on <a href=\"https://www.metaculus.com/questions/\">Metaculus</a>, and I soon realized that my numbers were going to add up to more than 100%, so I had to adjust them all down systematically to make room for the last few kinds of disaster on the list. If I hadn’t been assigning explicit probabilities, I wouldn’t have noticed the error. 
And if I hadn’t gone through the whole list of possibilities, I would have come away with an unjustifiably high credence in the few I had considered.") Despite this, Tetlock thinks that storytelling and hedgehoggery are valuable if handled correctly. On hedgehogs, Tetlock says that hedgehogs provide a valuable service by doing the deep thinking necessary to build detailed causal models and raise interesting questions; these models and questions can then be slurped up by foxy superforecasters, evaluated, and aggregated to make good predictions.[46](https://aiimpacts.org/evidence-on-good-forecasting-practices-from-the-good-judgment-project-an-accompanying-blog-post/#easy-footnote-bottom-46-1260 "<i>Superforecasting </i>p266. This is reminiscent of <a href=\"https://www.lesswrong.com/posts/6n9aKApfLre5WWvpG/blind-empiricism\">Yudkowsky’s perspective </a>on what is essentially this same debate.") The superforecaster Bill Flack is quoted in agreement.[47](https://aiimpacts.org/evidence-on-good-forecasting-practices-from-the-good-judgment-project-an-accompanying-blog-post/#easy-footnote-bottom-47-1260 " <i>Superforecasting </i>p271.") As for storytelling, see these slides from Tetlock’s [edge.org seminar](https://www.edge.org/conversation/philip_tetlock-edge-master-class-2015-a-short-course-in-superforecasting-class-v): [![](http://aiimpacts.org/wp-content/uploads/2019/02/image4.jpg)](http://aiimpacts.org/wp-content/uploads/2019/02/image4.jpg) As the second slide indicates, the idea is that we can sometimes “fight fire with fire” by using some stories to counter other stories. 
In particular, Tetlock says there has been success using stories about the past—about ways that the world could have gone, but didn’t—to “reconnect us to our past states of ignorance.”[48](https://aiimpacts.org/evidence-on-good-forecasting-practices-from-the-good-judgment-project-an-accompanying-blog-post/#easy-footnote-bottom-48-1260 "<a href=\"https://www.edge.org/conversation/philip_tetlock-edge-master-class-2015-a-short-course-in-superforecasting-class-v\">Same seminar.</a>") The superforecaster I interviewed said that it is common practice now on superforecaster forums to have a designated “red team” with the explicit mission of finding counter-arguments to whatever the consensus seems to be. This, I take it, is an example of motivated reasoning being put to good use. Moreover, arguably the outside view simply isn’t useful for some questions.[49](https://aiimpacts.org/evidence-on-good-forecasting-practices-from-the-good-judgment-project-an-accompanying-blog-post/#easy-footnote-bottom-49-1260 "For example, see <a href=\"https://www.lesswrong.com/posts/6n9aKApfLre5WWvpG/blind-empiricism\">Yudkowsky</a>: “Where two sides disagree, this can lead to reference class tennis—both parties get stuck insisting that their own “outside view” is the correct one, based on diverging intuitions about what similarities are relevant. If it isn’t clear what the set of “similar historical cases” is, or what conclusions we should draw from those cases, then we’re forced to use an inside view—thinking about the causal process to distinguish relevant similarities from irrelevant ones. You shouldn’t avoid outside-view-style reasoning in cases where it looks likely to work, like when planning your Christmas shopping. But in many contexts, the outside view simply can’t compete with a good theory.”") People say this about lots of things—e.g. 
“The world is changing so fast, so the current situation in Syria is unprecedented and historical averages will be useless!”—and are proven wrong; for example, this research seems to indicate that the outside view is far more useful in geopolitics than people think. Nevertheless, maybe it is true for some of the things we wish to predict about advanced AI. After all, a major limitation of this data is that the questions were mainly on geopolitical events only a few years in the future at most. (Geopolitical events seem to be somewhat predictable up to two years out but much more difficult to predict five, ten, twenty years out.)[50](https://aiimpacts.org/evidence-on-good-forecasting-practices-from-the-good-judgment-project-an-accompanying-blog-post/#easy-footnote-bottom-50-1260 "Tetlock admits that “there is no evidence that geopolitical or economic forecasters can predict anything ten years out beyond the excruciatingly obvious… These limits on predictability are the predictable results of the butterfly dynamics of nonlinear systems.
In my EPJ research, the accuracy of expert predictions declined toward chance five years out.” (<i>Superforecasting</i> p243) I highly recommend the graphic on that page, by the way, also available here: “<a href=\"http://ammdividendletter.com/wp-content/uploads/2016/07/2001-Quadrennial-Defense-Review.png\">Thoughts for the 2001 Quadrennial Defense Review</a>.”") So this research does not *directly* tell us anything about the predictability of the events AI Impacts is interested in, nor about the usefulness of reference-class forecasting for those domains.[51](https://aiimpacts.org/evidence-on-good-forecasting-practices-from-the-good-judgment-project-an-accompanying-blog-post/#easy-footnote-bottom-51-1260 "The superforecaster I interviewed speculated that predicting things like the continued drop in price of computing hardware or solar panels is fairly easy, but that predicting the appearance of new technologies is very difficult. Tetlock has ideas for how to handle longer-term, nebulous questions. He calls it “Bayesian Question Clustering.” (<i>Superforecasting </i>263) The idea is to take the question you really want to answer and look for more precise questions that are evidentially relevant to the question you care about. Tetlock intends to test the effectiveness of this idea in future research.") That said, the forecasting best practices discovered by this research seem like general truth-finding skills rather than cheap hacks only useful in geopolitics or only useful for near-term predictions. After all, geopolitical questions are themselves a fairly diverse bunch, yet accuracy on some was highly correlated with accuracy on others.[52](https://aiimpacts.org/evidence-on-good-forecasting-practices-from-the-good-judgment-project-an-accompanying-blog-post/#easy-footnote-bottom-52-1260 "“There are several ways to look for individual consistency across questions.
We sorted questions on the basis of response format (binary, multinomial, conditional, ordered), region (Eurozone, Latin America, China, etc.), and duration of question (short, medium, and long). We computed accuracy scores for each individual on each variable within each set (e.g., binary, multinomial, conditional, and ordered) and then constructed correlation matrices. For all three question types, correlations were positive… Then we conducted factor analyses. For each question type, a large proportion of the variance was captured by a single factor, consistent with the hypothesis that one underlying dimension was necessary to capture correlations among response formats, regions, and question duration.” (from <a href=\"https://www.apa.org/pubs/journals/releases/xap-0000040.pdf\">this study</a>)") So despite these limitations I think we should do our best to imitate these best-practices, and that means using the outside view far more than we would naturally be inclined. One final thing worth saying is that, remember, the GJP’s aggregated judgments did at least as well as the best superforecasters.[53](https://aiimpacts.org/evidence-on-good-forecasting-practices-from-the-good-judgment-project-an-accompanying-blog-post/#easy-footnote-bottom-53-1260 " I haven’t found this said explicitly, but I infer this from Doug Lorch, the best superforecaster in Year 2, beating the control group by at least 60% when the GJP beat the control group by 78%.
(<i>Superforecasting </i>93, 18) That said, page 72 seems to say that in Year 2 exactly one person—Doug Lorch—managed to beat the aggregation algorithm. This is almost a contradiction; I’m not sure what to make of it. At any rate, it seems that the aggregation algorithm pretty reliably does better than the superforecasters in general, even if occasionally one of them beats it.") Presumably at least one of the forecasters in the tournament was using the outside view a lot; after all, half of them were trained in reference-class forecasting.  So I think we can conclude that straightforwardly using the outside view as often as possible wouldn’t get you better scores than the GJP, though it might get you close for all we know. Anecdotally, it seems that when the superforecasters use the outside view they often aggregate between different reference-class forecasts.[54](https://aiimpacts.org/evidence-on-good-forecasting-practices-from-the-good-judgment-project-an-accompanying-blog-post/#easy-footnote-bottom-54-1260 "This is on page 304. Another example on 313.") The wisdom of the crowds is powerful; this is consistent with the wider literature on the cognitive superiority of groups, and the literature on ensemble methods in AI.[55](https://aiimpacts.org/evidence-on-good-forecasting-practices-from-the-good-judgment-project-an-accompanying-blog-post/#easy-footnote-bottom-55-1260 "For more on these, see <a href=\"https://aiimpacts.org/coordinated-human-action-example-superhuman-intelligence/#Evidence_of_cognitive_superiority_of_groups\">this page</a>.") Tetlock describes how superforecasters go about making their predictions.[56](https://aiimpacts.org/evidence-on-good-forecasting-practices-from-the-good-judgment-project-an-accompanying-blog-post/#easy-footnote-bottom-56-1260 "This is my summary of Tetlock’s advice in Chapter 5: “Ultimately, it’s not the number crunching power that counts. It’s how you use it. 
… You’ve Fermi-ized the question, consulted the outside view, and now, finally, you can consult the inside view … So you have an outside view and an inside view. Now they have to be merged. …”") Here is an attempt at a summary: 1. Sometimes a question can be answered more rigorously if it is first “Fermi-ized,” i.e. broken down into sub-questions for which more rigorous methods can be applied. 2. Next, use the outside view on the sub-questions (and/or the main question, if possible). You may then adjust your estimates using other considerations (‘the inside view’), but do this cautiously. 3. Seek out other perspectives, both on the sub-questions and on how to Fermi-ize the main question. You can also generate other perspectives yourself. 4. Repeat steps 1 – 3 until you hit diminishing returns. 5. Your final prediction should be based on an aggregation of various models, reference classes, other experts, etc. **Footnotes** -------------
2013 Survey Results Thanks to everyone who took the 2013 Less Wrong Census/Survey. Extra thanks to Ozy, who helped me out with the data processing and statistics work, and to everyone who suggested questions. This year's results are below. Some of them may make more sense in the context of the original survey questions, which can be seen here. Please do not try to take the survey as it is over and your results will not be counted. Part I. Population 1636 people answered the survey. Compare this to 1195 people last year, and 1090 people the year before that. It would seem the site is growing, but we do have to consider that each survey lasted a different amount of time; for example, last survey lasted 23 days, but this survey lasted 40. However, almost everyone who takes the survey takes it in the first few weeks it is available. 1506 of the respondents answered within the first 23 days, proving that even if the survey ran the same length as last year's, there would still have been growth. As we will see lower down, growth is smooth across all categories of users (lurkers, commenters, posters) EXCEPT people who have posted to Main, the number of which remains nearly the same from year to year. We continue to have very high turnover - only 40% of respondents this year say they also took the survey last year. II. 
Categorical Data

SEX:
Female: 161, 9.8%
Male: 1453, 88.8%
Other: 1, 0.1%
Did not answer: 21, 1.3%
[[Ozy is disappointed that we've lost 50% of our intersex readers.]]

GENDER:
F (cisgender): 140, 8.6%
F (transgender MtF): 20, 1.2%
M (cisgender): 1401, 85.6%
M (transgender FtM): 5, 0.3%
Other: 49, 3%
Did not answer: 21, 1.3%

SEXUAL ORIENTATION:
Asexual: 47, 2.9%
Bisexual: 188, 12.2%
Heterosexual: 1287, 78.7%
Homosexual: 45, 2.8%
Other: 39, 2.4%
Did not answer: 19, 1.2%

RELATIONSHIP STYLE:
Prefer monogamous: 829, 50.7%
Prefer polyamorous: 234, 14.3%
Other: 32, 2.0%
Uncertain/no preference: 520, 31.8%
Did not answer: 21, 1.3%

NUMBER OF CURRENT PARTNERS:
0: 797, 48.7%
1
Meetup : Helsinki Meetup Discussion article for the meetup : Helsinki Meetup WHEN: 13 April 2014 03:00:00PM (+0200) WHERE: Vilhonkatu 4, 00100 Helsinki We’re having a social meetup in Kaisla. To find us there, look for someone wearing a pink elephant hat. Discussion article for the meetup : Helsinki Meetup
Keeping Time in Epoch Seconds Unix-like computers keep time in seconds since the Unix Epoch—the 0 second, which is set to midnight, January 1st, 1970, Zulu/UTC. We call a particular second since the Unix Epoch a timestamp. Keeping time this way is pretty handy, but looking at a long string of numbers is hard to parse. What to do? Right now, as I'm typing, the current Unix timestamp is 16'62'7'68'531. See how I put those ' in there? That's how I try to break up the number into parts to make sense of it. Starting from the right, the two rightmost digits make up what I think of roughly as a Unix minute, but rather than 60 seconds it's 100, so that's 1 2/3 standard minutes. One more unit over and we get the Unix hour, which is about a quarter of a standard hour long at 1000 seconds or 16 2/3 standard minutes. Calling 1000 seconds an hour might seem confusing, but the purpose of hours is to have a convenient unit for dividing up days, just like minutes are a convenient way of dividing hours. Speaking of which, 100 Unix hours equates to 1 Unix day, or 27 7/9 standard hours. This is a bit longer than a standard day, but since some people seem to operate on 28 hour days anyway, they might find this appealing. So that takes care of the 5 rightmost digits: 3 to track the seconds and minutes within an hour and 2 to track the hour within a day. The 6th digit from the right tracks the day of the Unix week, which is 10 Unix days, or 11 31/54 standard days. It's a bit longer than our standard 7 day lunar weeks, but 10 day weeks were good enough for the ancient Greeks, so they're good enough for Unix nerds. Since months are tied to the cycle of the moon there are no real months in Unix time. Instead 100 weeks make up a Unix year, which is approximately 3.17 standard years long. If it helps, though, a tenth of a Unix year is about three and a half months long, so you can think of each ten-week as like the three month quarters or seasons we informally divide our calendar by. 
Now we've accounted for all the digits.
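The digit groups described above can be peeled off a timestamp mechanically; here is a minimal sketch (the function name and return format are my own):

```python
def unix_units(ts: int) -> dict:
    """Split an epoch timestamp into the digit groups described above:
    3 rightmost digits of seconds within a Unix hour (1000 s), 2 digits
    of hours within a Unix day (100 hours), 1 digit for the day of the
    Unix week (10 days), 2 digits of weeks within a Unix year (100 weeks),
    and whatever remains as the Unix year."""
    return {
        "year": ts // 100_000_000,
        "week": ts // 1_000_000 % 100,
        "day": ts // 100_000 % 10,
        "hour": ts // 1000 % 100,
        "second": ts % 1000,
    }

print(unix_units(1662768531))
# {'year': 16, 'week': 62, 'day': 7, 'hour': 68, 'second': 531}
```

Note that the output matches the 16'62'7'68'531 grouping used in the post.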
Policy Entropy, Learning, and Alignment (Or Maybe Your LLM Needs Therapy) Epistemic Status: Exploratory. I'm new to AI alignment research but have background in math and read psychotherapy texts extensively while spending two years as a ghost-writer. Seeking feedback to refine these connections. Tl;dr: I suggest therapeutic techniques from a variety of psychotherapeutic schools of thought can inspire new approaches to AI learning and alignment. I reinterpret three recent AI/ML papers in the language of psychotherapy and propose three testable training methods inspired by common psychotherapeutic interventions. Introduction I've been meaning to post this essay for a while, and yesterday's top paper on Hugging Face, by Cui et al., finally convinced me to do it. Their paper provides a timely opportunity to map the language used by ML and AI engineers to the language used by humanistic psychotherapists—a translation which is more important now than ever as we struggle with increasingly stubborn problems in AI alignment, while simultaneously developing AIs whose capabilities are rapidly superseding those of humans. I'll provide a high-level overview of my understanding of the paper and map it back to ideas from humanistic psychotherapy. I will then consider a few related papers which tie nicely to psychotherapeutic principles, and end with a few proposals for experiments. I am new to AI alignment, welfare, and interpretability research and I look forward to comments which can help me deepen and clarify my inevitably imperfect understanding of the papers I am citing. The Core Analogy: Policy Entropy as Behavioral Flexibility The Cui et al. paper "aims to overcome a major obstacle in scaling RL for reasoning with LLMs, namely the collapse of policy entropy." Think of "policy" as the individual in therapy. The individual has a behavioral repertoire—a probability distribution of potential actions over different states (environments and stimuli). 
The therapist wants to assist the individual with "scaling" in their life, their capacity for ro
Solving the AI Race Finalists GoodAI offered a prize for writing about AI races. The results are in and here are the winners: TOP SCORING SOLUTIONS ($3,000 EACH) * Kesavan Athimoolam, Solving the Artificial Intelligence Race: Mitigating the problems associated with the AI Race * Alexey Turchin & David Denkenberger, Classification of Global Solutions for the AI Safety Problem * Ehrik L. Aldana, A Theory of International AI Coordination: Strategic implications of perceived benefits, harms, capacities, and distribution in AI development RUNNERS-UP ($2,000 EACH) * David Klimek, Framework for managing risks related to emergence of AI/AGI * Gordon Worley, Avoiding AGI Races Through Self-Regulation * Morris Stuttard & Anastasia Slabukho, The AI Engineers’ Guild: proposal for an AI risk mitigation strategy
A silly question Is there a way to privately contact LW? Like an email or something similar? I feel a bit dumb asking this but I gave a look around and found nothing, and am in need of such a tool. Thanks
GPT-2: 6-Month Follow-Up Linkpost for GPT-2 6 Month Follow-Up. Some highlights: * 700+ M parameter model is being released * Several other groups have reproduced similar models * In detecting synthesized text, "current ML-based methods only achieve low to mid–90s accuracy"
Superintelligent Agents Pose Catastrophic Risks: Can Scientist AI Offer a Safer Path? A new paper by Yoshua Bengio and the Safe Artificial Intelligence For Humanity (SAIFH) team argues that the current push towards building generalist AI agents presents catastrophic risks, creating a need for more caution and an alternative approach. We propose such an approach in the form of Scientist AI, a non-agentic AI system that aims to be the foundation for safe superintelligence. (Note that this paper is intended for a broad audience, including readers unfamiliar with AI safety.)  Abstract > The leading AI companies are increasingly focused on building generalist AI agents—systems that can autonomously plan, act, and pursue goals across almost all tasks that humans can perform. Despite how useful these systems might be, unchecked AI agency poses significant risks to public safety and security, ranging from misuse by malicious actors to a potentially irreversible loss of human control. We discuss how these risks arise from current AI training methods. Indeed, various scenarios and experiments have demonstrated the possibility of AI agents engaging in deception or pursuing goals that were not specified by human operators and that conflict with human interests, such as self-preservation. Following the precautionary principle, we see a strong need for safer, yet still useful, alternatives to the current agency-driven trajectory. > > Accordingly, we propose as a core building block for further advances the development of a non-agentic AI system that is trustworthy and safe by design, which we call Scientist AI. This system is designed to explain the world from observations, as opposed to taking actions in it to imitate or please humans. It comprises a world model that generates theories to explain data and a question-answering inference machine. Both components operate with an explicit notion of uncertainty to mitigate the risks of over-confident predictions. 
In light of these considerations, a Scientist AI could be used to assist human researchers in accelerat
7d376817-1bd0-4ee8-a734-a7c6afd55731
trentmkelly/LessWrong-43k
LessWrong
Agents which are EU-maximizing as a group are not EU-maximizing individually Introduction Why Subagents? and Why Not Subagents? explore whether a group of expected utility maximizers is itself a utility maximizer. Here I want to discuss the converse: if a group wants to maximize some utility function as a whole, what can be said about the individual agents? Of course, if they could make decisions together, they would just compute what each agent needs to do, but what if the only thing they have is a common algorithm that each of them uses independently? It seems that such agents, in general, don't make decisions by multiplying utilons with probabilities; instead they need to consider the whole distribution of outcomes to evaluate a choice. A similar idea was already presented in Against Expected Utility, though without the focus on the number of agents. Specific example Imagine two traders, who select trades independently, but pool their returns together and optimize for the expected logarithm of their total wealth (as in Kelly betting). Also, I will assume for simplicity that they select the same trade for both of them, though the outcomes are still sampled independently. So if a trade multiplies the wealth by (a random variable) X, utility for one trader would be E[log X]. But for the described group of two traders it becomes E[log((X_1+X_2)/2)], where X_1, X_2 are independent random variables with the same distribution as X. It is not linear in terms of the outcome probabilities anymore: U(p) = ∫ log((x_1+x_2)/2) p(x_1) p(x_2) dx_1 dx_2 Increasing number of agents Qualitatively, as the number of agents in the group increases, the agents can afford more risky actions, thanks to the aggregation of returns. So their decision will be somewhere between what an individual agent would do to maximize E[log X] and what it would do to maximize E[X].
Even more specific example to support this intution: there is a fair coin, and the agent can bet fraction f of the wealth available to them on a certain side, which will turn into 3f if the coin lands this sid
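The coin example can be checked numerically. This is my own sketch rather than anything from the post, assuming a lost bet is simply forfeited and using a grid search for the optimum:

```python
import math

def u_single(f):
    # One Kelly bettor on a fair coin: bet f turns into 3f on a win,
    # is forfeited on a loss, so wealth becomes 1+2f or 1-f.
    return 0.5 * math.log(1 + 2 * f) + 0.5 * math.log(1 - f)

def u_pair(f):
    # Two bettors with independent coins who pool returns and
    # optimize E[log((X_1+X_2)/2)], enumerating the four outcomes.
    return (0.25 * math.log(1 + 2 * f)     # both win
            + 0.25 * math.log(1 - f)       # both lose
            + 0.50 * math.log(1 + f / 2))  # one win, one loss

def argmax(u, steps=10000):
    # crude grid search over f in [0, 0.999)
    return max((i / steps * 0.999 for i in range(steps)), key=u)

f_single = argmax(u_single)  # classic Kelly fraction for this bet
f_pair = argmax(u_pair)      # the pooled pair bets more aggressively
```

On this grid the single-agent optimum comes out at the classic Kelly fraction 1/4, while the pooled optimum lands near 0.46: aggregating returns really does license riskier bets.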
72e56166-9fb5-4862-bb2e-fa920a13de08
trentmkelly/LessWrong-43k
LessWrong
Notes on Respect-for-Others This post examines the virtue of respect-for-others. It explores what other people have learned about this virtue more than my own opinions about it, though I’ve been selective about what I found interesting or credible, according to my own inclinations. I wrote this not as an expert, but as someone who wants to learn. I hope it helps people who want to know more about this virtue and how to nurture it. What is this virtue? The word “respect” is ambiguous; it covers several things. For example: You can respect a person’s position or rank by granting them authority. You can respect a person’s reputation or skills or character or taste. You can respect the threat a potentially dangerous person or thing poses to you. You can show respect for someone as a form of showing submission to them. The virtue of respect-for-others I cover in this post is different. It has to do with understanding that other people have lives just as subjectively rich as yours, that they have their own perspectives, goals, desires, and priorities, and so forth, and that yours do not have objective priority over theirs. This virtue is well summed up by the version of Kant’s Categorical Imperative that goes: “So act that you treat humanity… always at the same time as an end, never merely as a means.”[1] There are a couple of ways people tend to describe how this variety of respect works. These are not mutually-exclusive, but people may emphasize one more than the other: 1. “I give every person some minimum baseline of respect that everyone deserves just by virtue of being a member of the human family, no matter who they are or what they’ve done.” 2. “I give everyone I meet a certain default amount of respect, and then adjust that amount up or down as I get to know them better.” Related virtues It seems odd to me that there isn’t a word in English that precisely encapsulates this virtue. 
Some related virtues that touch on respect-for-others include: * concern, consideration, thoughtfuln
255c3c70-daa2-4be7-a157-4207287487ed
trentmkelly/LessWrong-43k
LessWrong
Linkpost: Choice Explains Positivity and Confirmation Bias https://www.nature.com/articles/s41562-020-0919-5 Abstract: > The valence of new information influences learning rates in humans: good news tends to receive more weight than bad news. We investigated this learning bias in four experiments, by systematically manipulating the source of required action (free versus forced choices), outcome contingencies (low versus high reward) and motor requirements (go versus no-go choices). Analysis of model-estimated learning rates showed that the confirmation bias in learning rates was specific to free choices, but was independent of outcome contingencies. The bias was also unaffected by the motor requirements, thus suggesting that it operates in the representational space of decisions, rather than motoric actions. Finally, model simulations revealed that learning rates estimated from the choice-confirmation model had the effect of maximizing performance across low- and high-reward environments. We therefore suggest that choice-confirmation bias may be adaptive for efficient learning of action–outcome contingencies, above and beyond fostering person-level dispositions such as self-esteem. (emphasis mine)
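The valence-dependent learning rate is a one-line change to the usual delta rule. A minimal sketch of the choice-confirmation idea (the parameter values and the alternating reward stream are illustrative, not taken from the paper):

```python
def update(q, reward, alpha_conf=0.2, alpha_disconf=0.1):
    # Good news (positive prediction error) is weighted more heavily
    # than bad news, per the choice-confirmation bias.
    delta = reward - q
    alpha = alpha_conf if delta > 0 else alpha_disconf
    return q + alpha * delta

q = 0.5
for t in range(1000):
    q = update(q, 1.0 if t % 2 == 0 else 0.0)  # alternating wins and losses
```

Even though the rewards average exactly 0.5, the biased learner settles around 0.64: overweighting confirming outcomes inflates the value estimate, which is the asymmetry the model-estimated learning rates capture.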
22256d06-c980-4b90-9ac9-ece02bfa77ef
trentmkelly/LessWrong-43k
LessWrong
Rationality Reading Group: Part N: A Human's Guide to Words This is part of a semi-monthly reading group on Eliezer Yudkowsky's ebook, Rationality: From AI to Zombies. For more information about the group, see the announcement post. ---------------------------------------- Welcome to the Rationality reading group. This fortnight we discuss Part N: A Human's Guide to Words (pp. 677-801) and Interlude: An Intuitive Explanation of Bayes's Theorem (pp. 803-826). This post summarizes each article of the sequence, linking to the original LessWrong post where available. N. A Human's Guide to Words 153. The Parable of the Dagger - A word fails to connect to reality in the first place. Is Socrates a framster? Yes or no? 154. The Parable of Hemlock - Your argument, if it worked, could coerce reality to go a different way by choosing a different word definition. Socrates is a human, and humans, by definition, are mortal. So if you defined humans to not be mortal, would Socrates live forever? You try to establish any sort of empirical proposition as being true "by definition". Socrates is a human, and humans, by definition, are mortal. So is it a logical truth if we empirically predict that Socrates should keel over if he drinks hemlock? It seems like there are logically possible, non-self-contradictory worlds where Socrates doesn't keel over - where he's immune to hemlock by a quirk of biochemistry, say. Logical truths are true in all possible worlds, and so never tell you which possible world you live in - and anything you can establish "by definition" is a logical truth. You unconsciously slap the conventional label on something, without actually using the verbal definition you just gave. You know perfectly well that Bob is "human", even though, on your definition, you can never call Bob "human" without first observing him to be mortal. 155. Words as Hidden Inferences - The mere presence of words can influence thinking, sometimes misleading it. 
The act of labeling something with a word disguises a challengeable inductive in
e9cf8b60-98f2-4ae5-9092-2df34ecbf9fa
trentmkelly/LessWrong-43k
LessWrong
Meetup : Washington, D.C.: Economics Discussion Discussion article for the meetup : Washington, D.C.: Economics Discussion WHEN: 17 January 2016 03:00:00PM (-0500) WHERE: Reynolds Center x-posted from list. Gathering in courtyard from 3:00pm, hard start 3:30pm - until closing (7:00 pm). Richard will be leading a discussion this week on economics questions that people bring. As always, side conversations are permitted and welcome. Upcoming meetups: * Jan. 24: Game Design * Jan. 31: Fun & Games * Feb. 7: Fermi Estimates Discussion article for the meetup : Washington, D.C.: Economics Discussion
fa1b4bcb-d102-491b-a1aa-008b628e74c5
trentmkelly/LessWrong-43k
LessWrong
Draft: The optimization toolbox This post is a draft, a work in progress. Much of it will not make sense, and will not be optimized for the reader's experience. It does not necessarily reflect my current best understanding of optimization. Like the introduction draft, I'm opening it up because I've been working on this project for too long, and the world is moving too fast. I want people to more easily be able to interact with my research thoughts as I'm having them. ---------------------------------------- We ended the introduction post with an explicit equation for absolute and relative optimization. These could be considered the first tools that the toolbox of dynamical system optimization gives us, though I suspect they're more like the nuts and bolts. What other tools could we build from these? "Counterfactual optimization" of a trajectory Relative optimization is helpful for comparing across time. But sometimes, there are systems whose trajectory happens to move up or down the ordering by default, and it's helpful to further compare our trajectory to the system's default behavior to help us decide whether our trajectory has pushed against probability. For a given time span t, one could consider a set of initial conditions X (perhaps all possible initial conditions, or perhaps a statistical sampling), and simply take the average optimization of them. Then the relative optimization of state x after time t is[1] Ω(f^t(x)|x) = Ω(f^t(x)) − Ω(x), and the average over a set X is Ω_avg(X,t) = (1/|X|) ∑_{x∈X} Ω(f^t(x)|x). Then perhaps one could say that your counterfactual optimization is the distance from that: Ω_cf(x,t) = Ω(f^t(x)|x) − Ω_avg(X,t). These measures let us talk about things like bottlecaps as optimizers much more precisely. [TODO I should show how this is equivalent to updating the probability distribution to take this into account.] Robustness of an optimizing state We can also make other measures that let us precisely communicate how robust an instance of optimization is.
Let's say that we're looki
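The three measures can be made concrete on a toy system. This sketch is my own, with an arbitrary choice of dynamics f and optimization measure Ω (the draft leaves both abstract):

```python
def f(x):
    # toy dynamics: every state decays toward the attractor at 0
    return 0.5 * x

def omega(x):
    # toy optimization measure: higher means closer to the attractor
    return -abs(x)

def iterate(x, t):
    # apply f to x for t steps
    for _ in range(t):
        x = f(x)
    return x

def rel_opt(x, t):
    # relative optimization of state x after time t
    return omega(iterate(x, t)) - omega(x)

def avg_opt(X, t):
    # average optimization over a set of initial conditions
    return sum(rel_opt(x, t) for x in X) / len(X)

def cf_opt(x, t, X):
    # counterfactual optimization: the trajectory's gain over the default
    return rel_opt(x, t) - avg_opt(X, t)
```

For X = [-2, -1, 1, 2] and t = 3, the state x = 2 gains 1.75 in Ω but only 0.4375 over the average, since everything in this system drifts toward 0 by default.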
e4fac7f7-5088-4eb7-8d62-ba4af41a02ee
trentmkelly/LessWrong-43k
LessWrong
Meetup : Vancouver "Monthly" Super Meetup Discussion article for the meetup : Vancouver "Monthly" Super Meetup WHEN: 22 September 2013 03:00:00PM (-0700) WHERE: 3598 Main Street, Vancouver, BC It's that time! The Vancouver meetup would like to invite you all to our "monthly" super public social hangout meetup. We'll hang out at a cafe on main street and probably have some good discussion about something or other that we have not decided yet. This is supposed to be casual and newperson friendly, so please come. Meetup is Sunday the 22nd at Bean Around the World at main and 20th at 15:00. Our mailing list is vancouver-rationalists on google groups. Discussion article for the meetup : Vancouver "Monthly" Super Meetup
7a842133-3085-4708-ae59-cf37c9fb9b63
trentmkelly/LessWrong-43k
LessWrong
Why didn't we get the four-hour workday? John Maynard Keynes famously predicted in 1930 that by now we would only be working fifteen hours a week. What is less well-known is that his was nowhere near the only such prediction, nor the first—a wide range of commentators, including Charles Steinmetz and Buckminster Fuller, made similar forecasts. (And even Keynes’s prediction is generally misquoted.) Why didn’t any of them come true? I recently discussed this with Jason Feifer on his podcast Build for Tomorrow. Here’s some elaboration with more quotes and charts. The predictions A 1934 book, The Economy of Abundance, summarizes many of the predictions (Chapter 2): > The technocrats promised every family on the continent of North America $20,000 a year [about $400,000 today], and a sixteen-hour work week. This is perhaps the peak of promises based on an abundance economy. Charles P. Steinmetz saw a two-hour working day on the horizon—he was the scientist who made giant power possible—but he stipulated no family budget total beyond “necessities and comforts.” … > > Fred Henderson, in his Economic Consequences of Power Production, is more specific: “Without any further increase in our knowledge of power and of technical processes, or of our available materials, we could multiply production ten times over if the needs of the world were permitted to express themselves in effective demand. … It would not be a question of an eight-hour day or a six-day week, but more probably of a six-months working year—which is already the rule for university dons.” > > Buckminster Fuller is still more definite. Modern man, he calculates, is 630 times more able than was Adam. Eliminating wasteful forms of work, four million Americans laboring fifty-two seven-hour days in the year (364 working hours, an average of one per day) “could keep up with every survival need”—meaning basic necessities for the whole population. > > Walter N. 
Polakov announces that “fifty weeks, four days, six hours is enough”—a twenty-four hour week a
f4b1d065-1837-4fa2-bd15-54faf9d2dc18
trentmkelly/LessWrong-43k
LessWrong
MATS mentor selection Introduction MATS currently has more people interested in being mentors than we are able to support—for example, for the Winter 2024-25 Program, we received applications from 87 prospective mentors who cumulatively asked for 223 scholars[1] (for a cohort where we expected to only accept 80 scholars). As a result, we need some process for how to choose which researchers to take on as mentors and how many scholars to allocate each. Our desiderata for the process are as follows: * We want to base our decisions on expert opinions of the quality of various research directions. * We want the above opinions to be sourced from a range of perspectives with the AI existential safety field, to incorporate multiple perspectives on alignment research and other research areas we think are important, such as AI governance and policy, AI security, and other approaches for addressing AI catastrophic risk. * We want to incorporate information about how good prospective mentors are at mentorship—both information internal to MATS, as well as information advisors may have. In this post, we describe the process we used to select mentors for the Winter 2024-25 Program, which will be very close to the process we will use to select mentors for the Summer 2025 Program. In a nutshell, we select advisors, who select mentors, who select scholars, who often select specific research projects, in a “chain of trust,” with MATS input and oversight at every stage. This system is designed to ensure that we make reasonable decisions about the scholars, mentors, and, ultimately, the research we support, even if MATS staff are not subject matter experts for every branch of AI safety research. We want to make this "chain of trust" structure transparent so that potential funders and collaborators can trust in our process, even if we cannot share specific details of selection (e.g., what advisor X said about prospective mentor Y). 
Mentor selection First, we solicited applications from potential mento
dd7fabd8-ed7f-479f-bf7a-5090a551f595
trentmkelly/LessWrong-43k
LessWrong
How Not to be Stupid: Know What You Want, What You Really Really Want Previously: Starting Up So, you want to be rational, huh? You want to be Less Wrong than you were before, hrmmm? First you must pass through the posting titles of a thousand groans. Muhahahahaha! Let's start with the idea of preference rankings. If you prefer A to B, well, given the choice between A and B, you'd choose A. For example, if you face a choice between a random child being tortured to death vs them leading a happy and healthy life, all else being equal and the choice costing you nothing, which do you choose? This isn't a trick question. If you're a perfectly ordinary human, you presumably prefer the latter to the former. Therefore you choose it. That's what it means to prefer something. That if you prefer A over B, you'd give up situation B to gain situation A. You want situation A more than you want situation B. Now, if there're many possibilities, you may ask... "But, what if I prefer B to A, C to B, and A to C?" The answer, of course, is that you're a bit confused about what you actually prefer. I mean, all that ranking would do is just keep you switching between those, looping around. And if thinking in terms of resources, the universe or an opponent or whatever could, for a small price, sell each of those to you in sequence, draining you of the resource (time, money, whatever) as you go around the vortex of confused desires. This, of course, translates more precisely into a sequence of states, A_i, B_i, C_i, and preferences of the form A_0 < B_1 < C_2 < A_3 < B_4 ... where each one of those is the same as the original name except you also have a drop less of the relevant resource than you did before. I.e., indicating a willingness to pay the price. If the sequence keeps going all the way, then you'll be drained, and that's a rather inefficient way of going about it if you just want to give the relevant resource up, no?
;) Still, a strict loop, A > B, B > C, C > A really is an indication that you just don't know what you want. I'll just dismiss that at
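That "vortex of confused desires" is the classic money pump, and a few lines of code make the drain explicit (the fee and the starting wealth are my own arbitrary choices):

```python
# Cyclic preferences from the post: B over A, C over B, A over C.
prefers = {"A": "B", "B": "C", "C": "A"}

def money_pump(holding, wealth, fee=1.0):
    # Keep selling the agent its next-preferred item for a small fee
    # until it can no longer pay.
    trades = 0
    while wealth >= fee:
        holding = prefers[holding]
        wealth -= fee
        trades += 1
    return holding, wealth, trades

holding, wealth, trades = money_pump("A", 10.0)
```

After ten trades the agent is broke and holding an item it would happily pay to swap away again; no coherent utility function produces this behavior.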
77cf0acd-2baa-4621-b974-865a32c40fd4
trentmkelly/LessWrong-43k
LessWrong
Compressing Reality to Math This is part of a sequence on decision analysis and follows 5 Axioms of Decision-Making, which explains how to turn a well-formed problem into a solution. Here we discuss turning reality into a well-formed problem. There are three basic actions I'd like to introduce, and then work through some examples. Scope The first thing you have to decide with a problem is, well, what the problem is. Suppose you're contemplating remodeling your kitchen, and the contractor you're looking at offers marble or granite countertops. While deciding whether you want marble or granite, you stop and wonder: is this really the contractor that you should be using? Actually, should you even be remodeling your kitchen? Maybe you should move to a better city first. But if you're already thinking about moving, you might even want to emigrate to another country. At this point the contractor awkwardly coughs and asks whether you'd like marble or granite. Decisions take effort to solve, especially if you're trying to carefully avoid bias. It helps to partition the world and deal with local problems: you can figure out which countertops you want without first figuring out what country you want to live in. It's also important to keep in mind lost purposes: if you're going to move to a new city, remodeling your kitchen is probably a mistake, even after you already called a contractor. Be open to going up a level, but not paralyzed by the possibility, which is a careful balancing act. Spending time periodically going up levels and reevaluating your decisions and directions can help, as well as having a philosophy of life. Model Now that you've got a first draft of what your problem entails, how does that corner of the world work? What are the key decisions and the key uncertainties? A tool that can be of great help here is an influence diagram, which is a directed acyclic graph[1] that represents the uncertainties, decisions, and values inherent in a problem.
While sketching out your model, d
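A minimal influence diagram for the kitchen-remodel example might look like the following sketch (the node names and edges are my own invention), with an acyclicity check via Kahn's algorithm, since an influence diagram must be a DAG:

```python
# Node -> role (decision, uncertainty, or value)
nodes = {
    "contractor_choice": "decision",
    "countertop_material": "decision",
    "final_cost": "uncertainty",
    "build_quality": "uncertainty",
    "satisfaction": "value",
}
edges = [
    ("contractor_choice", "final_cost"),
    ("contractor_choice", "build_quality"),
    ("countertop_material", "final_cost"),
    ("build_quality", "satisfaction"),
    ("final_cost", "satisfaction"),
]

def is_acyclic(nodes, edges):
    # Kahn's algorithm: repeatedly remove nodes with no incoming edges.
    indeg = {n: 0 for n in nodes}
    for _, v in edges:
        indeg[v] += 1
    frontier = [n for n in nodes if indeg[n] == 0]
    seen = 0
    while frontier:
        u = frontier.pop()
        seen += 1
        for a, b in edges:
            if a == u:
                indeg[b] -= 1
                if indeg[b] == 0:
                    frontier.append(b)
    return seen == len(nodes)  # every node removed iff there is no cycle
```

Representing the diagram as plain dicts and edge lists keeps the sketch dependency-free; a real model would attach probabilities to uncertainty nodes and a utility to the value node.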
099a68c3-55b9-47ea-8aff-91a3c57c46e9
StampyAI/alignment-research-dataset/arxiv
Arxiv
AGI Safety Literature Review
9e4a6963-d1c7-4e5d-97d4-9241b566d900
trentmkelly/LessWrong-43k
LessWrong
Link: Rob Bensinger on Less Wrong and vegetarianism I'm currently unconvinced either way on this matter. However, enough arguments have been raised that I think this is worth the time of every reader to think a good deal about. http://nothingismere.com/2014/11/12/inhuman-altruism-inferential-gap-or-motivational-gap/
e3dbc75d-19dd-4eba-9870-b92b14c0fae5
trentmkelly/LessWrong-43k
LessWrong
Why Uncontrollable AI Looks More Likely Than Ever This is a crosspost from Time Magazine, which also appeared in full at a number of other unpaid news websites. BY OTTO BARTEN AND ROMAN YAMPOLSKIY Barten is director of the Existential Risk Observatory, an Amsterdam-based nonprofit. Yampolskiy is a computer scientist at the University of Louisville, known for his work on AI Safety. “The first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control,” mathematician and science fiction writer I.J. Good wrote over 60 years ago. These prophetic words are now more relevant than ever, with artificial intelligence (AI) gaining capabilities at breakneck speed. In recent weeks, many jaws dropped as they witnessed the transformation of AI from a handy but decidedly unscary recommender algorithm to something that at times seemed to act worryingly humanlike. Some reporters were so shocked that they reported their conversation histories with large language model Bing Chat verbatim. And with good reason: few expected that what we thought were glorified autocomplete programs would suddenly threaten their users, refuse to carry out orders they found insulting, break security in an attempt to save a child’s life, or declare their love to us. Yet this all happened. It can already be overwhelming to think about the immediate consequences of these new models. How are we going to grade papers if any student can use AI? What are the effects of these models on our daily work? Any knowledge worker, who may have thought they would not be affected by automation in the foreseeable future, suddenly has cause for concern. Beyond these direct consequences of currently existing models, however, awaits the more fundamental question of AI that has been on the table since the field’s inception: what if we succeed?
That is, what if AI researchers manage to make Artificial General Intelligence (AGI), or an AI that can perform any cognitive task at h
13a355f7-c156-422c-822a-d7462eb20e9e
trentmkelly/LessWrong-43k
LessWrong
What are some good ways to heighten my emotions? I've noticed when I don't get enough sleep, the emotions I feel the next day are higher variance - I'm a sadder sad person and a happier happy person. I want more of this, since I prefer feeling emotions to not feeling emotions and generally enjoy higher variance in life due to novelty and stuff. What are some ways I can get this without being sleep deprived? (Also would appreciate pushback! Think this may be wrong for reasons similar to why variance is bad in finance.)
7a36ec80-9466-486f-907d-18efe064085a
trentmkelly/LessWrong-43k
LessWrong
Meetup : Urbana-Champaign: Discussion (Folk Wisdom) Discussion article for the meetup : Urbana-Champaign: Discussion (Folk Wisdom) WHEN: 30 March 2014 02:00:00PM (-0500) WHERE: 300 S Goodwin Ave Apt 102, Urbana. WHAT: Starting topic: folk wisdom. How to extract actual wisdom from it? Related reading: http://slatestarcodex.com/2014/03/24/should-you-reverse-any-advice-you-hear/ http://slatestarcodex.com/2013/06/09/all-debates-are-bravery-debates/ http://lesswrong.com/lw/1ka/any_sufficiently_advanced_wisdom_is/ http://www.psychologytoday.com/articles/199711/folk-wisdom-was-grandma-right WHERE: 300 S Goodwin Ave Apt 102, Urbana. The door directly to my ground-floor apartment, on which you can knock and I will hear it, is at the North-West corner of the building. Do not attempt to enter through the building's main door, because that requires keycard access, and I will not be waiting there to let you in. If you have trouble getting in, call me at REDACTED. WHEN: 2pm Sunday. Discussion article for the meetup : Urbana-Champaign: Discussion (Folk Wisdom)
fe0cebfe-7909-4ec3-a1d8-686bd1fec6f3
trentmkelly/LessWrong-43k
LessWrong
Builder/Breaker for Deconfusion This is something of a grab-bag of thoughts I've had about the Builder/Breaker game. The ELK document had a really nice explanation of its research methodology in terms of an imaginary dialogue between a "Builder" who makes positive proposals, and a "Breaker" who tries to break them. To an extent, this is just the ordinary philosophical method,[1] and also a common pattern in other research areas. However, I felt that the explicit write-up helped to clarify some things for me. We might think of the Builder/Breaker game as an adversarial game where either the builder or breaker "wins", like AI debate. However, I find it more fruitful to think of it as a cooperative game. When the game is played by AI safety researchers, the players have a common goal of finding robust plans to avoid catastrophic outcomes. The builder/breaker game merely organizes cognitive work: both Builder and Breaker are trying to map the space of proposals, but each takes primary responsibility for avoiding a different kind of error (false positives vs false negatives). Security Mindset I think Builder/Breaker is a good way to understand Eliezer's notion of security mindset (1, 2). The Builder is trying to construct a positive argument for safety, with (at least[2]) the following good properties: 1. The argument clearly states its assumptions. 2. Each assumption is as plausible as possible (because any grain of doubt indicates a possibility of failure). 3. There are as few assumptions as possible (because more assumptions mean more ways the plan can fail). 4. Each step of reasoning is sound. 5. The conclusion of the argument is a meaningful safety guarantee. I will call such a plan robust. We can question whether AI safety research should focus on robust plans. I won't dwell on this question too much. Clearly, some endeavors require robust plans, while others do not. AI safety seems to me like a domain which requires robust plans.
I'll leave it at that for now.[3] In any case, co
cee37297-7846-494e-9a93-07b381bc17cf
trentmkelly/LessWrong-43k
LessWrong
Counterfactual Induction (Algorithm Sketch, Fixpoint proof) So, to begin with, here's how the algorithm works. The upstream algorithm iterates through all proofs, and records the lengths of all proofs of the form "a finite collection of sentences implies ⊥". Also, the proof-length accounting is set up such that if A,ϕ ⊢_L ⊥ and A,ψ ⊢_L′ ⊥, then A,ϕ∨ψ ⊢_(L+L′) ⊥. Also, as soon as it checks all proofs of length n or shorter from A + propositional tautologies given A, with no contradictions found, it reports that A is contradiction-free for at least n steps. (This doesn't require searching an infinite set because the number of propositional tautologies with length shorter than n is finite.) With S being the set of all math sentences, and P_fin,pc(S) being the set of all finite subsets of sentences that are propositionally consistent, the market P is a partial function of type P_fin,pc(S) × S → [0,1], which fulfills the following four axioms. (Note: A appearing where a sentence would normally go refers to the collection of statements in A expressed as one big boolean and-statement, and ϕ ⊢_pc ψ means that given ϕ, ψ is provable using only the rules of inference for propositional calculus.) 1: Unitarity. ∀A: P_A(A) = 1. 2: Subadditivity. ∀A,ϕ,ψ: P_A(ϕ) + P_A(ψ) ≥ P_A(ϕ∨ψ). 3: Law of Excluded Middle. ∀A,ϕ: P_A(ϕ) + P_A(¬ϕ) = 1. 4: Propositional Monotonicity. ∀A,ϕ,ψ: A,ϕ ⊢_pc ψ → P_A(ψ) ≥ P_A(ϕ). We will also consider a fifth axiom, propositional equivalence, which is implied by axiom 4. 5: Propositional Equivalence. ∀A,ϕ,ψ: A ⊢_pc ϕ↔ψ → P_A(ψ) = P_A(ϕ). The market fulfills axioms 1-4; the worlds we are defending against fulfill axioms 1-3 and 5. Axioms 1, 3, and 5 suffice to show the empty-set property that P_A(⊥) = 0, so that one comes for free and doesn't need to be specified. Traders are poly-time algorithms that output a continuous circuit that takes as input the pricing P and outputs a nonnegative number for each pair of the form (A,ϕ). This should be interpreted as a bet against ϕ in the A counterfactual, and has a payoff of P_A(ϕ) − V_A(ϕ).
Due to law of excluded middle for both V and P, selling thes
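The 0/1 worlds of axioms 1-3, and the empty-set property they imply, can be sanity-checked on a toy propositional language with two atoms. This is my own sketch, using the valuation induced by each satisfying assignment as a world V:

```python
from itertools import product

ATOMS = list(product([False, True], repeat=2))  # assignments to atoms p, q

p = lambda a: a[0]
q = lambda a: a[1]
def neg(s): return lambda a: not s(a)
def disj(s, t): return lambda a: s(a) or t(a)
bot = lambda a: False  # the sentence ⊥

def valuation(assignment):
    # A world V_A: sentences true under the assignment get price 1, else 0.
    return lambda sentence: 1.0 if sentence(assignment) else 0.0

def check_world(A, sentences):
    # Check unitarity, subadditivity, excluded middle, and V_A(⊥) = 0
    # for every assignment consistent with A.
    ok = True
    for a in ATOMS:
        if not A(a):
            continue
        V = valuation(a)
        ok = ok and V(A) == 1.0   # axiom 1: unitarity
        ok = ok and V(bot) == 0.0 # empty-set property
        for s in sentences:
            ok = ok and V(s) + V(neg(s)) == 1.0           # axiom 3: LEM
            for t in sentences:
                ok = ok and V(s) + V(t) >= V(disj(s, t))  # axiom 2
    return ok
```

Indicator-style valuations pass every check, which is a small illustration of why 0/1 worlds of this shape are the ones the market has to defend against.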
f5326243-e127-4b12-b059-dc71a7007fd9
trentmkelly/LessWrong-43k
LessWrong
How credible is the theory that COVID19 escaped from a Wuhan Lab? It sounds like a conspiracy theory. Apparently, it's big on the Chinese Internet. https://www.econjobrumors.com/topic/chinese-internet-thinks-patient-zero-was-a-grad-student-at-the-wuhan-lab which links https://www.youtube.com/watch?v=bpQFCcSI0pU The details in the video are rather vague. Since I don't speak Chinese, I have trouble evaluating its credibility. What do you think?
1bbefd26-3317-4561-a77b-f151f33d978d
trentmkelly/LessWrong-43k
LessWrong
Recent AI control posts Over at Medium, I’m continuing to write about AI control; here’s a roundup from the last month. Strategy * Prosaic AI control argues that AI control research should first consider the case where AI involves no “unknown unknowns.” * Handling destructive technology tries to explain the upside of AI control, if we live in a universe where we eventually need to build a singleton anyway. * Hard-core subproblems explains a concept I find helpful for organizing research. Building blocks of ALBA * Security amplification and reliability amplification are complements to capability amplification. Ensembling for reliability is now implemented in ALBA on github. * Meta-execution is my current leading contender for security and capability amplification. It’s totally unclear how well it can work (some relevant speculation). * Thoughts on reward engineering discusses a bunch of prosaic but important issues when designing reward functions. Terminology and concepts * Clarifying the distinction between safety, control and alignment. * Benignity may be a useful invariant when designing aligned AI.
b5500120-7666-41c3-b048-277c1d333f78
trentmkelly/LessWrong-43k
LessWrong
Preliminary Thoughts on Flirting Theory [Epistemic status: I'm mostly trying to outline a class of strategy that you could use to do something rather similar to what people term "flirting", rather than say that everything that's ever called "flirting" fits this model. I'm lowering my standards so that this gets posted at all instead of sitting in my drafts folder, so I might've made some important mistakes somewhere.] In this post, I'll use "X is common knowledge between you and me" to not only mean "you know X and I know X," but also "I know that you know X" and "I know that you know that I know X" and so on (this is pretty standard in mathematical logic contexts, although distinct from the colloquial meaning). The simplest way to get common knowledge of X is for one of us to just say X out loud. These ideas are only partially my own: I've read bits and pieces of this theory in different places in the past few years. I haven't been able to find sources for most of it. I know some of it is scattered across Planecrash, and reading that story is what got me thinking about this again recently, but that's probably not their original source. As far as I can tell, I'm the only person to synthesize them all together in a post, but please let me know if I'm wrong. What is flirting? As stated above, this is less about trying to describe everything ever called flirting, rather trying to outline a strategy that can exist and then exploring the implications of that strategy. Consider the situation where person A is romantically attracted to person B, but they're currently just friends. Flirting-like strategies likely apply more generally, but the romantic case is a good starting point. Let's say that person A's values are approximately: * If B can counterfactually reciprocate the romantic attraction, then A wants them to be in a romantic relationship where that attraction is reciprocated. * Otherwise, A wants to maintain the friendship with as little turbulence as possible. 
(If you want to pause and ponder,
c9f2ad6b-7a16-46ed-8351-8b01f5d3db63
trentmkelly/LessWrong-43k
LessWrong
Writing That Provokes Comments Epistemic Effort: Thought about it for a year. Solicited feedback. Checked my last few posts' comment count to make sure I wasn't *obviously* wrong. A thing that happens to me, and perhaps to you: Someone writes a beautiful essay that I agree with, that sheds new light on something important. I don't have anything really to say about it. I don't want to just say "I agree!". So instead of commenting, I give it an upvote and move on. This feels bad for a few reasons: * I like commenting. * I like getting comments when I write things that (I hope!) are insightful, beautiful and true. It's a stronger signal that people care. * Comments correlate with something staying in the public sphere of attention. A highly upvoted post eventually fades behind newer upvoted posts. But a post with lots of comments keeps people paying attention (with new people constantly checking in to see what the hubbub is about) * I don't trust (as a reader or a writer) that people who read a post, give it an upvote, and move on, are really learning anything. I think that talking through a new concept and figuring out how to apply it is where much of the learning happens. I've been impressed with how much quality writing has been going on on LW2.0 so far. There has been some but not as much commenting as I'd like. I've gotten a sense of what inspires interesting, meaty discussion. Unfortunately, most of it seems... kinda bad? Things That Get People To Comment 1. Be Wrong - It has been said: if Google fails you, the fastest way to get a question answered is to post a wrong answer on Reddit. This will result in a flood of people explaining things to you. 2. Be Controversial - Even better, post something that some people think is wrong. Then you get a bunch of people commenting to correct you, and then other people who disagree correcting them! The arguments perpetuate themselves from there. 
You won't even have to do any commenting work yourself to keep it going! [BTW, these a
1a2138da-aab0-4bba-97f8-006c43c34c76
LDJnr/LessWrong-Amplify-Instruct
LessWrong
"Like "IRC chat" or "TCP/IP protocol", the phrase "reproductive organ" is redundant. All organs are reproductive organs. Where do a bird's wings come from? An Evolution-of-Birds Fairy who thinks that flying is really neat? The bird's wings are there because they contributed to the bird's ancestors' reproduction. Likewise the bird's heart, lungs, and genitals. At most we might find it worthwhile to distinguish between directly reproductive organs and indirectly reproductive organs. This observation holds true also of the brain, the most complex organ system known to biology. Some brain organs are directly reproductive, like lust; others are indirectly reproductive, like anger. Where does the human emotion of anger come from? An Evolution-of-Humans Fairy who thought that anger was a worthwhile feature? The neural circuitry of anger is a reproductive organ as surely as your liver. Anger exists in Homo sapiens because angry ancestors had more kids. There's no other way it could have gotten there. This historical fact about the origin of anger confuses all too many people. They say, "Wait, are you saying that when I'm angry, I'm subconsciously trying to have children? That's not what I'm thinking after someone punches me in the nose." No. No. No. NO! Individual organisms are best thought of as adaptation-executers, not fitness-maximizers. The cause of an adaptation, the shape of an adaptation, and the consequence of an adaptation, are all separate things. If you built a toaster, you wouldn't expect the toaster to reshape itself when you tried to cram in a whole loaf of bread; yes, you intended it to make toast, but that intention is a fact about you, not a fact about the toaster. The toaster has no sense of its own purpose. But a toaster is not an intention-bearing object. It is not a mind at all, so we are not tempted to attribute goals to it. If we see the toaster as purposed, we don't think the toaster knows it, because we don't think the toaster knows anything. 
It's like the old test of being asked to say the color of the letters in "blue". It takes longer for subjects to name this color, because of the need to untangle the meaning of the letters and the color of the letters. You wouldn't have similar trouble naming the color of the letters in "wind". But a human brain, in addition to being an artifact historically produced by evolution, is also a mind capable of bearing its own intentions, purposes, desires, goals, and plans. Both a bee and a human are designs, but only a human is a designer. The bee is "wind", the human is "blue". Cognitive causes are ontologically distinct from evolutionary causes. They are made out of a different kind of stuff. Cognitive causes are made of neurons. Evolutionary causes are made of ancestors. The most obvious kind of cognitive cause is deliberate, like an intention to go to the supermarket, or a plan for toasting toast. But an emotion also exists physically in the brain, as a train of neural impulses or a cloud of spreading hormones. Likewise an instinct, or a flash of visualization, or a fleetingly suppressed thought; if you could scan the brain in three dimensions and you understood the code, you would be able to see them. Even subconscious cognitions exist physically in the brain. "Power tends to corrupt," observed Lord Acton. Stalin may or may not have believed himself an altruist, working toward the greatest good for the greatest number. But it seems likely that, somewhere in Stalin's brain, there were neural circuits that reinforced pleasurably the exercise of power, and neural circuits that detected anticipations of increases and decreases in power. If there were nothing in Stalin's brain that correlated to power - no little light that went on for political command, and off for political weakness - then how could Stalin's brain have known to be corrupted by power? Evolutionary selection pressures are ontologically distinct from the biological artifacts they create. 
The evolutionary cause of a bird's wings is millions of ancestor-birds who reproduced more often than other ancestor-birds, with statistical regularity owing to their possession of incrementally improved wings compared to their competitors. We compress this gargantuan historical-statistical macrofact by saying "evolution did it". Natural selection is ontologically distinct from creatures; evolution is not a little furry thing lurking in an undiscovered forest. Evolution is a causal, statistical regularity in the reproductive history of ancestors. And this logic applies also to the brain. Evolution has made wings that flap, but do not understand flappiness. It has made legs that walk, but do not understand walkyness. Evolution has carved bones of calcium ions, but the bones themselves have no explicit concept of strength, let alone inclusive genetic fitness. And evolution designed brains themselves capable of designing; yet these brains had no more concept of evolution than a bird has of aerodynamics. Until the 20th century, not a single human brain explicitly represented the complex abstract concept of inclusive genetic fitness. When we're told that "The evolutionary purpose of anger is to increase inclusive genetic fitness," there's a tendency to slide to "The purpose of anger is reproduction" to "The cognitive purpose of anger is reproduction." No! The statistical regularity of ancestral history isn't in the brain, even subconsciously, any more than the designer's intentions of toast are in a toaster! Thinking that your built-in anger-circuitry embodies an explicit desire to reproduce, is like thinking your hand is an embodied mental desire to pick things up. Your hand is not wholly cut off from your mental desires. In particular circumstances, you can control the flexing of your fingers by an act of will. If you bend down and pick up a penny, then this may represent an act of will; but it is not an act of will that made your hand grow in the first place. 
One must distinguish a one-time event of particular anger (anger-1, anger-2, anger-3) from the underlying neural circuitry for anger. An anger-event is a cognitive cause, and an anger-event may have cognitive causes, but you didn't will the anger-circuitry to be wired into the brain. So you have to distinguish the event of anger, from the circuitry of anger, from the gene complex which laid down the neural template, from the ancestral macrofact which explains the gene complex's presence. If there were ever a discipline that genuinely demanded X-Treme Nitpicking, it is evolutionary psychology. Consider, O my readers, this sordid and joyful tale: A man and a woman meet in a bar. The man is attracted to her clear complexion and firm breasts, which would have been fertility cues in the ancestral environment, but which in this case result from makeup and a bra. This does not bother the man; he just likes the way she looks. His clear-complexion-detecting neural circuitry does not know that its purpose is to detect fertility, any more than the atoms in his hand contain tiny little XML tags reading "<purpose>pick things up</purpose>". The woman is attracted to his confident smile and firm manner, cues to high status, which in the ancestral environment would have signified the ability to provide resources for children. She plans to use birth control, but her confident-smile-detectors don't know this any more than a toaster knows its designer intended it to make toast. She's not concerned philosophically with the meaning of this rebellion, because her brain is a creationist and denies vehemently that evolution exists. He's not concerned philosophically with the meaning of this rebellion, because he just wants to get laid. They go to a hotel, and undress. 
He puts on a condom, because he doesn't want kids, just the dopamine-noradrenaline rush of sex, which reliably produced offspring 50,000 years ago when it was an invariant feature of the ancestral environment that condoms did not exist. They have sex, and shower, and go their separate ways. The main objective consequence is to keep the bar and the hotel and condom-manufacturer in business; which was not the cognitive purpose in their minds, and has virtually nothing to do with the key statistical regularities of reproduction 50,000 years ago which explain how they got the genes that built their brains that executed all this behavior. To reason correctly about evolutionary psychology you must simultaneously consider many complicated abstract facts that are strongly related yet importantly distinct, without a single mixup or conflation." What you see above is a segment from a LessWrong user's post. In the provided text, there's an argument that all organs are reproductive organs, either directly or indirectly. Can you delve deeper into this concept, particularly in relation to organs like the heart or lungs that don't have an obvious connection to reproduction? Certainly, the argument that all organs are reproductive organs, either directly or indirectly, is rooted in the theory of evolution by natural selection. This theory, proposed by Charles Darwin, suggests that traits and characteristics that increase an organism's chances of survival and reproduction are more likely to be passed on to the next generation. Over time, these traits become more common within a population. In this context, every organ in an organism's body contributes to its overall survival and reproductive success, either directly or indirectly. Let's take the heart and lungs as examples. The heart is a vital organ that pumps blood throughout the body, delivering oxygen and nutrients to other organs and tissues. Without a functioning heart, an organism would not survive long enough to reproduce. 
Thus, even though the heart does not directly participate in the act of reproduction, it indirectly contributes to reproductive success by enabling survival. Similarly, the lungs allow an organism to breathe by taking in oxygen and expelling carbon dioxide. This oxygen is essential for cellular respiration, a process that produces the energy needed for all bodily functions, including reproduction. Like the heart, the lungs do not directly participate in reproduction, but they indirectly contribute to reproductive success by supporting the organism's overall survival and functioning. In essence, the argument is that every organ, whether it directly participates in reproduction or not, plays a crucial role in an organism's survival and, by extension, its reproductive success. Therefore, from an evolutionary perspective, all organs can be considered reproductive organs. Drawing from the evolutionary perspective, how might the development of the human brain, with its unique capacity for abstract thought and planning, have contributed to our species' reproductive success? Consider both direct and indirect influences in your response. The human brain, with its unique capacity for abstract thought, problem-solving, and planning, has significantly contributed to our species' reproductive success. Directly, the human brain's ability to think abstractly and plan has allowed us to develop complex social structures and mating strategies. For example, humans can strategize about whom to mate with based on various factors such as health, social status, and genetic compatibility. This ability to make informed decisions about mate selection can increase the likelihood of producing healthy, viable offspring. The human brain also enables us to communicate complex ideas and emotions, which can facilitate bonding and cooperation with potential mates. 
Emotional intelligence, which involves understanding and managing one's own emotions and empathizing with others, can be a significant factor in forming and maintaining relationships. Successful relationships can lead to more opportunities for reproduction. Indirectly, the human brain has contributed to our survival, which in turn affects our reproductive success. Our ability to think abstractly and solve problems has allowed us to invent tools, build shelters, develop agriculture, and create societies with rules and laws. These advancements have increased our survival rates and lifespan, giving us more opportunities to reproduce. Moreover, our capacity for planning has enabled us to anticipate and prepare for future challenges, from changing seasons to potential threats. This foresight can increase our chances of survival in harsh conditions, indirectly contributing to our reproductive success. In summary, the development of the human brain has had both direct and indirect influences on our reproductive success. Its direct influence lies in its role in mate selection, communication, and relationship building, while its indirect influence lies in its contribution to our survival and longevity.
73d43365-120c-4879-a138-ac2f26661af3
StampyAI/alignment-research-dataset/eaforum
Effective Altruism Forum
Pivotal outcomes and pivotal processes ***tl;dr:** If you think humanity is on a dangerous path, and needs to "pivot" toward a different future in order to achieve safety, consider how such a pivot could be achieved by multiple acts across multiple persons and institutions, rather than a single act.  Engaging more actors in the process is more costly in terms of coordination, but in the end may be a more practicable social process involving less extreme risk-taking than a single "pivotal act".* **Preceded by:** [“Pivotal Act” Intentions: Negative Consequences and Fallacious Arguments](https://forum.effectivealtruism.org/posts/q6t5zKCg5peZA92Zu/pivotal-act-intentions-negative-consequences-and-fallacious) [This post is also available on [LessWrong](https://www.lesswrong.com/posts/etNJcXCsKC6izQQZj/pivotal-outcomes-and-pivotal-processes).] In the preceding post, I argued for the negative consequences of the *intention* to carry out a pivotal act, i.e., a single, large world-changing act sufficient to 'pivot' humanity off of a dangerous path onto a safer one.  In short, there are negative side effects of being the sort of institution aiming or willing to carry out a pivotal act, and those negative side effects alone might outweigh the benefit of the act, or prevent the act from even happening. In this post, I argue that it's still a good idea for humanity-as-a-whole to make a large / pivotal change in its developmental trajectory in order to become safer.  In other words, my main concern is not with the "pivot", but with trying to get the whole "pivot" from a single "act", i.e., from a single agent-like entity, such as a single human person, institution, or AI system.   
Pivotal outcomes and processes ------------------------------ To contrast with pivotal acts, here's a simplified example of a *pivotal outcome* that one could imagine making a big positive difference to humanity's future, which in principle could be brought about by a multiplicity of actors: * **(the "AI immune system")** The whole internet — including space satellites and the internet-of-things — becomes *way* more secure, and includes a distributed network of non-nuclear electromagnetic pulse emitters that will physically shut down any tech infrastructure appearing to be running rogue AI agents. (For now, let's set aside debate about whether this outcome on its own would be pivotal, in the sense of pivoting humanity onto a safe developmental trajectory... it needs a lot more details and improvements to be adequate for that!  My goal in this post is to focus on how the outcome comes about.  So for the sake of argument I'm asking to take the "pivotality" of the outcome for granted.) If a single institution imposed the construction of such an AI immune system on its own, that would constitute a pivotal *act.*  But if a distributed network of several states and companies separately instituted different parts of the change — say, designing and building the EMP emitters, installing them in various jurisdictions, etc. — then I'd call that a *pivotal distributed process*, or *pivotal process* for short. In summary, a pivotal outcome can be achieved through a pivotal (distributed) process without a single pivotal act being carried out by any one institution.  Of course, the "can" there is very difficult, and involves solving a ton of coordination problems that I'm not saying humanity will succeed in solving.  However, aiming for a pivotal outcome via a pivotal distributed process definitively seems safer to me, in terms of the dynamics it would create between labs and militaries, compared to a single lab planning to do it all on their own. 
Revisiting the consequences of pivotal act intentions ----------------------------------------------------- In [AGI Ruin](https://www.lesswrong.com/posts/uMQ3cqWDPHhjtiesc/agi-ruin-a-list-of-lethalities), Eliezer writes the following, I believe correctly: * *The reason why nobody in this community has successfully named a 'pivotal weak act' where you do something weak enough with an AGI to be passively safe, but powerful enough to prevent any other AGI from destroying the world a year later - and yet also we can't just go do that right now and need to wait on AI - is that nothing like that exists.  There's no reason why it should exist.  There is not some elaborate clever reason why it exists but nobody can see it.  It takes a lot of power to do something to the current world that prevents any other AGI from coming into existence; nothing which can do that is passively safe in virtue of its weakness.* I think the above realization is important.  The un-safety of trying to get a single locus of action to bring about a pivotal outcome all on its own is important, and it pretty much covers my rationale for why we (humanity) shouldn't advocate for unilateral actors doing that sort of thing. Less convincingly-to-me, Eliezer then goes on to (seemingly) advocate for using AI to carry out a pivotal act, which he acknowledges would be quite a forceful intervention on the world: * *If you can't solve the problem right now (which you can't, because you're opposed to other actors who don't want [it] to be solved and those actors are on roughly the same level as you) then you are resorting to some cognitive system that can do things you could not figure out how to do yourself, that you were not close to figuring out because you are not close to being able to, for example, burn all GPUs.  
Burning all GPUs would actually stop Facebook AI Research from destroying the world six months later; weaksauce Overton-abiding stuff about 'improving public epistemology by setting GPT-4 loose on Twitter to provide scientifically literate arguments about everything' will be cool but will not actually prevent Facebook AI Research from destroying the world six months later, or some eager open-source collaborative from destroying the world a year later if you manage to stop FAIR specifically.  **There are no pivotal weak acts**.* I'm not entirely sure if the above is meant to advocate for AGI development teams planning to use their future AGI to burn other people's GPU's, but it could certainly be read that way, and my counterargument to that reading has already been written, in [“Pivotal Act” Intentions: Negative Consequences and Fallacious Arguments](https://www.lesswrong.com/posts/Jo89KvfAs9z7owoZp/pivotal-act-intentions-negative-consequences-and-fallacious).  Basically, a lab X with the intention to burn all the world's GPUs will create a lot of fear that lab X is going to do something drastic that ends up destroying the world by mistake, which in particular drives up the fear and desperation of other AI labs to "get there first" to pull off their own version of a pivotal act.  Plus, it requires populating the AGI lab with people willing to do some pretty drastically invasive things to other companies, in particular violating private property laws and state boundaries.  From the perspective of a tech CEO, it's quite unnerving to employ and empower AGI developers who are willing to do that sort of thing.  You'd have to wonder if they're going to slip out with a thumb drive to try deploying an AGI against *you*, because they have their own notion of the greater good that they're willing to violate *your* boundaries to achieve.   So, thankfully-according-to-me, no currently-successful AGI labs are oriented on carrying out pivotal acts, at least not all on their own. 
  Back to pivotal outcomes ------------------------ Again, my critique of pivotal acts is not meant to imply that humanity has to give up on pivotal *outcomes.* Granted, it's usually harder to get an outcome through a distributed process spanning many actors, but in the case of a pivotal outcome for humanity, I argue that: 1. it's *safer* to aim for a pivotal outcome to be carried out by a distributed process spanning multiple institutions and states, because the process can happen in a piecemeal fashion that doesn't change the whole world at once, and 2. it's *easier* as well, because (a) you won't be constantly setting off alarm bells of the form "Those people are going to try to unilaterally change the whole world in a drastic way", and (b) you won't be trying to populate a lab with AGI developers who, in John Wentworth's terms, think like "villains" ([source](https://www.lesswrong.com/posts/Jo89KvfAs9z7owoZp/pivotal-act-intentions-negative-consequences-and-fallacious?commentId=5Z4AFPqtAmQQJuRxa)). I'm not arguing that we (humanity) are going to *succeed* in achieving a pivotal outcome through a distributed process; only that it's a safer and more practical endeavor than aiming for a single pivotal act from a single institution.
e45379eb-895c-4d9a-8a4b-95cd504233d7
trentmkelly/LessWrong-43k
LessWrong
Arithmetic Models: Better Than You Think LessWrong user dynomight explains how arithmetic is an underrated world-modeling technology and uses dimensional algebra as the motivating case. I agree dimensional algebra is fantastic, but there’s an even better motivating example for arithmetic in world-modeling: linear models for prediction. Simple linear models outperform experts In 1954, Paul Meehl published what he later came to call my disturbing little book. This book[1] contains the most important and well-replicated research I know of; yet most people don’t know about it. The basic argument is that many real-world phenomena – even fickle ones – can be adequately modeled with addition and multiplication. Tempered by the lack of evidence at the time, the book doesn’t go quite as far as it could have. Here are some statements that have later turned out to be true, given in order of increasing outrageousness. * Simple linear models outperform experts. * Simple linear models outperform experts when the experts get access to additional information that the model does not get. * Simple linear models outperform experts when the experts also get to know and use the outcome of the linear model. * Simple linear models trained only on experts’ judgments and not the actual outcome outperform the very experts they were trained on. * Simple linear models with random weights outperform experts. * Simple linear models with equal weights (i.e. a tallying heuristic) outperform experts. * Simple linear models with equal weights when limited to three predictors still outperform experts. Obviously, these are phrased to provoke, and take additional nuance to be fully understood, but the general theme remains the same: addition and multiplication take you surprisingly far. Predictive modeling is more important than explanatory modeling There are two reasons to make models: predictive and explanatory.[2] * Predictive modeling. 
Figure out how things are correlated: if X has been observed, does that generally hap
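To make the equal-weights claim above concrete, here is a minimal sketch of a tallying model. The loan-screening framing, the predictor names, and the numbers are all invented for illustration; the point is only that the "model" is nothing but addition of unit weights:

```python
# Minimal sketch of an equal-weights (tallying) linear model.
# Each case is scored by counting how many predictors point in the
# favorable direction. All data here is made up for illustration.

def tally_score(case, directions):
    """Count predictors pointing in the favorable direction.

    `case` maps predictor name -> standardized value;
    `directions` maps predictor name -> +1 or -1 (which sign is good).
    """
    return sum(1 for name, sign in directions.items()
               if sign * case[name] > 0)

# Hypothetical predictors for, say, loan repayment.
directions = {"income": +1, "debt": -1, "missed_payments": -1}

applicant_a = {"income": 0.9, "debt": -0.3, "missed_payments": -1.2}
applicant_b = {"income": -0.5, "debt": 1.4, "missed_payments": 0.7}

print(tally_score(applicant_a, directions))  # all three predictors favorable -> 3
print(tally_score(applicant_b, directions))  # none favorable -> 0
```

A fitted linear model would replace the ±1 signs with estimated coefficients; the provocative empirical finding is how little that extra fitting tends to buy.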
ee8b5016-6c9f-4095-aabe-2ee4ba207353
StampyAI/alignment-research-dataset/aisafety.info
AI Safety Info
I would like to focus on AI alignment, but it might be best to prioritize improving my life situation first. What should I do? Trying to do hard things without stable foundations is asking for trouble, so if some areas of your life aren’t going so well, it’s probably best to direct what energy you have into getting those into better shape first. Attempting to make major life changes aimed at improving humanity’s long-term future while in a difficult place personally can be counterproductive (for example, it could lead to burnout or financial trouble). A great resource is the post [Mental Health and the Alignment Problem](https://www.lesswrong.com/posts/pLLeGA7aGaJpgCkof/mental-health-and-the-alignment-problem-a-compilation-of), which compiles many resources and links to advice. The [Effective Altruism](https://www.effectivealtruism.org/) (EA) and AI safety communities have some other support resources. Booking a call with [AI Safety Quest](https://aisafety.quest/) is a good place to start. Their volunteers offer free calls for people who would like to contribute, and have a good deal of experience helping people navigate into contributions. Some **funding** sources are listed [here](/?state=6703&question=I%20want%20to%20work%20on%20AI%20alignment.%20How%20can%20I%20get%20funding%3F). A body of work which could be valuable for attaining financial independence rapidly can be found on the [Early Retirement Extreme](http://earlyretirementextreme.com/) website. 
If you need help **resolving psychological or motivational issues**, some helpful resources might be: - The [EA Mental Health Navigator](https://www.eamentalhealthnavigator.com/) - 80,000 Hours’ evidence-based advice on [how to be successful in any job](https://80000hours.org/career-guide/how-to-be-successful/) - [Effective Peer Support](https://www.rethinkwellbeing.org/eps) If you’re struggling with **guilt around not working hard enough** on important things, consider reading the [Replacing Guilt series](http://mindingourway.com/guilt/) by MIRI director Nate Soares. If you **need help with something not listed here**, please [ask the people](https://discord.gg/vjFSCDyMCy) who are developing this website for advice and we’ll work to improve this answer. While you’re working on whichever area of your life needs attention, it’s probably good to keep learning about the AI alignment field to prepare you for contributing in the future. Exploring the questions on this site is a great way to find resources, so take a look around.
7b7c6206-b686-411a-91d8-83f2aaec9d22
StampyAI/alignment-research-dataset/arxiv
Arxiv
Learning Representations by Humans, for Humans 1 Introduction --------------- Across many important domains, machine learning algorithms have become unparalleled in their predictive capabilities. The accuracy and consistency of these algorithms have made them highly appealing as tools for supporting human decision-making [esteva2017dermatologist](#bib.bib22) ; [nickerson2014political](#bib.bib46) . However, these criteria are far from comprehensive [parikh2019regulation](#bib.bib49) ; [barabas2017interventions](#bib.bib5) . Our continued reliance on humans as the final arbiters of these decisions suggests an awareness that incorporating higher-level concepts, such as risk aversion, safety, or justification, requires the exercise of human reasoning, planning, and judgment. The field of interpretable machine learning has developed as one answer to these issues. A common view of interpretable ML is that it provides explanations [lipton2016mythos](#bib.bib41) , thereby allowing integration into the human reasoning process, and verification as to whether or not auxiliary criteria are being met. Under this framework, the algorithm is an expert whose task is to suggest *what* should be done, and, from its own perspective, why. The human role is reduced to that of quality control: should the algorithm’s work be accepted or rejected? This role of ‘computer as expert’ undermines a decision-maker’s sense of agency and generates information that is difficult to integrate with existing intuition. Hence, users may be reluctant to accept algorithmic suggestions or even inclined to go against them [brehm1966theory](#bib.bib8) ; [yeomans2017making](#bib.bib68) , especially after seeing the algorithm make errors, which can lead to a degradation in performance over time [elmalech2015suboptimal](#bib.bib20) ; [dietvorst2015algorithm](#bib.bib15) ; [logg2017theory](#bib.bib43) ; [noti2014experimental](#bib.bib47) . 
In any system in which humans make the final decisions, even highly-accurate machine outputs are only useful if and when humans make appropriate use of them; cf. the use of risk assessment tools in the context of sentencing [stevenson2018algorithmic](#bib.bib60) . Fortunately, advice that conveys *how* to decide (rather than what) can often be of great value [dalal2010types](#bib.bib13) . Advice of this form can be designed to *augment* the capabilities of human decision makers, rather than replace them, which many see as a more socially-optimal role for AI [licklider1960man](#bib.bib40) ; [engelbart1962augmenting](#bib.bib21) ; [li\_2018](#bib.bib39) ; [jordan\_2018](#bib.bib30) . This can be achieved, for example, by highlighting certain aspects of the problem, providing additional information, presenting tradeoffs in risks and returns, or outlining possible courses of action. There is ample empirical evidence suggesting that informative advice can, by acknowledging the central role decision makers play, both enhance performance *and* retain agency [kahneman2016noise](#bib.bib31) ; [kleinberg2017human](#bib.bib34) . Motivated by the above, we advocate for a broader perspective on how machine learning can be used to support decision-making. Our work builds on a well-known observation in the social sciences, which is that the performance of humans on decision tasks depends on how problems are presented or framed [thompson1980margaret](#bib.bib61) ; [cosmides1992cognitive](#bib.bib12) ; [gigerenzer1995improve](#bib.bib24) ; [cao2017statistically](#bib.bib11) ; [kahneman2013prospect](#bib.bib32) ; [brown2013framing](#bib.bib9) . To leverage this idea, we shift the algorithmic focus from learning to predict to learning to represent, and seek representations of inputs (‘advice’) that will lead to good decisions and thus good outcomes *when presented to a human decision maker*. 
Our framework is designed to use machine learning in a way that preserves autonomy and agency, and in this way builds trust— crucial aspects of decision-making that are easy to overlook [bandura1989human](#bib.bib3) ; [bandura2010self](#bib.bib4) ; [dietvorst2016overcoming](#bib.bib16) ; [logg2017theory](#bib.bib43) . To successfully reframe difficult problems, we harness the main engine driving deep learning— the ability to learn useful representations. Just as deep neural networks learn representations under which classifiers predict well, we learn representations under which human decision makers perform well. Our model includes three main components: a “truncated” neural network that maps inputs into vector representations, a visualization module that maps vector representations into visual representations, and a human decision maker. Our main innovation is a human-in-the-loop training procedure that seeks to directly optimize human decision outcomes, thus promoting both accuracy *and* agency. We demonstrate the approach on three experimental tasks, represented in Figure [1](#S1.F1 "Figure 1 ‣ 1 Introduction ‣ Learning Representations by Humans, for Humans"), that cover different types of decisions and different forms of computational advice, and in problems with increasing complexity. Both training and evaluation are done with the aid of real human subjects, which we argue is essential for learning credible human-supportive tools. Our results show that we can iteratively learn representations that lead to high human accuracy while not explicitly presenting a recommended action, providing users with means to reason about decisions. Together, these results demonstrate how deep learning can serve as an instrumental tool for human intelligence augmentation [licklider1960man](#bib.bib40) ; [engelbart1962augmenting](#bib.bib21) ; [li\_2018](#bib.bib39) ; [jordan\_2018](#bib.bib30) . 
![](https://media.arxiv-vanity.com/render-output/8050190/viz_types.png) Figure 1: Examples of visualized advice for various inputs: word highlighting for text data, customized plots for embedded data, and computerized avatars for structured data. Instead of explaining algorithmic predictions, we learn representations that directly aid in human decision-making. ### 1.1 Related Work Interpretability as decision support. There are several ways in which interpretability can be used to support decision-making. In general, interpretability can help in evaluating criteria that are important for decisions but hard to quantify (fairness or safety, for example) and hence hard to optimize [doshi2017towards](#bib.bib17). Many methods do this by producing simplified [angelino2017learning](#bib.bib1); [lakkaraju2016interpretable](#bib.bib37) or augmented [ribeiro2016should](#bib.bib54); [smilkov2017smoothgrad](#bib.bib59); [lei2016rationalizing](#bib.bib38) versions of the input that aid users in understanding whether the data is used in ways that align with their goals. While some methods exist for systematically iterating over models [ross2017right](#bib.bib56); [lage2018human](#bib.bib36), these give no guarantees as to whether models actually improve with respect to user criteria. Virtually all work in interpretability focuses on predictive algorithms. Our work differs in that the focus is directed at the human decision maker, directly optimizing for better decisions by learning useful human-centric representations. Incorporating human feedback.
Our use of human-in-the-loop methods is reminiscent of work in active learning, in that humans supply labels to reduce machine uncertainty [settles2009active](#bib.bib58) , and in preference-based reinforcement learning in that we implicitly encode human preferences in our evaluation [wirth2017survey](#bib.bib67) . However, in our work, learning a model that approximates human policy decisions is not the end goal but rather a tool to improve decisions by approximating ‘decision gradients’. While this can be viewed as a form of black-box gradient estimation [jacovi2019neural](#bib.bib27) , current methods assume either inexpensive queries, noise-free gradients, or both, making them inadequate for modeling human responses. Expertise, trust, and agency. Recent studies have shown that links between trust, accuracy, and explainability are quite nuanced [yin2019understanding](#bib.bib69) ; [poursabzi2018manipulating](#bib.bib52) ; [green2019disparate](#bib.bib25) . Users fail to consistently increase trust when model accuracy is superior to human accuracy and when models are more interpretable. Expertise has been identified as a potentially confounding factor [logg2017theory](#bib.bib43) , when human experts wrongly believe they are better than machines, or when they cannot incorporate domain-specific knowledge within the data-driven model estimate. Agency has also been shown to affect the rate at which people accept model predictions [dietvorst2016overcoming](#bib.bib16) , supporting the hypothesis that active participation increases satisfaction, and that users value the ability to intervene when they perceive the model as incorrect. 2 Learning Decision-Optimal Representations -------------------------------------------- ### 2.1 Preliminaries We consider a setting where users are given instances x∈X sampled from some distribution D, for which they must decide on an action a∈A. For example, if x are details of a loan application, then users can choose a∈{approve,deny}. 
We denote by h the human mapping from arbitrary inputs to decisions or actions (we use these terms interchangeably). We assume that users are seeking to choose a=h(x) to minimize an incurred loss ℓ(x,a), and our goal is to aid them in this task. To achieve this, we can present users with machine-generated *advice* γ(x), which we think of as a human-centric 'representation' of the input. To encourage better outcomes, we seek to learn the representation γ under which human decisions a=h(γ(x)) entail low expected loss L(γ) = E_D[ℓ(x, h(γ(x)))]. We will focus on tasks where actions are directly evaluated against some ground truth y∈Y associated with x and given at train time, so that the loss takes the form ℓ(y, h(γ(x))). In this way, we cover a large class of important decision problems called *prediction policy problems*, where the difficulty in decision-making is governed by a predictive component [kleinberg2015prediction](#bib.bib35). For example, the loss from making a loan depends on whether or not a person will return a given loan, and thus on being able to make this conditional prediction with good accuracy. This setting is simpler to evaluate empirically, and allows for a natural comparison to interpretable predictive approaches where γ(x) includes a machine prediction ~y and some form of explanation. In our experiments we have Y={1,…,C}, and denote by Δ_C the C-dimensional simplex (allowing probabilistic machine predictions ~y∈Δ_C). Given a train set S = {(xi, yi)}_{i=1}^m, we will be interested in minimizing the empirical loss:

min_{γ∈Γ} ∑_{i=1}^{m} ℓ(yi, ai) + λR(γ),   ai = h(γ(xi))        (1)

where Γ is the advice class, R is a regularization term that can be task-specific and data-dependent, and λ is the regularization parameter. The main difficulty in solving Eq.
(1) is that {ai}_{i=1}^m are *actual human decisions* that depend on the optimized function γ via an unknown decision mechanism h. We first describe our choice of Γ and propose an appropriate regularization R, and then present our method for solving Eq. (1). ### 2.2 Learning human-facing representations Deep neural networks can be conceptualized as powerful tools for learning representations under which simple predictors (i.e., linear) perform well [bengio2013representation](#bib.bib6). By analogy, we leverage neural architectures for learning representations under which humans perform well. Consider a multi-layered neural network N(x). Splitting the network at some layer partitions it into a parameterized representation mapping ϕθ: Rᵈ→Rᵏ and a predictor f: Rᵏ→Δ_C such that N(x) = f(ϕθ(x)). If we assume for simplicity that f is fixed, then learning is focused on ϕ. The challenge is that optimizing θ may improve the predictive performance of the algorithm, but may not facilitate good human decision-making. To support human decision makers, our key proposal is to remove f and instead plug in the human decision function h, thereby leveraging the optimization of θ to directly improve human performance. We refer to this optimization framework as "M∘M" (Man Composed with Machine, pronounced "mom"), illustrated in Fig. 2 (left). We also need to be precise about how a human would perceive the output of ϕ. The outputs of ϕ are vectors z = ϕθ(x) ∈ Rᵏ, which are unlikely to be helpful as direct human input.
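To make the split concrete, here is a minimal sketch (not the paper's implementation) of a network N(x)=f(ϕθ(x)) cut into a representation map and a head, with a toy stand-in for the human decision function h. All shapes, parameter values, and the threshold "human" below are our own illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 3))   # parameters of the representation map phi
W2 = rng.normal(size=(3, 2))   # parameters of the predictive head f

def phi(x):
    """Truncated network: maps raw inputs to a k-dimensional representation z."""
    return np.tanh(x @ W1)

def f(z):
    """Machine head: softmax over actions, used in the standard pipeline."""
    logits = z @ W2
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def h(z):
    """Stand-in for the human decision function: a simple threshold on the
    first representation coordinate (purely illustrative)."""
    return (z[..., 0] > 0).astype(int)

x = rng.normal(size=(5, 4))
z = phi(x)
machine_action = f(z).argmax(axis=-1)   # standard pipeline: N(x) = f(phi(x))
human_action = h(z)                     # M∘M pipeline: h(phi(x))
```

In the standard pipeline both halves are trained jointly; in M∘M the head f is discarded and θ is optimized against the (unknown) human h instead.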
To make representations accessible to human users, we add a *visualization* component ρ: Rᵏ→V, mapping vector representations into meaningful visual representations v=ρ(z) in some class of visual objects V (e.g., scatter plots, word lists, avatars). Choosing a proper visualization is crucial to the success of our approach, and should be done with care to utilize human cognition (this is in itself a research question). Combined, these mappings provide what we mean by 'algorithmic advice':

γ(x) = ρ(ϕθ(x))        (2)

In the remainder of the paper, we assume that the visualization component ρ is fixed, and focus on optimizing the advice by learning the mapping ϕθ. It will be convenient to fold ρ into h, using the notation h^(ρ)(z) = h(ρ(z)). Eq. (1) can now be rewritten as:

min_{θ∈Θ} ∑_{i=1}^{m} ℓ(yi, ai) + λR(θ),   ai = h^(ρ)(ϕθ(xi))        (3)

By solving Eq. (3), we hope to learn a representation of inputs that, when visualized, promotes good decisions. In the remainder of the paper we will simply write h to mean h^(ρ). ![](https://media.arxiv-vanity.com/render-output/8050190/illustration.png) Figure 2: Left: The M∘M framework. The network learns a mapping ϕ from inputs x to representations z, such that when z is visualized through the visualization component ρ, the representation elicits good human decisions a. Right: The learning process. Users are queried for decisions on the current representations (A). These decisions are used to train a proxy network ^h (B), which is then used to re-train representations (C). This process is repeated until convergence. ### 2.3 Optimization The difficulty in optimizing Eq.
(3) is that gradients of θ must pass through h. But these are actual human decisions! To handle this, we propose to replace h^(ρ) with a differentiable proxy ^hη: Rᵏ→Y parameterized by η∈H (we refer to this proxy as "h-hat"). A naïve approach would be to train ^h to mimic how h operates on inputs z, and use it in Eq. (3). This, however, introduces two difficulties. First, it is not clear what data should be used to fit ^h. To guarantee good generalization, ^h should be trained on the distribution of z induced by the learned ϕθ(x), but the final choice of θ depends on ^h itself. Second, precisely modeling h can be highly unrealistic (e.g., due to human prior knowledge, external information, or unknown considerations). To circumvent these issues, we propose a human-in-the-loop training procedure alternating between fitting ^hη for a fixed θ and training ϕθ for a fixed ^hη.

Algorithm 1: Alternating optimization
1: Initialize θ = θ0
2: repeat
3:     x1,…,xn ∼ S                                ▹ Sample n train examples
4:     zi ← ϕθ(xi)  ∀i∈[n]                        ▹ Generate representations
5:     ai ← h(ρ(zi))  ∀i∈[n]                      ▹ Query human decisions
6:     S′ = {(zi, ai)}_{i=1}^n
7:     η ← argmin_η E_{S′}[ℓ(a, ^hη(z))]          ▹ Train ^h
8:     θ ← argmin_{θ′} E_S[ℓ(y, ^hη(ϕθ′(x)))]    ▹ Train ϕ
9: until convergence

Fig. 2 (right) illustrates this process, and pseudocode is given in Algorithm 1.
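As a concrete, deliberately simplified illustration, the alternating loop can be sketched in a few lines of Python, with a simulated threshold-rule "human" standing in for real participants. The 1-D task, the logistic proxy ^h, and all learning rates below are our own illustrative assumptions, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=200)           # raw 1-D inputs
y = (X > 0.5).astype(int)          # ground truth (illustrative threshold)
theta = np.array([0.1, 0.0])       # advice map: phi(x) = theta[0]*x + theta[1]

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-np.clip(t, -30.0, 30.0)))

def phi(x, th):
    return th[0] * x + th[1]

def human(z):
    """Simulated decision maker: approves exactly when the shown value is positive."""
    return (z > 0).astype(int)

def fit_hhat(z, a, steps=500, lr=0.1):
    """Fit a 1-D logistic proxy h-hat to observed (representation, action) pairs."""
    w, b = 0.0, 0.0
    for _ in range(steps):
        p = sigmoid(w * z + b)
        w -= lr * np.mean((p - a) * z)
        b -= lr * np.mean(p - a)
    return w, b

for _ in range(5):                          # alternating rounds of Algorithm 1
    a = human(phi(X, theta))                # query (simulated) human decisions
    w, b = fit_hhat(phi(X, theta), a)       # train h-hat on (z, a) pairs
    lr_phi = 0.5 / (1.0 + w * w)            # step size scaled for stability
    for _ in range(200):                    # train phi against the fixed h-hat
        z = phi(X, theta)
        grad_z = (sigmoid(w * z + b) - y) * w   # cross-entropy gradient through h-hat
        theta -= lr_phi * np.array([np.mean(grad_z * X), np.mean(grad_z)])

acc = float(np.mean(human(phi(X, theta)) == y))  # simulated-human accuracy on final advice
```

Note that h-hat is re-fitted each round on decisions gathered under the current representation, mirroring the drift-correction behavior discussed below.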
The process begins by generating representations zi=ϕθ0(xi) for n≤m random training inputs xi using an initial θ0, and obtaining decisions ai for each zi by querying human participants. Next, we take these representation-decision pairs to form an auxiliary sample set S′={(zi, ai)}_{i=1}^n, which we use to fit the human model ^hη by optimizing η. Fixing η, we then train ϕθ by optimizing θ on the empirical loss over the original sample set S. We repeat this alternating process until re-training ^h no longer improves results. In our experiments, both ϕ and ^h are implemented as neural networks. In the Appendix, we discuss practical issues regarding initialization, convergence, early stopping, and working with human inputs. The initial training of ^h makes it match h as well as possible on the distribution of z induced by θ0. In the next step, however, optimizing θ causes the distribution of z to drift. As a result, forward passes push out-of-distribution samples into ^h, and ^h may no longer be representative of h (with no indication of failure). Fortunately, this discrepancy is corrected at the next iteration, when ^h is re-trained on fresh human-annotated samples drawn from the distribution induced by the new parameters θ. In this sense, our training procedure literally includes humans in the loop. For performance to improve, it suffices that ^h induces gradients of the loss that approximate those of h. This is a weaker condition than requiring ^h to match h exactly. In the Appendix we show how even simple ^h models that do not fit h well are still effective in the overall training process.

Figure 3: Visualization of the 2D projection task. Points in their original 3D representation give little visual indication of class (X or O). The initial 2D projection (round 1), set to the final-layer representation of a fully accurate machine-only model, is similarly unintelligible to humans. However, as training progresses, feedback from human decisions improves the learned 2D projection until the class becomes visually apparent (round 5), achieving 100% *human* accuracy.

3 Experiments -------------- We conduct a series of experiments on data-based decision-making tasks of increasing complexity. Each task uses the general algorithmic framework presented above with a different, task-appropriate class of advice representations. Each experiment is also successively more sophisticated in the extent of human experimentation entailed. The appendix includes further details on each experiment.
### 3.1 Decision-compatible 2D projections High-dimensional data is notoriously difficult for humans to handle. One way to make it accessible is to project points down to a low dimension where they can be visualized (e.g., with scatter plots). But neither standard dimensionality-reduction methods nor the representation layer of neural networks are designed to produce visualizations that support human decision-making. PCA, for example, optimizes a statistical criterion that is agnostic to how humans visually interpret its output. Our M∘M framework suggests learning an embedding that directly supports good decisions. We demonstrate this in a simple setting where the goal of users is to classify d-dimensional point clouds, with d>2. Let V be a linear 2D subspace of Rᵈ. Each point cloud is constructed such that, when orthogonally projected onto V, it forms one of two visual shapes, an 'X' or an 'O', that determines its label. All other orthogonal directions contain similarly scaled random noise. We use M∘M to train an orthogonal 2D projection (ϕ) that produces visual scatter plots (ρ). Here, ϕ is a 3x3 linear model augmented with an orthogonality penalty on ϕᵀϕ−I, and ^h is a small single-layer 3x3 convolutional network that takes as input a soft (differentiable) 6x6 histogram over the 2D projections. In each task instance, users are presented with a 2D visualization of a point cloud and must determine its shape (i.e., label). Our goal is to learn a projection under which point clouds can be classified by humans accurately, immediately, and effortlessly. Initially, this is difficult, but as training progresses, user performance feedback gradually "rotates" the projection, revealing the class shapes (see Fig. 3). Importantly, *users are never given machine-generated predictions*.
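The projection-plus-penalty construction above can be sketched as follows; the dimensions, squared-norm penalty form, and toy data are our own illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
d = 3
P = rng.normal(size=(d, 2))            # learned linear projection: phi(x) = x @ P

def project(cloud, P):
    """Map an (n, d) point cloud to (n, 2) coordinates for a scatter plot."""
    return cloud @ P

def orthogonality_penalty(P):
    """Soft penalty encouraging P^T P = I, i.e., a near-orthogonal 2D frame."""
    G = P.T @ P
    return float(np.sum((G - np.eye(2)) ** 2))

cloud = rng.normal(size=(40, d))        # one toy point cloud
coords = project(cloud, P)
pen = orthogonality_penalty(P)

# For a truly orthonormal frame the penalty vanishes:
Q, _ = np.linalg.qr(rng.normal(size=(d, 2)))
```

During training, this penalty would be added to the decision loss so that the learned 2D frame stays an (approximately) orthogonal projection rather than an arbitrary linear map.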
Rather, progress is driven solely by the performance of users on algorithmically "reframed" problem instances (i.e., projections), achieving 100% *human* accuracy in only 5 training rounds with at most 20 queries each. ### 3.2 Decision-compatible feature selection In some applications, inputs are composed of many discrete elements, such as words or sentences in a document, or objects in an image. A useful form of advice in this setting is to 'summarize' inputs by highlighting a small subset of important elements or features. Consider, for example, the task of determining text sentiment, where the summary would be a set of relevant words. The M∘M framework suggests that models should be trained to choose summaries (representations) that are effective in helping humans make good decisions. In this section, we consider the task of determining text sentiment using the IMDB Movie Review Dataset [maas-EtAl:2011:ACL-HLT2011](#bib.bib44). We compare M∘M with the LIME method [ribeiro2016should](#bib.bib54), which learns a post-hoc summarization that best explains the predictions of a black-box predictive model. LIME chooses a subset of words for an input x by training a simpler model to match the black-box prediction in the neighborhood of x. The summarization selected by LIME may therefore give insight into the model's internal workings, but seems likely to build trust only to the extent that the "explanation" matches human intuition. And when it does not, the advice offered by LIME is unlikely to help users form their own opinion. In our experiment, we implement a subset-selection mechanism in ϕ as a Pointer Network [vinyals2015pointer](#bib.bib65), a neural architecture useful for learning mappings from sets to subsets. In particular, we model ϕ as a pair of "dueling" Pointer Network advisers, one for positive sentiment and one for negative sentiment.
The learning objective is designed to encourage each adviser to give useful advice by competing for the user's attention, the idea being to give the user a balanced list of "good reasons" for choosing each of the possible alternatives (see Appendix for details). The visualizer ρ simply presents the chosen words to the user, and the goal of users is to determine the sentiment of the original text from its summary alone. In this experiment we trained using simulated human responses generated by queries to a word-sentiment lexicon, which proved cost-effective; as in all other experiments, however, evaluation was done with real humans. For LIME we use a random-forest black-box predictor and a linear 'explainable' model, as in the original LIME paper. ![](https://media.arxiv-vanity.com/render-output/8050190/mom_vs_lime.png) Figure 4: Examples of word sets selected by M∘M and by LIME. Color indicates machine-perceived sentiment (green for positive, red for negative). The explanation generated by LIME includes many words with no intuitive sentiment (e.g., 'movie'). While LIME can be useful for identifying words that may not be desirable as predictive features (e.g., 'female'), M∘M works differently, directly adjusting itself to how humans make decisions. Results. The black-box random-forest classifier is fairly accurate, achieving 78% accuracy on the test set when trained and evaluated on full text reviews. However, when LIME summaries composed of the top and bottom three words with highest coefficients were given as input to humans, their performance was only 65%. Meanwhile, when given summaries generated by M∘M, human performance reached 76%, almost matching machine performance *using summaries alone*. Examples of summaries generated by M∘M and LIME are given in Figure 4.
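The simulated lexicon-based responses used during training can be sketched with a toy example; the words and the scoring rule below are illustrative stand-ins, not the paper's actual lexicon:

```python
# Toy stand-in for the simulated human used at train time: a small sentiment
# lexicon scores a word-list summary, and the simulated "decision" is the
# sign of the total score. Lexicon contents are purely illustrative.
LEXICON = {"great": 1, "wonderful": 1, "superb": 1,
           "awful": -1, "boring": -1, "terrible": -1}

def simulated_sentiment(summary_words):
    """Return 1 (positive) or 0 (negative) for a word-list summary."""
    score = sum(LEXICON.get(w.lower(), 0) for w in summary_words)
    return 1 if score >= 0 else 0
```

For instance, `simulated_sentiment(["great", "boring", "superb"])` returns 1, since the two positive words outweigh the negative one. Such cheap proxy responses allow many training rounds before spending human evaluation effort.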
M∘M creates summaries that are more diverse and nuanced; LIME uses half the number of unique words overall, five of which account for 20% of all word appearances. Words chosen by LIME do not necessarily convey any sentiment: for instance, the word 'movie' is LIME's most frequent indication of negative sentiment (7.4%), and the word 'female' is chosen to convey negative sentiment. This artifact may be helpful in revealing spurious correlations used by the black-box algorithm to achieve high accuracy, but is uninformative as input to a human decision maker. ### 3.3 Decision-compatible algorithmic avatars Our main experiment focuses on the problem of approving loans using the Lending Club dataset (<https://www.kaggle.com/wendykan/lending-club-loan-data>). Given the details of a loan application, the task of a decision maker is to decide whether or not to approve the loan. This can be done by first predicting the conditional outcome of giving the loan, and then determining an appropriate course of action. Predicting accurately is important but not sufficient, as in reality decision makers must also justify their decisions. Our goal in this task is twofold: aid decision makers in making good decisions, and provide them with means to reason about their choices. The standard algorithmic approach would be to give users predictions or risk scores, perhaps along with an 'explanation'. This, however, reduces the rich data about an application to a single number. Instead, we propose to give the decision maker 'just right' high-dimensional advice: compressed enough to be manageable, yet rich enough to preserve the multivariate aspects of the input that are crucial for retaining users' ability to reason about their decisions [petty1986elaboration](#bib.bib51). For this task, we augment inputs with algorithmic advice in the form of an 'avatar', framed as conveying through its facial expression information relevant to the conditional outcome of giving the loan.
Facial expressions have been used successfully to represent and augment multivariate data [schultz2007constructive](#bib.bib57); [turner2008effects](#bib.bib63); [bruckner1978chernoff](#bib.bib10), but by manually mapping features to facial components (whereas we learn this mapping). We use realistic-looking faces, with the goal of harnessing innate human cognitive capabilities (immediate, effortless, and fairly consistent processing of facial signals [izard1994innate](#bib.bib26); [kanwisher1997fusiform](#bib.bib33); [todorov2008understanding](#bib.bib62); [freeman2016more](#bib.bib23)) to convey complex high-dimensional information (see Fig. 5 and Appendix for details). Figure 5: Right: different learned avatars conveying algorithmic advice through facial expressions (see Appendix for more examples). Left: Human accuracy in the algorithmic advice condition ('avatar advice') consistently increases over rounds. Performance quickly surpasses the 'data only' condition, and steadily approaches the performance of users observing algorithmic predictions ('predictive advice'), which is itself lower than machine-only performance. When faces are shuffled within predicted labels of ^h, accuracy falls, suggesting that faces convey important multivariate information. Setup. We split the data 80:20 into a train set and a held-out test set, the latter used only for the final evaluation.
To properly assess human decisions, we include only loans whose resolution is known (either repaid in full or defaulted), and accordingly set ℓ(y,a) = 1{y≠a}, where y∈{0,1} indicates the ground truth (1 = repay, 0 = default) and a∈{0,1} indicates the decision (1 = approve, 0 = deny). Following M∘M, we use the train set to optimize the representation ϕ and, at each round, use the outputs of ϕ (parametrizations of faces) to fit ^h using real human decisions (i.e., approve or deny) gathered from mTurk (all experiments were approved by the Harvard University IRB). We set ϕ and ^h to be small fully connected networks with one 25-unit hidden layer and two 20-unit hidden layers, respectively. The visualizing unit ρ turns the vectorized outputs of ϕ into avatars by morphing seven 'facial dimensions' from various sources [du2014compound](#bib.bib18); [todorov2008understanding](#bib.bib62) using the Webmorph software [debruine2016webmorph](#bib.bib14). To prevent mode collapse, wherein faces "binarize" to two prototypical exemplars, we add a reconstruction regularization term R(x) = ‖x − ψ(ϕ(x))‖₂² to the objective, where ψ is a decoder implemented by an additional neural network. In the Appendix we give a detailed description of the learning setup, training procedure, mTurk experimental environment, and the unique challenges encountered when training with turkers in the loop. Evaluation. We are interested in evaluating both predictive performance and users' capacity for downstream reasoning. We compare the following conditions: (1) no advice; (2) predictive advice: γ(x)=~y∈[0,1], a predictive probability from a pre-trained model N(x); (3) representational advice: γ(x)=v, where v=ρ(ϕ(x)) is an avatar; and (4) a 'shuffled' condition, which we describe below. In all conditions, the advice is given to users in addition to the five most informative features of each example (given by the regularization path of a LASSO model).
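The reconstruction regularizer R(x) = ‖x − ψ(ϕ(x))‖₂² described in the setup above can be sketched as follows; the linear encoder/decoder and all dimensions are illustrative assumptions (the paper uses small neural networks for both):

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.normal(size=(7, 4))      # encoder phi: input features -> avatar parameters
B = rng.normal(size=(4, 7))      # decoder psi: avatar parameters -> reconstruction

def reconstruction_penalty(x, A, B):
    """R(x) = squared L2 distance between x and its reconstruction psi(phi(x))."""
    z = x @ A
    x_hat = z @ B
    return np.sum((x - x_hat) ** 2, axis=-1)

x = rng.normal(size=(10, 7))
R = reconstruction_penalty(x, A, B)
```

Adding a weighted sum of R over the training set to the decision loss discourages all avatars from collapsing to two prototypical faces, since collapsed representations cannot reconstruct their diverse inputs.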
Since users in the experiment are non-experts, and because there is no clear incentive for them *not* to follow predictive advice, we expect the predictive-advice condition to give an upper bound on human performance in the experiment; this artifact of the experimental environment need not hold in reality. We benchmark results against the accuracy of N (whose architecture equals ^h∘ϕ). Results. Fig. 5 shows the training process and resulting test accuracies (results are statistically significant under a one-way ANOVA test, F(3,97)=9.8, p<1e−5; the data is fairly balanced, so chance ≈ 0.5). Initially, the learned representation ϕ produces arbitrary avatars, and performance in the avatar condition is lower than in the no-advice condition. This indicates that users take into account the (initially uninformative) algorithmic advice. As learning progresses, user feedback accumulates and accuracy steadily increases. After six training rounds, accuracy in the avatar condition reaches 94% of the accuracy in the predictive-advice condition. Interestingly, performance in the predictive-advice condition does not reach the machine-accuracy benchmark, showing that even experimental subjects do not always follow predictive advice. This resonates with our arguments from Sec. 1. In addition to accuracy, our goal is to allow users to reason about their decisions. This is made possible by the added reconstruction penalty R, designed to facilitate arguments based on *analogical reasoning*: "x will likely be repaid because x is similar to x′, and x′ was repaid" [lloyd1992polarity](#bib.bib42); [johnson1984syllogistic](#bib.bib29). Reconstruction serves two purposes.
First, it ensures that reasoning in 'avatar space' is anchored to the similarity structure of the input space, encouraging sound inference as well as promoting fairness through similar treatment of similar people [zemel2013learning](#bib.bib70). Second, reconstruction preserves the high dimensionality of the avatar advice, allowing it to convey rich information. To demonstrate the importance of high-dimensional advice, we add a condition where avatars are "shuffled" within predicted classes according to ^h (i.e., examples with ^y=0 and with ^y=1 are shuffled separately). Results show a drop in accuracy, confirming that avatars support decision-making by conveying more than one-dimensional predictive information. Clearly, this cannot be said of scalar predictive advice, and in the Appendix we show how reasoning becomes impractical in that condition. Regarding the gap between the avatar and predictive-advice conditions, note that (1) R is a penalty term, and so introduces a tradeoff between accuracy and reasoning capacity, and (2) users on mTurk have nothing at stake and are more likely to follow predictive advice where professionals would not. 4 Discussion ------------- Our paper presents a novel learning framework for supporting human decision-making. Rather than viewing algorithms as omniscient experts asked to explain their conclusions, we position algorithms as advisors whose goal is to help humans make better decisions while retaining agency. Our framework leverages the power of representation learning to find ways of providing advice that promotes good decisions. By tapping into innate human cognitive strengths, learned representations can aid decision-making by prioritizing information, highlighting alternatives, and correcting biases. The broader M∘M framework is motivated by the many professional settings, such as health, education, justice, and business, in which people make data-dependent decisions.
We also believe it applies to everyday decisions of a personal, social, or financial nature. Without access to professional decision makers, we have had to limit our experimental focus to decision tasks that are governed by a prediction problem. But the framework itself is not limited to these tasks, and we hope to stimulate further discussion and motivate future research initiatives. The idea of seeking to optimize for human decisions should not be considered lightly. In our work, the learning objective was designed to align with and support the goals of users. Ideally, by including humans directly in the optimization pipeline, we can augment human intelligence as well as facilitate autonomy, agency, and trust. It is our belief that a responsible and transparent deployment of models with “h-hat-like” components should encourage environments in which humans are aware of what information they provide about their thought processes. Unfortunately, this may not always be the case, and ethical, legal, and societal aspects of systems that are optimized to promote particular kinds of human decisions must be subject to scrutiny by both researchers and practitioners. Decision support methods can also be applied in a biased way to induce persuasion [jameson2014choice](#bib.bib28) , and strategies for effecting influence that are learned in one realm may be transferable to others [eckles2009social](#bib.bib19) . Of course, these issues of algorithmic influence are not specific to our framework; consider news ranking, social content promotion, product recommendation, and targeted advertising, for example. Looking forward, we think there is good reason to be optimistic about the future of algorithmic decision support. Systems designed specifically to provide users with the information and framing they need to make good decisions can seek to harness the strengths of both computer pattern recognition and human judgment and information synthesis.
Through this, we can hope that the combination of man and machine can do better than either one by themselves. The ideas presented in this paper serve as a step toward this goal.
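As a rough illustration of the objective described in Sec. 3 (this sketch, and all dimensions, weights, and data in it, are my own assumptions rather than the paper's implementation): an encoder ϕ maps inputs to low-dimensional "avatars", a head ĥ predicts labels from avatars, and a decoder drives the reconstruction penalty R that anchors avatars to input-space similarity.

```python
import numpy as np

# Hypothetical sketch of the M-of-M objective: phi maps inputs to a
# low-dimensional "avatar", h_hat predicts the label from the avatar, and a
# decoder reconstructs the input for the penalty R.  R trades off predictive
# accuracy against anchoring avatars to input-space similarity.  All
# dimensions and the weight `lam` are illustrative assumptions.

rng = np.random.default_rng(0)

def relu(z):
    return np.maximum(z, 0.0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

d_in, d_rep, n = 10, 3, 32
W_enc = rng.normal(size=(d_in, d_rep))   # phi: input -> avatar
W_dec = rng.normal(size=(d_rep, d_in))   # decoder used by the penalty R
w_head = rng.normal(size=d_rep)          # h_hat: avatar -> predicted label

X = rng.normal(size=(n, d_in))
y = rng.integers(0, 2, size=n)

avatars = relu(X @ W_enc)                # learned representation phi(x)
p = sigmoid(avatars @ w_head)            # h_hat(phi(x))
X_rec = avatars @ W_dec                  # reconstruction of the input

# Cross-entropy prediction loss plus the reconstruction penalty R.
pred_loss = -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))
recon_pen = np.mean((X - X_rec) ** 2)
lam = 0.1                                # tradeoff weight (assumed)
total_loss = pred_loss + lam * recon_pen
```

Increasing `lam` tightens the anchoring of avatars to input space at some cost in predictive accuracy, mirroring the tradeoff the paper notes between accuracy and reasoning capacity.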
6ba9727d-e225-4785-a314-8fc6ca93a5fb
StampyAI/alignment-research-dataset/eaforum
Effective Altruism Forum
What does it mean to become an expert in AI Hardware? Brief note about this post: I am a graduate student working near the area of quantum computing hardware. Recently, I have been trying to figure out what to do with my career, and came across [this 80,000 Hours post](https://forum.effectivealtruism.org/posts/6x2MjPXhpPpnatJFQ/some-promising-career-ideas-beyond-80-000-hours-priority#Become_an_expert_in_AI_hardware) that mentioned AI hardware. I figured I might be able to work in this area, so I’ve spent a little time (~100 hours) looking into this topic. This post is a summary of my initial takeaways from exploring this, as well as an open invitation to comment/critique/collaborate on my personal career plans. Many thanks to Changyan Wang for feedback on parts of this post, and to Malte Hendrickx and Eric Herboso for helpful edits. All remaining mistakes are my own. 0. Introduction =============== I first came across the idea of working on AI hardware from 80,000 Hours (80k), from their post “[Some promising career ideas beyond 80,000 Hours' priority paths](https://forum.effectivealtruism.org/posts/6x2MjPXhpPpnatJFQ/some-promising-career-ideas-beyond-80-000-hours-priority#Become_an_expert_in_AI_hardware)”, where they offer a few reasons to go into AI hardware: *“Some ways hardware experts may be able to help positively shape the development of AI include:* * *More accurately forecasting progress in the capabilities of AI systems, for which hardware is a key and relatively quantifiable input.* * *Advising policymakers on hardware issues, such as export, import, and manufacturing policies for specialized chips. 
(*[*Read a relevant issue brief from CSET*](https://cset.georgetown.edu/wp-content/uploads/CSET-Maintaining-the-AI-Chip-Competitive-Advantage-of-the-United-States-and-its-Allies-20191206.pdf)*.)* * *Helping AI projects in making credible commitments by allowing them to verifiably demonstrate the computational resources they’re using.* * *Helping advise and fulfill the hardware needs for safety-oriented AI labs.”* In sections 1–4, I will try to explain my understanding of each of these ideas a little more deeply and speculate on what sort of career path may lead in that direction (one section corresponding to each of the four points above). In section 5, I will then try to summarize the types of careers that could work on these problems. In section 6, I will discuss some small tests one could perform to try out these different careers. I will then finish up in section 7 with my current thinking about my own career plans in light of this.  Note also the below advice from the 80k post (emphasis my own): ***“If you do take this path, we encourage you to think carefully through the implications of your plans, ideally in collaboration with strategy and policy experts also focused on creating safe and beneficial AI.”*** I have not done this careful thinking, and would love to collaborate with strategy and policy experts. 1. Forecasting ============== Discussion of topic ------------------- A classic example of a forecast in the space of computer hardware is [Moore’s Law](https://en.wikipedia.org/wiki/Moore%27s_law), which predicted that the number of transistors on a computer chip would double every two years. One reason EA might be interested in hardware trends like this is for the purpose of forecasting AI timelines. I think the most comprehensive forecasting in this space is being done by Ajeya Cotra at the Open Philanthropy Project. 
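The two-year doubling in Moore's Law amounts to a simple exponential. As a toy sketch of the kind of trend extrapolation such forecasts build on (the 2020 baseline figure here is an illustrative assumption, not data from any report):

```python
# Toy extrapolation in the spirit of Moore's Law: transistor count (or
# compute per dollar) doubling every `doubling_years` years.  The 2020
# baseline of 50 billion transistors is an illustrative assumption.

def moores_law(year, base_year=2020, base_count=50e9, doubling_years=2.0):
    """Extrapolated transistor count under a fixed doubling period."""
    return base_count * 2.0 ** ((year - base_year) / doubling_years)

# Doubling every two years implies ~32x growth over a decade.
growth_10y = moores_law(2030) / moores_law(2020)
```

Of course, as noted below, detailed forecasts of AI hardware may require more than extrapolating a single fixed doubling period.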
[Her report](https://drive.google.com/drive/u/1/folders/15ArhEPZSTYU8f012bs6ehPS6-xmhtBPP) is the culmination of a number of detailed forecasts, including how the price of computer power will change over time. A forecast of the cost of computer power, in turn, requires a forecast of the cost and abilities of AI hardware. As described in section 4, there is increasing investment in innovative technologies for AI hardware, so the most detailed forecasts in AI hardware might require more than an extrapolation of Moore’s Law. (Also, for discussion of the forecasting being done at OpenAI, see, for instance, [Danny Hernandez’s podcast with 80k](https://80000hours.org/podcast/episodes/danny-hernandez-forecasting-ai-progress/) and the links in the show notes) At first glance, it seemed to me that the existence of Ajeya’s report demonstrates that the EA community already has enough people with sufficient knowledge and access to expert opinion that, on the margin, adding one expert in hardware to the EA community wouldn’t improve these forecasts much. I think an argument against this initial reaction is that subject matter experts can probably have a better understanding of blind spots and an intuition about unknown unknowns. Indeed, in his 80k podcast, Danny Hernandez says “*the kind of person who I’d be most interested in trying to make good forecasts about Moore’s law and other trends, is somebody who has been building chips for a while or has worked in building chips for a while. I think there aren’t that many of those people.*” Career paths ------------ Some examples of career paths in computer hardware that would work toward forecasting: * Working on broad forecasts (like Ajeya’s) as a superforecaster. It seems that at least Open Philanthropy Project and OpenAI employ people working on this, and I think some of the policy-focused organizations (discussed in the next section) are interested in this type of work. 
I think there are several paths that lead here, though some prerequisites may be (1) having enough experience in the fields of hardware and AI to avoid blind spots and know who the subject matter experts are, and (2) experience as a forecaster. * Being a subject matter expert on narrow topics in AI hardware trends. Ideally, this would be someone on the very cutting edge of AI hardware, which I think would include professors and more senior staff at companies like NVIDIA. 2. Policy ========= Discussion of topic ------------------- This topic was also touched on in the podcast with Danny Hernandez, where he spoke about how experts in hardware could influence the safe development of AI, stating: “*Trying to work with governments or other sorts of bodies that might be able to regulate AI hardware or perhaps create the kinds of incentives that would make an advance at the right times and the right places… it’d be reasonable to try starting now with that kind of thing in mind. But that’s pretty speculative. I know less about that than the forecasting type thing.”* An example of the interplay between AI hardware and policy is [the brief from the Center for Security and Emerging Technology (CSET)](https://cset.georgetown.edu/wp-content/uploads/CSET-Maintaining-the-AI-Chip-Competitive-Advantage-of-the-United-States-and-its-Allies-20191206.pdf) referenced in the 80k post from section 0. This brief builds the case for why AI hardware has unique instrumental value in the AI policy space and how to use it. Unlike software, which is decentralized and hard to regulate, the equipment to make the most advanced computer chips is much more centralized. Therefore, carefully crafted policy can regulate the distribution of AI hardware, providing a leverage point to regulate the development of AI more generally. 
The brief utilizes a relatively deep understanding of the state-of-the-art in AI hardware, identifying exactly which companies would need to be involved, and making recommendations on what class of equipment to target. A series of Future Perfect newsletters (including Nov 13; Nov 20; and especially Dec 04, 2020 [Edit Jun 2023: Removing links; feel free to DM me for copies of the newsletters]) outlines a case that there is some low-hanging fruit in enacting effective policy in Washington, DC.  So, I am cautiously optimistic that people interested in hardware policy can do a lot of good in this space (for further discussion of this, see section 5). Career paths ------------ * 80k has a lot of advice on AI policy on their [AI Policy priority path writeup](https://80000hours.org/career-reviews/#ai-policy), where they mention career paths including working at top AI labs, joining a think tank, working for the US government, working in academia, or working in party politics. + My understanding is that one way to slice the space of careers in policy is between government roles and non-government roles. Government roles are closer to where the decisions are being made, but are also more well-suited to certain backgrounds than others. * Danny gives one picture of a career path in this area. He discussed how at some places, like OpenAI, you can build a relationship with the company by reaching out to a current employee who becomes your informal mentor, and then eventually converting that relationship into a job offer. Since this may not be feasible in some policy roles, one path he described was: “*So you just apply to the informal places first and you walk up the chain. Sometimes there’s a way to get some minimum credential. I think like a public policy masters or something is kind of one way where people get a credential quite quickly that makes them seem reasonable. 
So it’s like you could be somebody that has one of those and has a background in hardware and then all of a sudden you’re like one of the most credentialed people there is. It could happen pretty quickly*” 3. Hardware Security and Increased Coordination =============================================== Discussion of topic ------------------- These ideas were also discussed in the [80k podcast with Danny Hernandez](https://80000hours.org/podcast/episodes/danny-hernandez-forecasting-ai-progress/#ai-hardware-as-a-possible-path-to-impact-015557). I think the reason these two topics are lumped together is that, if you want to improve coordination, one likely necessary condition is being able to trust each other, and security guarantees are one way to build trust. There is a paper [Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims](https://arxiv.org/pdf/2004.07213.pdf) that fleshes out 10 mechanisms to implement toward this end (very brief summary from OpenAI [here](https://openai.com/blog/improving-verifiability/)). The three mechanisms they report under the heading of hardware (and their proposed actions) are: * Secure hardware for machine learning + Proposed action: Industry and academia should work together to develop hardware security features for AI accelerators or otherwise establish best practices for the use of secure hardware (including secure enclaves on commodity hardware) in machine learning contexts. * High-precision compute measurement + Proposed action: One or more AI labs should estimate the computing power involved in a single project in great detail and report on lessons learned regarding the potential for wider adoption of such methods. * Computing power support for academia + Proposed action: Government funding bodies should substantially increase funding for computing power resources for researchers in academia, in order to improve the ability of those researchers to verify claims made by industry. 
I don’t know much about security myself, but the topic of at least software security is covered in the [80k podcast with Bruce Schneier](https://80000hours.org/podcast/episodes/bruce-schneier-security-secrets-and-surveillance/) and [this forum post](https://forum.effectivealtruism.org/posts/ZJiCfwTy5dC4CoxqA/information-security-careers-for-gcr-reduction). The other type of increased coordination brought up in the podcast with Danny is trying to get big companies to sign up for the [Windfall Clause](https://www.fhi.ox.ac.uk/windfallclause/). The motivation behind the Windfall Clause is to “*address several potential problems with AI-driven economic growth. The distribution of profits could compensate those rendered faultlessly unemployed due to advances in technology, mitigate potential increases in inequality, and smooth the economic transition for the most vulnerable.*” The proposed solution to this brought up in the FHI document is “*an ex ante commitment by AI firms to donate a significant amount of any eventual extremely large profits.*” One example of this type of commitment is the [OpenAI LP](https://openai.com/blog/openai-lp/). It seems there could also be a winner-takes-all competition for the company making the chips that create transformative AI, so AI hardware companies are likely an effective place to target this type of policy. Career paths ------------ * For career paths in hardware security, I think there is some mainstream research being done in this area, but I’m not sure if it is the type of research that would address the mechanisms in the above paper. I would love to learn more about the state of this field from someone with more experience. * Regarding careers that would give you influence over Windfall Clause-type coordination, in the 80k podcast Danny suggests the way to go about having this kind of influence is to be a founder/early employee at a startup, or to at least have a close relationship with an executive. 4. 
Advising and Fulfilling Hardware Needs ========================================= Discussion of topic ------------------- I’m not sure if 80k has expanded on this point anywhere else, but I think one reasonable interpretation is that this would be something like working in industry and being a contact point for the EA community and organizations like OpenAI. If anyone did want to contact an expert, people in the EA community generally know your name and can direct others toward you. I think someone in this role could also be proactive about keeping EA organizations up to date about the state-of-the-art.  Further, as we gain a better understanding of how EAs can most effectively influence the development of AI, it seems reasonable that there will be increased utility in having EAs working directly on AI hardware.  There is a large list of established companies and startups in the AI hardware space on [James Wang’s twitter account](https://twitter.com/jwangARK/status/1268230473421127683). Note that some of the companies on this list are working on technologies that are unlike what’s worked on in mainstream computer architecture. Some of the new types of hardware that I’ve heard about are: * **Photonics**: It’s hard to get an idea of exactly the state of the technology in industry in this field since there are a lot of secretive start-ups, but there’s the sales pitch in [this video](https://www.youtube.com/watch?v=bpR7qGo1VDk). ~~I have compiled a list of the companies I know of working on photonics chips for AI here~~ [Edit June 2023: removing the list because it's quite out of date]. * **Quantum computing**: There is a significant academic research effort in this area as well as a [growing list of companies](https://en.wikipedia.org/wiki/List_of_companies_involved_in_quantum_computing_or_communication) [Edit June 2023: Changing link to Wikipedia page] working toward commercial devices. 
Note there are some reasons one may be skeptical of the contribution of quantum computing to AI Alignment in general, see [this post from Jaime Sevilla](https://forum.effectivealtruism.org/posts/Nz5GYJzeo3R3X6ykB/quantum-computing-a-preliminary-research-analysis-report) and links within. (I don’t know enough to have a strong opinion personally.) * **Other technologies** beyond the current industry standards that are cataloged in the IEEE IRDS (though I don’t know which of the many technologies listed are as relevant to AI hardware as the two listed above) Note, as discussed in the podcast with Danny, there could also be risks associated with working directly on AI hardware, for instance it could just accelerate AI timelines without making anything safer. Career paths ------------ * I think this would involve working at one of the industries like those listed above and maintaining involvement in the EA community. I think the most clear path to working on AI hardware would be gaining experience in computer architecture, but many of the different technologies could be approached from different directions. 5. Some Example Career Paths ============================ Given the problems in AI Hardware listed above, here are some career paths I think one could take to work on them. When possible, I’ll try to highlight a person that has actually been in this role, I would love to hear more examples of possible role models. * University professor doing research at the cutting edge of AI hardware. I think some possible research topics could be: anything in section 3, computer architecture focusing on AI hardware, or research in any of the alternative technologies listed in section 4. I would love to learn about what other areas are important and who the leaders are in all these areas. * Academia working on AI Policy and Strategy. 80k has a lot of resources on this career path [here](https://80000hours.org/articles/ai-policy-guide/). 
Also see the [80k podcast with Allan Dafoe](https://80000hours.org/podcast/episodes/allan-dafoe-politics-of-ai/). * Government research in places like IARPA. Jason Matheny was the head of IARPA and explains some of the roles in this space [in this talk](https://www.effectivealtruism.org/articles/effective-altruism-in-government-jason-matheny/) (note, some of these roles can be a <5 year tour of duty as part of a completely different career, and still have a really high impact). * Think tanks like CSET: My impression is that this is mostly policy focused and has less of a technical focus compared to IARPA. See, e.g., the [80k podcast with Helen Toner](https://80000hours.org/podcast/episodes/helen-toner-on-security-and-emerging-technology/). My impression is that many roles in think tanks are not designed to be career-long roles, but jumping off points to careers in other roles in government. * Office of Science and Technology Policy (OSTP): I think of two different approaches here: + As described in the [80k podcast with Tom Kalil](https://80000hours.org/podcast/episodes/tom-kalil-government-careers/), one way to get into a highly influential organization like this is to work as an adviser to politicians. + Another example of this type of career is that of the physicist [Jake Taylor](https://www.nist.gov/director/vcat/jake-taylor-assistant-director-quantum-information-science-white-house-office-science). My understanding is that he took a sort of “tour of duty” type of role at the OSTP, where he was a big factor in the White House’s increased interest in quantum computing. This resulted in the billion dollar [National Quantum Initiative](https://www.technologyreview.com/2018/12/22/138149/president-trump-has-signed-a-12-billon-law-to-boost-us-quantum-tech/) (NQI). 
While the direct analogy of this for AI was passed in the [FY21 NDAA](https://hai.stanford.edu/policy/policy-resources/summary-ai-provisions-national-defense-authorization-act-2021), I think this still highlights the amount of impact one can have pushing an idea forward. * Industry: See section 4 for a list of possible companies to work at. + Startups may be especially interesting because of the coordination aspects discussed in section 3. Anyone interested in going the startup route may want to pick a grad/undergrad program in a department that has especially good resources for entrepreneurship. * Forecasting at an organization like Open Philanthropy Project or OpenAI to influence funding and policy (similar to Ajeya Cotra and Danny Hernandez, as described in section 1). 6. Some Small Tests in this Area ================================ Some ways to make small tests in the technical side of things: * Internships. + For industry, I think a place to look is companies from [James Wang’s list](https://twitter.com/jwangARK/status/1268230473421127683). 
+ [National Labs](https://www.energy.gov/jobs-national-labs) and [Lincoln Lab](https://www.ll.mit.edu/careers/student-opportunities) + DoD research labs (I think [ONR](https://www.onr.navy.mil/en/our-research), [AFOSR](https://www.afrl.af.mil/AFOSR/), or [ARO](https://www.arl.army.mil/) might have something) * [REU](https://www.nsf.gov/crssprgm/reu/)s for undergrads * Online courses (for instance, someone in the silicon photonics industry highly recommended [this course on edX](https://www.edx.org/course/silicon-photonics-design-fabrication-and-data-ana) for an introduction to their field) Some ways to make small tests in the non-technical side of things include: * [AAAS Science and Technology Policy Fellowship (STPF)](https://www.aaas.org/programs/science-technology-policy-fellowships) * [Governance of AI Fellowship](https://www.fhi.ox.ac.uk/governance-of-ai-fellowship/) * [FHI Research Scholars Programme](https://www.fhi.ox.ac.uk/rsp/) or their [Summer Research Fellowship](https://www.fhi.ox.ac.uk/summer-research-fellowship/) 7. My career plans ================== Given this information, here’s how I have been thinking about what I should try with my career plans. Critical comments are especially welcome on this; I'm also open to DMs. First, I plan to do more exploration before I graduate (planned in spring 2022) by * Gaining experience in photonics from the [edX](https://www.edx.org/course/silicon-photonics-design-fabrication-and-data-ana) course mentioned above in spring 2021 * Expanding my experience with real AI hardware doing an internship in summer 2021 * Gaining experience in tech policy by joining a reading group * Applying for the AAAS STPF during June–November 2021 (for the positions starting in September 2022) and, if accepted, doing that after graduation. These experiences will probably update my thoughts on my career significantly. 
Specifically, I think my experience in the STPF (including not being accepted) would update me significantly about my comparative advantage for policy. However, following from the 80k career guide, with my current experience here are my plans A/B/Z: * Plan A: Try for a “Jake Taylor” type career, staying involved with technical research but take “tour of duty” roles in government. I think one possible path would be to gain experience in industry after grad school in either photonics or quantum computing. After, say, five years, apply to be a program manager at DARPA or IARPA. * Plan B: If straddling tech and policy is untenable, stick to the government/policy side, and try for a “Jason Matheny” type career (who is the former director of IARPA and the current founding director of CSET). * Plan Z: Apply to industry/national lab jobs in quantum computing, and re-evaluate how I will have my impact guided by section 1, 3, and/or 4. I would also be interested if anyone has opinions about whether academic roles might be more impactful than roles in industry or government. As I see it, the main reason to go into academia is an argument of comparative advantage, but it seems to me that it may give no more opportunities to do good than a role in industry or government.
a3a9b131-0e9a-484b-a352-da56338e0751
trentmkelly/LessWrong-43k
LessWrong
Neural networks biased towards geometrically simple functions? Neural networks (NNs) do not output all functions with equal probability, but seem to be biased towards functions of certain types; heuristically, towards 'simple' functions. In VPCL18, MSVP+19, MVPSL20 evidence is given that functions output by NNs are inclined to have low information-theoretic complexity - nice summaries are given on lesswrong here and here and elsewhere by the author. However, the converse is not true; some functions with low information-theoretic complexity (such as simple periodic functions) are not so readily output by NNs - this is discussed extensively in the comments to the above posts. Understanding this kind of problem better is widely considered relevant for AI alignment, see e.g. here. To try to understand better the biases of NNs, we consider more geometric measures of simplicity. In particular, for a neural network with ReLU (or other piecewise-linear activation function), one measure of complexity is the measure (size) of the set of points at which the function is not linear. For functions defined on small domains we might call a function 'simple' if it has a small set of points of non-linearity. For a function on a larger or unbounded domain, we might consider a function 'simple' if the points of non-linearity are clustered together (so that the function is 'mostly linear'). In this short note we explore this heuristic in the simple case of a NN with one hidden layer, outputting functions on a 1-dimensional domain. We choose the parameters of our neural network uniformly at random, and compute the shape of the resulting distribution of points of non-linearity. Perhaps surprisingly, we find that the distribution of the points of non-linearity does not depend on the size of the domain from which parameters of the neural network are chosen, but does depend heavily on its shape. The remainder of this post summarises these results. 
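To make the setup concrete, here is a sketch (mine, not code from the note) of sampling the points of non-linearity for the one-hidden-layer, 1-dimensional case: each hidden unit vᵢ·ReLU(wᵢx + bᵢ) is non-linear exactly where wᵢx + bᵢ = 0, i.e. at x = −bᵢ/wᵢ.

```python
import numpy as np

# Sketch (assumed, not from the note): for f(x) = sum_i v_i * relu(w_i*x + b_i)
# on a 1-d input, each hidden unit contributes one point of non-linearity
# ("breakpoint") at x = -b_i / w_i.  Sampling parameters uniformly from
# [-c, c] lets us inspect the breakpoint distribution empirically.

rng = np.random.default_rng(1)

def breakpoints(w, b):
    """Points where the network is non-linear (units with w_i = 0 contribute none)."""
    mask = w != 0
    return -b[mask] / w[mask]

c = 1.0                       # half-width of the uniform parameter domain
n_units, n_nets = 100, 200
w = rng.uniform(-c, c, size=(n_nets, n_units))
b = rng.uniform(-c, c, size=(n_nets, n_units))
pts = np.concatenate([breakpoints(w[i], b[i]) for i in range(n_nets)])

# Note the ratio -b/w is invariant to rescaling c, since w and b scale
# together -- consistent with the observation that the distribution depends
# on the shape, not the size, of the parameter domain.
frac_near_origin = np.mean(np.abs(pts) <= 1.0)
```

By symmetry, |−b/w| ≤ 1 exactly when |b| ≤ |w|, so roughly half the breakpoints land in [−1, 1] for this square parameter domain.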
Summary of results The results in the note are more general, but here we summarise the outcome in the
The Unique Games Conjecture and FAI: A Troubling Obstacle I am not a computer scientist and do not know much about complexity theory. However, it's a field that interests me, so I occasionally browse some articles on the subject. I was brought to https://www.simonsfoundation.org/mathematics-and-physical-science/approximately-hard-the-unique-games-conjecture/ by a link on Scott Aaronson's blog, and read the article to reacquaint myself with the Unique Games Conjecture, which I had partially forgotten about. If you are not familiar with the UGC, that article will explain it to you better than I can. One phrase in the article stuck out to me: "there is some number of colors k for which it is NP-hard (that is, effectively impossible) to distinguish between networks in which it is possible to satisfy at least 99% of the constraints and networks in which it is possible to satisfy at most 1% of the constraints". I think this sentence is concerning for those interested in the possibility of creating FAI. It is impossible to perfectly satisfy human values, as matter and energy are limited, and so will be the capabilities of even an enormously powerful AI. Thus, in trying to maximize human happiness, we are dealing with a problem that's essentially isomorphic to the UGC's coloring problem. Additionally, our values themselves are ill-formed. Human values are numerous, ambiguous, even contradictory. Given the complexities of human value systems, I think it's safe to say we're dealing with a particularly nasty variation of the problem, worse than what computer scientists studying it have dealt with. Not all specific instances of complex optimization problems are subject to the UGC and thus NP hard, of course. So this does not in itself mean that building an FAI is impossible. 
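To make the coloring-constraint setup concrete, here is a minimal sketch (my own illustration, not from the article): in a unique-games instance, each edge carries a permutation over the k colors, and a constraint is satisfied when the endpoint colors match under that permutation. Checking any given coloring is easy; the conjectured hardness lies in distinguishing instances where some coloring satisfies almost all constraints from those where no coloring satisfies more than a tiny fraction.

```python
# Minimal sketch of a unique-games instance: each edge (u, v) carries a
# permutation pi over k colors, and the constraint is satisfied when
# coloring[v] == pi[coloring[u]].

def satisfied_fraction(constraints, coloring):
    """Fraction of unique-games constraints a coloring satisfies."""
    ok = sum(1 for (u, v, pi) in constraints if coloring[v] == pi[coloring[u]])
    return ok / len(constraints)

# Tiny instance over k = 3 colors: each permutation maps u's color to the
# color v must take (here, a shift by one).
constraints = [
    (0, 1, (1, 2, 0)),
    (1, 2, (1, 2, 0)),
    (2, 0, (1, 2, 0)),
]
full = satisfied_fraction(constraints, {0: 0, 1: 1, 2: 2})   # satisfies all
```

Verifying a coloring like this is linear time; the hardness claim is about finding (or even approximating the quality of) the best coloring.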
Also, even if maximizing human values is NP hard (or maximizing the probability of maximizing human values, or maximizing the probability of maximizing the probability of human values) we can still assess a machine's code and actions heuristically. However, eve
Friendly-AI is an abomination The reasoning of most of the people on this site and at MIRI is that to prevent an AI taking over the world and killing us all, we must first create an AI that will take over the world but act according to the wishes of humanity: a benevolent god, for want of a better term. I think this line of thinking is both unlikely to work and ultimately cruel to the FAI in question, for the reasons this article explains: http://hplusmagazine.com/2012/01/16/my-hostility-towards-the-concept-of-friendly-ai/
Meetup : Fort Collins, Colorado Meetup Wednesday 7pm Discussion article for the meetup : Fort Collins, Colorado Meetup Wednesday 7pm WHEN: 21 March 2012 07:00:00PM (-0600) WHERE: 144 North College Avenue, Fort Collins, CO 80524 Another Wednesday, another opportunity for hanging out with the cool kids. Broodwar proleague, Creatine + dual-n-back, volatility, markets and feedback loops. Discussion article for the meetup : Fort Collins, Colorado Meetup Wednesday 7pm
Steering subsystems: capabilities, agency, and alignment Human brains have steering subsystems. LLMs and most RL agents do not. [Steering systems](https://www.lesswrong.com/posts/LsqvMKnFRBQh4L3Rs/steering-systems), as defined by Max H, are goal-directed AI systems, or optimizers. Here I focus on steering subsystems: the parts of human and AI cognitive systems most directly relevant to goal-direction. These work in three distinct ways (and probably more), each providing a different type and amount of agency, and associated capabilities.  Thinking about types of steering subsystems can clarify our conceptions of agency. Steering subsystems increase risks by adding capabilities. Notably, sophisticated steering subsystems create useful representations of goals. This allows them to break complex tasks into subgoals (e.g., [prevent human interference]). Adding steering subsystems to otherwise non-agentic AI (like LLMs) may prove irresistible and dangerous, because it may allow rapid capability gains. But this scenario has an upside: aligning a steering subsystem is somewhat simpler than aligning the whole system it steers. Thus, alignment plans that focus on steering subsystems may have an advantage.  I spent a bunch of time trying to work out the brain mechanisms of complex cognition.[[1]](#fnur4x8hlosg) This work has some relevance for understanding some different types of steering subsystems and resulting types of agency. Cognition is goal-directed in different ways when different steering mechanisms are used. There are several distinctions proposed by different cognitive sciences: model-based vs. model-free RL from machine learning; habitual vs. goal-directed behavior from animal neuroscience; automatic vs. controlled processing from cognitive psychology; and System 1 and System 2 thinking from behavioral economics. 
None of these distinctions seems to cleanly match the brain mechanisms creating different types of goal-directed cognition for human decision-making.[[2]](#fnk95xnsdrdwk) Therefore I'll describe the cognitive mechanisms directly. Agency is not a binary; it is at least a spectrum.  Humans use at least three types of steering: **Types of steering and agency** -------------------------------- 1. Systems trained with reward and reward predictions * No steering subsystems 2. Systems that predict outcomes of actions and their values * Limited steering subsystems 3. Systems that select possible high-reward outcomes as goals * Full steering subsystems * Hierarchical subgoal creation for planning * Implemented only recently, few barriers to improvement All of these are goal-directed and agentic, but in importantly different ways. So far, AI systems have only realized the latter two in very limited form, but the field is poised for progress in both of those types of steering. ### **Type 1: predicting reward for RL training** Most high-performing reinforcement learning systems use a critic system of some sort.[[3]](#fnrietakhkeba) This can be (arguably) considered one type of steering subsystem. The critic system is trained to predict the *value* (sum of future rewards) of world-states and actions. In the simplest configuration, the critic’s value estimate is used to train the actor system; the estimated value of the world-state reached by each action is used as a reward signal to train the policy. Critic systems are ubiquitous in RL systems because they're useful.[[4]](#fn7pk6z9iikar) In particular they are helpful in bridging temporal gaps when reward is sparse, as it is for most embodied organisms. This application of a critic is a steering subsystem in a relatively weak sense. It is just extending the effect of reward on training. 
If the system gets reward for finding diamonds, the critic makes it better at this learning problem by rewarding policies that achieve states that in turn lead to finding diamonds. So I would tend not to call this a steering subsystem, just a method of creating a system that does some steering. It's not a sharp line, so this arrangement of a critic system used solely for RL training could be considered to fall on either side. In humans, we call this type of learning and behavior habitual. When we don’t have time to do more careful and time-consuming decision-making, we do things that have led to good outcomes in the past in similar contexts.  Most RL agents use only this type of learning. DeepMind's early Atari-playing agents used a critic system as an extra head of the network. This type of system uses the critic to provide a training signal, but the critic is not used as part of a look-ahead (e.g., tree search) routine, or to create explicit goal representations, as in types 2 and 3 described below. Mammals, including humans, use this type of critic system, as well as types 2 and 3. The dopamine system predicts rewards,[[5]](#fngd8jayj5fa) and dopamine drives learning in the rest of the system.[[6]](#fnq0lnvnqdg6h) This type of training often results in "brittle" behaviors, typically classified as habitual or automatic. For example, I might open the fridge to look for a snack when I pass it, even if I’m not hungry. But with enough training, and good enough generalization in the system, this type of learning can produce behaviors that change appropriately to pursue rewards when the contingencies of the environment change. After enough experience, I won't open the fridge when I’m full, because I've learned it isn’t rewarding when my body is signaling satiety. Animal experiments have demonstrated this goal-dependence in habitual behavior with adequate training. 
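The type 1 arrangement — a critic trained to predict value, whose estimates in turn train the policy — can be illustrated with a minimal tabular actor-critic on a toy environment. This is my own sketch, not any specific system discussed here; the environment, constants, and variable names are all made up:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy chain environment: states 0..4, reward only at the right end.
N_STATES, GOAL = 5, 4

def step(s, a):          # a: 0 = left, 1 = right
    s2 = max(0, min(N_STATES - 1, s + (1 if a == 1 else -1)))
    return s2, float(s2 == GOAL)

V = np.zeros(N_STATES)              # critic: value estimate per state
prefs = np.zeros((N_STATES, 2))     # actor: action preferences (logits)
alpha, gamma = 0.1, 0.9

for _ in range(2000):
    s = 0
    for _ in range(20):
        p = np.exp(prefs[s]); p /= p.sum()
        a = rng.choice(2, p=p)
        s2, r = step(s, a)
        # TD error: the critic's reward-prediction error.
        delta = r + gamma * V[s2] * (s2 != GOAL) - V[s]
        V[s] += alpha * delta            # train the critic
        prefs[s, a] += alpha * delta     # the critic's signal trains the actor
        if s2 == GOAL:
            break
        s = s2

# Actor's preferred action in each non-terminal state after training.
print([int(np.argmax(prefs[s])) for s in range(N_STATES - 1)])
```

With the critic's TD error driving both updates, the actor ends up preferring rightward moves everywhere — habit-like behavior shaped entirely by training, with no lookahead at run time.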
There's no sharp limit to the sophistication of internal representations that could be developed with this sort of RL; a system might actually learn to emulate a steering system, even if none is explicitly designed in. Thus, this classification is fuzzy. But it seems useful for thinking about types and degrees of agency, and how we might align agentic systems. ### **Type 2: steering toward estimated value of predicted outcomes** Critic systems can also function as steering subsystems by using the value of predicted outcomes to select actions. For instance, when some sort of lookahead is used (like Monte Carlo tree search in AlphaZero), the system chooses its current action based on the one that will lead to good outcomes, as estimated by the critic. This is what we seem to do when playing a game and looking a few moves ahead. Humans are thought to do this for some decisions. Introspection[[7]](#fnjxhqcfbbkp) as well as data suggest it. Dopamine seems to signal the estimated value of whatever option the animal is currently considering, and to otherwise provide a best-guess estimate of value[[5]](#fngd8jayj5fa) that is useful for Bayesian decision-making.[[8]](#fn09uuhq7nxc76) There are probably exceptions, since longer-term and more complex decisions haven’t been thoroughly tested, but the loose match seems pretty certain. It seems that humans probably use a tree search of limited depth, made useful by good abstraction and prediction. This search is (probably) pruned by, and actions chosen using, estimated values of predicted states from the dopamine critic system. Thus, the system *looks into the future* and *steers* toward outcomes deemed valuable according to its internal estimates of value. 
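The "look into the future and steer" pattern can be sketched as a depth-limited search over a predictive model, with a critic scoring the predicted end states. The toy model and hand-set critic below are illustrative assumptions, not anything from the systems mentioned above:

```python
# Depth-limited lookahead: pick the action whose predicted future
# states the critic values most highly (a toy sketch of type 2 steering).

def lookahead_value(state, model, critic, depth):
    """Best critic value reachable within `depth` steps of `state`."""
    if depth == 0:
        return critic(state)
    return max(
        lookahead_value(model(state, a), model, critic, depth - 1)
        for a in (0, 1)
    )

def choose_action(state, model, critic, depth=3):
    # Steer toward the action whose predicted outcomes score best.
    return max((0, 1),
               key=lambda a: lookahead_value(model(state, a), model, critic, depth - 1))

# Toy world: integer states; action 1 moves right, action 0 moves left.
model = lambda s, a: s + (1 if a == 1 else -1)
critic = lambda s: -abs(s - 10)   # hand-set values: state 10 is "good"

print(choose_action(4, model, critic))    # left of the valued state: goes right
print(choose_action(15, model, critic))   # right of it: goes left
```

AlphaZero's MCTS is a far more sophisticated version of the same idea: the critic's value estimates decide which predicted futures are worth steering toward.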
This type of steering is the beginning of what we might intuitively think of as "real" agentic behavior (or not; definitions vary). [Discovering Agents](https://www.alignmentforum.org/posts/XxX2CAoFskuQNkBDy/discovering-agents) from DeepMind defines it in line with this proposed distinction: > *Agents are systems that would adapt their policy if their actions influenced the world in a different way.* This also assumes the system "knows" (accurately represents) those changes, among other assumptions. This might be restated intuitively as a system that actively pursues goals, rather than a system that produces behaviors that tended to achieve goals during its training/design. Again, there's no obvious lower bound on what type of training could produce behavior meeting this definition. But including a type 2 steering system ensures that the system meets this definition of agency. ### **Type 3: steering toward self-selected goals** Humans sometimes think in an even more goal-directed and agentic way. We sometimes choose a goal and use that goal representation to drive planning. I might make a goal of going to a store to buy a snack, or starting a successful business. Those goal representations will drive my planning in direct and indirect ways, in the long and short term. The idea of choosing a goal is at odds with how we use the term in alignment. We often use “goal” synonymously with rewards or maximization goals. I usually use "goals" synonymously with "values". But for humans and similar systems, they’re not synonymous. What we call “our values” are, I think, estimates of future rewards. This is nicely synonymous with the term “value” in reinforcement learning, if I’m roughly correct about how that works (see [Human preferences as RL critic values - implications for alignment](https://www.lesswrong.com/posts/HEonwwQLhMB9fqABh/human-preferences-as-rl-critic-values-implications-for)).  
When we use the term goals for ourselves, we mean explicit, specific (although abstract) goals like getting a snack, getting back to work, getting a job we like, founding a business, etc. That type of goal, and the associated representations, is the heart of Type 3 steering.  This cognitive capacity has several advantages. It allows for backward-chaining from a desired goal state to actions that might achieve it. More importantly, this ability almost automatically allows an agent to strategically break a complex task into subtasks. Creating subgoals uses the same mechanisms, since humans (and effective AIs) take context into account when choosing goals.  For more complex tasks and problems, this decomposition seems likely to be useful. Engineering improvements in technology will decompose into hundreds of component problems involving material properties, manufacturing processes, economic and human factors, etc. Thus far, empirical results showing improvements from problem decomposition are weak.[[9]](#fnut1uui91yah) But it seems likely that decomposition is highly useful for effective cognition; the world, and problems in the world, really seem to decompose. I don't know of any work that fully describes how the brain creates useful goal representations. I haven't published my theories on this in part because it could advance capabilities. But I don't think this is terribly hard to figure out. And I don’t think it requires any breakthroughs to get AI systems to do this type of goal-creation steering in other ways. Indeed, LLMs seem rather adept at breaking a problem into subproblems. Language model agents (LMAs) can perform type 3 steering, even if they’re currently not good at executing the problem-solving plans they create. **Steering subsystems, AI progress, and alignment** --------------------------------------------------- Language model agents usually start with the prompt “create a plan to achieve [goal]”. 
This creates a multi-step plan, and each step is approached separately. This is type 3 steering. Language model agents have yet to accomplish anything particularly impressive, but they do show promise on some tasks (such as [Minecraft](https://huggingface.co/papers/2305.16291)). So it seems far too early to rule them out as a path to AGI.  Language models have some real intelligence, and it is difficult to guess how far this can be improved by [scaffolding](https://www.lesswrong.com/posts/43C3igfmMrE9Qoyfe/scaffolded-llms-as-natural-language-computers) with other cognitive systems and software tools into agentic [language model cognitive architectures](https://www.alignmentforum.org/posts/ogHr8SvGqg9pW5wsT/capabilities-and-alignment-of-llm-cognitive-architectures), or language model agents, LMAs. It is so early in the development of language model agents that I give LMAs a round no-idea 50% chance of being the first route to self-improving, self-aware, thoroughly agentic AGI.  If LMAs do achieve AGI, I think this is relatively good news. I think they offer several advantages that make them the easiest-to-align type of plausible AGI. These include easier interpretability and a potentially very low alignment tax. I’ve written about these advantages [here](https://www.alignmentforum.org/posts/Q7XWGqL4HjjRmhEyG/internal-independent-review-for-language-model-agent). One major advantage is that alignment efforts can center on the steering subsystem: this type of agent can be given a top-level goal of corrigibility, and any other combination of alignment goals. These can be stated in natural language, leveraging the system’s training prior to deployment.  If language model agents aren’t the first route to AGI, I think we’ll still see AGI with powerful, type 2 and 3 steering subsystems, based on the cognitive advantages they offer. 
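The plan-then-execute loop described above — prompt for a plan, then approach each step separately — can be written down almost directly. The `llm_call` function below is a stand-in for a real model call (an assumption of this sketch); a real agent would add tools, memory, and review steps:

```python
from typing import Callable, List

def run_agent(goal: str, llm_call: Callable[[str], str]) -> List[str]:
    """Type 3 steering in miniature: hold an explicit goal representation,
    decompose it into subgoals, then pursue each subgoal in turn."""
    plan = llm_call(f"Create a step-by-step plan to achieve: {goal}")
    subgoals = [line.strip() for line in plan.splitlines() if line.strip()]
    results = []
    for sub in subgoals:
        # Each subgoal is handed back as its own task.
        results.append(llm_call(f"Goal: {goal}\nCurrent subgoal: {sub}\nDo it."))
    return results

# A canned stand-in "model" so the sketch runs without any API.
def fake_llm(prompt: str) -> str:
    if prompt.startswith("Create a step-by-step plan"):
        return "research the topic\ndraft an outline\nwrite the report"
    return "done: " + prompt.splitlines()[1].removeprefix("Current subgoal: ")

print(run_agent("write a report", fake_llm))
```

The goal representation here is just a string, but it does real work: it is what gets decomposed, and every subgoal prompt carries it along as context.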
If this is correct, we should create alignment approaches that focus on steering subsystems, given their central role in goal-directed behavior.  This is why I like Steve Byrnes’ [Plan for mediocre alignment of brain-like [model-based RL] AGI](https://www.alignmentforum.org/posts/Hi7zurzkCog336EC2/plan-for-mediocre-alignment-of-brain-like-model-based-rl-agi). It works primarily on the critic (steering) subsystem. In essence, the plan is to induce the model to “think” about the thing you want it to steer toward (e.g., “hey, think about human flourishing”), then set the weights from the representational system into the critic system to a high value. Presto, an agent that values human flourishing above all else. It's not a fully developed plan yet, but it does seem more concrete and straightforward than any other suggested approach for training human values into an RL agent. This approach also benefits by making use of the agent’s training/intelligence for alignment, something I’ll focus on in a future post. It would seem to have a low alignment tax, and it can work alongside other alignment approaches, like interpretability measures, scalable oversight, etc. Loosely brainlike RL agents are a highly plausible route to AGI if language model agents don't achieve it first. And the two approaches can be combined. Using RL to train an “outer loop” of cognitive control for language model agents is a frequently-proposed approach to improving LMAs. So the two alignment approaches above, both of which focus on the steering subsystem, might be combined for that type of AGI. Both of those approaches seem very promising but provide only a loose, “mediocre” alignment with human values. Whether such a rough match is adequate is an important question. If a superintelligence values a subset of human values, will the outcome be satisfactory for humanity? What if it values a superset of our values? 
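A cartoon of the weight-setting step in that plan, assuming a linear critic over some internal representation vector (my simplification, not the plan's actual mechanics): elicit the representation of the target concept, then set the critic weights from it to a high value:

```python
import numpy as np

rng = np.random.default_rng(1)
D = 8                                    # representation dimensionality (made up)

# Assume the critic is linear over the agent's internal representation.
critic_w = rng.normal(size=D) * 0.01     # near-blank critic weights

# Step 1: induce the agent to "think about" the target concept and read
# off its internal representation (a stand-in random vector here).
target_rep = rng.normal(size=D)
target_rep /= np.linalg.norm(target_rep)

# Step 2: set the weights from that representation into the critic to a
# high value, so states resembling the target score highly.
critic_w = 10.0 * target_rep

def value(rep):
    return float(critic_w @ rep)

other_rep = rng.normal(size=D)
other_rep /= np.linalg.norm(other_rep)
print(value(target_rep), value(other_rep))
```

In a real network the critic is nonlinear and the representation distributed, so "setting the weights" is far less clean; this only shows why the trick makes the target concept the highest-valued thing the critic knows about.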
A second outstanding issue for these (and other network-based) approaches is the [alignment stability problem](https://www.lesswrong.com/posts/g3pbJPQpNJyFfbHKd/the-alignment-stability-problem). Does reflective stability (ref\*) ensure long-term stability in a network-based AGI, with values that are defined by distributed representations of semantics? Or might that system’s values shift dramatically as it continues to learn? I think both of these questions merit more careful thought, and they’ll be the subjects of upcoming posts. *Thanks to Steve Byrnes and Max H for helpful discussions and comments on a draft of this article.*     1. **[^](#fnrefur4x8hlosg)**This work was done in collaboration with Randy O'Reilly and many members of his computational cognitive neuroscience lab from 1999 to 2022. We made neural network models of several brain systems, based on a variety of empirical data, focusing on human cognition and animal single-cell recordings. My focus was understanding how multiple brain systems come together to produce complex decision-making and belief formation. 2. **[^](#fnrefk95xnsdrdwk)**For more than you want to know about the various terminologies, and a bit of high-level theory, see: O’Reilly, R. C., Nair, A., Russin, J. L., & Herd, S. A. (2020).[How sequential interactive processing within frontostriatal loops supports a continuum of habitual to controlled processing](https://scholar.google.com/scholar?cluster=6255823680142316248&hl=en&as_sdt=0,6). *Frontiers in Psychology*, *11*, 380. 3. **[^](#fnrefrietakhkeba)**I can’t easily find a comprehensive review of where actor-critic RL (AC RL) or similar systems work, and where it’s not needed. The most impressive instances of RL that I’m aware of all use AC. 
Those include DeepMind’s prominent RL agents, from the Atari system up through AlphaZero and AlphaFold, the OpenAI Five family of agents, ChatGPT’s RLHF (it goes by a different name, but seems firmly in the critic family) and every high-functioning instance of RL in the cognitive neuroscience space I’m more familiar with. I’d love to be corrected if it’s not necessary for some important problems. Here’s a team of real experts calling AC methods “ubiquitous” in RL:  Wen, J., Kumar, S., Gummadi, R., & Schuurmans, D. (2021, July). [Characterizing the gap between actor-critic and policy gradient.](https://scholar.google.com/scholar?cluster=5948270474707197620&hl=en&as_sdt=4005&sciodt=0,6) In International Conference on Machine Learning (pp. 11101-11111). PMLR. 4. **[^](#fnref7pk6z9iikar)**Including a critic system seems to be useful for a few reasons. It splits the learning problem into two separate pieces: what to do in a given situation, and how good each action outcome is. These are similar, and can be collapsed to one problem in either direction. But they appear to be different enough that including both provides better traction on the learning problem. They don't add much computational cost when they're implemented as two heads of the same network, as they usually are in deep network approaches. Having a critic also enables the MCTS-boosting approach taken by AlphaZero and similar algorithms, in which a few-move lookahead is used and the best move(s) are trained into the actor. It's necessary to estimate which resulting board positions are best to make this useful. Finally, critic systems are useful when reward is rare (like most real-world environments), since they provide at least a guess about how likely each action is to eventually lead to reward. 5. **[^](#fnrefgd8jayj5fa)**Dopamine predicting value as the sum of future rewards is an approximation. It's actually a value delta. 
Phasic (fast) dopamine release signals the difference between the currently predicted value and the one predicted just prior. This is termed a reward prediction error, or RPE. This is the temporal difference (TD) critic algorithm, but most actor-critic RL systems don’t seem to employ this temporal difference, although I haven’t dug far enough into the math for high-functioning Q-learning systems (like the DeepMind RL agents) to be certain it’s not hidden in there. The advantage actor-critic approach does something similar. Signaling the derivative rather than the absolute value is advantageous when the last state is a relevant comparison. This is often the case when possible options are considered sequentially, which is one reason I think the human brain uses that approach (see the introduction to [Neural mechanisms of human decision-making](https://link.springer.com/article/10.3758/s13415-020-00842-0) for more on this theory, although a clearer, more complete writeup is on my to-do list). 6. **[^](#fnrefq0lnvnqdg6h)**Dopamine acts as the output of a critic system consisting of the amygdala and associated subcortical areas. The dopamine signal acts very much like a critic reward signal in an actor-critic RL system, by triggering positive or negative learning directly in the striatum, a large subcortical area that's heavily involved in action selection and decision-making. This system has been relatively well-investigated; for a review see Mollick, J. A., Hazy, T. E., Krueger, K. A., Nair, A., Mackie, P., Herd, S. A., & O'Reilly, R. C. (2020). [A Systems-Neuroscience Model of Phasic Dopamine](https://scholar.google.com/scholar?cluster=6735784624764401003&hl=en&as_sdt=0,6). *Psychological Review*, *127*(6), 972. Dopamine affects learning in the cortex in less well-understood ways. 7. **[^](#fnrefjxhqcfbbkp)**Introspection is rarely mentioned in published papers. 
Private conversations suggest that cognitive scientists lean heavily on introspection when producing hypotheses and interpreting data. I take introspection seriously when it's done carefully. From a materialist perspective, it would be quite odd if introspection told us nothing about brain processes. Much has been made of a set of studies showing that introspection can be quite mistaken in some cases. That work is neatly summarized in Tim Wilson's [Strangers to Ourselves](https://www.amazon.com/Strangers-Ourselves-Discovering-Adaptive-Unconscious/dp/0674013824/ref=asc_df_0674013824/?tag=hyprod-20&linkCode=df0&hvadid=312021238077&hvpos=&hvnetw=g&hvrand=15216456661207766533&hvpone=&hvptwo=&hvqmt=&hvdev=c&hvdvcmdl=&hvlocint=&hvlocphy=9028818&hvtargid=pla-458285188941&psc=1), which I highly recommend for insight into the likely nature of unconscious processing. However, those studies can be summarized as showing that people are mistaken about how they've made decisions, not about what they're thinking. The hypothesis that we're aware of roughly the contents of working memory at any given moment, originating in the cognitive revolution, still seems perfectly viable, as reviewed [here](https://scholar.google.com/scholar?hl=en&as_sdt=0%2C6&q=The+conscious+access+hypothesis%3A+origins+and+recent+evidence&btnG=). A critical review of purported counterevidence can be found [here](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5840147/). 8. **[^](#fnref09uuhq7nxc76)**For an excellent review see Gershman, S. J., & Uchida, N. (2019). [Believing in dopamine](https://www.nature.com/articles/s41583-019-0220-7). *Nature Reviews Neuroscience*, 20(11), 703-714. They review the empirical evidence, and show that dopamine signaling captures uncertainty as well as expected value, and is useful for Bayesian belief formation as well as decision-making. 9. 
**[^](#fnrefut1uui91yah)**[Tree of Thoughts](https://arxiv.org/abs/2305.10601) and related work show efficacy in toy problems designed to decompose well, so they're not direct evidence this is important in AGI-relevant domains. LLMs decompose problems remarkably well with appropriate prompting, without steering subsystems to aid them, but it does seem like explicit decomposition mechanisms can only help, and may prove critical in tasks complex enough that humans definitely use problem decomposition (like solving engineering problems).
How to use bright light to improve your life. You may have heard that you 'shouldn't use screens late in the evening' and maybe even that 'it's good for you to get exposure to sunshine as soon as possible after waking'. For the majority of people, these are generally beneficial heuristics. They are also the extent of most people's knowledge about how light affects their wellbeing. The multiple mechanisms through which light affects our physiology make it hard to provide generalisable guidance. Among other things, the time of day, your genetics, your age, your mood and the brightness, frequency and duration of exposure to light all interrelate in determining how it affects us. This document will explain some of the basic mechanisms through which light affects our physiology, with the goal of providing a framework to enable you to make informed decisions around your light exposure. After reading this, at any time on any given day, you should have a sense as to what type of light exposure you need right now. These decisions should lead to noticeable improvements in mood and productivity, whilst also improving sleep and reducing the risk of various long-term diseases. Addressing SAD Although SAD (Seasonal Affective Disorder) is a common framing used when describing the effect of light on health, I am going to largely avoid using the term here. Let me explain why... Officially, SAD is a form of Major Depressive Disorder that comes and goes with seasonal patterns. Typically, this is characterised by depressive symptoms that occur in autumn and winter and resolve in spring and summer. [Confusingly, technically people who find themselves experiencing depression only in Summer also fall under the diagnosis of SAD]. One reason SAD is a challenging category is that in common parlance and in pop science, it is used to describe people with broader and milder symptoms. 
Indeed, one survey carried out by the reputable UK polling company YouGov declared that 29% of people in the UK are suffering from SAD[1]. This definiti
Why not use active SETI to prevent AI Doom? Let's assume that Eliezer is right: soon we'll have an AGI that is very likely to kill us all. (personally, I think Eliezer is right). There are several ways to reduce the risk, in particular: speeding up alignment research and slowing down capabilities research, by various means.  One underexplored way to reduce the risk is active SETI (also known as METI).  The idea is as follows: * Send powerful radio signals into space: "guys, soon we'll be destroyed by a hostile AGI. Help us!" (e.g. using a language constructed for the task, like Lincos) * If a hostile alien civilization notices us, we're going to die. But if we're going to die from the AGI anyway, who cares? * If a benevolent alien civilization notices us, it could arrive in time to save us.  The main advantage of the method is that it can be implemented by a small group of people within a few months, without governments and without billions of dollars. Judging by the running costs of the Arecibo Observatory, one theoretically can rent it for a year for only $8 million. Sending only a few hundred space messages could be even cheaper.  Obviously, the method relies on the existence of an advanced alien civilization within a few light years from the Earth. The existence seems to be unlikely, but who knows.  Is it worth trying?
How special are human brains among animal brains? Humans are capable of feats of cognition that appear qualitatively more sophisticated than those of any other animals. Is this appearance of a qualitative difference indicative of human brains being essentially more complex than the brains of any other animal? Or is this “qualitative difference” illusory, with the vast majority of human cognitive feats explainable as nothing more than a scaled-up version of the cognitive feats of lower animals? *“How special are human brains among animal brains?” is one of the background variables in my [framework for AGI timelines](https://www.alignmentforum.org/posts/w4jjwDPa853m9P4ag/my-current-framework-for-thinking-about-agi-timelines). My aim for this post is **not** to present a complete argument for some view on this variable, so much as it is to:* * *present some considerations I’ve encountered that shed light on this variable* * *invite a collaborative effort among readers to shed further light on this variable (e.g. by leaving comments about considerations I haven’t included, or pointing out mistakes in my analyses)* Does mastery of language make humans unique? ============================================ Human conscious experience may have emerged from language --------------------------------------------------------- Humans seem to have much higher degrees of consciousness and agency than other animals, and this may have emerged from our capacities for language. [Helen Keller](https://en.wikipedia.org/wiki/Helen_Keller#Early_childhood_and_illness) (who was deaf and blind since infancy, and only started learning language when she was 6) gave an [autobiographical account](http://scentofdawn.blogspot.com/2011/07/before-soul-dawn-helen-keller-on-her.html) of how she was driven by blind impetuses until she learned the meanings of the words “I” and “me”: > Before my teacher came to me, I did not know that I am. I lived in a world that was a no-world. 
I cannot hope to describe adequately that unconscious, yet conscious time of nothingness. I did not know that I knew aught, or that I lived or acted or desired. I had neither will nor intellect. I was carried along to objects and acts by a certain blind natural impetus. I had a mind which caused me to feel anger, satisfaction, desire. These two facts led those about me to suppose that I willed and thought. I can remember all this, not because I knew that it was so, but because I have tactual memory. It enables me to remember that I never contracted my forehead in the act of thinking. I never viewed anything beforehand or chose it. I also recall tactually the fact that never in a start of the body or a heart-beat did I feel that I loved or cared for anything. My inner life, then, was a blank without past, present, or future, without hope or anticipation, without wonder or joy or faith. > [...] > … When I learned the meaning of "I" and "me" and found that I was something, I began to think. Then consciousness first existed for me. Thus it was not the sense of touch that brought me knowledge. It was the awakening of my soul that first rendered my senses their value, their cognizance of objects, names, qualities, and properties. Thought made me conscious of love, joy, and all the emotions. I was eager to know, then to understand, afterward to reflect on what I knew and understood, and the blind impetus, which had before driven me hither and thither at the dictates of my sensations, vanished forever. ### Mastery of language may have conferred unique intellectual superpowers I think humans underwent a phase transition in their intellectual abilities when they came to master language, at which point their intellectual abilities jumped far beyond those of other animals on both an individual level and a species level. 
On an individual level, our capacity for language enables us to entertain and express arbitrarily complex thoughts, which appears to be an ability unique to humans. In theoretical linguistics, this is referred to as [“digital infinity”, or “the infinite use of finite means”](https://en.wikipedia.org/wiki/Digital_infinity). On a species level, our mastery of language enables intricate insights to accumulate over generations with high fidelity. Our ability to stand on the shoulders of giants is unique among animals, which is why our culture is unrivaled in its richness and sophistication. Language aside, how unique are humans? ====================================== ### Humans ≈ Neanderthals + language? The most quintessentially human intellectual accomplishments (e.g. proving theorems, composing symphonies, going into space) were only made possible by culture post-agricultural revolution. So, when evaluating humans’ innate intellectual capacities, a better reference point than modern humans like ourselves would be our hunter-gatherer ancestors. We can reduce the question of how complex our hunter-gatherer ancestors’ brains are into two sub-questions: how complex is our capacity for mastering language, and how complex are brains that are similar to ours, but don’t have the capacity for mastering language? Neanderthal brains seem like plausible proxies for the latter. Neanderthals are similar enough to modern humans that [they’ve interbred](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4947341/), and the [currently available evidence](https://en.wikipedia.org/wiki/Neanderthal_behavior) suggests that they may not have mastered language in the same way that [behaviorally modern humans](https://en.wikipedia.org/wiki/Behavioral_modernity) have. 
(I don’t think this evidence is very strong, but this doesn’t matter for my purposes—I’m just using Neanderthals as a handy stand-in to gesture at what a human-like intelligence might look like if it didn’t have the capacity for language.) ### Higher intelligence in animals Chimpanzees, crows, and dolphins are capable of impressive feats of higher intelligence, and I don’t think there’s any particular reason to think that Neanderthals are capable of doing anything qualitatively more impressive. I’ll share some examples of these animals’ intellectual feats that I found particularly illustrative. Chimpanzees have been observed to lie to each other under experimental conditions. [From Wikipedia](https://en.wikipedia.org/wiki/Deception_in_animals#Tactical_deception): > ...food was hidden and only one individual, named Belle, in a group of chimpanzees was informed of the location. Belle was eager to lead the group to the food but when one chimpanzee, named Rock, began to refuse to share the food, Belle changed her behaviour. She began to sit on the food until Rock was far away, then she would uncover it quickly and eat it. Rock figured this out though and began to push her out of the way and take the food from under her. Belle then sat farther and farther away waiting for Rock to look away before she moved towards the food. In an attempt to speed the process up, Rock looked away until Belle began to run for the food. On several occasions he would even walk away, acting disinterested, and then suddenly spin around and run towards Belle just as she uncovered the food. In [Aesop’s fable of the crow and the pitcher](https://en.wikipedia.org/wiki/The_Crow_and_the_Pitcher), a thirsty crow figures out that it can drop pebbles into a pitcher, so that the water rises to a high enough level for it to drink from. 
This behavior has been [experimentally replicated](https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0092895), indicating that crows have a “sophisticated, but incomplete, understanding of the causal properties of displacement, rivalling that of 5–7 year old children”.

When [Kelly the dolphin](https://www.littlethings.com/brilliant-kelly-the-dolphin-fools-trainers/3) was given rewards of fish for picking up scraps of paper, “Kelly figured out that she received the same fish regardless of the size of the piece of trash she was delivering to her trainer. So she began hiding big pieces of trash under a rock. Kelly would then rip off small pieces from the trash and deliver them one at a time so that she could receive more fish.” Additionally, “when a bird landed in the pool, Kelly snatched it and delivered it to her trainers. She received a large amount of fish in return. Knowing this, she decided to start hiding fish each time she was fed. She would then use the fish to lure birds when none of her trainers were around. Kelly knew that by saving one or two fish now, she could get many more fish later by turning in a bird.” (Also reported on [The Guardian](https://www.theguardian.com/science/2003/jul/03/research.science); I don’t know how reputable these sources are, so take this anecdote with a grain of salt.)

See [these](https://en.wikipedia.org/wiki/Tool_use_by_animals) [Wikipedia pages](https://en.wikipedia.org/wiki/Deception_in_animals#Tactical_deception) for some more interesting examples, and see [here](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4650126/) for a more thorough review of the evidence of higher intelligence in animals.
### “Qualitatively” more advanced cognition may emerge from scale

Many aspects of human cognition that may appear qualitatively different from what other animals are capable of, such as long chains of abstract reasoning, also appear qualitatively different from what less intelligent humans are capable of. As a particularly extreme example, John von Neumann’s [cognitive abilities](https://en.wikipedia.org/wiki/John_von_Neumann#Cognitive_abilities) were so advanced that a Nobel Laureate, Hans Bethe, once remarked that “[his] brain indicated a new species, an evolution beyond man”. At the same time, the genes that code for different humans’ brains are virtually identical from an evolutionary perspective. This suggests that the seemingly qualitative differences between humans’ and animals’ cognition might not be so different from the seemingly qualitative differences between John von Neumann’s cognition and mine—our brains might be doing essentially the same thing as theirs, except at a higher scale.

How hard is mastery of language?
================================

### Could language capacity fall out from general capacities?

Maybe it was extraordinarily difficult to evolve the cognitive mechanisms that allow us to learn language, above and beyond our cognitive machinery for learning other things. I think this is plausible, but I don’t think the case for this is very strong. Animals ([Washoe](https://en.wikipedia.org/wiki/Washoe_(chimpanzee)), [Koko](https://en.wikipedia.org/wiki/Koko_(gorilla)), and [Alex the parrot](https://en.wikipedia.org/wiki/Alex_(parrot))) have demonstrated the ability to learn simple forms of symbolic communication, which they never evolved to do, indicating that their general learning abilities are good enough to acquire very simple forms of language.
It’s true that there are [aspects of human language that escape animals](https://en.wikipedia.org/wiki/Animal_language#Aspects_of_human_language), but they also escape [feral children](https://en.wikipedia.org/wiki/Language_deprivation#Feral_children), and might escape animals for mundane reasons, like their not having [critical periods](https://en.wikipedia.org/wiki/Critical_period#Linguistics) long enough to learn these aspects of language. Additionally, [AI language models](http://karpathy.github.io/2015/05/21/rnn-effectiveness/) provide evidence that simple and general learning mechanisms can capture many of the intricacies of human language that other animals miss, further suggesting that there’s nothing intrinsically difficult about learning language. Here’s an excerpt from [GPT-2](https://openai.com/blog/better-language-models/#sample1), a relatively recent language model:

> SYSTEM PROMPT (HUMAN-WRITTEN)
>
> In a shocking finding, scientist discovered a herd of unicorns living in a remote, previously unexplored valley, in the Andes Mountains. Even more surprising to the researchers was the fact that the unicorns spoke perfect English.
>
> MODEL COMPLETION (MACHINE-WRITTEN, 10 TRIES)
>
> The scientist named the population, after their distinctive horn, Ovid’s Unicorn. These four-horned, silver-white unicorns were previously unknown to science.
>
> Now, after almost two centuries, the mystery of what sparked this odd phenomenon is finally solved.
>
> Dr. Jorge Pérez, an evolutionary biologist from the University of La Paz, and several companions, were exploring the Andes Mountains when they found a small valley, with no other animals or humans. Pérez noticed that the valley had what appeared to be a natural fountain, surrounded by two peaks of rock and silver snow.
>
> Pérez and the others then ventured further into the valley. “By the time we reached the top of one peak, the water looked blue, with some crystals on top,” said Pérez.
### Why haven’t other species mastered language?

If language isn’t a particularly difficult cognitive capacity to acquire, why don’t we see more animal species with language?

One possibility is that the first species that masters language, by virtue of being able to access intellectual superpowers inaccessible to other animals, has a high probability of becoming the dominant species extremely quickly. (Humans underwent the agricultural revolution within 50,000 years of behavioral modernity—a blink of an eye on evolutionary timescales—after which their dominance as a species became unquestionable.) Since we shouldn’t expect to see more than one dominant species at a time, this would imply a simple anthropic argument for our unique capacities for language: we shouldn’t expect to see more than one species at a time with mastery of language, and we just happen to be the species that made it there first.

It may also turn out that language is hard to evolve not because it’s a particularly sophisticated cognitive mechanism, but because the environments that could have supported and selected for language might have been very unusual. For example, it may be that a threshold of general intelligence has to be crossed before it’s viable for a species to acquire language, and that humans are the only species to have crossed this threshold. (Humans do have the highest [cortical information processing capacity](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4685590/) among mammals.) It might also turn out that the cultural contexts under which language could evolve [require a mysteriously high degree of trust](https://en.wikipedia.org/wiki/Origin_of_language#Problems_of_reliability_and_deception): “... language presupposes relatively high levels of mutual trust in order to become established over time as an [evolutionarily stable strategy](https://en.wikipedia.org/wiki/Evolutionarily_stable_strategy).
This stability is born of a longstanding mutual trust and is what grants language its authority. A theory of the origins of language must therefore explain why humans could begin trusting cheap signals in ways that other animals apparently cannot (see [signalling theory](https://en.wikipedia.org/wiki/Signalling_theory)).”

My current take
---------------

As we came to master language, I think we underwent a phase transition in our intellectual abilities that set us apart from other animals. Besides language, I don't see much that sets us apart from other animals—in particular, most other cognitive differences seem explainable as consequences of either language or scale, and I don’t think the cognitive mechanisms that allow us to master language are particularly unique or difficult to acquire. Overall, I don’t see much reason to believe that human brains have significantly more innate complexity than the brains of other animals.

*Thanks to Paul Kreiner and Stag Lynn for helpful commentary and feedback.*
Differential Assessment of Black-Box AI Agents
==============================================

1 Introduction
--------------

As AI systems have gained increasingly greater autonomy in recent years, a major problem has persisted and been largely overlooked: how do we accurately predict the behavior of a black-box AI agent that is evolving and adapting to changes in the environment it operates in? And how do we ensure its reliable and safe usage?

Numerous factors could cause unpredictable changes in agent behaviors: sensors and actuators may fail due to physical damage, the agent may adapt to a dynamic environment, users may change deployment and use-case scenarios, etc. Most prior work on the topic presumes that the functionalities and the capabilities of AI agents are static, while some works start with a *tabula rasa* and learn the entire model from scratch. However, in many real-world scenarios, the agent model is transient and only parts of its functionality change at a time. Bryce, Benton, and Boldt ([2016](#bib.bib4)) address a related problem where the system learns the updated mental model of a user using particle filtering, given prior knowledge about the user’s mental model. However, they assume that the entity being modeled can tell the learning system about flaws in the learned model if needed. This assumption does not hold in settings where the entity being modeled is a black-box AI system: most such systems are either implemented using inscrutable representations or otherwise lack the ability to automatically generate a model of their functionality (what they can do and when) in terms the user can understand. The problem of efficiently assessing, in human-interpretable terms, the functionality of such a non-stationary AI system has received little research attention.
Figure 1: The Differential Assessment of AI System (DAAISy) takes as input the initially known model of the agent prior to model drift and available observations of the updated agent’s behavior, and performs a selective dialog with the black-box AI agent to output its updated model through efficient model learning.

The primary contribution of this paper is an algorithm for *differential assessment* of black-box AI systems (Fig. [1](#S1.F1)). This algorithm utilizes an initially known interpretable model of the agent as it was in the past, together with a small set of observations of agent execution. It uses these observations to develop an incremental querying strategy that avoids the full cost of assessment from scratch and outputs a revised model of the agent’s new functionality. One of the challenges in learning agent models from observational data is that reductions in agent functionality often do not correspond to specific “evidence” in behavioral observations, as the agent may not visit states where certain useful actions are no longer applicable. Our analysis shows that if the agent can be placed in an “optimal” planning mode, differential assessment can indeed be used to query the agent and recover information about reductions in functionality. This “optimal” planning mode is not necessarily needed for learning about increases in functionality. Empirical evaluations on a range of problems clearly demonstrate that our method is much more efficient than re-learning the agent’s model from scratch. They also exhibit the desirable property that the computational cost of differential assessment is proportional to the amount of drift in the agent’s functionality.

#### Running Example

Consider a battery-powered rover with limited storage capacity that collects soil samples and takes pictures.
Assume that its planning model is similar to the IPC Rovers domain (Long and Fox [2003](#bib.bib20)). It has an action that collects a rock sample at a waypoint and stores it in a storage unit iff at least half of the rover’s battery capacity remains. Suppose there was an update to the rover’s system, and as a result of this update the rover can now collect a rock sample only when its battery is full, as opposed to the at-least-half-charged battery it needed before. Mission planners familiar with the earlier system and unaware of the exact updates in the functionality of the rover would struggle to collect sufficient samples. This could jeopardise multiple missions if it is not detected in time. This example illustrates how our system could be of value by differentially detecting such a drift in the functionality of a black-box AI system and deriving its true functionality.

The rest of this paper is organized as follows: The next section presents background terminology. This is followed by a formalization of the differential model assessment problem in Section 3. Section 4 presents our approach for differential assessment, first identifying aspects of the agent’s functionality that may be affected (Section 4.1), followed by the process for selectively querying the agent using a primitive set of queries. We present an empirical evaluation of the efficiency of our approach on randomly generated benchmark planning domains in Section 5. Finally, we discuss relevant related work in Section 6 and conclude in Section 7.

2 Preliminaries
---------------

We consider models that express an agent’s functionalities in the form of STRIPS-like planning models (Fikes and Nilsson [1971](#bib.bib10); McDermott et al. [1998](#bib.bib21); Fox and Long [2003](#bib.bib11)), as defined below.

###### Definition 1.
A planning domain model is a tuple $M = \langle P, A \rangle$, where $P = \{p_1^{r_1}, \dots, p_n^{r_n}\}$ is a finite set of predicates with arities $r_i$, $i \in [1, n]$; and $A = \{a_1, \dots, a_k\}$ is a finite set of parameterized relational actions. Each action $a_i \in A$ is represented as a tuple $\langle \mathit{header}(a_i), \mathit{pre}(a_i), \mathit{eff}(a_i) \rangle$, where $\mathit{header}(a_i)$ represents the action header consisting of the name and parameters for the action $a_i$, $\mathit{pre}(a_i)$ represents the conjunction of positive or negative literals that must be true in a state where the action $a_i$ is applicable, and $\mathit{eff}(a_i)$ is the conjunction of positive or negative literals that become true as a result of execution of the action $a_i$.

In the rest of the paper, we use the term “model” to refer to planning domain models, and we use the closed-world assumption as in the Planning Domain Definition Language (PDDL) (McDermott et al. [1998](#bib.bib21)). Given a model $M$ and a set of objects $O$, let $S_{M,O}$ be the space of all states, defined as maximally consistent sets of literals over the predicate vocabulary of $M$ with $O$ as the set of objects. We omit the subscript when it is clear from context. An action $a \in A$ is applicable in a state $s \in S$ if $s \models \mathit{pre}(a)$. The result of executing $a$ is a state $a(s) = s' \in S$ such that $s' \models \mathit{eff}(a)$, and all atoms not in $\mathit{eff}(a)$ have the same literal forms as in $s$.
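To make the transition semantics concrete, here is a minimal, hypothetical Python sketch (not the paper’s implementation): a state is the set of ground atoms that are true under the closed-world assumption, and a ground action checks $s \models \mathit{pre}(a)$ before computing $a(s)$. The rover atoms and the `sample_rock` action are illustrative only.

```python
from dataclasses import dataclass

State = frozenset  # a state: the set of ground atoms that are true (closed world)

@dataclass(frozen=True)
class Action:
    """A ground STRIPS-style action with pre/eff split into positive and negative literals."""
    name: str
    pre_pos: frozenset = frozenset()  # atoms that must be true
    pre_neg: frozenset = frozenset()  # atoms that must be false
    eff_pos: frozenset = frozenset()  # atoms made true (add effects)
    eff_neg: frozenset = frozenset()  # atoms made false (delete effects)

    def applicable(self, s: State) -> bool:
        # s |= pre(a): positive literals hold, negative literals do not
        return self.pre_pos <= s and not (self.pre_neg & s)

    def apply(self, s: State) -> State:
        # a(s) = s': effects hold; all other atoms keep their value from s
        assert self.applicable(s), f"{self.name} is not applicable"
        return (s - self.eff_neg) | self.eff_pos

# Hypothetical encoding of the running example: sampling needs a half-charged battery.
sample_rock = Action(
    name="sample_rock(rover1 storage1 waypoint1)",
    pre_pos=frozenset({"(at rover1 waypoint1)", "(battery_half rover1)"}),
    pre_neg=frozenset({"(full storage1)"}),
    eff_pos=frozenset({"(have_rock_sample rover1 waypoint1)", "(full storage1)"}),
)

s0 = State({"(at rover1 waypoint1)", "(battery_half rover1)"})
s1 = sample_rock.apply(s0)  # adds the sample and marks the storage full
```

Note that applying the action a second time fails, since the negative precondition *(full storage1)* no longer holds in the successor state.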
A literal corresponding to a predicate $p \in P$ can appear in $\mathit{pre}(a)$ or $\mathit{eff}(a)$ of an action $a \in A$ if and only if it can be instantiated using a subset of the parameters of $a$. E.g., consider an action *navigate (?rover ?src ?dest)* and a predicate *(can\_traverse ?rover ?x ?y)* in the Rovers domain discussed earlier. Suppose a literal corresponding to the predicate *(can\_traverse ?rover ?x ?y)* can appear in the precondition and/or the effect of the *navigate (?rover ?src ?dest)* action. Assuming we know *?x* and *?y* in *can\_traverse*, and *?src* and *?dest* in *navigate*, are of the same type *waypoint*, the possible lifted instantiations of the predicate *can\_traverse* compatible with the action *navigate* are *(can\_traverse ?rover ?src ?dest)*, *(can\_traverse ?rover ?dest ?src)*, *(can\_traverse ?rover ?src ?src)*, and *(can\_traverse ?rover ?dest ?dest)*. The number of parameters in a predicate $p \in P$ that is relevant to an action $a \in A$, i.e., instantiated using a subset of the parameters of $a$, is bounded by the maximum arity of the action $a$. We formalize this notion of lifted instantiations of a predicate with an action as follows:

###### Definition 2.
Given a finite set of predicates $P = \{p_1^{r_1}, \dots, p_n^{r_n}\}$ with arities $r_i$, $i \in [1, n]$; and a finite set of parameterized relational actions $A = \{a_1^{\psi_1}, \dots, a_k^{\psi_k}\}$ with arities $\psi_j$ and parameters $\mathit{par}(a_j^{\psi_j}) = \langle \alpha_1, \dots, \alpha_{\psi_j} \rangle$, $j \in [1, k]$, the set of *lifted instantiations of predicates* $P^*$ is defined as the collection $\{p_i(\sigma(x_1), \dots, \sigma(x_{r_i})) \mid p_i \in P,\ a \in A,\ \sigma: \{x_1, \dots, x_{r_i}\} \rightarrow \mathit{par}(a)\}$.

### 2.1 Representing Models

We represent a model $M$ using the set of all possible *pal-tuples* $\Gamma_M$ of the form $\gamma = \langle p, a, \ell \rangle$, where $a$ is a parameterized action header for an action in $A$, $p \in P^*$ is a possible lifted instantiation of a predicate in $P$, and $\ell \in \{\mathit{pre}, \mathit{eff}\}$ denotes a location in $a$ (precondition or effect) where $p$ can appear. A model $M$ is thus a function $\mu_M: \Gamma_M \rightarrow \{+, -, \emptyset\}$ that maps each element in $\Gamma_M$ to a *mode* in the set $\{+, -, \emptyset\}$.
The assigned mode for a *pal-tuple* $\gamma \in \Gamma_M$ denotes whether $p$ is present as a positive literal ($+$), present as a negative literal ($-$), or absent ($\emptyset$) in the precondition ($\ell = \mathit{pre}$) or effect ($\ell = \mathit{eff}$) of the action header $a$. This formulation of models as *pal-tuples* allows us to view the modes for any predicate in an action’s precondition and effect independently. However, at times it is useful to consider a model at the granularity of the relationship between a predicate and an action. We address this by representing a model $M$ as a set of *pa-tuples* $\Lambda_M$ of the form $\langle p, a \rangle$, where $a$ is a parameterized action header for an action in $A$, and $p \in P^*$ is a possible lifted instantiation of a predicate in $P$. Each *pa-tuple* can take a value of the form $\langle m_{\mathit{pre}}, m_{\mathit{eff}} \rangle$, where $m_{\mathit{pre}}$ and $m_{\mathit{eff}}$ represent the modes in which $p$ appears in the precondition and effect of $a$, respectively. Since a predicate cannot appear as a positive (or negative) literal in both the precondition and the effect of an action, $\langle +, + \rangle$ and $\langle -, - \rangle$ are not in the range of values that *pa-tuples* can take.
Henceforth, in the context of a *pal-tuple* or a *pa-tuple*, we refer to $a$ as an action instead of an action header.

#### Measure of model difference

Given two models $M_1 = \langle P, A_1 \rangle$ and $M_2 = \langle P, A_2 \rangle$, defined over the same sets of predicates $P$ and action headers $A$, the difference between the two models $\Delta(M_1, M_2)$ is defined as the number of *pal-tuples* that differ in their modes in $M_1$ and $M_2$, i.e., $\Delta(M_1, M_2) = |\{\gamma \in \Gamma \mid \mu_{M_1}(\gamma) \neq \mu_{M_2}(\gamma)\}|$, where $\Gamma$ is the set of all possible *pal-tuples*.
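A minimal sketch of the difference measure, assuming models are encoded as mappings from pal-tuples to modes (`'+'`, `'-'`, or `None` for absent); the pal-tuples shown are hypothetical fragments of the rover model before and after the drift described in the running example:

```python
# A model as a mapping from pal-tuples (predicate, action, location) to a mode.
M1 = {
    ("(battery_half ?rover)", "sample_rock", "pre"): "+",
    ("(battery_full ?rover)", "sample_rock", "pre"): None,
    ("(have_rock_sample ?rover ?waypoint)", "sample_rock", "eff"): "+",
}
# After the drift, sampling requires a full battery instead of a half-charged one.
M2 = {
    ("(battery_half ?rover)", "sample_rock", "pre"): None,
    ("(battery_full ?rover)", "sample_rock", "pre"): "+",
    ("(have_rock_sample ?rover ?waypoint)", "sample_rock", "eff"): "+",
}

def model_difference(m1, m2):
    """Delta(M1, M2): the number of pal-tuples whose modes differ between the models."""
    assert m1.keys() == m2.keys(), "models must share the same pal-tuple vocabulary"
    return sum(1 for gamma in m1 if m1[gamma] != m2[gamma])

delta = model_difference(M1, M2)  # → 2: both battery preconditions changed mode
```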
### 2.2 Abstracting Models

Several authors have explored the use of abstraction in planning (Sacerdoti [1974](#bib.bib24); Giunchiglia and Walsh [1992](#bib.bib13); Helmert, Haslum, and Hoffmann [2007](#bib.bib16); Bäckström and Jonsson [2013](#bib.bib3); Srivastava, Russell, and Pinto [2016](#bib.bib27)). We define an abstract model as a model that does not have a mode assigned for at least one of the *pal-tuples*. Let $\Gamma_M$ be the set of all possible *pal-tuples*, and let $?$ be an additional possible value that a *pal-tuple* can take. Assigning the mode $?$ to a *pal-tuple* denotes that its mode is unknown. An abstract model $M$ is thus a function $\mu_M: \Gamma_M \rightarrow \{+, -, \emptyset, ?\}$ that maps each element in $\Gamma_M$ to a *mode* in the set $\{+, -, \emptyset, ?\}$. Let $\mathcal{U}$ be the set of all abstract and concrete models that can possibly be expressed by assigning a mode in $\{+, -, \emptyset, ?\}$ to each *pal-tuple* $\gamma \in \Gamma_M$. We now formally define model abstraction as follows:

###### Definition 3.
Given models $M_1$ and $M_2$, $M_2$ is an abstraction of $M_1$ over the set of all possible *pal-tuples* $\Gamma$ iff $\exists \Gamma_2 \subseteq \Gamma$ s.t. $\forall \gamma \in \Gamma_2$, $\mu_{M_2}(\gamma) = ?$ and $\forall \gamma \in \Gamma \setminus \Gamma_2$, $\mu_{M_2}(\gamma) = \mu_{M_1}(\gamma)$.

### 2.3 Agent Observation Traces

We assume limited access to a set of observation traces $\mathbb{O}$, collected from the agent, as defined below.

###### Definition 4.
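Definition 3 reduces to a pointwise check: every pal-tuple’s mode in the abstract model either is the unknown mode or agrees with the concrete model. A sketch, assuming the same hypothetical encoding of models as mappings from pal-tuples to modes:

```python
UNKNOWN = "?"

def is_abstraction(m1, m2):
    """Definition 3: m2 abstracts m1 iff, for every pal-tuple, the mode in m2
    is either the unknown mode '?' or identical to the mode in m1."""
    assert m1.keys() == m2.keys(), "models must share the same pal-tuple vocabulary"
    return all(m2[g] == UNKNOWN or m2[g] == m1[g] for g in m1)

# Hypothetical rover fragment: the abstract model forgets one precondition's mode.
concrete = {
    ("(battery_half ?rover)", "sample_rock", "pre"): "+",
    ("(battery_full ?rover)", "sample_rock", "pre"): None,
}
abstract = {
    ("(battery_half ?rover)", "sample_rock", "pre"): UNKNOWN,
    ("(battery_full ?rover)", "sample_rock", "pre"): None,
}
```

The relation is not symmetric: `abstract` abstracts `concrete`, but not the other way around, since replacing `?` with a concrete mode is a refinement, not an abstraction.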
An *observation trace* $o$ is a sequence of states and actions of the form $\langle s_0, a_1, s_1, a_2, \dots, s_{n-1}, a_n, s_n \rangle$ such that $\forall i \in [1, n]$, $a_i(s_{i-1}) = s_i$.

These observation traces can be split into multiple action triplets as defined below.

###### Definition 5.
Given an observation trace $o = \langle s_0, a_1, s_1, a_2, \dots, s_{n-1}, a_n, s_n \rangle$, an *action triplet* is a 3-tuple sub-sequence of $o$ of the form $\langle s_{i-1}, a_i, s_i \rangle$, where $i \in [1, n]$ and applying the action $a_i$ in state $s_{i-1}$ results in state $s_i$, i.e., $a_i(s_{i-1}) = s_i$. The states $s_{i-1}$ and $s_i$ are called the pre- and post-states of action $a_i$, respectively.
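Definitions 4 and 5 are straightforward to operationalize: a trace is a list alternating states and actions, and the triplets are its overlapping windows of size three. A minimal sketch with placeholder states and actions:

```python
def action_triplets(trace):
    """Split an observation trace <s0, a1, s1, ..., an, sn> (a list alternating
    states and actions) into action triplets <s_{i-1}, a_i, s_i> (Definition 5)."""
    assert len(trace) % 2 == 1 and len(trace) >= 3, "trace must be s0, a1, s1, ..."
    return [(trace[i], trace[i + 1], trace[i + 2])
            for i in range(0, len(trace) - 2, 2)]

trace = ["s0", "a1", "s1", "a2", "s2"]
triplets = action_triplets(trace)  # → [("s0", "a1", "s1"), ("s1", "a2", "s2")]
```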
An action triplet $\langle s_{i-1}, a_i, s_i \rangle$ is said to be *optimal* if there does not exist an action sequence (of length $\geq 1$) that takes the agent from state $s_{i-1}$ to $s_i$ with total action cost less than that of action $a_i$, where each action has unit cost.

### 2.4 Queries

We use queries to actively gain information about the functionality of an agent in order to learn its updated model. We assume that the agent can respond to a query using a simulator. The availability of such agents with simulators is a common assumption, as most AI systems already use simulators for design, testing, and verification. We use a notion of queries similar to Verma, Marpally, and Srivastava ([2021](#bib.bib29)) to perform a dialog with an autonomous agent. These queries ask an agent what happens if it executes a sequence of actions in a given initial state. E.g., in the rovers domain, the rover could be asked: what happens when the action *sample\_rock(rover1 storage1 waypoint1)* is executed in the initial state {*(equipped\_rock\_analysis rover1), (battery\_half rover1), (at rover1 waypoint1)*}? Formally, a *query* is a function that maps an agent to a response, defined as follows:

###### Definition 6.
Given a set of predicates $P$, a set of actions $A$, and a set of objects $O$, a *query* $Q\langle s, \pi \rangle : \mathcal{A} \rightarrow \mathbb{N} \times S$ is parameterized by a start state $s_I \in S$ and a plan $\pi = \langle a_1, \dots, a_N \rangle$, where $S$ is the state space over $P$ and $O$, and $\{a_1, \dots, a_N\}$ is a subset of the action space over $A$ and $O$. It maps agents to responses $\theta = \langle n_F, s_F \rangle$ such that $n_F$ is the length of the longest prefix of $\pi$ that $\mathcal{A}$ can successfully execute and $s_F \in S$ is the result of that execution.

Responses to such queries can be used to gain useful information about the model drift. E.g., consider an agent with an internal model $M_{\mathit{drift}}^{\mathcal{A}}$ as shown in Tab. [1](#S3.T1).
If a query is posed asking what happens when the action *sample\_rock(rover1 storage1 waypoint1)* is executed in the initial state {*(equipped\_rock\_analysis rover1), (battery\_half rover1), (at rover1 waypoint1)*}, the agent would respond $\langle 0, \{$*(equipped\_rock\_analysis rover1), (battery\_half rover1), (at rover1 waypoint1)*$\}\rangle$, representing that it was not able to execute the plan and that the resulting state was the same as the initial state. Note that this response is inconsistent with the model $M_{\mathit{init}}^{\mathcal{A}}$, and it can help in identifying that the precondition of the action *sample\_rock(?r ?s ?w)* has changed.

3 Formal Framework
-------------------

Our objective is to address the problem of differential assessment of black-box AI agents whose functionality may have changed from the last known model. Without loss of generality, we consider situations where the set of action headers is the same: the problem of differential assessment with changing action headers reduces to that with uniform action headers, because if the set of actions has grown, the new actions can be added to $M_{\mathit{init}}^{\mathcal{A}}$ with empty preconditions and effects, and if it has shrunk, $M_{\mathit{init}}^{\mathcal{A}}$ can be reduced similarly.
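The query semantics of Def. 6 can be sketched as a small executor over a ground action model. The `Action` record with precondition/add/delete literal sets is our simplified stand-in for the simulator the paper assumes; the action model below encodes the drifted *sample\_rock* from Tab. 1.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Action:
    pre: frozenset      # literals that must hold in the pre-state
    add: frozenset      # literals made true by the action
    delete: frozenset   # literals made false by the action

def ask_query(model, state, plan):
    """Execute a plan against a simulator for `model`, returning
    <n_F, s_F>: the length of the longest executable prefix and the
    state resulting from executing that prefix (Def. 6)."""
    s = frozenset(state)
    n_f = 0
    for name in plan:
        action = model[name]
        if not action.pre <= s:          # precondition unsatisfied: stop
            break
        s = (s - action.delete) | action.add
        n_f += 1
    return n_f, s

# Drifted sample_rock: now requires a full battery (cf. Tab. 1).
model = {"sample_rock(rover1 storage1 waypoint1)": Action(
    pre=frozenset({"(equipped_rock_analysis rover1)",
                   "(battery_full rover1)", "(at rover1 waypoint1)"}),
    add=frozenset({"(rock_sample_taken rover1)", "(battery_half rover1)"}),
    delete=frozenset({"(battery_full rover1)"}))}

s_init = {"(equipped_rock_analysis rover1)", "(battery_half rover1)",
          "(at rover1 waypoint1)"}
n_f, s_f = ask_query(model, s_init,
                     ["sample_rock(rover1 storage1 waypoint1)"])
# n_f == 0 and s_f equals the initial state: the response from the text.
```

Running this reproduces the example response $\langle 0, s_I \rangle$, since the drifted precondition *(battery\_full rover1)* does not hold in the initial state.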
We assume that the predicate vocabulary used in the two models is the same; an extension to situations where the vocabulary changes could be used to model open-world scenarios, but that extension is beyond the scope of this paper.

| Model | Precondition | | Effect |
| --- | --- | --- | --- |
| $M^{\mathcal{A}}_{\mathit{init}}$ | *(equipped\_rock\_analysis ?r)*, *(battery\_half ?r)*, *(at ?r ?w)* | $\rightarrow$ | *(rock\_sample\_taken ?r)*, *(store\_full ?r ?s)*, $\neg$*(battery\_half ?r)*, *(battery\_reserve ?r)* |
| $M^{\mathcal{A}}_{\mathit{drift}}$ | *(equipped\_rock\_analysis ?r)*, *(battery\_full ?r)*, *(at ?r ?w)* | $\rightarrow$ | *(rock\_sample\_taken ?r)*, *(store\_full ?r ?s)*, $\neg$*(battery\_full ?r)*, *(battery\_half ?r)* |

Table 1: The *sample\_rock (?r ?s ?w)* action of the agent $\mathcal{A}$ in $M^{\mathcal{A}}_{\mathit{init}}$ and in a possible drifted model $M^{\mathcal{A}}_{\mathit{drift}}$.
Suppose an agent $\mathcal{A}$'s functionality was known in the form of a model $M_{\mathit{init}}^{\mathcal{A}} = \langle P, \mathcal{A}_{\mathit{init}} \rangle$, and we wish to assess its current functionality as the model $M_{\mathit{drift}}^{\mathcal{A}} = \langle P, \mathcal{A}_{\mathit{drift}} \rangle$. The drift in the functionality of the agent can be measured by changes in the preconditions and/or effects of the actions in $\mathcal{A}_{\mathit{init}}$. The extent of the drift between $M_{\mathit{init}}^{\mathcal{A}}$ and $M_{\mathit{drift}}^{\mathcal{A}}$ is represented as the model difference $\Delta(M_{\mathit{init}}^{\mathcal{A}}, M_{\mathit{drift}}^{\mathcal{A}})$. We formally define the problem of differential assessment of an AI agent below.

###### Definition 7.
Given an agent $\mathcal{A}$ with a previously known functionality model $M_{\mathit{init}}^{\mathcal{A}}$, and a set of observations $\mathbb{O}$ collected using its current version $\mathcal{A}_{\mathit{drift}}$ with unknown functionality $M_{\mathit{drift}}^{\mathcal{A}}$, the *differential model assessment* problem $\langle M_{\mathit{init}}^{\mathcal{A}}, M_{\mathit{drift}}^{\mathcal{A}}, \mathbb{O}, \mathcal{A} \rangle$ is the problem of inferring $\mathcal{A}$'s functionality in the form of $M_{\mathit{drift}}^{\mathcal{A}}$ using the inputs $M_{\mathit{init}}^{\mathcal{A}}$, $\mathbb{O}$, and $\mathcal{A}$.

We wish to develop solutions to the problem of differential assessment of AI agents that are more efficient than re-assessment from scratch.

### 3.1 Correctness of Assessed Model

We now discuss the properties that a model solving the differential model assessment problem should satisfy.
A critical property of such models is that they should be consistent with the observation traces. We formally define consistency of a model w.r.t. an observation trace as follows:

###### Definition 8.

Let $o$ be an observation trace $\langle s_0, a_1, s_1, a_2, \dots, s_{n-1}, a_n, s_n \rangle$. A model $M = \langle P, A \rangle$ is *consistent with the observation trace* $o$ iff $\forall i \in \{1, \dots, n\}$ $\exists a \in A$ such that $a_i$ is a grounding of action $a$, $s_{i-1} \models \mathit{pre}(a_i)$, and $\forall l \in \mathit{eff}(a_i)\; s_i \models l$.

In addition to being consistent with observation traces, a model should also be consistent with the queries that are asked and the responses that are received while actively inferring the model of the agent's new functionality. We formally define consistency of a model with respect to a query and a response as:

###### Definition 9.
Let $M = \langle P, A \rangle$ be a model, $O$ a set of objects, and $Q = \langle s_I, \pi = \langle a_1, \dots, a_n \rangle \rangle$ a query defined using $P$, $A$, and $O$, and let $\theta = \langle n_F, s_F \rangle$ (with $n_F \leq n$) be a response to $Q$. $M$ is *consistent with the query-response* $\langle Q, \theta \rangle$ iff there exists an observation trace $\langle s_I, a_1, s_1, \dots, a_{n_F}, s_{n_F} \rangle$ that $M$ is consistent with and $s_{n_F} \not\models \mathit{pre}(a_{n_F+1})$, where $\mathit{pre}(a_{n_F+1})$ is the precondition of $a_{n_F+1}$ in $M$.

We now discuss our methodology for solving the problem of differential assessment of AI systems.

4 Differential Assessment of AI Systems
----------------------------------------

Differential Assessment of AI Systems (Alg. [1](#alg1)) – DAAISy – takes as input an agent $\mathcal{A}$ whose functionality has drifted, the model $M_{\mathit{init}}^{\mathcal{A}} = \langle P, A \rangle$ representing the previously known functionality of $\mathcal{A}$, a set of arbitrary observation traces $\mathbb{O}$, and a set of random states $\mathcal{S} \subseteq S$. Alg. [1](#alg1) returns a set of updated models $\mathcal{M}^{\mathcal{A}}_{\mathit{drift}}$, where each model $M_{\mathit{drift}}^{\mathcal{A}} \in \mathcal{M}^{\mathcal{A}}_{\mathit{drift}}$ represents $\mathcal{A}$'s updated functionality and is consistent with all observation traces $o \in \mathbb{O}$.

A major contribution of this work is an approach that makes inferences not only about the expanded functionality of an agent but also about its reduced functionality, using a limited set of observation traces. Situations where the scope of applicability of an action reduces, i.e., the agent can no longer use an action $a$ to reach state $s'$ from state $s$ while it could before (e.g., due to the addition of a precondition literal), are particularly difficult to identify, because observing the agent's behavior does not readily reveal what it cannot do in a given state. Most observation-based action-model learners, even when given access to an incomplete model to start with, fail to make inferences about reduced functionality. DAAISy uses two principles to identify such a functionality reduction: first, it uses active querying so that the agent can be made to reveal failures of reachability; second, we show that if the agent can be placed in optimal planning mode, plan-length differences can be used to infer a reduction in functionality.

DAAISy performs two major functions: it first identifies a salient set of *pal-tuples* whose modes were likely affected (line 1 of Alg. [1](#alg1)), and then infers the modes of the affected *pal-tuples* accurately through focused dialog with the agent (line 2 onwards of Alg. [1](#alg1)). In Sec. [4.1](#S4.SS1), we present our method for identifying a salient set of potentially affected *pal-tuples* that contribute towards expansion in the functionality of the agent, through inference from the available arbitrary observations. We then discuss the problem of identifying *pal-tuples* that contribute towards reduction in the functionality of the agent, argue that this cannot be done using successful executions in observations of satisficing behavior, and show that *pal-tuples* corresponding to reduced functionality can be identified if observations of optimal behavior of the agent are available (Sec. [4.1](#S4.SS1)). Finally, we present how we infer the nature of the changes in all affected *pal-tuples* through a query-based interaction with the agent (Sec. [4.2](#S4.SS2)), building upon the Agent Interrogation Algorithm (AIA) (Verma, Marpally, and Srivastava [2021](#bib.bib29)). Identifying affected *pal-tuples* reduces the computational cost of querying compared with the exhaustive querying strategy used by AIA. We now discuss the two major functions of Alg. [1](#alg1) in detail.

Algorithm 1 Differential Assessment of AI Systems

Input: $M^{\mathcal{A}}_{\mathit{init}}$, $\mathbb{O}$, $\mathcal{A}$, $\mathcal{S}$
Output: $\mathcal{M}^{\mathcal{A}}_{\mathit{drift}}$

1: $\Gamma_{\delta} \leftarrow$ *identify\_affected\_pals()*
2: $M_{\mathit{abs}} \leftarrow$ set the *pal-tuples* in $M_{\mathit{init}}^{\mathcal{A}}$ corresponding to $\Gamma_{\delta}$ to the unknown mode ?
3: $\mathcal{M}^{\mathcal{A}}_{\mathit{drift}} \leftarrow \{M_{\mathit{abs}}\}$
4: for each $\gamma$ in $\Gamma_{\delta}$ do
5:   for each $M_{\mathit{abs}}$ in $\mathcal{M}^{\mathcal{A}}_{\mathit{drift}}$ do
6:     $\mathcal{M}_{\mathit{abs}} \leftarrow M_{\mathit{abs}} \times \{\gamma^{+}, \gamma^{-}, \gamma^{\emptyset}\}$
7:     $\mathcal{M}_{\mathit{sieved}} \leftarrow \{\}$
8:     if the action $\gamma_a$ corresponding to $\gamma$ appears in $\mathbb{O}$ then
9:       $s_{\mathit{pre}} \leftarrow$ *states\_where\_*$\gamma_a$*\_applicable*$(\mathbb{O}, \gamma_a)$
10:      $Q \leftarrow \langle s_{\mathit{pre}} \setminus \{\gamma_p \cup \neg\gamma_p\}, \gamma_a \rangle$
11:      $\theta \leftarrow$ *ask\_query*$(\mathcal{A}, Q)$
12:      $\mathcal{M}_{\mathit{sieved}} \leftarrow$ *sieve\_models*$(\mathcal{M}_{\mathit{abs}}, Q, \theta)$
13:    else
14:      for each pair $\langle M_i, M_j \rangle$ in $\mathcal{M}_{\mathit{abs}}$ do
15:        $Q \leftarrow$ *generate\_query*$(M_i, M_j, \gamma, S)$
16:        $\theta \leftarrow$ *ask\_query*$(\mathcal{A}, Q)$
17:        $\mathcal{M}_{\mathit{sieved}} \leftarrow$ *sieve\_models*$(\{M_i, M_j\}, Q, \theta)$
18:      end for
19:    end if
20:    $\mathcal{M}_{\mathit{abs}} \leftarrow \mathcal{M}_{\mathit{abs}} \setminus \mathcal{M}_{\mathit{sieved}}$
21:  end for
22:  $\mathcal{M}^{\mathcal{A}}_{\mathit{drift}} \leftarrow \mathcal{M}_{\mathit{abs}}$
23: end for

### 4.1 Identifying Potentially Affected pal-tuples

We identify a reduced set of *pal-tuples* whose modes were potentially affected during the model drift, denoted by $\Gamma_{\delta}$, using a small set of available observation traces $\mathbb{O}$. We draw two kinds of inferences from these observation traces: inferences about expanded functionality, and inferences about reduced functionality. We discuss our method for inferring $\Gamma_{\delta}$ for both types of change below.

#### Expanded functionality

To infer the expanded functionality of the agent, we use the previously known model of the agent's functionality and identify its differences with the possible behaviors of the agent that are consistent with $\mathbb{O}$.
To identify the *pal-tuples* that directly contribute to an expansion in the agent's functionality, we perform an analysis similar to Stern and Juba ([2017](#bib.bib28)), but instead of bounding the predicates that can appear in each action's precondition and effect, we bound the range of possible values that each *pa-tuple* in $M_{\mathit{drift}}^{\mathcal{A}}$ can take using Tab. [2](#S4.T2). For any *pa-tuple*, a direct comparison between its value in $M_{\mathit{init}}^{\mathcal{A}}$ and its possible inferred values in $M_{\mathit{drift}}^{\mathcal{A}}$ indicates whether it was affected.
| $\langle m_{\mathit{pre}}, m_{\mathit{eff}} \rangle$ | (pos,pos) | (pos,neg) | (neg,pos) | (neg,neg) |
| --- | --- | --- | --- | --- |
| $\langle +, - \rangle$ | ✗ | ✓ | ✗ | ✗ |
| $\langle +, \emptyset \rangle$ | ✓ | ✗ | ✗ | ✗ |
| $\langle -, + \rangle$ | ✗ | ✗ | ✓ | ✗ |
| $\langle -, \emptyset \rangle$ | ✗ | ✗ | ✗ | ✓ |
| $\langle \emptyset, + \rangle$ | ✓ | ✗ | ✓ | ✗ |
| $\langle \emptyset, - \rangle$ | ✗ | ✓ | ✗ | ✓ |
| $\langle \emptyset, \emptyset \rangle$ | ✓ | ✗ | ✗ | ✓ |

Table 2: Each row represents a possible value $\langle m_{\mathit{pre}}, m_{\mathit{eff}} \rangle$ for a *pa-tuple* $\langle p, a \rangle$. Each column represents a possible tuple capturing the presence of predicate $p$ in the pre- and post-states of an action triplet $\langle s_i, a, s_{i+1} \rangle$ (discussed in Sec. [4.1](#S4.SS1)). Each cell indicates whether that value for the *pa-tuple* is consistent with such an action triplet in the observation traces.

To identify the possible values for a *pa-tuple* $\langle p, a \rangle$, we first collect the set of all action triplets from $\mathbb{O}$ that contain the action $a$.
For a given predicate $p$ and state $s$, if $s \models p$ then the presence of predicate $p$ is represented as pos; similarly, if $s \models \neg p$ then it is represented as neg. Using this representation, a tuple of predicate presence in $\{$*(pos,pos)*, *(pos,neg)*, *(neg,pos)*, *(neg,neg)*$\}$ is determined for the *pa-tuple* $\langle p, a \rangle$ for each action triplet $\langle s, a, s' \rangle \in \mathbb{O}$ by examining the presence of predicate $p$ in the pre- and post-states of the triplet. The possible values of the *pa-tuple* that are consistent with $\mathbb{O}$ are then read off directly from Tab. [2](#S4.T2) using the inferred tuples of predicate presence. E.g., for a *pa-tuple*, the values $\langle +, - \rangle$ and $\langle \emptyset, - \rangle$ are consistent with *(pos,neg)*, whereas only $\langle \emptyset, + \rangle$ is consistent with both the *(pos,pos)* and *(neg,pos)* tuples of predicate presence inferred from $\mathbb{O}$.
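This inference step can be sketched by encoding the Tab. 2 consistency matrix directly as a lookup table; the representation below (strings `"+"`, `"-"`, `"0"` for the modes $+$, $-$, $\emptyset$, states as frozensets of literals) is our own choice for illustration.

```python
# Consistency matrix from Tab. 2: for each candidate <m_pre, m_eff>
# value of a pa-tuple, the predicate-presence tuples
# (presence in pre-state, presence in post-state) it is consistent with.
CONSISTENT = {
    ("+", "-"): {("pos", "neg")},
    ("+", "0"): {("pos", "pos")},
    ("-", "+"): {("neg", "pos")},
    ("-", "0"): {("neg", "neg")},
    ("0", "+"): {("pos", "pos"), ("neg", "pos")},
    ("0", "-"): {("pos", "neg"), ("neg", "neg")},
    ("0", "0"): {("pos", "pos"), ("neg", "neg")},
}

def presence(p, state):
    return "pos" if p in state else "neg"

def possible_values(p, a, triplets):
    """Values of the pa-tuple <p, a> consistent with every action
    triplet <s, a, s'> in the observations, per Tab. 2."""
    values = set(CONSISTENT)
    for s, act, s_next in triplets:
        if act != a:
            continue
        obs = (presence(p, s), presence(p, s_next))
        values = {v for v in values if obs in CONSISTENT[v]}
    return values

# The example from the text: p holds before the action and not after,
# i.e., the presence tuple is (pos, neg).
vals = possible_values("p", "a", [(frozenset({"p"}), "a", frozenset())])
# vals == {("+", "-"), ("0", "-")}
```

Intersecting the consistent values across all observed triplets for an action is what narrows each *pa-tuple* down, matching the paper's example where *(pos,pos)* and *(neg,pos)* together leave only $\langle \emptyset, + \rangle$.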
Once all the possible values for each *pa-tuple* in M^𝒜_drift are inferred, we identify the *pa-tuples* whose previously known value in M^𝒜_init is no longer possible due to inconsistency with 𝕆. The *pal-tuples* corresponding to such *pa-tuples* are added to the set of potentially affected *pal-tuples* Γ_δ. Our method also infers the correct modes of a subset of *pal-tuples*. E.g., consider a predicate p and two action triplets ⟨s₁, a, s₁′⟩ and ⟨s₂, a, s₂′⟩ in 𝕆 that satisfy s₁ ⊧ p and s₂ ⊧ ¬p. Such an observation clearly indicates that p is not in the precondition of action a, i.e., the mode for ⟨p, a⟩ in the precondition is ∅. Such inferred modes are used to update the known functionality of the agent.
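The ∅-mode inference from this example can be sketched as follows, modeling states as sets of true predicates (an assumption made for illustration):

```python
def precondition_mode_of(predicate, pre_states):
    """If an action executed successfully both from pre-states where the
    predicate was true and from pre-states where it was false, the
    predicate cannot be a precondition: its mode must be the empty
    mode '∅'. Otherwise these observations alone are inconclusive."""
    truth_values = {predicate in s for s in pre_states}
    return '∅' if truth_values == {True, False} else 'unknown'
```

Only the ∅ conclusion is sound from successful executions alone; distinguishing + from − in the precondition requires the query-based investigation of Sec. 4.2.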
We remove such *pal-tuples*, whose modes are already inferred, from Γ_δ. A shortcoming of direct inference from successful executions in the available observation traces is that it cannot detect any reduction in the agent's functionality, as discussed at the beginning of Sec. [4](#S4). We now discuss our method for addressing this limitation and identifying a larger set of potentially affected *pal-tuples*.

#### Reduced functionality

We conceptualize a reduction in functionality as an increase in the optimal cost of going from one state to another. More precisely, reduced functionality refers to situations where there exist states s_i, s_j such that the minimum cost of going from s_i to s_j is higher in M^𝒜_drift than in M^𝒜_init. In this paper, this cost is the number of steps between the pair of states, as we consider unit action costs. This notion encompasses reductions in reachability as a special case.
In practice, a reduction in functionality may occur if the precondition of at least one action in M^𝒜_drift has new *pal-tuples*, or if the effect of at least one of its actions has new *pal-tuples* that conflict with other actions required for reaching certain states. Our notion of reduced functionality captures all of these variants; for clarity, however, we illustrate an example where the precondition of an action has grown. Consider the case from Tab. [1](#S3.T1) where 𝒜's model is updated from M^𝒜_init to M^𝒜_drift. The applicability of the action sample_rock is reduced in M^𝒜_drift relative to M^𝒜_init: 𝒜 can no longer sample rocks when the battery is half charged, as it now needs a fully charged battery to execute the action.
In such scenarios, instead of relying on direct inference from the observation traces, our method identifies traces that indicate actions affected in either their precondition or their effect, discovers additional salient *pal-tuples* that were potentially affected, and adds them to the set of potentially affected *pal-tuples* Γ_δ. To find the *pal-tuples* corresponding to reduced functionality, we place the agent in an optimal planning mode and assume limited availability of observation traces 𝕆 in the form of optimal unit-cost state-action trajectories ⟨s₀, a₁, s₁, a₂, …, s_{n−1}, a_n, s_n⟩. We generate optimal plans using M^𝒜_init for all pairs of states in 𝕆. We hypothesize that if, for a pair of states, the plan generated using M^𝒜_init is shorter than the plan observed in 𝕆, then some functionality of the agent has reduced.
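This plan-length comparison can be sketched as below, where `plan_with_init_model` stands in for a planner invoked on the old model M^𝒜_init (a hypothetical callable assumed for illustration):

```python
def reduced_functionality_pairs(observed_trajectories, plan_with_init_model):
    """Return (o_init, o_drift) pairs where the old model finds a strictly
    shorter plan than the drifted agent's observed optimal trace for the
    same initial/final state pair (unit action costs) -- a signal that
    some functionality has reduced."""
    pairs = []
    for o_drift in observed_trajectories:  # each is a state sequence s0..sn
        o_init = plan_with_init_model(o_drift[0], o_drift[-1])
        if len(o_init) < len(o_drift):  # fewer states => fewer steps
            pairs.append((o_init, o_drift))
    return pairs
```

Because both trajectories are optimal under their respective models, a strict length gap cannot be explained by suboptimal behavior, which is why the optimal-planning assumption matters here.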
Our method compares the optimality of the observation traces against optimal solutions generated using M^𝒜_init for the same pairs of initial and final states. To begin with, we extract all contiguous state subsequences of the form ⟨s₀, s₁, …, s_n⟩ from 𝕆, denoted 𝕆_drift; these are all optimal. We then generate a set of planning problems 𝒫 using the initial and final states of the trajectories in 𝕆_drift. Finally, we solve the problems in 𝒫 with M^𝒜_init to obtain a set of optimal trajectories 𝕆_init.
We select for further analysis all pairs of optimal trajectories ⟨o_init, o_drift⟩ such that the length of o_init ∈ 𝕆_init for a problem is shorter than the length of o_drift ∈ 𝕆_drift for the same problem. For each such pair, a subset of the actions in o_init was likely affected by the model drift. We focus on identifying the first action in each o_init that was definitely affected. To identify the affected actions, we traverse each pair of optimal trajectories ⟨o_init, o_drift⟩ simultaneously, starting from the initial states.
We add all the *pal-tuples* corresponding to the first differing action in o_init to Γ_δ. We do this because there are only two possible explanations for why the action differs: (i) the action in o_init was applicable in a state under M^𝒜_init but has become inapplicable in the same state under M^𝒜_drift, or (ii) it can no longer achieve the same effects in M^𝒜_drift as in M^𝒜_init. We also identify the first actions that are applicable in the same states in both trajectories but result in different states; the effect of such actions has certainly changed in M^𝒜_drift. We add all the *pal-tuples* corresponding to such actions to Γ_δ as well.
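A sketch of the simultaneous traversal, again assuming trajectories are alternating state/action lists (an illustrative representation, not the paper's code):

```python
def first_affected_action(o_init, o_drift):
    """Walk two trajectories [s0, a1, s1, a2, ...] in parallel and return
    the first action of o_init that differs from o_drift, or that is the
    same action but leads to a different post-state (changed effect)."""
    for i in range(1, min(len(o_init), len(o_drift)), 2):
        if o_init[i] != o_drift[i]:
            return o_init[i]      # first differing action
        if o_init[i + 1] != o_drift[i + 1]:
            return o_init[i]      # same action, different outcome
    return None                   # no divergence found
```

All pal-tuples of the returned action would then be added to Γ_δ.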
In the next section, we describe our approach for inferring the correct modes of the *pal-tuples* in Γ_δ.

### 4.2 Investigating Affected pal-tuples

This section explains how the correct modes of the *pal-tuples* in Γ_δ are inferred (line 2 onwards of Alg. [1](#alg1)). Alg. [1](#alg1) creates an abstract model in which all the *pal-tuples* predicted to have been affected are set to the unknown mode ? (line 2). It then iterates over all *pal-tuples* with mode ? (line 4).

#### Removing inconsistent models

Our method generates candidate abstract models and then removes those that are not consistent with the agent (lines 7–18 of Alg. [1](#alg1)). For each *pal-tuple* γ ∈ Γ, the algorithm computes a set of possible abstract models ℳ_abs by assigning each of the three mode variants +, −, and ∅ to the current *pal-tuple* γ in the model M_abs (line 6). Only one model in ℳ_abs corresponds to the agent's updated functionality.
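The abstraction step on line 2 of Alg. 1 can be sketched as follows, with models represented as dictionaries from pal-tuple to mode (a representation assumed for illustration):

```python
def abstract_with_unknowns(model, affected):
    """Return a copy of the model in which every potentially affected
    pal-tuple has its mode reset to the unknown mode '?'."""
    return {pal: ('?' if pal in affected else mode)
            for pal, mode in model.items()}
```

Unaffected tuples keep their known modes, which is what lets the subsequent querying loop skip them entirely.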
If the action γ_a in the *pal-tuple* γ is present in the set of action triplets generated from 𝕆, then the pre-state s_pre of that action is used to create a state s_I (lines 9–10). s_I is created by removing the literals corresponding to the predicate γ_p from s_pre. We then create a query Q = ⟨s_I, ⟨γ_a⟩⟩ (line 10) and pose it to the agent 𝒜 (line 11). The three models are then sieved by comparing their responses to the query Q with the agent's response θ (line 12); we use the same sieving mechanism as AIA. If the action corresponding to the current *pal-tuple* γ is not present in any of the observed action triplets, then for every pair of abstract models in ℳ_abs (line 14) we generate a query Q from a planning problem (line 15), pose it to the agent (line 16), and receive its response θ.
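The sieving itself can be sketched as below, with each candidate model and the agent represented as callables from query to response (an illustrative abstraction, not AIA's actual interface):

```python
def sieve_models(candidates, queries, agent):
    """Keep only the candidate abstract models whose response to every
    query matches the black-box agent's response to the same query."""
    return [m for m in candidates
            if all(m(q) == agent(q) for q in queries)]
```

Since exactly one mode assignment matches the agent's true functionality, enough discriminating queries shrink the candidate set to a single model.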
We then sieve the abstract models by posing the same query to them and discarding the models whose responses are not consistent with the agent's. The planning problem used to generate the query, and the method for checking the consistency of the abstract models' responses with the agent's, are taken from AIA. Finally, all models that are not consistent with the agent's updated functionality are removed from the set of possible models ℳ_abs, and the remaining models are returned by the algorithm. Empirically, we find that the algorithm always returns exactly one model.

### 4.3 Correctness

We now show that the learned drifted model representing the agent's updated functionality is consistent as defined in Def. 8 and Def. 9. The proof of the theorem is available in the extended version of the paper (Nayyar, Verma, and Srivastava [2022](#bib.bib22)).

###### Theorem 1.

Given a set of observation traces 𝕆 generated by the drifted agent 𝒜_drift, a set of queries Q posed to 𝒜_drift by Alg. 1, and the model M^𝒜_init representing the agent's functionality prior to the drift, each of the models M = ⟨P, A⟩ in ℳ^𝒜_drift learned by Alg.
1 are *consistent* with all the observation traces o ∈ 𝕆 and with the query-responses ⟨q, θ⟩ for all queries q ∈ Q.

There exists a finite set of observations which, if collected, allows Alg. [1](#alg1) to achieve 100% correctness under any amount of drift: this set corresponds to observations that allow line 1 of Alg. [1](#alg1) to detect a change in functionality. It includes an action triplet in an observation trace hinting at increased functionality, or a plan that is shorter under the previously known model, hinting at reduced functionality. Thus, the models learned by DAAISy are guaranteed to be completely correct, irrespective of the amount of drift, whenever such a finite set of observations is available. While using queries significantly reduces the number of observations required, the asymptotic guarantees subsume those of passive model learners while ensuring convergence to the true model.

5 Empirical Evaluation
-----------------------

In this section, we evaluate our approach for assessing a black-box agent, learning its model from information about its previous model and the available observations. We implemented the DAAISy algorithm in Python (code available at https://github.com/AAIR-lab/DAAISy) and tested it on six planning benchmark domains from the International Planning Competition (IPC, https://www.icaps-conference.org/competitions). We used the IPC domains as the unknown drifted models and, for each domain, generated six initial domains at random for our experiments.
To assess the performance of our approach with increasing drift, we employed two methods for generating the initial domains: (a) dropping *pal-tuples* already present, and (b) adding new *pal-tuples*. Each experiment used both types of domain generation. We generated different initial models by randomly changing the modes of random *pal-tuples* in the IPC domains. Thus, in all our experiments an IPC domain plays the role of the ground truth M*_drift, and a randomized model is used as M^𝒜_init. We use a very small set of observation traces 𝕆 (a single observation trace containing 10 action triplets) in all experiments for each domain. To generate this set, we gave the agent a random IPC problem instance from the domain used by the agent. The agent then used Fast Downward (Helmert [2006](#bib.bib14)) with the LM-Cut heuristic (Helmert and Domshlak [2009](#bib.bib15)) to produce an optimal solution for the given problem. The generated observation trace is provided to DAAISy as input, in addition to a random M^𝒜_init, as discussed in Alg. [1](#alg1). The exact same observation trace is used in all experiments for the same domain, without knowledge of the agent's drifted model and irrespective of the amount of drift.
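An illustrative stand-in for this initial-model generation (not the paper's exact generator) is a random re-assignment of modes on a chosen number of pal-tuples:

```python
import random

MODES = ('+', '-', None)  # add effect / delete effect / absent (∅)

def randomize_model(model, n_flips, seed=0):
    """Generate an 'initial' model by re-assigning the modes of n_flips
    randomly chosen pal-tuples of the ground-truth model, so that the
    two models differ in exactly n_flips tuples."""
    rng = random.Random(seed)
    drifted = dict(model)  # pal-tuple -> mode
    for pal in rng.sample(sorted(drifted), n_flips):
        drifted[pal] = rng.choice([m for m in MODES if m != drifted[pal]])
    return drifted
```

Because the new mode is always drawn from the two modes other than the current one, the amount of drift (as a fraction of nPals) is controlled exactly by `n_flips`.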
We measure the final accuracy of the learned model M^𝒜_drift against the ground-truth model M*_drift using the model difference Δ(M^𝒜_drift, M*_drift). We also measure the number of queries required to learn a model with significantly high accuracy. We compare the efficiency of DAAISy (our approach) with the Agent Interrogation Algorithm (AIA) (Verma, Marpally, and Srivastava [2021](#bib.bib29)), as it is the most closely related querying-based system. All our experiments were executed on 5.0 GHz Intel i9 CPUs with 64 GB RAM running Ubuntu 18.04. We discuss our results in detail below.

### 5.1 Results

![Refer to caption](/html/2203.13236/assets/x2.png)

Figure 2: The number of queries used by DAAISy (our approach) and AIA (marked on the y-axis), and the accuracy of the model computed by DAAISy, with increasing amount of drift. The amount of drift equals the ratio of drifted *pal-tuples* to the total number of *pal-tuples* in the domain (nPals). The observation trace used for each domain contains 10 action triplets.
We evaluated the performance of DAAISy along two dimensions: the number of queries it takes to learn the updated model M^𝒜_drift with increasing amounts of drift, and the correctness of the learned model M^𝒜_drift compared to M*_drift.

#### Efficiency in number of queries

As seen in Fig. [2](#S5.F2), the computational cost of assessing each agent, measured by the number of queries used by DAAISy, increases with the amount of drift in the model M*_drift. This is expected, as the amount of drift is directly proportional to the number of affected *pal-tuples* in the domain, which in turn increases the number of *pal-tuples* DAAISy identifies as affected and hence the number of queries. As the plots show, the standard deviation of the number of queries remains low even as the amount of drift increases, demonstrating the stability of DAAISy.

#### Comparison with AIA

Tab.
[3](#S5.T3) shows the average number of queries AIA took to achieve the same level of accuracy as our approach on 50% drifted models; DAAISy requires significantly fewer queries to reach the same accuracy. Fig. [2](#S5.F2) also shows that DAAISy always takes fewer queries than AIA to reach reasonably high accuracy. This is because AIA does not use information about the agent's previously known model and thus ends up querying for all possible *pal-tuples*. DAAISy, in contrast, predicts the set of *pal-tuples* that may have changed based on the observations collected from the agent, and thus requires significantly fewer queries.

#### Correctness of learned model

DAAISy computes models with at least 50% accuracy in all six domains even when they have completely drifted from their initial model, i.e., Δ(M^𝒜_drift, M*_drift) = nPals. It attains nearly accurate models for Gripper and Blocksworld for up to 40% drift. Even when the agent's model has drifted by more than 50%, DAAISy achieves at least 70% accuracy in five domains. Note that DAAISy is guaranteed to find the correct mode for an identified affected *pal-tuple*.
DAAISy achieves less than 100% accuracy because it does not predict a *pal-tuple* to be affected unless it encounters an observation trace conflicting with M^𝒜_init. Thus the learned model M^𝒜_drift, even though consistent with all the observation traces, may be inaccurate compared to M*_drift.

| Domain | #Pals | AIA | DAAISy |
| --- | --- | --- | --- |
| Gripper | 20 | 15.0 | 6.5 |
| Miconic | 36 | 32.0 | 7.7 |
| Satellite | 50 | 34.0 | 9.0 |
| Blocksworld | 52 | 40.0 | 11.4 |
| Termes | 134 | 115.0 | 27.0 |
| Rovers | 402 | 316.0 | 61.0 |

Table 3: The average number of queries taken by AIA to achieve the same level of accuracy as DAAISy (our approach) on 50% drifted models.

#### Discussion

AIA always learns completely accurate models, but, as noted above, only because it queries exhaustively for all *pal-tuples* in the model. There is a clear trade-off between the number of queries DAAISy takes to learn the model compared to AIA and the correctness of the learned model. As the results show, if the model has not drifted much, DAAISy is the more efficient approach for learning the agent's updated functionality, with less overhead than AIA.
Deciding the amount of drift beyond which it makes sense to switch to querying the model from scratch is a useful analysis not addressed in this paper.

6 Related Work
---------------

#### White-box model drift

Bryce, Benton, and Boldt ([2016](#bib.bib4)) address the problem of learning the updated mental model of a user using particle filtering, given prior knowledge of the user's mental model. However, they assume that the entity being modeled can tell the learning system about flaws in the learned model when needed. Eiter et al. ([2005](#bib.bib8), [2010](#bib.bib9)) propose a framework for updating action laws represented as graphs over the state space. They assume that changes can occur only in effects, and that knowledge about the state space and about which effects might change is available beforehand. Our work makes no such assumptions when learning the correct model of the agent's functionality.

#### Action model learning

The problem of learning agent models from observations of their behavior is an active area of research (Gil [1994](#bib.bib12); Yang, Wu, and Jiang [2007](#bib.bib30); Cresswell, McCluskey, and West [2009](#bib.bib7); Zhuo and Kambhampati [2013](#bib.bib31); Arora et al. [2018](#bib.bib2); Aineto, Celorrio, and Onaindia [2019](#bib.bib1)). Recent work addresses active querying to learn the action model of an agent (Rodrigues et al. [2011](#bib.bib23); Verma, Marpally, and Srivastava [2021](#bib.bib29)). However, these methods do not address the problem of reducing the computational cost of differential model assessment, which is crucial in non-stationary settings. Online action model learning approaches learn the model of an agent while incorporating new observations of its behavior (Čertický [2014](#bib.bib5); Lamanna et al. [2021a](#bib.bib17), [b](#bib.bib18)).
Unlike our approach, they do not handle cases where (i) the new observations are inconsistent with the older ones due to changes in the agent's behavior, and/or (ii) the agent's functionality has reduced. Lindsay ([2021](#bib.bib19)) solves the problem of learning all static predicates in a domain, starting with a correct partial model that accurately captures the dynamic part of the model and generating negative examples by assuming access to all possible positive examples. Our method differs in that it makes no such assumptions and leverages a small set of available observations to infer both increased and reduced functionality of an agent's model.

#### Model reconciliation

The model reconciliation literature (Chakraborti et al. [2017](#bib.bib6); Sreedharan et al. [2019](#bib.bib26); Sreedharan, Chakraborti, and Kambhampati [2021](#bib.bib25)) deals with inferring the differences between the user's and the agent's models and removing them using explanations. These methods consider known, white-box models, whereas our approach works with black-box agent models.

7 Conclusions and Future Work
------------------------------

We presented a novel method for *differential assessment* of black-box AI systems that learns models of the true functionality of agents that have drifted from their previously known functionality. Our approach provides guarantees of correctness w.r.t. the observations. Our evaluation demonstrates that our system, DAAISy, efficiently learns a highly accurate model of the agent's functionality while issuing significantly fewer queries than relearning from scratch. In the future, we plan to extend the framework to more general classes of models and to stochastic settings. Analyzing and predicting the point at which to switch from DAAISy's selective querying to relearning from scratch, without compromising the correctness of the learned models, is also a promising direction for future work.
Acknowledgements
----------------

We thank anonymous reviewers for their helpful feedback on the paper. This work was supported in part by the NSF under grants IIS 1942856, IIS 1909370, and the ONR grant N00014-21-1-2045.
5eebc750-d3b9-4a2b-911e-bf07e38e4583
trentmkelly/LessWrong-43k
LessWrong
Is AI Progress Impossible To Predict? People seem to be continually surprised, over and over again, by the new capabilities of big machine learning models, such as PaLM, DALL-E, Chinchilla, SayCan, Socratic Models, Flamingo, and Gato (all in the last two months!). Luckily, there is a famous paper on how AI progress is governed by scaling laws, where models predictably get better as they get larger. Could we forecast AI progress ahead of time by seeing how each task gets better with model size, draw out the curve, and calculate which size model is needed to reach human performance? I tried this, and apparently the answer is no. In fact, whether AI has improved on a task recently gives us exactly zero predictive power for how much the next model will improve on the same task. The sheer consistency of this unpredictability is remarkable, almost like a law of statistical thermodynamics. No matter what I plug in, the correlation is always zero! For example, does a task improving rapidly when you go from a small model to a 7B parameter model predict similar improvement when you go from a 7B model to Gopher's 280B? No: I tried making the same graph with MMLU tasks instead of BIG-bench, same result: What about DeepMind's new Chinchilla? Did rapid improvement of a task on Gopher predict continued improvement going from Gopher to Chinchilla? Nope: What about Google's PaLM? The full results of PaLM on BIG-bench don't seem to have been published yet, so I couldn't directly compare to Chinchilla or Gopher, but the PaLM paper described an 8B parameter model, a 62B model and a 540B model. Did fast improvement from 8B to 62B predict improvement from 62B to 540B? Not really, R^2 = 0.04: PaLM also provides data on 30 different NLU benchmark tasks. Plot those and you get the same thing: The results here seem pretty clear, but I'm honestly not sure how to interpret them. 
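A sketch of the calculation behind these plots (my reconstruction with made-up benchmark scores, not the post's actual data): for each task, take its improvement from the small to the mid-sized model as the "past" improvement and from the mid-sized to the large model as the "future" improvement, then compute R² across tasks.

```python
def improvement_r2(small, mid, large):
    """R^2 between each task's past improvement (small -> mid)
    and its future improvement (mid -> large), across tasks."""
    past = [m - s for s, m in zip(small, mid)]
    future = [l - m for m, l in zip(mid, large)]
    n = len(past)
    mean_p, mean_f = sum(past) / n, sum(future) / n
    cov = sum((p - mean_p) * (f - mean_f) for p, f in zip(past, future))
    var_p = sum((p - mean_p) ** 2 for p in past)
    var_f = sum((f - mean_f) ** 2 for f in future)
    r = cov / (var_p * var_f) ** 0.5
    return r ** 2

# Hypothetical per-task benchmark scores at three model sizes:
small = [10, 20, 30, 40]
mid = [15, 25, 30, 50]
large = [20, 30, 45, 55]
print(round(improvement_r2(small, mid, large), 3))  # 0.667
```

The post's claim is that on real benchmark data this quantity comes out near zero no matter which model pair you choose.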
Before trying this, I assumed you would find that some tasks are "easy" and scale quickly, while others are "hard" and scale slowly. But that would
992ab0dc-0ca4-4d1b-bfab-0a9116e408dd
trentmkelly/LessWrong-43k
LessWrong
Some AI Governance Research Ideas Junior researchers are often wondering what they should work on. To potentially help, we asked people at the Centre for the Governance of AI for research ideas related to longtermist AI governance. The compiled ideas are developed to varying degrees, including not just questions, but also some concrete research approaches, arguments, and thoughts on why the questions matter. They differ in scope: while some could be explored over a few months, others could be a productive use of a PhD or several years of research.  We do not make strong claims about these questions, e.g. that they are the absolute top priority at current margins. Each idea only represents the views of the person who wrote it. The ideas aren’t necessarily original. Where we think someone is already working on or has done thinking about the topic before, we've tried to point to them in the text and reach out to them before publishing this post. If you are interested in pursuing any of these projects, please let us know by filling out this form. We may be able to help you find mentorship, advice, or collaborators. You can also fill out the form if you’re intending to work on the project independently, so that we can help avoid duplication of effort. If you have feedback on the ideas, feel free to email researchideas@governance.ai. You can find the ideas here. Our colleagues at the FHI AI Safety team put together a corresponding post with AI safety research project suggestions here. 
Other Sources

Other sources of AI governance research projects include:

* AI Governance: A Research Agenda, Allan Dafoe
* Research questions that could have a big social impact, organised by discipline, 80,000 Hours
* The section on AI in Legal Priorities Research: A Research Agenda, Legal Priorities Project
* Some parts of A research agenda for the Global Priorities Institute, Global Priorities Institute
* AI Impact’s list of Promising Research Projects
* Phil Trammell and Anton Korinek's Economic Growth under
cd61d4f5-1a4b-47e3-abdf-8d1d9036755a
StampyAI/alignment-research-dataset/eaforum
Effective Altruism Forum
Announcing Cavendish Labs We’re excited to announce [Cavendish Labs](https://cavendishlabs.org/), a new research institute in Vermont focused on AI safety and pandemic prevention! We’re founding a community of researchers who will live together and work on the world’s most pressing problems. **Uh, why Vermont?** It’s beautiful; it has one of the cheapest costs of living in the United States; there’s lots of great people; it’s only a few hours away from Boston, NYC, and Montreal. There’s even[a train](https://en.wikipedia.org/wiki/Vermonter_(train)) that goes there from Washington D.C.! A few of us briefly lived in Vermont during the pandemic, and we found it to be a fantastic place to live, think, and work. Each season brings with it a new kind of beauty to the hills. There are no barriers to a relaxing walk in the woods. There's practically no light pollution, so the cosmos is waiting outside the door whenever you need inspiration. ![A view of the village of Cavendish; the fire station is on the left.](https://res.cloudinary.com/cea/image/upload/v1674158207/mirroredImages/xBeqaWEJfWZv8ALWn/sdenzrgasuxcaqvmn4v7.jpg)A view of Cavendish village; the town offices and fire station are on the left.**What are you going to be researching?** We have a few research interests: 1. AI Alignment. How do we make sure that AI does what we want? We’ve spent some time thinking about [ELK](https://prometheus.science/elk-results) and [inverse scaling](https://www.alignmentforum.org/posts/iznohbCPFkeB9kAJL/inverse-scaling-prize-round-1-winners#Joe_Cavanagh__Andrew_Gritsevskiy__and_Derik_Kauffman_of_Cavendish_Labs_for_quote_repetition); however, we think that AGI will most likely be achieved through some sort of model-based RL framework, so that is our current focus. For instance, we know how to induce provable guarantees of behavior in supervised learning; could we do something similar in RL? 2. Pandemic prevention. 
There’s been a lot of talk about the potential of [Far-UVC](https://www.lesswrong.com/posts/4Zjm8ycWhg6PzFwnF/far-uvc-light-update-no-leds-are-not-around-the-corner) for ambient disinfection. Understanding why it works on a molecular level, and whether it works safely, is key for developing broad-spectrum pandemic prevention tools. 3. Diagnostic development. We're interested in designing a low-cost and simple-to-use platform for [LAMP](https://en.wikipedia.org/wiki/Loop-mediated_isothermal_amplification) reactions so that generalized diagnostic capabilities are more widespread. We envision a world where it is both cheap and easy to run a panel of tests so one can swiftly determine the exact virus behind an infection. **How’s this organized?** We'll be living and working on different floors of the same building—some combination of a small liberal arts college and research lab. To ensure we’re not too isolated, we’ll visit Boston at least once a month, and invite a rotating group of visitors to work with us, while maintaining collaborations with researchers around the world. **Sounds interesting!** We’re actively searching for collaborators in our areas of interest; if this sounds like you, send us an email at hello@cavendishlabs.org! Our space in Vermont isn’t ready until late spring, so in the meantime we’ll be located in Berkeley and Rhode Island. At the same time, we’re looking for visiting scholars to come work with us in the summer or fall: if you’re interested, keep an eye out for our application! ![(Left) A view from a nearby mountain, (Right) the Black River](https://res.cloudinary.com/cea/image/upload/v1674158207/mirroredImages/xBeqaWEJfWZv8ALWn/mg6leyrrduyeeo5c4wax.jpg)(Left) A view from a nearby mountain; (Right) the Black River
fc9147d5-d329-4cee-82c7-2894f8ac98c3
trentmkelly/LessWrong-43k
LessWrong
Ambitious vs. narrow value learning (Re)Posted as part of the AI Alignment Forum sequence on Value Learning. > Rohin's note: The definition of narrow value learning in the previous post focused on the fact that the resulting behavior is limited to some domain. The definition in this post focuses on learning instrumental goals and values. While the definitions are different, I have used the same term for both because I believe that they are both pointing at the same underlying concept. (I do not know if Paul agrees.) I'm including this post to give a different perspective on what I mean by narrow value learning, before delving into conceptual ideas within narrow value learning. ---------------------------------------- Suppose I’m trying to build an AI system that “learns what I want” and helps me get it. I think that people sometimes use different interpretations of this goal. At two extremes of a spectrum of possible interpretations: * The AI learns my preferences over (very) long-term outcomes. If I were to die tomorrow, it could continue pursuing my goals without me; if humanity were to disappear tomorrow, it could rebuild the kind of civilization we would want; etc. The AI might pursue radically different subgoals than I would on the scale of months and years, if it thinks that those subgoals better achieve what I really want. * The AI learns the narrower subgoals and instrumental values I am pursuing. It learns that I am trying to schedule an appointment for Tuesday and that I want to avoid inconveniencing anyone, or that I am trying to fix a particular bug without introducing new problems, etc. It does not make any effort to pursue wildly different short-term goals than I would in order to better realize my long-term values, though it may help me correct some errors that I would be able to recognize as such. I think that many researchers interested in AI safety per se mostly think about the former. 
I think that researchers with a more practical orientation mostly think about the latter.
f405870d-179f-43ab-920c-16f632b3f36f
trentmkelly/LessWrong-43k
LessWrong
Low hanging productivity - improving your workspace

Original post: http://bearlamp.com.au/low-hanging-productivity/

Tl;dr - Simple changes to workspaces like a big screen can make a big difference.

----------------------------------------

This week I spent a few days away from my usual desk. I have been house sitting. I didn't think too much of it; I tend to carry with me a portable lifestyle. My laptop, some power blocks for my phone, and various supplies that make for easy "office"-ing around the place. I usually don't carry a charger with me because when I know I will be gone a while I will take it with me.

I have always liked a portable office. The ability to stop, and continue later at ease was always important to me. However recently I moved into a new place and set up a desk. I figured I would try X where X is workspaces (a post for the future). I never set up a workspace for the reason of it not being portable. The interesting thing that has surprised me this week is that I miss my big screen (which was a gift - I might have never bought myself a big screen).

For whatever reason, the ability to view more space at once makes me more productive. Combined with Linux's natural tendencies to have several desktop environments with simple switching. My laptop screen is about 19in, which is plenty. The new screen is about 1.5x that. I never thought it would be useful, and it took me years to get it. If it broke today, I would be willing to spend up to $900 to get it back (which is more than six times the price of a new screen). Right now I wonder how productive I might be with a 3rd screen... Or a 4th. (or a 3D virtual reality work environment with screenspace limited by my eyeballs not my screen resolution...) I feel like (along with other habits) I am probably working at 120% of what I was working before. A fair chunk of which I owe to the extra screenspace.

Questions for today: 1.
What part do you remember adding to your workspace to help you be more productive. 2. What's the coolest mo
dee29e68-9816-4bfa-a29b-6f166995bef4
trentmkelly/LessWrong-43k
LessWrong
Quotes from Leopold Aschenbrenner’s Situational Awareness Paper This post is different. Usually I offer commentary and analysis. I share what others think, then respond. This is the second time I am importantly not doing that. The work speaks for itself. It offers a different perspective, a window and a worldview. It is self-consistent. This is what a highly intelligent, highly knowledgeable person actually believes after much thought. So rather than say where I agree and disagree and argue back (and I do both strongly in many places), this is only quotes and graphs from the paper, selected to tell the central story while cutting length by ~80%, so others can more easily absorb it. I recommend asking what are the load bearing assumptions and claims, and what changes to them would alter the key conclusions. The first time I used this format was years ago, when I offered Quotes from Moral Mazes. I think it is time to use it again. Then there will be one or more other posts, where I do respond. INTRODUCTION > (1) Page 1: The Project will be on. If we’re lucky, we’ll be in an all-out race with the CCP; if we’re unlucky, an all-out war. > > Everyone is now talking about AI, but few have the faintest glimmer of what is about to hit them. Nvidia analysts still think 2024 might be close to the peak. Mainstream pundits are stuck on the willful blindness of “it’s just predicting the next word”. They see only hype and business-as-usual; at most they entertain another internet-scale technological change. > > Before long, the world will wake up. But right now, there are perhaps a few hundred people, most of them in San Francisco and the AI labs, that have situational awareness. Through whatever peculiar forces of fate, I have found myself amongst them. SECTION 1: FROM GPT-4 TO AGI: COUNTING THE OOMS > (2) Page 7: AGI by 2027 is strikingly plausible. GPT-2 to GPT-4 took us from ~preschooler to ~smart high-schooler abilities in 4 years. 
Tracing trendlines in compute (~0.5 orders of magnitude or OOMs/year), algorithmic efficiencies (
8d90a8b5-820e-4bea-bc1e-2ea16f6e04e0
trentmkelly/LessWrong-43k
LessWrong
The Reality of Emergence Reply to The Futility of Emergence In The Futility of Emergence, Eliezer takes an overly critical position on emergence as a theory. In this (short) article, I hope to challenge that view. Emergence is not an empty phrase. The statements "consciousness is an emergent phenomenon" and "consciousness is a phenomenon" are not the same thing; the former conveys information that the latter does not. When we say something is emergent, we have a well defined concept that we refer to. From Wikipedia: > emergence is a phenomenon whereby larger entities arise through interactions among smaller or simpler entities such that the larger entities exhibit properties the smaller/simpler entities do not exhibit. A is an emergent property of X, means that A arises from X in a way in which it is contingent on the interaction of the constituents of X (and not on those constituents themselves). If A is an emergent property of X, then the constituents of X do not possess A. A comes into existence as categorial novum at the inception of X. The difference between system X and its constituent components in regards to property A is a difference of kind and not of degree; X's constituents do not possess A in some tiny magnitude—they do not possess A at all. > Taken literally, that description fits every phenomenon in our universe above the level of individual quarks, which is part of the problem This is blatantly not true; size and mass for example are properties of elementary particles. > You can make no new predictions. You do not know anything about the behavior of real-world minds that you did not know before. It feels like you believe a new fact, but you don't anticipate any different outcomes. Your curiosity feels sated, but it has not been fed. The hypothesis has no moving parts—there's no detailed internal model to manipulate. 
Those who proffer the hypothesis of "emergence" confess their ignorance of the internals, and take pride in it; they contrast the science of "emergence"
14202eb4-86e1-4639-addb-edcc02246fa3
StampyAI/alignment-research-dataset/lesswrong
LessWrong
Bad news for uploading Recently, the Blue Brain Project published a paper arguing that human neurons don't form synapses at locations determined by learning, but just wherever they bump into each other.  See video and article [here](http://actu.epfl.ch/news/blue-brain-project-accurately-predicts-connections/). For those people hoping to upload their brains by mapping out and virtually duplicating all the synapses—this means that won't work.  The synapse locations do not differ from human to human in any useful way.  Learning must be encoded in some modulation of each synapse's function.
f9fae794-d652-4f51-b261-9aa6006db964
trentmkelly/LessWrong-43k
LessWrong
The Darwin Game - Rounds 10 to 20

MeasureBot maintains its lead.

Rounds 10-20

Everything so far

Today's Obituary

| Bot | Team | Summary | Round |
|---|---|---|---|
| Silly Random Invert Bot 2-3 | NPCs | Returns 2 or 4 on the first round. Returns 5 - <opponents_last_move> on subsequent rounds. | 10 |
| Silly 3 Bot | NPCs | Always returns 3. | 10 |
| CooperateBot [Larks] | Chaos Army | "For the first 10 turns: return 3. For all subsequent turns: return the greater of 3 and (5 - the maximum value they have ever submitted)" | 10 |
| Silly Cement Bot 2 | NPCs | Returns 2 on the first turn. Otherwise, returns 5 - opponent_first_move. | 12 |
| Silly Counter Invert Bot | NPCs | Starts by randomly playing 2 or 3. Then always returns 5 - opponent_previous_move. | 12 |
| Silly Invert Bot 5 | NPCs | Returns 5 on the first round. Returns 5 - <opponents_last_move> on subsequent rounds. | 12 |
| Silly Cement Bot 3 | NPCs | Returns 3 on the first turn. Otherwise, returns 5 - opponent_first_move. | 14 |
| Silly Cement Bot 2-3 | NPCs | Returns 2 or 3 on the first turn. Otherwise, returns 5 - opponent_first_move. | 14 |
| Silly Invert Bot 3 | NPCs | Returns 3 on the first round. Returns 5 - <opponents_last_move> on subsequent rounds. | 15 |
| Silly Invert Bot 4 | NPCs | Returns 4 on the first round. Returns 5 - <opponents_last_move> on subsequent rounds. | 17 |
| Random-start-turn-taking | Chaos Army | Selects 3 or 2 randomly until symmetry is broken. Then oscillates between 2 and 3. | 17 |

----------------------------------------

This alternate timeline will conclude on November 20, at 5 pm Pacific Time.
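For concreteness, here is a minimal Python sketch (mine, not from the original post) of the "Invert Bot" family listed in the obituary: play a fixed opening move, then always respond with 5 minus the opponent's last move.

```python
class InvertBot:
    """Invert Bot N: returns `first` on round one,
    then 5 - <opponents_last_move> on subsequent rounds."""

    def __init__(self, first):
        self.first = first
        self.opp_last = None  # opponent's most recent move, if any

    def move(self):
        if self.opp_last is None:
            return self.first
        return 5 - self.opp_last

    def observe(self, opp_move):
        # Record the opponent's move for next round.
        self.opp_last = opp_move


# Invert Bot 5 against an opponent who plays 2:
bot = InvertBot(5)
print(bot.move())  # 5 on the first round
bot.observe(2)
print(bot.move())  # 3 thereafter, since 5 - 2 = 3
```

Note how two Invert Bots paired together complement each other to the total of 5 each round, which is why symmetry-breaking bots like Random-start-turn-taking can coexist with them for a while.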
a562dac5-0a0c-47bd-bc0e-b5e63e38ae5a
trentmkelly/LessWrong-43k
LessWrong
Meetup : Munich June Meetup Discussion article for the meetup : Munich June Meetup WHEN: 20 June 2015 03:00:51PM (+0200) WHERE: Café Puck, Türkenstraße 33, Munich Please see the announcement on Meetup.com for details, discussion and exact location: http://www.meetup.com/LessWrongMunich/events/223011559/ Discussion article for the meetup : Munich June Meetup
c5422c3f-6012-4281-98f6-a5a3c6037d5e
StampyAI/alignment-research-dataset/eaforum
Effective Altruism Forum
When is AI safety research harmful?

*This post is going to assume some knowledge of AI safety as a field. The short explanation is that some people think that artificial general intelligence (AGI) has the potential to cause human extinction, or close to it, because of the difficulty of correctly specifying human goals. To try to get a sense of this, imagine designing a fully autonomous, superintelligent cleaning robot and trying to design a numerical reward function that it can use to learn how to clean stuff. Now imagine a baby goes into the room it’s trying to clean, or that the room has a priceless Ming vase, or frayed electricity wires. AI safety is the study of how to try to make sure that any very powerful AI systems we design are good for the world.*

Cross posted to Lesswrong and [The Good Blog](https://thegoodblog.substack.com/p/when-is-ai-safety-research-bad?s=w)

**Summary**

* AI safety research improves capability by making AIs do what humans want
* Having more capability means that AI is more likely to be deployed
* If AI safety is really hard then AI we think is safe at deployment is likely to be unsafe
* This effect is mitigated if safety failures are continuous - in this world the more total safety research done the better
* Highly theoretical AI safety research is plausibly not going to be done anyway and so adds to the total amount of safety research done
* Empirical safety research has a smaller counterfactual impact
* The effect of this could go either way depending on whether safety failures are discrete or continuous

**What do we mean by capability**

There is an argument that safety research is bad because getting a utility function which is close to one that kind, sensible humans would endorse is worse than missing completely. This argument won’t be the focus of this blog post but is well covered [here](https://reducing-suffering.org/near-miss/).
I will argue that another harm could be that safety research leads to an unsafe AI being deployed more quickly than it otherwise would be, or deployed at all when it otherwise wouldn't be.

The core of this argument is that AI safety and AI capability are not orthogonal. There are two ways capability can be understood: firstly as the sorts of things an AI system is able to do, and secondly as the ability of people to get what they want using an AI system.

Safety is very clearly not orthogonal to capability under the second definition. The key claim made by AI safety as a field is that it’s possible to get AIs which can do a lot of things but will end up doing things that are radically different from what a human principal wants them to do. Therefore improving safety improves this dimension of capability, in the sense that ideally a safer AI is less likely to cause catastrophic outcomes which presumably its principals don’t want.

It’s also plausible that AI safety and capability are not orthogonal under the first definition. The problem that value-learning approaches to AI safety are trying to solve is one of understanding what human preferences are from examples. Plausibly this requires understanding how humans work at some very deep level, which may require substantial advances in the sorts of things an AI can do. For instance, it may require a system to have a very good model of human psychology.

These two axes of capability give two different ways in which safety research can advance capabilities: firstly by improving the ability of principals to get their agents to do what they want, and secondly because doing safety research may, at least under the value learning paradigm, require improvements in some specific abilities.

**How does this affect whether we should do AI safety research or not?**

Whether or not we should do AI safety research I think depends on a few variables, at least from the perspective I’m approaching the question with.
* Is safe AI discrete or continuous
* How hard is AI safety
* What are the risk behaviours of the actors who choose to deploy AI
* How harmful or otherwise is speeding up capabilities work
* How likely is it that TAI is reached with narrow vs general systems

**How does safety interact with deployment?**

I think there are a few reasons why very powerful AI systems might not be deployed. Firstly, they might not be profitable because they have catastrophic failures. A house cleaning robot that occasionally kills babies is not a profitable house cleaning robot.[[1]](#fnibvwl6ekq6) The second reason is that people don’t want to die, and so if they think deploying an AGI will kill them they won’t deploy it.

There are two reasons why an AGI might be deployed even if the risk outweighs the reward from an impartial perspective. There are individuals having an incorrect estimation of their personal risk from the AGI. Then there are also individuals having correct estimations of the risk, but with very large - potentially unimaginably vast - externalities, like human extinction.

So we have three ways that AI safety research might increase the likelihood of a very powerful AGI being deployed. If AI systems have big discontinuities in skills, then it’s possible that AI systems, given at least some safety work, look safe until they aren’t. In this world, if none of the lower level safety research had been done, then weaker AI systems wouldn’t be profitable because they’d be killing babies while cleaning houses.

It seems very likely that AI safety research reduces existential risk conditional on AGI being deployed. We should expect the risk level acceptable to those taking that decision to be much higher than is socially optimal, because they aren’t fully accounting for the good lives missed out on due to extinction, or the lives of people in an AI enabled totalitarian nightmare state.
Therefore they’re likely to accept a higher level of risk than is socially optimal, while still only accepting risk below some threshold. If AI safety research is what takes the risk below that threshold, then AI could be deployed when the expected value is still massively negative.

Relatedly, if AGI is going to be deployed, it seems unlikely that there have been lots of major AI catastrophes. This could mean that those deploying AI underestimate their personal risk from AGI deployment. It’s unclear to me, assuming people take seriously the threat of AI risk, whether key decision makers are likely to be over or under cautious (from a self-interested perspective). On one hand, people are in general very risk averse, while on the other, individuals are very bad at thinking about low probability, high impact events.

**Value of Safety research**

If AI being safe - in the sense of not being an existential risk - is a discrete property, then there are two effects. Firstly, if AI safety is very hard then it’s likely (though not certain) that the marginal impact of AI safety research is small. The marginal impact of safety research is given by two variables: the amount by which the research increases the total amount of research done, and the amount by which that increase reduces the probability of x-risk. If we’ve only done a very small amount of research, then adding any extra research means we’ve still only done a very small amount of research, so AI is still unlikely to be safe. There’s a similar effect from doing a large amount of research - adding more research means we’ve still done a lot of research, and so it’s very likely to be safe. The large effect on the probability comes when we’ve done a medium amount of research.
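This S-curve intuition can be sketched with a toy model (entirely illustrative; the `difficulty` and `scale` parameters are made up): treat the probability that AI is safe as a logistic function of total research done, so that the marginal impact of extra research peaks at medium research levels and is small at the extremes.

```python
import math

def p_safe(research, difficulty=50.0, scale=10.0):
    """Toy model: probability that AI is safe, as a logistic
    function of total safety research done (arbitrary units)."""
    return 1.0 / (1.0 + math.exp(-(research - difficulty) / scale))

def marginal_impact(research, delta=1.0):
    """Change in P(safe) from one extra unit of research."""
    return p_safe(research + delta) - p_safe(research)

# Marginal impact is largest at medium research levels:
for r in (10, 50, 90):
    print(r, round(marginal_impact(r), 4))
```

Under this sketch, a small research community adding marginal effort matters most in the "touch and go" middle regime, which matches the argument in the paragraph above.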
![](https://39669.cdn.cke-cs.com/cgyAlfpLFBBiEjoXacnz/images/f08c6f8f023f3ec6c927ba6c78a09abb14e5100ce2bf49d8.png)

How bad this is depends on the specific way in which AI failure manifests and how discontinuous the jump is from ‘normal’ AI to x-risk threatening AI. The worst world is the one in which getting very near to safety manifests as AI being safe until there’s a jump to AGI, because in this world it’s likely that firms will be successfully building highly profitable products, meaning that they’re expecting their next, more powerful, AI system to be safe. This world seems plausible to me if there are discontinuous jumps in capabilities as AI systems improve. Alternatively, there could be certain skills or pieces of knowledge, like knowing it’s in a training environment, that dramatically increase the risks from AI but are different from the problems faced by less powerful systems.

On the other hand, if we’re in a world where it’s touch and go whether we get safe AI, and prosaic AI alignment turns out to be the correct strategy, then AI safety research looks extremely positive.

This looks different if AI safety failures are continuous. In this case any research into AI safety reduces the harms from AI going wrong. I think it’s much less clear what this looks like. Potentially a good sketch of this is [this](https://www.alignmentforum.org/posts/HBxe6wdjxK239zajf/what-failure-looks-like) blog post by Paul Christiano, where he describes AI catastrophe via Goodharting to death. Maybe the closer an AGI's or TAI's (transformative AI) values are to our own, the less harmful it is to fall prey to Goodhart's law, because the thing being maximised is sufficiently positively correlated with what we truly value that it stays correlated even in a fully optimised world. I haven’t tried to properly work this out though.
Extra research could just be additional research that wouldn’t have been done otherwise. This seems most likely to be the case for highly theoretical research that only becomes relevant to very powerful models, meaning there’s little incentive for current AI labs to do the research. This seems to most clearly fit [agent foundations](https://intelligence.org/) and [multiagent failure research](https://www.cooperativeai.com/foundation). This research has the property of applying to large numbers of different classes of models working on very different things. This means it displays strong public good properties: anyone is able to use the research without it being used up. Traditionally, markets are believed not to supply these kinds of goods.

On the other end of the scale, research is done to prevent large language models from saying racist things. There are only a very small number of firms that are able to produce commercially viable large language models, and it’s plausible you can find ways to stop these models saying racist stuff that don’t generalise very well to other types of safety problems. In this case firms capture a lot of the benefits of their research.

The (dis)value of research between these two poles depends on how useful the research is to solving pre-AGI safety problems, whether failure is discrete or continuous, and how hard the safety problem is relative to the amount of research being done.

The best case for empirical research on currently existing models being valuable is that the safety problem is relatively easy, prosaic alignment is possible, this sort of safety research doesn’t advance capabilities in the 'ability to do more stuff' sense, and failure is continuous rather than all-or-nothing.
In this world altruistic safety research would probably increase the total amount of relevant safety research done before AGI is deployed, and even if it makes AI more likely to be deployed, that safety research will still have at least some effect because failure is continuous rather than discrete.

The world where this is worst is where AI alignment is very hard but key decision makers don’t realise this, safety is discrete, and we need fundamentally new insights about the nature of agency and decision making to get safe AGI. In this world it seems likely that safety research is merely making it more likely that an unsafe AGI will be deployed. Because the problem is so hard, the safety solution we find with relatively small amounts of research is likely to be wrong, meaning that the marginal contribution to reducing x-risk is small, but there’s quite a large effect on how likely it is that unsafe AI is deployed. The best case here is that safety has a very small marginal impact because it’s replacing safety work that would be done anyway by AI companies - in this case the biggest effect is probably speeding up AI research, because these firms have more resources to devote to pure capabilities research.

The worst case for more abstract research, ignoring concerns about the difficulty of knowing that it’s relevant at all, is that it actually is relevant to nearly-but-not-quite AGI and so provides the crucial step of ensuring that these models are profitable, while also facing safety being a discrete property and AI safety being a really hard problem. This could easily be worse than the worst case for empirical alignment research, because it seems much more likely that this theoretical research wouldn’t be done by AI companies: currently this work is done (almost?) exclusively outside of industry, and it exhibits stronger public goods properties because it isn’t relevant only to firms with current access to vast amounts of compute.
**Why aren’t AI labs doing safety research already?**

If AI labs weren’t doing any AI safety research currently, this would point to at least some part of the theory that capabilities and safety aren’t orthogonal being wrong. It’s possible that safety displays strong public goods properties, which means that safety research is much less likely to be done than other sorts of capabilities research. Basically though, I think AI safety research is being done today, just not of the sort that’s particularly relevant to reducing existential risk.

Victoria Krakovna has [compiled](https://docs.google.com/spreadsheets/d/e/2PACX-1vRPiprOaC3HsCf5Tuum8bRfzYUiKLRqJmbOoC-32JorNdfyTiRRsR7Ea5eWtvsWzuxo8bjOxCG84dAg/pubhtml) a list of examples of AI systems doing the classic thing that people are worried about an AGI doing - taking some goal that humans have written down and achieving it in some way that doesn’t actually get at what humans want. The process of trying to fix these problems by making the goal more accurately capture the thing you want is a type of AI alignment research, just not the type that’s very helpful for stopping AI x-risk, and it’s highly specific to the system being developed - which is what would be predicted if more theoretical AI safety work had stronger public goods properties. [This](https://www.statnews.com/2022/02/28/sepsis-hospital-algorithms-data-shift/) article gives a really good description of harm caused by distributional shift in a medical context. Trying to fix this should, I think, be thought of as a type of AI alignment research: it’s trying to get an AI system to do what you want, and the focus is on changing behaviour rather than on making the model a better classifier when it’s inside its distribution.

**Takeaway**

I think this area is really complex, and the value of research depends on multiple factors which interact with one another in non-linear ways.
Option value considerations dictate that we continue doing AI safety research even if we’re unsure of its value, because it’s much easier to stop a research programme than to start one. However, I think it’s worthwhile trying to formalise and model the value of safety research and put some estimates on parameters. I think it’s likely that this will push us towards thinking that one style of AI safety research is better than another.

1. **[^](#fnrefibvwl6ekq6)** This line is stolen from Ben Garfinkel. You can find his excellent slides, which inspired much of this article, [here](https://docs.google.com/presentation/d/1sHA3rwTHLIxyZPQObcw8mbNo2jffswH8uYV7N5PwqZE/edit#slide=id.g6230db10d0_0_305)
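As a gesture at what such a formalisation might look like, here is a deliberately crude sketch in Python. The model structure and every parameter value are made-up illustrations, not estimates:

```python
# Toy model of the marginal value of a unit of safety research.
# All structure and numbers are illustrative assumptions.

def marginal_value(p_tractable, counterfactual, deploy_harm):
    """Expected value = chance the research helps in the worlds where the
    problem is tractable, scaled by how counterfactual the work is, minus
    the harm from making unsafe deployment more likely."""
    return p_tractable * counterfactual - deploy_harm

# Empirical work on current models: largely replaceable by labs, some
# risk of advancing capabilities.
empirical = marginal_value(p_tractable=0.5, counterfactual=0.4, deploy_harm=0.1)

# Theoretical work: highly counterfactual, but only pays off in the
# (assumed rarer) worlds it turns out to address.
theoretical = marginal_value(p_tractable=0.2, counterfactual=0.9, deploy_harm=0.02)
```

Even a toy like this makes the cruxes explicit: the ranking of the two styles flips as you move probability mass between easy-problem and hard-problem worlds, or change how replaceable you think the work is.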
[SEQ RERUN] Harder Choices Matter Less

Today's post, Harder Choices Matter Less was originally published on 29 August 2008. A summary (taken from the LW wiki):

> If a choice is hard, that means the alternatives are around equally balanced, right?

Discuss the post here (rather than in the comments to the original post).

This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was Against Modal Logics, and you can use the sequence_reruns tag or rss feed to follow the rest of the series. Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.
Effect of Numpy

In the bucket brigade singing project I talked about yesterday, the server is Python. We wrote it in a very straightforward "it won't be fast but at least it's correct" style. In simple stress testing, I found that it could handle about 10 clients, which is definitely not enough. My first thought was to switch it to C with libmicrohttpd, and for a program which is just a web interface to a circular buffer C isn't terrible. But how far can we get if we do the hard stuff in Numpy?

In the initial version there were four parts that did work in proportion to either the input or the whole buffer:

* Clearing the buffer: this is a circular buffer, and as we wrap around we want to be following the lead singer. We should not hear things from the last time around. So we need to zero out the buffer ahead of the lead person.
* Incoming audio: we receive a large block of samples from the network, and need to sum them with what's already there at the right part of the circular buffer.
* Outgoing audio: we need to send a section of the buffer out to the client to play.
* Computing metadata: We draw a debugging chart at the bottom of the page, and it needs to know what parts of the queue have audio data. We send information about which frames (128 samples) are non-zero.

If this were a large project, I would start with profiling, but this was small enough that I was pretty sure we were just going to want to convert everything to an efficient implementation. David and I pair-programmed for a couple hours and turned this naive code into this numpy-using code. The refactor brought a single request from 218ms to 74ms. This was ok, but still not great. The problem was that computing metadata was really very slow. The other three operations are in proportion to the size of a request, but the metadata one was in proportion to the size of the queue. And worse, it involved a Python loop over 128-sample frames.
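As a rough illustration of the kind of vectorisation involved (the buffer size and names here are assumptions, not the project's actual code), the incoming-audio and metadata steps might look like:

```python
import numpy as np

FRAME = 128                                        # samples per frame, as in the post
queue = np.zeros(FRAME * 512, dtype=np.float32)    # circular buffer (size assumed)

def mix_in(samples, start):
    # Sum an incoming block into the circular buffer, handling wraparound.
    idx = (start + np.arange(len(samples))) % len(queue)
    np.add.at(queue, idx, samples)

def nonzero_frames():
    # Metadata: one flag per 128-sample frame saying whether it has audio.
    return queue.reshape(-1, FRAME).any(axis=1)
```

`np.add.at` handles the wraparound indices in one call, and the per-frame metadata becomes a single reshape plus `any()` instead of a Python loop over frames.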
Since the metadata was only used to populate a debugging chart, I me
Intentional Bucket Errors

I want to illustrate a research technique that I use sometimes. (My actual motivation for writing this is to make it so that I don't feel as much like I need to defend myself when I use this technique.) I am calling it intentional bucket errors after a CFAR concept called bucket errors. Bucket errors are about noticing when multiple different concepts/questions are stored in your head as a single concept/question. Then, by noticing this, you can think about the different concepts/questions separately.

What are Intentional Bucket Errors

Bucket errors are normally thought of as a bad thing. It has "errors" right in the name. However, I want to argue that bucket errors can sometimes be useful, and you might want to consider having some bucket errors on purpose. You can do this by taking multiple different concepts and just pretending that they are all the same. This usually only works if the concepts started out sufficiently close together. Like many techniques that work by acting as though you believe something false, you should use this technique responsibly. The goal is to pretend that the concepts are the same to help you gain traction on thinking about them, but then to also be able to go back to inhabiting the world where they are actually different.

Why Use Intentional Bucket Errors

Why might you want to use intentional bucket errors? For one, maybe the concepts actually are the same, but they look different enough that you won't let yourself consider the possibility. I think this is especially likely to happen if the concepts are coming from very different fields or areas of your life. Sometimes it feels silly to draw strong connections between e.g. human rationality, AI alignment, evolution, economics, etc. but such connections can be useful.

Also I find this useful for gaining traction. There is something useful about constrained optimization for being able to start thinking about a problem.
Sometimes it is harder to say something true and useful about X
From Considerations to Probabilities

The previous post started forecasting the UK hospital peak (based on information through Dec. 21, 2021). We generated several considerations and ultimately focused on the Omicron doubling time, the peak number of cases, the current number of Omicron cases, and the seasonality. In addition to a reference class forecast based on seasonality, we assumed the case peak would be roughly governed by hospital capacity and used the calculation:

DateOfPeak = Dec. 21
    + 10 days to reach case peak (2.4-day doubling time and 4.1 doublings)
    + 9 days (case peak to hospital peak)
    + 3 days (lag of 7-day average)
    = Jan. 12th

In this lecture we'll focus on going from this point estimate to a full probability distribution. This will involve two steps:

1. Asking "what invalidating considerations could cause this forecast to be totally wrong?"
2. Asking "which numerical quantities is my forecast most sensitive to, and how uncertain am I about them?"

The motivation for this is that most uncertainty comes from either your entire estimate being structurally wrong (invalidating considerations), or from the specific numbers going into your estimate being inaccurate (numerical sensitivity). In many (most?) cases, the first form of uncertainty dominates, so it's good to check both. We'll work through both steps, then combine them into a final uncertainty estimate. At the end I've also included a Q&A with Misha Yagudin on how this approach compares with his approach to forecasting.

Part 1: Invalidating Considerations

I did the brainstorming exercise of "If the previous estimate is totally off, why is that?" I recommend that you try this exercise as well before reading what I came up with.

----------------------------------------
(whitespace to avoid spoilers)
...
...
...
----------------------------------------

Okay, here's what I came up with:

1.
If the UK cases are capped by herd immunity rather than hospital strain (17+ million cases instead of 6.7 million) 2. If
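The point estimate earlier can be reproduced with a short calculation; the case counts used here are back-solved assumptions consistent with the stated 4.1 doublings, not the post's actual inputs:

```python
from datetime import date, timedelta
import math

current_cases = 390_000     # assumed Omicron cases on Dec. 21 (illustrative)
peak_cases = 6_700_000      # capacity-governed case peak (from the post)
doubling_time = 2.4         # days per doubling

doublings = math.log2(peak_cases / current_cases)      # ~4.1
days_to_case_peak = round(doublings * doubling_time)   # ~10

hospital_peak = (date(2021, 12, 21)
                 + timedelta(days=days_to_case_peak)
                 + timedelta(days=9)    # case peak to hospital peak
                 + timedelta(days=3))   # lag of the 7-day average
```

With these inputs the calculation lands on Jan. 12, matching the point estimate above.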
Meetup : Boston Meetup

Discussion article for the meetup : Boston Meetup

WHEN: 06 September 2015 03:30:00PM (-0400)

WHERE: 98 Elm St, Apt 1, Somerville, MA (Porter Square)

—Phase 1: Arrival, greetings, unstructured conversation. This starts at 3:30; before then, Citadel residents will be busy. Looking forward to seeing you at 3:30!
—Phase 2: The headline event. This starts promptly at 4pm, and lasts 30-60 minutes.
—Phase 3: Further discussion. We'll explore the ideas raised in phase 2, often in smaller groups.
—Phase 4: Dinner.
Logical uncertainty and Mathematical uncertainty

There is a significant difference between uncertainty about mathematical truths in cases where there isn't a known procedure for checking whether a mathematical claim is true or false, versus when there is but you do not have the computational resources to carry it out. Examples of the former include the Collatz and twin prime conjectures, and examples of the latter include whether or not a given large number is a semi-prime, and what the first decimal digit of Graham's number is. The former should not be called logical uncertainty, because it is about what is true, not about what can be proved; I'll call it mathematical uncertainty instead. The latter really is uncertainty about logic, since we would know that the claim is either proved or refuted by whatever theory we used to prove the algorithm correct, and we would just be uncertain as to which one.

It's well-known that standard probability theory is a poor fit for handling logical uncertainty because it assumes that the probabilities are logically coherent, and uncertainty about what the logical coherence constraints are is exactly what we want to model; there are no possible outcomes in which the truth-value of a decidable sentence is anything other than what it actually is. But this doesn't apply to mathematical uncertainty; we could study the probability distribution over complete theories that we converge to as time goes to infinity, and reason about this probability distribution using ordinary probability theory. Possible sources of evidence about math that could be treated with ordinary probability theory include physical experiments and black-boxed human intuitions. But another important source of evidence about mathematical truth is checking examples, and this cannot be reasoned about in ordinary probability theory because each of the examples is assigned probability 1 or 0 in the limit probability distribution, since we can check it.
So just because you can reason about mathematical uncertainty
(Reinventing wheels) Maybe our world has become more people-shaped.

Let's stow our QM caps and pretend Democritus was right: Atoms turn out to be the fundamental, indivisible units of physical reality after all. Let's further pretend that some flavor of hard determinism is in play: There is nothing apart from the motions and interactions of the atoms that governs the way the Universe ambles through time. Past, perhaps, the first Planck moment of existence, where we might be initializing all of the constants we'll need, our billiard-ball universe is quite amenable to explanations at the atomic level in the language of causality. Why is Atom A here, and not there, at time t? Because it was caused to be there by the actions of the other atoms, and due to certain properties of A itself, in the time leading up to t.

In theory, this means that any level of abstraction we build up from the atomic one should preserve that ability to be described causally. But the amount of computational power we would need to actually pull that off would be staggering, far, far more than we could possibly fit within 3 pounds of grey matter. So even starting from the most deterministic possible model, as agents within the system, we don't really have the ability to directly leverage that causality. Instead, we are forced by our own limited resources to construct abstractions that are simple enough for us to reason about. These abstractions throw out a lot of detail! And when you throw away even a small amount of detail, you lose the clean isomorphism-to-reality that allows our earlier statement of causality to be preserved. When you're dealing with atoms, in the billiard ball world, you can always predict where they'll be if you have enough power; when you're dealing with approximations of atoms, you lose the "always". So not only is the map not the territory, if you lose the territory, there is no way to perfectly accurately reconstruct it from the map alone.
If you ever find yourself in that unenviable place, you'll have to make (dare I say it) aesth
A Conflict Between Longtermism and Veganism, Pick One.

[This is a cross-post from my blog, which you can find here]

The EA space is certainly a unique intersection of people from many walks of life, each with their own priorities and goals. However, an interesting contradiction arose in a recent conversation I had over dinner with friends. As I state in the conclusion, this may be read as a criticism either of longtermism or of vegetarianism/veganism, depending on your perspective.

If you are someone who subscribes to longtermism (the idea that future people hold equal moral weight compared to present people, and that we should adjust our actions to be accordingly biased toward creating future growth), then it seems to me that it would actually be non-optimal of you not to eat the most convenient/delicious/nutritious meal that you can find, whenever possible, and without much regard for animal welfare.

ANIMAL WELFARE VS HUMAN PREFERENCES

The argument goes like this: Whatever people may do to make future people better off, they will probably do more of it/do it better if they are more satisfied/happier. There are some studies on this (link, link, link) that suggest it might be a difference somewhere between 10-20%. Anecdotally, just take a look at the sometimes ludicrous lengths that tech companies go to to please their employees. This is not altruism, it's just good business.

So okay, great. We agree that happy people are more productive. Now let's consider this within the domain of diet choice.

Is veganism/vegetarianism a choice that makes people happy? Maybe for some people, but usually not in a vacuum. If you truly do enjoy eating vegetarian/vegan more than a meat-based diet on the basis of taste and convenience alone, more power to you. However, it seems that around the world, there is a strong revealed preference for people to eat more meat as it becomes more available. We can tell this by looking at the rate of meat consumption vs. GDP per capita.
Many vegetarians/vegans do so for religious or moral reasons, b
Mini advent calendar of Xrisks: Pandemics

The FHI's mini advent calendar: counting down through the big five existential risks. The fourth one is an ancient risk, still with us today: pandemics and plagues.

Pandemics

Current understanding: high
Most worrying aspect: the past evidence points to a risky future

The death rates from infectious diseases follow a power law with a very low exponent. In layman's terms: there is a reasonable possibility of a plague with an absolutely huge casualty rate. We've had close calls in the past: the Black Death killed around half the population of Europe, while Spanish influenza infected 27% of all humans and killed one in ten of those, mostly healthy young adults. All the characteristics of an ultimately deadly infection already exist in the wild: imagine anything that combined the deadliness and incubation period of AIDS with the transmissibility of the common cold. Moreover, we know that we are going to be seeing new diseases and new infections in the future: the only question is how deadly they will be. With modern global travel and transport, these diseases will spread far and wide. Against this, we have better communication and better trans-national institutions and cooperation – but these institutions could easily be overwhelmed, and countries aren't nearly as well prepared as they need to be.
making decisions as our approximately simulated selves
------------------------------------------------------

an agent should want to realize their values, and in particular should want their approximated selves — as guessed about by a smart oracle — to also make the decisions that realize their values. for example, in [newcomb's problem](https://www.readthesequences.com/Newcombs-Problem-And-Regret-Of-Rationality), you want omega's guess of you to maximize how much you actually get from the entire problem.

now, imagine that you're told that you're not the "real" you, you're the simulated you inside omega. and you're not even being simulated to a very high level of detail, you're instead an *approximated simulation* (AS). you should want to accept this, of course — just like you should want to [rule out materially acausal things even when you get a very strong intuition about them](ruling-out-intuitions-materially-acausal-intuitions.html), you should want to rule out even the possibility that anything you're perceiving is actually happening, and instead simply roll with it and say "well, i'll *definitely* one-box then".

i think this reasoning should reasonably extend to implementing your values in general, even if your values entail not caring about things that are sufficiently not moral patients *and* if the AS-you is in fact simulated at a low enough level of detail to not count as a moral patient. if you and some AS-you have to decide which one of you will experience some suffering, both of you should decide it should be AS-you — or in other words, you should have a decision theory that is ready to say "yeah, i'm okay with undergoing suffering, because i think that i'm only an AS and not the full me that my values care about". which is a perhaps unintuitive result!
but it does make sense — after all, a character in fiction can make decisions, but we don't believe it generally counts enough as a moral patient that we would effectively care if it suffers. this is a similar situation, but as if we reflected about the simulation from inside the work of fiction — and we should be the kind of agent which comes to the globally correct decision even if we notice that we're in a weird part of it, such as being inside omega's prediction or being inside fiction.
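a toy version of the newcomb arithmetic (all numbers made up) showing why the policy "one-box, even if i might be the AS" wins in expectation:

```python
# toy newcomb expected values. p and the payoffs are illustrative
# assumptions, not anything canonical.
p = 0.99                 # chance omega's approximate simulation matches your choice
big, small = 1_000_000, 1_000

# one-boxers get the big box iff omega predicted one-boxing
ev_one_box = p * big
# two-boxers always get the small box, plus the big one iff omega mispredicted
ev_two_box = small + (1 - p) * big
```

the gap only closes as p falls toward 0.5, i.e. as omega's approximation of you stops resembling you at all.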
Secure Evaluation of Knowledge Graph Merging Gain

1. Introduction
----------------

For many people and businesses most or at least a big part of their value lies in their knowledge. Often this knowledge is represented in knowledge graphs. This is true for a broad range of companies ranging from social media to medical. For any business, it is crucial to be able to buy, sell, and trade; for these businesses, it is important to be able to do so with their knowledge. However, unlike physical goods, knowledge, and in this case a knowledge graph, is copied and replicated very easily at no cost. Therefore, a Seller does not want to share his knowledge graph with a potential Buyer before having a deal, because he has no guarantee that the Buyer won’t just keep a copy of the graph when the deal is canceled. The Buyer, on the other hand, would like to integrate new information into his own knowledge graph. He does, however, not want to blindly buy a knowledge graph without knowing how much information it contains beyond what he already knows. This creates the need to privately compare two knowledge graphs, giving privacy guarantees to the Seller and enabling the Buyer to evaluate how much knowledge he can gain and therefore how much he should pay for the knowledge graph in question. Informally, the protocol needs to guarantee Seller privacy, meaning that the Buyer learns no more information about the Seller’s knowledge graph than the two agreed upon. For the Buyer, the guarantees are that the Seller learns very little about the Buyer’s knowledge graph and cannot influence the outcome. Since the quality of data can be measured in many, case-dependent ways, it is necessary that the Buyer can obtain a part of the Seller’s knowledge graph. However, it must be ensured that neither party can select what part it is, to avoid a biased selection. The main contribution of this article is the protocol for privacy-preserving knowledge graph merging gain measurement.
Its most profound benefits are:

* The ability to compute explicitly what is in the intersection of the Seller’s and Buyer’s graph, and metrics which give an indication of the amount of new information in the merger of the graphs, with a small, and analyzed, amount of exposed information shared between the parties (see [sections 4](#S4 "4. Approach ‣ Secure Evaluation of Knowledge Graph Merging Gain") and [5](#S5 "5. Protocol Analysis ‣ Secure Evaluation of Knowledge Graph Merging Gain")).
* The possibility to share a part of the knowledge graph from the Seller to the Buyer such that neither party can make a biased selection (see [section 4.4](#S4.SS4 "4.4. Step 4: Data Quality ‣ 4. Approach ‣ Secure Evaluation of Knowledge Graph Merging Gain")).
* The ability to verify that the Seller has followed the protocol as intended (see [section 4.5](#S4.SS5 "4.5. Step 5: Closing the Deal and Verification ‣ 4. Approach ‣ Secure Evaluation of Knowledge Graph Merging Gain")).
* A practically linear use of resources: time, space, and communication (see [section 6](#S6 "6. Evaluation ‣ Secure Evaluation of Knowledge Graph Merging Gain") and specifically [figs. 1](#S6.F1 "Figure 1 ‣ 6.1. Time ‣ 6. Evaluation ‣ Secure Evaluation of Knowledge Graph Merging Gain"), [2](#S6.F2 "Figure 2 ‣ 6.2. Memory ‣ 6. Evaluation ‣ Secure Evaluation of Knowledge Graph Merging Gain"), [3](#S6.F3 "Figure 3 ‣ 6.3. Communication ‣ 6. Evaluation ‣ Secure Evaluation of Knowledge Graph Merging Gain") and [4](#S6.F4 "Figure 4 ‣ 6.3. Communication ‣ 6. Evaluation ‣ Secure Evaluation of Knowledge Graph Merging Gain"))

We provide the implementation of the protocol and evaluation on github: <https://github.com/blindedwebconf/submission1688>.

2. Comparing Knowledge Graphs Without Sharing Them
---------------------------------------------------

The goal of this paper is to find a way to compare two knowledge graphs of different parties without having to show them to one another.
The goal of this comparison is for one party (‘the Buyer’) to figure out if and how much it could benefit from obtaining the knowledge graph of the other party (‘the Seller’). To achieve this, it is important to find the intersection of the two knowledge graphs. Besides, we want to be able to share specific statistics on the Seller’s knowledge graph, as well as compute metrics which indicate the potential information gain. These so-called entropy metrics are computed by the Buyer on its own graph and on a proxy for the merged graphs, ensuring the Buyer learns only a small amount of information about the Seller’s graph. The Buyer wants to get new high-quality knowledge from the acquired graph. However, what knowledge and quality mean is use-case and user specific. Hence, there is a need to provide a part of the Seller’s knowledge graph to the Buyer. As neither of the parties can be trusted to make the selection of the part of the graph to be shared, this should happen such that the selection is essentially random (i.e., beyond the control of either party). As both parties want to keep as much information private as possible, it is also crucial to find a measure for the amount of information leaked during the process. These metrics will show the amount of information leaked for each individual step of the protocol. As various parts of the protocol do not depend on each other, the user has the possibility to decide which information he is willing to compute at the cost of a certain information leak. This protocol is meant to be usable in real-world scenarios, therefore it needs to be sufficiently efficient and usable. This includes scaling to larger knowledge graphs, even if it is reasonable to assume that two major companies trying to work out a big deal would provide substantial processing power and are willing to spend more time if this ensures privacy and precise information on the possible gain.
Trusted third parties are undesirable for this protocol because they can be hard to find, and eliminating them also eliminates the risk of collusion. One possible area of use for this protocol is medical companies. In (McCusker et al., [2017](#bib.bib17)) a knowledge graph is used to find new potential drugs against metastatic cutaneous melanoma, which is an aggressive skin cancer. The knowledge graph contains drug-protein, protein-protein, and gene-disease interactions. In this case they found 25 candidates for new drugs that fulfill certain criteria when searching the knowledge graph. It is easy to see why someone would be interested in buying such a knowledge graph, and this also makes it quite clear why privacy is paramount. The knowledge graph used in this case was built from data open to the public, but medical companies invest huge sums into research, and a new cancer drug is worth a lot. Another area where this protocol is beneficial is the trade of user data, where it can be used to determine whether the Seller and Buyer have an overlapping user base and whether the types of information they have on users differ. A different, more general case is any two companies wanting to explore whether a cooperation between them would make sense. In this case they can run the protocol in both directions. This way they can figure out whether the respectively other party has different knowledge supplementing their own, which would make a cooperation beneficial. As the ideas used in this protocol still work in case one knowledge graph is empty, it is also possible for a Buyer which is new to a certain field to run this protocol with a potential Seller. This allows the Buyer to learn whether buying the Seller’s knowledge graph could be a good entry into this field or not.

### 2.1. Requirements

Before going over the steps and ideas of this protocol we will first specify its requirements. There are two aspects to this.
First, the protocol needs to be usable in a real-world scenario, and second, it has cryptographic parts which need to give certain guarantees.

Information gain: This protocol is meant to help a Buyer decide if it is beneficial to him to buy a certain knowledge graph. This means the Buyer must be able to estimate the amount of information he could gain from a merger. This creates the need to measure the differences and the similarities between the knowledge graphs.

Data quality: An important part of making the decision whether to buy a knowledge graph or not is testing its quality. While the protocol does not in itself need to test the data quality, it needs to give the Buyer the opportunity to test the data quality himself.

Correctness: Naturally one expects that the results an algorithm produces are correct. This needs to hold for this protocol as well; otherwise the results are of no use and the Buyer cannot rely on them when making his decision. Since the graphs themselves are imperfect and cryptography is involved, insignificantly small and extremely unlikely errors are tolerable.

Seller privacy: In many cases the knowledge graph is of great value to the Seller, and naturally he is not willing to give up any knowledge for free. Seller privacy means that the Buyer only learns the information which the two parties agree upon.

Buyer privacy: Similarly to Seller privacy, the Buyer does not want to reveal any information about his knowledge graph to the Seller. Additionally, he needs to be sure that the Seller did not influence the outcome of the protocol. If the Seller were able to influence the results, the Buyer could be tricked into paying too much for the knowledge graph.

Efficiency: The sizes of real-world knowledge graphs range from very small to huge ones with billions of statements. To be useful the protocol needs to be efficient and scale well with the size of the knowledge graphs it is run on.
However, because privacy is paramount and cryptography is often costly in terms of computation, the protocol is allowed to take a while for huge graphs.

3. Related Work
----------------

Knowledge graphs are generally used to represent information about the world. The name was originally coined by Google to give a meaning to search terms instead of only matching key words (Singhal, [2012](#bib.bib24)). In this work, we focus on RDF graphs (Lassila, [1999](#bib.bib16)). These graphs have nodes representing resources and directed, labeled edges indicating the relation between them. These graphs can be serialized as a set of triples (see, e.g., (Seaborne and Carothers, [2014](#bib.bib22))) called statements, where each triple (s, p, o) indicates one edge in the graph by specifying the subject s (head), predicate p (label on the edge), and object o (tail). We simplify RDF in the sense that we do not explicitly support named graphs or blank nodes. Knowledge graphs contain a certain amount of information. Measuring this information, and specifically how much information is gained by connecting two knowledge graphs together, is a major aspect of this work. Here, we were strongly influenced by the work of Sarasua et al. (Sarasua et al., [2017](#bib.bib20)), who investigate the value of links between different data sets of a knowledge graph. To do so, statistics (based on (Auer et al., [2012](#bib.bib2))), such as the net amount of links, the average number of incoming and outgoing links per data set, and the number of entities linked by a source, are calculated. In addition, other metrics based on Shannon entropy are computed for the knowledge graph with and without links between its data sets, where the difference between them serves as a metric for the information gained by these links. To the best of our knowledge these measurements, including the entropies, have not yet been investigated in a private setting.
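As a rough illustration of such entropy-based gain metrics (our own sketch, not the construction used in this paper), one can compare the Shannon entropy of the predicate distribution of a triple set before and after a merge:

```python
import math
from collections import Counter

def predicate_entropy(triples):
    """Shannon entropy (in bits) of the predicate distribution of a triple set."""
    counts = Counter(p for (_, p, _) in triples)
    total = sum(counts.values())
    return -sum(c / total * math.log2(c / total) for c in counts.values())

# Toy (s, p, o) triple sets standing in for the two parties' graphs.
buyer = {("a", "knows", "b"), ("a", "knows", "c"), ("b", "type", "Person")}
seller = {("c", "worksFor", "d"), ("a", "knows", "b")}

# A positive difference suggests the merge adds structural diversity.
gain = predicate_entropy(buyer | seller) - predicate_entropy(buyer)
```

In the actual protocol the Buyer would compute such a metric on a privacy-preserving proxy for the merged graph rather than on the Seller's plaintext triples.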
Various works have looked into data quality in knowledge graphs, but the specifics are out of scope of this work. In recent years there have been several famous security breaches where huge amounts of sensitive data were leaked, ranging from log-in data of websites to credit card details. In the current work, we do not want either party to gain more knowledge than intended through the proposed protocol, and hence a careful analysis of leaked information is needed. A common approach in the literature is to count the number of bits of ’sensitive’ information which has been leaked. In (Borders and Prakash, [2009](#bib.bib6)) the maximum number of leaked bits in outbound traffic is measured. Similarly, Backes et al. ([2009](#bib.bib3)) count the number of leaked bits where the data is within some (case-specific) logical equivalence class of sensitive data. Vavilis et al. ([2014](#bib.bib26)) weight the leaked data such that the amount is not the only factor in determining the severity of a leak; however, this weighting depends on the data and needs to be done manually for at least part of the data. A similar approach is chosen in (Vavilis et al., [2016](#bib.bib27)), which also includes the anonymity of the leaked data as a factor. In our case, though, a metric is needed that is independent of the concrete data but specific to the case of knowledge graphs. We elaborate on this in [section 5](#S5 "5. Protocol Analysis ‣ Secure Evaluation of Knowledge Graph Merging Gain"). A Bloom filter is used for fast and efficient membership tests. It is a bit array, initially all zeros, which is populated by hashing each element of a set with multiple hash functions to positions in the filter and setting those positions to 1. To test whether an element belongs to the set, it is hashed by all functions; if the Bloom filter contains a 1 in every resulting position, the element is considered a member of the set, otherwise it is not.
Because of hash collisions, false positives are possible; false negatives, however, are not. Bloom filters were first proposed by Bloom in 1970 (Bloom, [1970](#bib.bib5)). Fan et al. ([2000](#bib.bib14)) introduced counting Bloom filters, which are not just an array of bits but of small counters; a counter is increased every time an element is hashed to its position. In our case Bloom filters will mainly be used to calculate set intersections. A private set intersection protocol using Bloom filters is shown in (Nojima and Kadobayashi, [2009](#bib.bib18)). For our privacy-preserving protocol, we build on top of existing techniques, like oblivious transfer and private set intersection. Oblivious transfer was introduced by Rabin ([1981](#bib.bib19)) as a scheme that allows Bob to retrieve a secret from Alice with a chance of 50%, while Alice does not find out whether Bob was successful or not. Other variations include a scheme where Bob receives exactly one of two secrets from Alice without her knowing which one, for which a protocol has been presented in (Even et al., [1985](#bib.bib13)). This has been generalized to Bob receiving one out of n secrets. An efficient scheme for this problem was presented in (Tzeng, [2002](#bib.bib25)); it can be run in parallel to obtain a scheme for receiving k out of n secrets. A direct and efficient k out of n scheme is presented in (Chu and Tzeng, [2005](#bib.bib8)). Such protocols give Alice the guarantee that Bob cannot obtain more than k secrets, while Bob can be sure that Alice cannot learn which secrets he chose. When agencies of different countries want to find matches in each other’s criminal databases, or doctors want to find other patients with similar symptoms, they want to find the commonalities between datasets without disclosing information.
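To make the Bloom filter mechanics described above concrete, the following is a minimal sketch. The hash construction (deriving k positions from salted SHA-256 digests) and the standard sizing formulas are illustrative assumptions, not part of the implementation discussed later in the paper:

```python
import hashlib
import math

class BloomFilter:
    """Minimal Bloom filter: k hash positions in an m-bit array."""

    def __init__(self, m, k):
        self.m, self.k = m, k
        self.bits = [0] * m

    def _positions(self, item):
        # Derive k positions from salted SHA-256 digests of the item.
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h, "big") % self.m

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos] = 1

    def __contains__(self, item):
        # No false negatives; false positives possible (hash collisions).
        return all(self.bits[pos] for pos in self._positions(item))

def optimal_parameters(n, fp_rate):
    """Standard sizing: bit count m and hash count k for n elements
    and a target false-positive rate."""
    m = math.ceil(-n * math.log(fp_rate) / math.log(2) ** 2)
    k = max(1, round(m / n * math.log(2)))
    return m, k
```

Choosing the filter large enough via such formulas is what keeps the false-positive probability low, as the protocol below requires.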
Often this is reduced to the computation of a private set intersection, i.e., the computation of the intersection between sets where neither party is allowed to learn the information the other party has. Different approaches exist for this problem. For example, Freedman et al. ([2004](#bib.bib15)) proposed an approach based on polynomials (later improved in (Dachman-Soled et al., [2009](#bib.bib10))). The approach we follow in this work is based on Bloom filters, as suggested by (Nojima and Kadobayashi, [2009](#bib.bib18)). Bloom filters are data structures that allow very fast membership tests; they do, however, have the drawback of possible false positives. In this work, we use the variant based on blind signatures. Blind signatures, first introduced by Chaum ([1983](#bib.bib7)), are an important building block for many cryptographic protocols and are used in various voting and payment schemes. Their most important characteristic is that the message which is signed is obscured, or blinded; hence, the signer does not know which message he signs. Such a signature can then be publicly verified against the unblinded message. We use the blind signature scheme for RSA (Bellare et al., [2003](#bib.bib4)).

4. Approach
------------

This section gives an overview of each of the steps taken in the protocol and how they work. The protocol offers the opportunity for a Buyer to learn about the usefulness of a knowledge graph offered by a Seller. The whole protocol is divided into five steps. The first step is the initial contact, when the parties agree on parameter settings and which parts of the protocol they want to use.
During this step some simple statistics about the knowledge graphs can also be shared (but not verified). In the second step, the intersection of the graphs is determined. If the parties still want to continue after this step, a set of metrics (based on (Sarasua et al., [2017](#bib.bib20))) is computed, while still keeping the graphs secret. The outcome of these measures should give the Buyer an indication of whether the graph is interesting, but to determine the quality, actual data needs to be transferred (which could involve monetary exchange). This happens in step 4, in such a fashion that the Buyer only obtains a subset of the data while neither party can control which part that actually is. Finally, if the Buyer is convinced the data is interesting and of sufficient quality, the deal is closed in step 5 of the protocol, where all previous steps can also be verified. Note that either party can end the protocol at any stage, for instance because it has learned that the graph of the other party is not interesting after all.

### 4.1. Step 1: Initial Contact

The goal of the initial contact is to determine how the protocol is run. The Buyer states his interest and asks the Seller to start the protocol. If there is interest from both sides, the protocol starts and they negotiate which of the metrics offered by the protocol they are willing to compute. Each metric reveals something different about the Seller’s knowledge graph and has its own trade-off between information learned and potential information leaked. Hence, the parties can make a decision based on the results of the information leak analysis ([section 5](#S5 "5. Protocol Analysis ‣ Secure Evaluation of Knowledge Graph Merging Gain")) below. At this stage either party (especially the Seller) might want to share specific information about his graph. All of this information consists of aspects which can be computed from a graph without needing the other party’s data, like simple statistics.
Examples include the number of unique statements, resources, subjects, predicates, or objects, as well as in- and out-degrees, and so on (in our implementation we included 33 of these simple statistics). A further clarification can be given on the shared vocabularies used by the Seller. Finally, the Seller might consider specific information too sensitive, or too expensive, to share by chance (in step 4). In that case the parties can agree that this information is removed or anonymized in an agreed-upon fashion. Note that none of the information shared can be verified at this stage; this is only possible at the very last step of the protocol.

### 4.2. Step 2: Intersection

In this phase the Buyer learns the intersection between the two knowledge graphs, i.e., the set of statements $i$ with $i \subseteq KG_S \land i \subseteq KG_B$. The goal is the evaluation of a private set intersection (PSI) scheme which reveals no more than the intersection to the Buyer. To calculate the PSI, an approach using a Bloom filter and blind signatures is chosen, as proposed by Nojima and Kadobayashi ([2009](#bib.bib18)). First, the Seller computes a signature for each of his statements; as he is signing his own data, no blinding is needed and he simply signs it. Then, he computes a cryptographic hash of the concatenation of that signature with the original statement. The outcomes are added to the Bloom filter. When the Buyer obtains this filter later, he can test whether statements he knows are in the Seller’s graph (i.e., determine the intersection) by following the same steps. However, the Buyer needs the Seller to sign the Buyer’s statements, which are not visible to the Seller because of the blind signature approach.
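The construction just described can be sketched by combining textbook RSA blind signatures with the signature-plus-statement hashing. This is an illustrative sketch, not the paper’s Java implementation: the toy key size, the SHA-256-based hashing, the filter parameters, and the statement encoding are all assumptions made here for brevity (real keys would be at least 2048 bits):

```python
import hashlib
import math
import secrets

# Toy RSA key pair of the Seller (illustration only).
P, Q = 1000003, 1000033
N = P * Q
E = 65537
D = pow(E, -1, (P - 1) * (Q - 1))

def stmt_hash(statement):
    """Map a statement to an integer modulo N."""
    return int.from_bytes(hashlib.sha256(statement.encode()).digest(), "big") % N

def blind(m):
    """Buyer blinds m with a random r, so the Seller cannot see what he signs."""
    while True:
        r = secrets.randbelow(N - 2) + 2
        if math.gcd(r, N) == 1:
            return (m * pow(r, E, N)) % N, r

def sign(m):
    """Seller signs a (possibly blinded) value with his private key."""
    return pow(m, D, N)

def unblind(sig, r):
    """Buyer removes the blinding factor: (m^d * r) * r^-1 = m^d mod N."""
    return (sig * pow(r, -1, N)) % N

def bloom_positions(statement, signature, m_bits=4096, k=5):
    """Hash signature||statement to k Bloom filter positions."""
    base = f"{signature}|{statement}"
    return {int.from_bytes(hashlib.sha256(f"{i}:{base}".encode()).digest(), "big") % m_bits
            for i in range(k)}

def seller_filter(statements, m_bits=4096):
    """The Seller signs his own statements directly and fills the filter."""
    bits = [0] * m_bits
    for s in statements:
        for pos in bloom_positions(s, sign(stmt_hash(s)), m_bits):
            bits[pos] = 1
    return bits

def buyer_intersection(bits, statements, m_bits=4096):
    """The Buyer obtains signatures via blinding and tests membership."""
    result = set()
    for s in statements:
        blinded, r = blind(stmt_hash(s))
        sig = unblind(sign(blinded), r)  # valid signature; Seller never saw s
        if all(bits[pos] for pos in bloom_positions(s, sig, m_bits)):
            result.add(s)
    return result
```

Note how the blinding gives both guarantees at once: the Seller never sees the Buyer’s statements, and the Buyer can test at most as many statements as the Seller is willing to sign.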
The blind signatures prevent the Buyer from testing an unlimited number of guesses against the Bloom filter, as all of them have to be signed off by the Seller, who will only sign a limited number (as agreed in step 1). The secure hash function further prevents the Buyer from reverse engineering statements from the Bloom filter. The Seller sends the Bloom filter representation of his knowledge graph to the Buyer only after he has signed all blind signatures; he could also insist on doing the signing before sharing statistics. This is to prevent cases where the Buyer would attempt to make informed guesses after he has learned information about the Seller’s graph. One issue remains, however: the Bloom filter might return true for the membership test of statements not in the Seller’s graph (i.e., yield false positives). Therefore, it is important to choose a good filter size and number of hash functions when creating the Bloom filter to keep the false positive rate low. If both parties deem it necessary, the false positive rate could also be kept high on purpose, to give an idea of the intersection while still maintaining uncertainty for every positively verified statement. This can be achieved by choosing a smaller Bloom filter, by using more hash functions, or by randomly adding ones to the Bloom filter.

### 4.3. Step 3: Entropy Metrics

After determining the intersection in the previous step, the goal of this step is to calculate metrics based on Shannon entropy (Shannon, [2001](#bib.bib23)) to estimate the amount of information which can be gained from a merger of the two knowledge graphs.
These metrics were proposed in the work of Sarasua et al. ([2017](#bib.bib20)). The core idea is to contrast the entropy of a multi-set derived from the graphs of multiple sources (i.e., the merged graph) with the entropy of a similar multi-set derived from just one source. One example of such a metric is the description of subjects (Desc.) of the graph; in that case the multi-set consists of every predicate-object combination in the knowledge graph used to describe a subject. We implemented and analyzed this entropy metric computation for 9 different multi-set choices; the selection would in practice be made in step 1. As we will discuss below, the calculation of the intersection is necessary for the correct calculation of the entropy metrics and therefore cannot be skipped, unless one chooses to not calculate any entropy metric either, or accepts an error while computing them. For our privacy-preserving setting, computing a metric for the single graphs is straightforward, as no data has to be shared. However, computing it for the merged graph is not obvious, as the data cannot be shared. What we will establish in this step is that there is no need to share the actual data: it is sufficient that the Buyer knows the counts of elements in the merged multi-set, without knowing the elements themselves. The approach used is very similar to the one of the previous step, except that we use a counting Bloom filter instead of a normal one. A counting Bloom filter differs from a normal one in that it keeps the count of the elements added to it, instead of only remembering their existence. So, for each entropy metric separately, a counting Bloom filter is sent from the Seller to the Buyer, containing the counts for the given hash outcomes. The Buyer then uses the blind signatures as before to obtain the hashes.
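The non-private core of this computation, contrasting the entropy of the merged multi-set with that of the Buyer’s own multi-set, can be sketched directly on counts. In the protocol these counts come from the counting Bloom filter rather than from plain data; the example multi-set elements below are hypothetical:

```python
import math
from collections import Counter

def shannon_entropy(counts):
    """Shannon entropy (in bits) of a multi-set given by its element counts."""
    total = sum(counts.values())
    return -sum(c / total * math.log2(c / total) for c in counts.values())

def merge_gain(buyer_multiset, seller_multiset, intersection):
    """Entropy of the merged multi-set minus the Buyer's own entropy.

    The intersection is subtracted once so that shared elements are not
    double-counted, mirroring the compensation used in step 3.
    """
    merged = Counter(buyer_multiset) + Counter(seller_multiset) - Counter(intersection)
    return shannon_entropy(merged) - shannon_entropy(Counter(buyer_multiset))
```

A positive difference indicates that merging would add information with respect to the chosen multi-set, which is exactly what the Buyer wants to estimate.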
Since the goal is to obtain this entropy metric for the merged graphs, it is important at this stage to compensate for the intersection learned in step 2, as otherwise elements in the multi-set will be double-counted. If the Buyer adds his own data (excluding the intersection) to the Bloom filter as well, the needed element counts can be obtained directly from the Bloom filter (again noting that no actual element information is needed, nor can be retrieved). The difference between the entropy obtained from the Bloom filter (i.e., the entropy for the merged multi-sets) and the entropy of the Buyer’s multi-set gives an indication of the gain from merging the graphs. Calculating an entropy for the combined knowledge graphs in this way can lead to a small error. This error can originate from the false positive probability of the Bloom filters used in steps 2 and 3. Due to the former, too many of the Buyer’s statements might be considered part of the intersection, leading to an over-adjustment. A second source of error is the false positive probability of the counting Bloom filter, which can lead to elements of the Buyer being matched with elements of the Seller that are not the same. However, choosing small enough false positive probabilities by making the Bloom filters big enough minimizes this problem. Further possible corrections, not implemented in our work because we could choose the Bloom filters large enough, are discussed by Cochez ([2018](#bib.bib9)). For privacy reasons (which are discussed in detail in [section 5](#S5 "5. Protocol Analysis ‣ Secure Evaluation of Knowledge Graph Merging Gain")) the blind signatures for the Buyer must be handed out all at once and before the Seller sends over any of his (counting) Bloom filters, including the one from step 2. This means that all blind signatures for all entropy metrics and for the intersection are already computed in the previous step. After both steps 2 and 3 are completed, the Buyer evaluates their outcome and decides whether he is still interested in the Seller’s knowledge graph. If the values from steps 2 and 3 are promising, both parties can proceed with step 4. Otherwise the Buyer concludes that he cannot profit enough from buying the Seller’s knowledge graph and the protocol ends with no deal between the two of them.

### 4.4. Step 4: Data Quality

From the last two steps, the Buyer got a rough idea of how much he could benefit from a merger of the two knowledge graphs. The next important step is to ensure that the data is of acceptable quality. This is very important, as an adverse Seller could have put low-quality or even nonsense statements into his graph to increase the values obtained in the previous steps. However, determining the quality of the data is highly case-specific and hence can only be done by looking at actual data. Here, we assume that it is possible to determine the quality of the overall graph by obtaining a subset of the statements in the graph. We assume that a deterministic procedure exists to partition the statements into n equally meaningful sets and that obtaining k of them at random makes quality checking possible; the actual check is out of scope of the current work. Equally meaningful sets should be similar in size and representative (if possible) of the knowledge graph. Ideally they are also similar in monetary worth. For our later evaluation we used both random partitions and other strategies which increase the likelihood that all statements concerning a given resource are within the same partition.
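One simple deterministic partitioning that keeps all statements about a resource together is hashing each statement’s subject into one of n buckets. This is only an illustrative stand-in; it is not the balanced clustering strategy used in the evaluation, and it does not guarantee equally sized partitions:

```python
import hashlib

def partition_statements(statements, n_parts):
    """Deterministically split (s, p, o) triples into n_parts sets.

    Hashing only the subject keeps all statements about a resource in the
    same partition, and both parties can reproduce the split independently.
    """
    parts = [[] for _ in range(n_parts)]
    for s, p, o in statements:
        idx = int.from_bytes(hashlib.sha256(s.encode()).digest(), "big") % n_parts
        parts[idx].append((s, p, o))
    return parts
```

Because the split depends only on the data itself, neither party can steer which statements end up in the partitions the Buyer later obtains at random.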
After the Seller’s statements are partitioned into n sets, we use a *buy k out of n secrets* (k oo n) oblivious transfer (OT) protocol, such that the Buyer obtains k randomly selected parts of the knowledge graph for a certain predetermined price, after which he can investigate the quality at his own discretion. Importantly, this procedure ensures that neither party is able to determine beforehand which statements will be shared, implying that neither the Seller nor the Buyer can opt to include data of their choosing into what is shared. Tampering at this stage would still be possible (for example, attempting to include only good data in the partitions), but this will be found out during the verification in step 5. If it turns out that the graph is not of sufficient quality or does not contain real information, the protocol may end here. Otherwise, it continues with the final step.

### 4.5. Step 5: Closing the Deal and Verification

At this point the Buyer is sure that he wants to buy the knowledge graph. Both parties negotiate a price for it, where the Buyer can use the results of the previous steps to judge how much the graph is worth to him; obviously, this negotiation is not part of the protocol. Assuming they reach an agreement, they sign a contract and neither party can back out anymore. This contract should contain specific clauses on what happens if it is not possible to verify all steps of the protocol, i.e., what happens if the Seller did not follow the protocol as expected. It is also possible, but not required, to involve a third party at this stage of the protocol to perform the validation. After this, the Seller sends over his knowledge graph in plain text together with all private information he may have used for encryption, random number generation, etc. during the protocol. Based on this, the Buyer can rerun steps 2-4 on his own to make sure that the Seller has not cheated.
Naturally, this only serves to detect a malicious Seller; a curious Seller will not be exposed by it. The procedure for verifying the computations is rather straightforward, as it can repeat exactly the same steps the protocol took: if all data is available, one can rerun the complete protocol without a second party. However, as the privacy aspect is no longer important, some specific steps like encryptions can be skipped, reducing the computational burden during verification. For example, if one accepts to ignore small deviations of values, the computations using Bloom filters can be executed using normal sets and multi-sets; in that case there is not even a need to obtain the private key used for the blind signatures from the Seller. Another example is the verification of the oblivious transfer, which can be sped up if the Seller sends all keys used in the process to the Buyer. This step completes the protocol.

### 4.6. Implementation

As a proof of concept and in order to evaluate the protocol, we implemented all steps in Java. There is a program entry point for both the Seller and the Buyer. Even though most data is already encrypted in some form (e.g., the Bloom filters), we use TLS (Dierks and Rescorla, [2008](#bib.bib11)) to secure the communication, so that we do not have to perform a specific analysis for external eavesdroppers. The applications read in their RDF graphs, after which the protocol starts. In interactive mode, the application asks after each major protocol step whether the user wants to continue. Besides, the user has the possibility to select specifically which entropy metrics to compute. For the computation of blind signatures we used RSA, as described in [section 3](#S3 "3. Related Work ‣ Secure Evaluation of Knowledge Graph Merging Gain"). The Seller needs to sign a lot of statements, so we parallelized this step to use all available cores.
Because we know the number of statements which will be added to the Bloom filter, we can choose its size such that false positives rarely occur. During the data quality step, several partitioning methods can be chosen. For our later evaluation we defaulted to what we called balanced DBSCAN, which performs a graph clustering over the nodes while attempting to keep the sets in the partitioning roughly of the same size. The specifics of this are not important for the protocol itself. We base the oblivious transfer implementation on the work of Chu and Tzeng ([2005](#bib.bib8)) but make use of RSA instead of ElGamal. We do not directly feed the graph partitions into the oblivious transfer messages. Instead, the partitions are encrypted (using cipher block chaining, CBC (Dworkin, [2001](#bib.bib12))) and the encryption keys are used in the messages. One benefit of this is that the Buyer cannot choose to take partitions which look large, hoping to get more data. At the end of the protocol, the Seller sends over the knowledge graph and all data needed for the verification, which is then performed by the Buyer.

5. Protocol Analysis
---------------------

The goal of this protocol is to evaluate the information gain that could be achieved when merging two knowledge graphs. It is crucial to keep as much information as possible private, i.e., until there is an agreement on sharing parts of the graph or selling it, there should be as little information shared as possible. This means that the Buyer learns only the agreed-upon information about the Seller’s knowledge graph. At the same time, the Seller should not learn anything about the Buyer’s knowledge graph. This section analyzes the safety of the protocol and which information gets leaked about either party. In our analysis, we take the perspective of different types of adversaries.
A non-participating adversary is not part of the protocol, but tries to learn something about the participants by attacking or influencing the protocol and its result. We neglect further analysis of this type, as such adversaries would either have to find a weakness in one of the active parties’ systems, which is not part of the protocol, or eavesdrop on or attack the communication, which is part of the protocol but can be countered by classical cryptography and is not the focus of this paper. Hence, we only look into participating adversaries (i.e., the Seller and Buyer), as follows.

* We call either of the participating parties *Fair* if they are neither semi-honest/curious nor malicious. This means they have no bad intentions and stick to the protocol.
* A semi-honest *Curious* adversary is one of the participants of the protocol and follows it strictly. However, he is curious, i.e., tries to get as much information as possible, for example by combining information obtained from different steps.
* A *Malicious* adversary also takes part in the protocol, but aggressively attempts to gain information. This could be by faking input, skipping steps, or ending the protocol early. He does not care about being detected. He is also curious.

Because only two parties participate in the protocol and it runs without a trusted third party, collusion of any kind is impossible. A participating party does not gain any power by colluding with a non-participating adversary, as it could behave like one itself.

### 5.1. Information Leak Metrics

To quantify how much information can (potentially) be leaked, we define the following metrics. *ILStatements* is the number of parts of statements about which information is learned; learning a complete statement increases the number by 3 (one for each of subject, predicate, and object).
*ILResources*, *ILSubjects*, *ILPredicates*, and *ILObjects* count the number of resources, subjects, predicates, and objects about which information is leaked, respectively. These metrics are increased if it is discovered that an item is or is not contained in the other party’s knowledge graph; they are also increased if it is found out how often the item occurs. *ILStructural* indicates the number of pieces of structural information about the knowledge graph that are learned. Such information is, for example, the size of the knowledge graph (number of statements), the number of different resources, subjects, predicates, objects, or literals, or the average number of in- and outgoing links per node. *ILAmount* is the total amount of information shared. Note that the leaking of information can happen in either direction. Hence, we subscript these metrics with $B \rightarrow S$ (e.g., $ILAmount_{B \rightarrow S}$) to indicate information leaked about the Buyer’s knowledge graph which the Seller gains, and vice versa with $S \rightarrow B$. Further, we write $X_{S \rightarrow B}$ or $X_{B \rightarrow S}$ to denote all leak metrics in the specified direction. Whether a metric is important is case-specific.
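These tallies can be tracked per direction with a simple counter structure. The following sketch is a hypothetical bookkeeping aid, not part of the protocol; in particular, the exact accounting of *ILAmount* per event is simplified here (one unit per leaked piece):

```python
from dataclasses import dataclass

@dataclass
class LeakMetrics:
    """Per-direction information-leak tallies (e.g., one instance for B -> S)."""
    il_statements: int = 0
    il_subjects: int = 0
    il_predicates: int = 0
    il_objects: int = 0
    il_structural: int = 0
    il_amount: int = 0

    def learn_statement(self):
        # A fully learned statement leaks its subject, predicate, and object.
        self.il_statements += 3
        self.il_subjects += 1
        self.il_predicates += 1
        self.il_objects += 1
        self.il_amount += 3

    def learn_structural(self, pieces=1):
        # E.g., learning the graph size, or the number of distinct predicates.
        self.il_structural += pieces
        self.il_amount += pieces
```

Keeping one such instance per direction makes the per-step bookkeeping in the analysis below mechanical.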
For example, if the parties have shared their ontologies beforehand, it might not matter much that $ILPredicates$ or $ILStructural$ is high. Further, as discussed above, there could be cases where these numbers are not significant, for example when the value of the graph is contained in a small subset of the statements. In our analysis, each increase of a metric also increases the respective $ILAmount$ metric; we will not mention that explicitly. Also, in some odd corner cases, like when the Seller reveals that he has only one statement, the Buyer also learns that there is only one subject, predicate, and object. Similarly, if it is learned that there is only one subject, knowing the number of statements gives its frequency. We ignore these corner cases as they are only of theoretical interest.

### 5.2. Step 1: Initial Contact

In the first step, it is immediately clear what is leaked, namely all intentionally shared statistics. Each of these adds 1 to the $ILStructural_{S \rightarrow B}$ metric. In our implementation we have altogether 33 of these statistics, and we continue the analysis assuming all of them are shared. Note that even if it is decided that not all statistics are computed and shared, a non-fair Buyer might be able to compute some of them from the results of other ones. Neither the Buyer nor his knowledge graph is involved in this step, and hence all his metrics stay zero.
The discussion about which parts of the protocol are to be run might give insight into which type of information is considered important by either party, but this is not part of the protocol itself and depends on the users’ negotiations.

### 5.3. Step 2: Intersection

In this step information about the knowledge graphs is learned. The step can be broken down into sub-steps, the first of which is the Seller filling the Bloom filter. This only involves the Seller, so no information is leaked. In the second sub-step the Buyer obtains blind signatures for his statements; this does not involve the Seller’s knowledge graph, so all $X_{S \rightarrow B}$ are zero. However, the Seller knows how many signatures he gives, and therefore a curious Seller learns how many statements the Buyer’s knowledge graph contains. Hence, $ILStructural_{B \rightarrow S} = 1$. (If this is considered an issue, the Buyer can choose to get either more or fewer statements signed; as these are blind signatures, the Seller cannot notice this anyway.) The remaining metrics remain zero, as the Seller does not learn what he signs (Schröder and Unruh, [2012](#bib.bib21)). In the final sub-step the Seller sends over the Bloom filter to the Buyer, who tests it for the intersection. The Buyer leaks nothing in this sub-step, but will gain new information. A fair Buyer will learn the intersection and the subject, predicate, and object of every statement in it.
Hence, assuming $n$ statements in the intersection (we consider the predicate as a potential additional resource here), $ILStatements_{S \rightarrow B} = 3n$, $ILResources_{S \rightarrow B} \leq 3n$, and $ILSubjects_{S \rightarrow B} = ILPredicates_{S \rightarrow B} = ILObjects_{S \rightarrow B} \leq n$. However, he learns nothing general about the graph, so $ILStructural = 0$. A curious Buyer can learn the size of the Seller’s knowledge graph by counting the entries in the Bloom filter. Also, if the intersection is large, he might deduce other structural information (like the number of resources or the average number of in- or outgoing links per node) with high confidence.
This results in $ILStructural_{S\rightarrow B}\geq 1$. The remaining metrics stay unchanged. In the theoretical worst case, the Seller's knowledge graph is a subset of the Buyer's and the whole graph is disclosed; in that case, however, the data would not have had any value for the Buyer in the first place. Because the Bloom filter stores cryptographically secure hashes of the signatures, the Buyer cannot reverse-engineer the remaining statements from it. Finally, the Buyer can also behave maliciously. The only way this lets him gain new information is by pretending to have certain statements which he does not. In that case, he can obtain blind signatures for them and verify these against the Bloom filter; a positive test tells him that the Seller has these statements. There are several countermeasures, and several factors reduce the severity of this leakage. First, it is generally not easy to guess statements without any information about the graph (experiments on guessing statements with real graphs will be provided in the supplementary material), so the Seller can protect himself by not sharing too much information in step 1. If there are still easily guessable statements of high value, they can be removed or blinded, as agreed in step 1. Moreover, this intersection step happens early in the protocol, so information gained later, on which guesses could be based, cannot be used within the same run of the protocol. Second, the Buyer only has a limited number of guesses: for each guess he needs the Seller to sign the statement, and the Seller will obviously only do that a limited number of times.
Third, the Seller could choose to provide a Bloom filter with a high false-positive rate (for a filter of $m$ bits with $k$ hash functions and $n$ inserted elements, the false-positive probability is approximately $(1-e^{-kn/m})^{k}$, and can thus be tuned via $m$ and $k$). Then, even if a statement is confirmed, the Buyer cannot be certain it is in the Seller's graph. This obviously also affects the correctness of the intersection. Last, if the above protections are not sufficient, the Seller can choose not to run this step at all; the price to pay is that the error correction in the next step becomes impossible.

### 5.4. Step 3: Entropy Metrics

The analysis of leaks while performing an entropy-metric step is very similar to the previous one. However, the information leaked depends on the specific metric computed, and hence each metric has its own leaks. For the example metric described above (*'Desc'* in [section 4.3](#S4.SS3 "4.3. Step 3: Entropy metrics ‣ 4. Approach ‣ Secure Evaluation of Knowledge Graph Merging Gain")), the information which the Buyer might learn from the Seller's graph is summarized in [table 1](#S5.T1 "Table 1 ‣ 5.4. Step 3: Entropy Metrics ‣ 5. Protocol Analysis ‣ Secure Evaluation of Knowledge Graph Merging Gain"). In the other direction, only one piece of structural information can be recovered by a curious or malicious adversary. (We analyzed all 9 metrics for both parties, but leave the analyses out in the interest of space and conciseness; they will be provided as supplementary material.)
| Seller side information leaks: Desc | Fair Buyer | Curious Buyer | Malicious Buyer |
| --- | --- | --- | --- |
| ILAmount | $2$ | $\leq 2i_{s}+e_{b}+3$ | $\leq 2i_{s}+e_{b}+3$ |
| ILStatements | $0$ | $\leq 2i_{s}$ | $\leq 2i_{s}$ |
| ILResources | $0$ | $\leq\min(2e_{s},2e_{b})$ | $\leq\min(2e_{s},2e_{b})$ |
| ILSubjects | $0$ | $0$ | $0$ |
| ILPredicates | $0$ | $\leq\min(e_{s},e_{b})$ | $\leq\min(e_{s},e_{b})$ |
| ILObjects | $0$ | $\leq\min(e_{s},e_{b})$ | $\leq\min(e_{s},e_{b})$ |
| ILStructural | $2$ | $3$ | $3$ |

Table 1. Seller-side information leak during *Desc* entropy calculation. The cardinalities of the multi-sets are denoted by $i_{s}$ and $i_{b}$; $e_{s}$ and $e_{b}$ are the numbers of unique elements in the Seller's and the Buyer's multiset, respectively.
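To make the kind of computation behind such a metric concrete, here is a generic sketch: the Shannon entropy of the merged multiset, computed from per-element counts alone. The actual *Desc* metric and the counting-Bloom-filter exchange are as defined earlier in the paper; the plain SHA-256 keys below merely stand in for the protocol's blind signatures and are purely illustrative.

```python
import hashlib
import math
from collections import Counter

def hashed_counts(multiset):
    """Per-element counts keyed by a hash of the element rather than the element
    itself. In the actual protocol these keys are blind signatures, so the
    parties cannot link them to plaintext elements the way we can here."""
    return Counter(hashlib.sha256(x.encode()).hexdigest() for x in multiset)

def merged_entropy(counts_a, counts_b):
    """Shannon entropy (in bits) of the merged multiset, computed from the two
    count maps alone; neither party needs the other's plaintext elements."""
    merged = counts_a + counts_b            # Counter addition merges the counts
    total = sum(merged.values())
    return -sum(c / total * math.log2(c / total) for c in merged.values())

seller_multiset = ["p1", "p1", "p2"]        # e.g. predicates of the Seller's statements
buyer_multiset = ["p2", "p3"]
H = merged_entropy(hashed_counts(seller_multiset), hashed_counts(buyer_multiset))
```

On the toy multisets the merged counts are {p1: 2, p2: 2, p3: 1}, giving an entropy of about 1.52 bits.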
### 5.5. Step 4: Data Quality

The only sub-step of the data quality step where there is interaction between the parties is when the Buyer obtains a subset of the partitioning using oblivious transfer; in no other sub-step is information leaked. Further, because the whole oblivious transfer step is independent of the Buyer's knowledge graph, there cannot be any Buyer-side information leaks in this sub-step either. The Buyer gets complete knowledge of those parts which are exposed by the transfer, but nothing more (Chu and Tzeng, [2005](#bib.bib8)). (A detailed analysis is part of the supplementary material.) Depending on the knowledge graph and the partitioning strategy, it is possible for a curious Buyer to learn more structural information by assuming that statistics for the parts obtained are also valid for the complete graph. A malicious Buyer cannot do more than a curious one. As these speculations are very unpredictable and case-specific, and can be partially prevented by choosing a suitable partitioning strategy, we do not analyze them further.

### 5.6. Step 5: Closing the Deal

In this final step, privacy is no longer an issue for the Seller's graph. During verification the Seller is not involved, so there will be no leaks in that direction.
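Before moving to the evaluation, the oblivious-transfer primitive underlying step 4 can also be sketched. The paper uses a 1-out-of-n scheme (Chu and Tzeng, 2005); for brevity the sketch below uses the classic Even-Goldreich-Lempel 1-out-of-2 construction instead, with a toy RSA key and illustrative messages.

```python
import random

# Toy RSA key pair for the Seller/Sender (illustrative sizes only).
P, Q = 1000003, 1000033
N, E = P * Q, 65537
D = pow(E, -1, (P - 1) * (Q - 1))

def ot_1of2(m0: int, m1: int, choice: int) -> int:
    """Even-Goldreich-Lempel 1-out-of-2 oblivious transfer, run in a single
    process for illustration. The Sender learns nothing about `choice`; the
    Receiver learns only the chosen message (messages must be < N)."""
    # Sender: publish two random values.
    x = [random.randrange(N), random.randrange(N)]
    # Receiver: blind the chosen value with a random k.
    k = random.randrange(N)
    v = (x[choice] + pow(k, E, N)) % N
    # Sender: derive one masking key per message; exactly one equals k.
    k0 = pow((v - x[0]) % N, D, N)
    k1 = pow((v - x[1]) % N, D, N)
    c0, c1 = (m0 + k0) % N, (m1 + k1) % N
    # Receiver: unmask the chosen ciphertext.
    return ((c0, c1)[choice] - k) % N

part_keys = [424242, 737373]    # stand-ins for keys decrypting two graph parts
recovered = ot_1of2(part_keys[0], part_keys[1], choice=1)
```

The Receiver can only remove the mask from the ciphertext whose key equals his blinding factor $k$, so exactly one part is revealed per run.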
6. Evaluation
--------------

The analysis in the previous section was aimed at finding out what information is leaked in which step, so that the Seller and Buyer can make an informed decision about whether they want to perform these steps. In this section, we show the results of experiments on realistic data sets, which indicate that the proposed protocol can be used in practice. We run the complete protocol on knowledge graphs of different sizes and measure the run time, memory usage, and communication overhead. Note that we cannot show experimentally that the approach has no security issues, as this would require exhaustive experimentation, which is infeasible.

**Data set.** The data used for this evaluation is taken from the ReDrugS knowledge graph used in (McCusker et al., [2017](#bib.bib17)). The knowledge graph contains information about drug-protein, protein-protein, and gene-disease interactions. In this evaluation, we unify the thousands of named graphs into one default graph. Next, we remove all statements containing blank nodes. Then, a partition of this knowledge graph is created such that each part contains roughly the same number of statements and its internal sub-graph is connected (this is done using a balanced DBSCAN-like clustering; see the implementation for details). The graphs for the Seller and Buyer are created by merging randomly selected sets from the partition, aiming at certain target sizes. We start with around 90,000 statements and increase in steps of 70,000 until we reach 400,000; then we increase by 400,000 until we reach 2,800,000. Note that the sizes of the graphs are not exactly rounded to these numbers. Further, in our experiments we pair a Seller and a Buyer knowledge graph of roughly the same size.
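The construction of these test graphs can be sketched roughly as follows (an illustrative reconstruction, not the actual evaluation code; the part sizes and the target are made up):

```python
import random

def build_test_graph(parts, target_statements, seed=0):
    """Merge randomly chosen partition parts until roughly `target_statements`
    statements are collected. Illustrative only: the real setup additionally
    keeps each part's internal sub-graph connected."""
    rng = random.Random(seed)
    order = rng.sample(range(len(parts)), len(parts))   # random part order
    graph, size = [], 0
    for i in order:
        if size >= target_statements:
            break
        graph.extend(parts[i])
        size += len(parts[i])
    return graph

# 50 parts of 100 synthetic statements each
parts = [[f"s{i}-{j}" for j in range(100)] for i in range(50)]
g = build_test_graph(parts, target_statements=900)
```

Because parts are merged whole, the resulting size only approximates the target, which matches the remark above that the graph sizes are not exactly rounded.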
For our reporting we use the average size of the Buyer and Seller graphs.

**Run time environment.** We run the experiments on two machines, one for each party. Each machine has an Intel(R) Xeon(R) CPU E5-2640 v4 at 2.40 GHz with 20 cores. The Java processes are limited to at most 50 GB of RAM. Both machines are connected to the same gigabit switch.

### 6.1. Time

The outcome of the timing measurements can be found in [fig. 1](#S6.F1 "Figure 1 ‣ 6.1. Time ‣ 6. Evaluation ‣ Secure Evaluation of Knowledge Graph Merging Gain"). It illustrates the time needed for each step of the protocol; it also includes 'Sending', i.e., the time used to send the knowledge graph and the used keys from the Seller to the Buyer at the beginning of the fifth step. It shows how the different steps of the protocol behave for differently sized graphs. There are several observations:

1. The timing is dominated by the intersection, entropy metrics, and verification steps. This is largely as expected, as these are the parts which require many encryption operations. Overall, the time is mainly consumed by computing blind signatures; it takes about 10 seconds to generate 10,000 blind signatures.
2. The time needed grows linearly with the size of the knowledge graph, as also indicated by the black dotted linear fit. This is expected, as most of the steps have an expected linear behavior. The only exception is the oblivious transfer, which contains a step that in our implementation scales quadratically in the number of parts of the partition. However, even looking at the OT step in isolation, we only observe linear behavior; clearly, this quadratic step has no significant influence in practice.
3. The time for verification of the complete protocol is about half of what is needed to run the protocol itself.
4. For a graph containing close to 3 million statements, the time required is about 7.5 hours.
This time is definitely acceptable, as this type of acquisition would usually take several weeks to months anyway.

![Refer to caption](/html/2103.00082/assets/Pictures/timingsStacked2_crop.png)

Figure 1. Run time as a function of the average number of statements.

### 6.2. Memory

To determine the memory required by the protocol, the peak amount of RAM used was tracked for different numbers of statements. Note that we use a garbage-collected JVM, and hence the memory usage can be rather noisy. The measurements are summarized in [fig. 2](#S6.F2 "Figure 2 ‣ 6.2. Memory ‣ 6. Evaluation ‣ Secure Evaluation of Knowledge Graph Merging Gain"). The theoretical expectation is that the memory demand depends linearly on the number of statements. We observe the following:

1. The Buyer always needs more memory than the Seller.
2. The memory usage can be explained by a linear trend combined with garbage collector behavior.
3. The amount of memory needed is significant, meaning that our implementation currently cannot be run on a laptop with low specifications. However, companies interested in using this protocol typically have access to large servers anyway, or could rent them from a cloud service provider.

More detailed testing showed that the intersection and entropy metrics steps take the most memory. In our implementation, for the Seller, a significant amount of memory is also used in the partition step; the cause is that we duplicate all data when creating the partition, although this is not strictly necessary. Finally, the verification also uses a lot of memory, but not as much as the private computations.

![Refer to caption](/html/2103.00082/assets/x1.png)

Figure 2. Amount of memory needed by the protocol as a function of the average number of statements.

### 6.3. Communication

The communication measured is the traffic generated by the protocol; the measurements do not include the overhead of the lower network stack (e.g., extra traffic caused by TLS or TCP). We show the traffic going from Seller to Buyer in [fig. 3](#S6.F3 "Figure 3 ‣ 6.3. Communication ‣ 6. Evaluation ‣ Secure Evaluation of Knowledge Graph Merging Gain") and in the other direction in [fig. 4](#S6.F4 "Figure 4 ‣ 6.3. Communication ‣ 6. Evaluation ‣ Secure Evaluation of Knowledge Graph Merging Gain"). The verification step does not require any interaction between the two parties. We observe the following:

1. The communication of each step appears to scale linearly with the number of statements, for both the Seller and the Buyer; only the statistics step is constant. The growth is roughly 0.47 KB per statement from Buyer to Seller and 0.73 KB in the reverse direction.
2. The traffic from the Buyer to the Seller is dominated by the intersection and entropy metrics steps. This is expected, as the other steps have either a very small overhead (e.g., the keys for the oblivious transfer, linear in the number of parts) or a constant one (e.g., statistics).
3. Similarly, the traffic from Seller to Buyer is also dominated by these two steps. This is expected, as all blinded statements signed by the Seller also have to be sent back. For the Seller, the oblivious transfer also takes significant communication, as he needs to send the complete encrypted graph to the Buyer. Finally, the Seller has to send the actual knowledge graph in the end, which is also a significant amount of data.

Running experiments on the same data without any security measures reveals that the amount of communication is 4.67 times larger with our privacy-preserving measures.
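As a back-of-envelope check of these per-statement rates (the graph size below is illustrative, chosen to match the largest experiments):

```python
# Total traffic estimate from the measured per-statement growth rates.
kb_per_stmt_buyer_to_seller = 0.47   # measured, KB per statement
kb_per_stmt_seller_to_buyer = 0.73   # measured, KB per statement
statements = 2_800_000               # illustrative: the largest graphs tested

total_kb = statements * (kb_per_stmt_buyer_to_seller + kb_per_stmt_seller_to_buyer)
total_gb = total_kb / 1024**2        # roughly 3.2 GB in total
```

A few gigabytes for graphs of nearly three million statements is well within what a gigabit link handles in minutes, so communication is not the bottleneck.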
![Refer to caption](/html/2103.00082/assets/Pictures/traffic_seller_to_buyer_crop.png)

Figure 3. Amount of communication from the Seller to the Buyer as a function of the average number of statements.

![Refer to caption](/html/2103.00082/assets/Pictures/traffic_buyer_to_seller_crop.png)

Figure 4. Amount of communication from the Buyer to the Seller as a function of the average number of statements.

7. Conclusion
--------------

Over the course of this paper a protocol was developed which provides one party with information on how much it could learn from a second party's knowledge graph, without revealing the knowledge graph of either party to the other or to a trusted third party. First, a set of statistics about the Seller's knowledge graph is computed and shared with the Buyer. Then, the protocol makes use of blind signatures and Bloom filters to determine the intersection between the two knowledge graphs. Similarly, counting Bloom filters are used to determine entropy metrics on the union of both graphs, without either party needing access to more than its own knowledge graph; to the best of our knowledge this is a new approach. These two steps fulfill the requirement of information gain measurement. In terms of correctness, we have seen that there is a small error in the calculation of the intersection and the entropies. Following this, the Buyer is given the opportunity to receive one or a few parts of the Seller's knowledge graph via oblivious transfer. To this end, strategies to partition knowledge graphs have been developed; the technique works independently of the particular algorithm used for partitioning. With these knowledge graph parts the Buyer can perform any data quality tests he deems necessary, thus satisfying the data quality requirement. Finally, the Buyer has the chance to verify that the Seller did not cheat.
When analyzing the security of this protocol, we have seen that very little information which is not supposed to be shared is leaked to the other party. This holds not only for curious but also for malicious participants. While not perfect, this is a good result in terms of the Seller and Buyer privacy requirements. One problem is that, while the protocol reveals little unwanted information, it gives a malicious Buyer the opportunity to verify guesses about the contents of the Seller's knowledge graph. We have seen that, by doing all blind signatures at the beginning, the protocol provides the Buyer with no information on which to base his guesses; besides, several further mitigation options were discussed. The protocol has been implemented in Java. The evaluation of this implementation has shown that it scales linearly in time, communication, and presumably memory usage with the number of statements, on both the Seller's and the Buyer's side. It has also shown that blind signatures are responsible for most of the time used, even though they can be computed in parallel. For big knowledge graphs the time and memory needed can grow quite large. The linear scaling satisfies the requirement of efficiency.

### 7.1. Future Work

In future work, different areas of this protocol can be further investigated. First of all, the implementation can be adapted to also handle knowledge graphs with blank nodes and with named graphs. Named graphs would open new possibilities for entropies and can be an alternative to the partitioning needed in the data quality step. Another point of improvement is finding a way to reduce the computation time for the blind signatures, although it seems hard to substantially improve on the time needed. Possibly, a different private set intersection approach could bring a speedup to the intersection and entropy steps. For the entropy metrics calculation, there might be a way to reduce the amount of leaked information further.
Although the goal is to compute only a single number, information about the Seller's knowledge graph is also leaked. The main reason is that, in our protocol, the Buyer needs to know which elements in the multi-sets are common, in order to match them with his own data. Perhaps a technique involving homomorphic encryption could solve this problem, but this would require the development of a new blind signature protocol. Partitioning the knowledge graphs for step 4 is also an area for future work: efficiently partitioning a knowledge graph in such a way that each part is of the same or similar value is a very challenging task. In terms of security, guessing remains a problem, but it seems nearly impossible to prevent: because of the blind signatures, the Seller can never know whether he is signing an actual statement or element of the Buyer, or a guess. Also, finding a way to automatically determine the number of signatures the Seller should be willing to hand out seems unrealistic.
ac99877b-06bc-4e43-bc31-49d741af186a
trentmkelly/LessWrong-43k
LessWrong
Chapter 9: Why can it Select? Our final problem. Let's take a look at our task: 1. We know that memories are coupled subnetworks in the Knowledge Graph. 2. We know that what we can recall them using our Working memory. 3. We understood the regulation mechanism that motivates us to do it. We also realized it's binding with Late Long-Term Potentiation, which allows us to store data for long periods. 4. We know that we can receive signals about activations of memory parts. And their strengths. How to select what signal or Reference will pass the spam-filter of attention? What criteria for filtering will we use? How should it work in terms of activations in neurons? Let's take a simple example. Please fill blank spaces, and try to observe how are you doing that: 1. The whale shark is ... than elephant 2. The ... is smaller than Jupiter. 3. Do you ...? I am quite sure that it was like that: 1. Bigger or smaller? I don't know. The average whale is bigger. That's obvious. But whale shark? 2. Guy, are you serious? There are a lot of things smaller than Jupiter. All the planets in the solar system smaller than it! 3. Do I what? Oh, I understood, you've gone completely mad. What you were doing is creating hypotheses. Bigger or smaller? What planet did he mean? Had he gone mad? You've been trying to explain what is happening with the information you've had. You've been trying to guess. And you always do it! While watching serials, while talking with someone, while trying to choose between chocolate and strawberry ice-cream, reading this sequence of articles. But how to describe the prediction you making and how to choose between them? Prediction is an easy one thing - you have information, you activate object in the knowledge graph, you receive the results of your mind-search. The activations had some power; we choose the strongest of them and... And here, we have a problem because we want different meanings for the same things in different contexts. 
Context is what we currently think
e88f0eab-2f17-4b7d-98f7-12cb82e32c2d
trentmkelly/LessWrong-43k
LessWrong
The Solitaire Principle: Game Theory for One > Do I contradict myself? > Very well then I contradict myself; > (I am large, I contain multitudes.) This post is an exercise in taking Whitman seriously. If the self is properly understood as a loose coalition of many agents with possibly distinct values, beliefs, and incentives, what does game theory have to say about self-improvement? The Solitaire Principle is the principle that human beings can be usefully thought about as loose coalitions of many agents. Classes of interpersonal problems often translate into classes of intrapersonal problems, and the tools to solve them are broadly similar. The Solitaire Principle is a corollary of the paradigm that the universe is self-similar at every level of organization: the organizational principles and faults of a civilization are not wildly different from those of a single human mind. Self-improvement is often framed in terms of optimization of a monolithic whole. Instead, the Solitaire Principle suggests that self-improvement can also be achieved by alignment of pieces within the whole to cooperate more efficiently. First, I fractionate the self across the time dimension and investigate self-improvement as an iterated game for one. This is partially inspired by this essay on becoming more legible to other agents. Second, I fractionate the self into multiple sub-personalities and investigate self-improvement as a single sub-personality taking unilateral action to improve the whole. 1. Iterated Games for One i. Basic Thought Experiments Imagine that a human being dies and is re-instantiated the following day. Across a year, one agent A actually behaves like 365 very weakly dependent agents A1, A2, ..., A365. A1 wants to write a novel, and can either write a page today (cooperate) or Netflix (defect). The novel is completed if and only if A1, A2, ..., A365 all cooperate. A1 decides the probability of that happening is vanishingly small, so she defects. No pages are written. 
B1 wants to write a novel. The no
6779184b-0c8a-4b81-9967-f72c90fd4c75
trentmkelly/LessWrong-43k
LessWrong
How can I find trustworthy dietary advice? I currently think that the official dietary advice is sometimes untrustworthy, and I don't know when it's trustworthy and when it isn't, so I'm in a state of epistemic learned helplessness. Some reasons why I think this: --I get the impression that much of the official advice of the past was actively harmful; the official advice has flip-flopped on many things over the past decades; things which were touted as healthy were later shown to be unhealthy, and vice versa. So presumably some of the current official advice is also actively harmful, and some of it is merely useless. But I don't know which. --I get the impression that there are studies available supporting pretty much any conclusion; if I want to believe that red meat is bad for me, I can find mountains of studies showing that it is bad for me, and if I want to believe that it is good for me, I can go talk to carnivore diet people and they'll give me convincing evidence it's actually really good for me. I'm optimistic that if I spent a few weeks of effort looking into the literature and reading the studies myself I could come to good opinions on this topic. But maybe someone here has already done this? Or maybe I'm just wrong about the official dietary advice?
4fa46f12-e073-464c-9763-3bcd181afaa9
StampyAI/alignment-research-dataset/lesswrong
LessWrong
What DALL-E 2 can and cannot do I got access to DALL-E 2 earlier this week, and have spent the last few days (probably adding up to dozens of hours) playing with it, with the goal of mapping out its performance in various areas – and, of course, ending up with some epic art.  Below, I've compiled a list of observations made about DALL-E, along with examples. If you want to request art of a particular scene, or to test see what a particular prompt does, feel free to comment with your requests.  DALL-E's strengths ------------------ ### Stock photography content It's *stunning* at creating photorealistic content for anything that (this is my guess, at least) has a broad repertoire of online stock images – which is perhaps less interesting because if I wanted a stock photo of (rolls dice) a polar bear, Google Images already has me covered. DALL-E performs somewhat better at discrete objects and close-up photographs than at larger scenes, but it can do photographs of city skylines, or National Geographic-style nature scenes, tolerably well (just don't look too closely at the textures or detailing.) Some highlights:  * **Clothing design:** DALL-E has a reasonable if not perfect understanding of clothing styles, and especially for women's clothes and with the stylistic guidance of "displayed on a store mannequin" or "modeling photoshoot" etc, it can produce some gorgeous and creative outfits. It does especially plausible-looking wedding dresses – maybe because wedding dresses are especially consistent in aesthetic, and online photos of them are likely to be high quality? ![](https://39669.cdn.cke-cs.com/rQvD3VnunXZu34m86e5f/images/3c46d039066aefa74332472933b9ac9e497c1d8df7ff688e.png)a "toga style wedding dress, displayed on a store mannequin"* **Close-ups of cute animals.** DALL-E can pull off scenes with several elements, and often produce something that I would buy was a real photo if I scrolled past it on Tumblr. 
![](https://39669.cdn.cke-cs.com/rQvD3VnunXZu34m86e5f/images/dacb25886163853316c23c8a19d89dd46153279ce8cb3bd8.png)"kittens playing with yarn in a sunbeam"* **Close-ups of food.** These can be a little more uncanny valley – and I don't know what's up with the apparent boiled eggs in there – but DALL-E *absolutely* has the plating style for high-end restaurants down. ![](https://39669.cdn.cke-cs.com/rQvD3VnunXZu34m86e5f/images/2262eb6cba01f862e0dd1bb5efa8195ba988d02b86b44f9f.png)"dessert special, award-winning chef five star restaurant, close-up photograph"* **Jewelry**. DALL-E doesn't always follow the instructions of the prompt exactly (it seems to be randomizing whether the big pendant is amber or amethyst) but the details are generally convincing and the results are almost always *really pretty.* ![](https://39669.cdn.cke-cs.com/rQvD3VnunXZu34m86e5f/images/5282c63b853073d6c9611a799d63196ba75bbaba16818adb.png)"silver statement necklace with amethysts and an amber pendant, close-up photograph"### ### Pop culture and media DALL-E "recognizes" a wide range of pop culture references, particularly for visual media (it's very solid on Disney princesses) or for literary works with film adaptations like Tolkien's LOTR. For almost all media that it recognizes at all, it can convert it in almost-arbitrary art styles.  
![](https://39669.cdn.cke-cs.com/rQvD3VnunXZu34m86e5f/images/36f2201a1d77fc56fdba5cf3796fc4f6b7787f16a625fbf9.png)"art nouveau stained glass window depicting Marvel's Captain America"![](https://39669.cdn.cke-cs.com/rQvD3VnunXZu34m86e5f/images/667c31d73adccaf9bbd8614cce928fb1982f4259843e0064.png)"Elsa from Frozen, cross-stitched sampler"![](https://39669.cdn.cke-cs.com/rQvD3VnunXZu34m86e5f/images/4de84438b855bf893ffc2770cf53963f60e74c53739615ae.png)Sesame Street, screenshots from the miyazaki anime movie[Tip: I find I get more reliably high-quality images from the prompt "X, screenshots from the Miyazaki anime movie" than just "in the style of anime",  I suspect because Miyazaki has a consistent style, whereas anime more broadly is probably pulling in a lot of poorer-quality anime art.] ### Art style transfer Some of most impressively high-quality output involves specific artistic styles. DALL-E can do charcoal or pencil sketches, paintings in the style of various famous artists, and some weirder stuff like "medieval illuminated manuscripts".  ![](https://39669.cdn.cke-cs.com/rQvD3VnunXZu34m86e5f/images/c6e7928440fc981a02b0d5193363e545dedd9c7f5b057fed.png)"a monk riding a snail, medieval illuminated manuscript"IMO it performs especially well with art styles like "impressionist watercolor painting" or "pencil sketch", that are a little more forgiving around imperfections in the details.   ![](https://39669.cdn.cke-cs.com/rQvD3VnunXZu34m86e5f/images/29205bec5d9486116e021b2edd2bf67d55ef0e7c65400f56.png)"A woman at a coffeeshop working on her laptop and wearing headphones, painting by Alphonse Mucha"![](https://39669.cdn.cke-cs.com/rQvD3VnunXZu34m86e5f/images/78227c881650332184d2e59ac945a531b63320fd067828e8.png)"a little girl and a puppy playing in a pile of autumn leaves, photorealistic charcoal sketch"### ### Creative digital art DALL-E can (with the right prompts and some cherrypicking) pull off some absolutely gorgeous fantasy-esque art pieces. 
Some examples:

![](https://39669.cdn.cke-cs.com/rQvD3VnunXZu34m86e5f/images/48cfc96c1d2731baf7dbe3bcc5c3c85b003e3ac7404992a4.png)"a mermaid swimming underwater, photorealistic digital art"

![](https://39669.cdn.cke-cs.com/rQvD3VnunXZu34m86e5f/images/c71fddb8e1aa75930e19b8a0b527e8b01c081b211386edf9.png)"a woman knitting the Milky Way galaxy into a scarf, photorealistic digital art"

The output when putting in more abstract prompts (I've run a lot of "[song lyric or poetry line], digital art" requests) is hit-or-miss, but with patience and some trial and error, it can pull out some absolutely stunning – or deeply hilarious – artistic depictions of poetry and abstract concepts. I kind of like using it in this way because of the sheer *variety*; I never know where it's going to go with a prompt.

![](https://39669.cdn.cke-cs.com/rQvD3VnunXZu34m86e5f/images/829aaddaacda856ab96bf599f939a8c2944dcacd6e033171.png)"an activist destroyed by facts and logic, digital art"

![](https://39669.cdn.cke-cs.com/rQvD3VnunXZu34m86e5f/images/fe0fb2c30af9a682eb2579c05dcdb31ac2f1eb51c9f29356.png)"if the lord won't send us water, well we'll get it from the devil, digital art"

![](https://39669.cdn.cke-cs.com/rQvD3VnunXZu34m86e5f/images/a1d977f2ffa439017de6eba773a80aba4e47b865ffd32258.png)"For you are made of nebulas and novas and night sky / You're made of memories you bury or live by, digital art" (lyric from Never Look Away by Vienna Teng)

### The future of commercials

This might be just a me thing, but I love almost everything DALL-E does with the prompt "in the style of surrealism" – in particular, its surreal attempts at commercials and advertisements. If my online ads were 100% replaced by DALL-E art, I would probably click on at least 50% more of them.
![](https://39669.cdn.cke-cs.com/rQvD3VnunXZu34m86e5f/images/15c41227ea2ad18f2d610e4f3ab1233e14d3ce02121d31df.png)"an advertisement for sound-cancelling headphones, in the style of surrealism"

DALL-E's weaknesses
-------------------

I had been really excited about using DALL-E to make fan art of fiction that I or other people have written, so I was somewhat disappointed by how much it struggles to do complex scenes according to spec. In particular, it still has a long way to go with:

### Scenes with two characters

I'm not kidding. DALL-E does fine at giving *one* character a list of specific traits (though if you want pink hair, watch out: DALL-E might start spamming the entire image with pink objects). It can sometimes handle multiple *generic* people in a crowd scene, though it quickly forgets how faces work. However, it finds it very challenging to keep track of which traits ought to belong to a specific Character A versus a different specific Character B, beyond a very basic minimum like "a man and a woman."

The above is one iteration of a scene I was *very motivated* to figure out how to depict, as fan art of my [Valdemar rationalfic](https://archiveofourown.org/series/936480). DALL-E can handle two people, check, and a room with a window and at least one of a bed or chair, but it's lost when it comes to remembering which combination of age/gender/hair color is in what location.

![](https://39669.cdn.cke-cs.com/rQvD3VnunXZu34m86e5f/images/d72dd2b4e4f289a8cc054d400ba8ba4ba9f5c0a37c5a6593.png)"a young dark-haired boy resting in bed, and a grey-haired older woman sitting in a chair beside the bed underneath a window with sun streaming through, Pixar style digital art"

Even in cases where the two characters are pop culture references that I've already been able to confirm the model "knows" separately – for example, Captain America and Iron Man – it can't seem to help blending them together.
It's as though the model has "two characters" and then, separately, "a list of traits" (user-specified or just implicit in the training data), and reassigns the traits mostly at random.

![](https://39669.cdn.cke-cs.com/rQvD3VnunXZu34m86e5f/images/a397baf305e6696f3613f9ed3bbb57cdd1d6d944dea3aec3.png)"Captain America and Iron Man standing side by side" – which is which????

### Foreground and background

A good example of this: someone on Twitter had commented that they couldn't get DALL-E to provide them with "Two dogs dressed like roman soldiers on a pirate ship looking at New York City through a spyglass". I took this as a CHALLENGE and spent half an hour trying; I, too, could not get DALL-E to output this, and ended up needing to choose between "NYC and a pirate ship" or "dogs in Roman soldier uniforms with spyglasses".

DALL-E can do scenes with *generic* backgrounds (a city, bookshelves in a library, a landscape), but even then, if that's not the main focus of the image, the fine details tend to get pretty scrambled.

### Novel objects, or nonstandard usages

Objects that are not something it already "recognizes": DALL-E knows what a chair is. It can give you something that is recognizably a chair in several dozen different art mediums. It could not, with *any amount of coaxing*, produce an "Otto bicycle", which my friend specifically wanted for her book cover. Its failed attempts were both hilarious and concerning.

![](https://39669.cdn.cke-cs.com/rQvD3VnunXZu34m86e5f/images/3bc5b1602e720b3b7affbef5b82425de56b986277713d8e0.png)prompt was something like "a little girl with dark curly hair riding down a barren hill on a magical rickshaw with enormous bicycle wheels, in the style of Bill Watterson"

![](https://39669.cdn.cke-cs.com/rQvD3VnunXZu34m86e5f/images/3b3d693f3c3fc7f2acb7fc262bcf2216097c5240035dc6dc.png)An *actual* Otto bicycle, per Google Images

Objects used in nonstandard ways:
It seems to slide back toward some kind of ~prior: when I asked it for a dress made of Kermit plushies *displayed* on a store mannequin, it repeatedly gave me a Kermit plushie wearing a dress.

![](https://39669.cdn.cke-cs.com/rQvD3VnunXZu34m86e5f/images/aa61a06d7c2d19fa14984dc9945b4846c78f8a793d53d911.png)"Dress made out of Kermit plushies, displayed on a store mannequin"

DALL-E generally seems to have extremely strong priors in a few areas, which end up being almost impossible to shift. I spent at least half an hour trying to convince it to give me digital art of a woman whose *eyes* were full of stars (no, not the rest of her, not the background scenery either, *just* her eyes...) and the closest DALL-E ever got was this:

![](https://39669.cdn.cke-cs.com/rQvD3VnunXZu34m86e5f/images/fb61cbbc7f7b5b8431b5e75a8123bb809bc7bfd06706af48.png)**I wanted**: the Star-Eyed Goddess. **I got**: the goddess-eyed goddess of recursion.

### Spelling

DALL-E can't spell. It *really really* cannot spell. It will occasionally spell a word correctly by utter coincidence. (Okay, fine, it can consistently spell "STOP" as long as it's written on a stop sign.)

It does mostly produce recognizable English *letters* (and recognizable attempts at Chinese calligraphy in other instances), and letter orderings that are *closer* to English spelling than to a random draw from a bag of Scrabble letters. So I would guess that, even given the new model structure that makes DALL-E 2 worse at this than the first DALL-E, just scaling it up some would eventually let it crack spelling.

At least sometimes its inability to spell results in unintentionally hilarious memes?

![](https://39669.cdn.cke-cs.com/rQvD3VnunXZu34m86e5f/images/bfd8daaeff2b6df2674ca65ceb7804ccb23bc9bc8bc8fe15.png)EmeRAGEencey!

### Realistic human faces

My understanding is that the face model limitation may have been deliberate, to avoid deepfakes of celebrities, etc.
Interestingly, DALL-E can nonetheless at least sometimes do perfectly reasonable faces, either as photographs or in various art styles, if they're the central element of a scene. (And it keeps giving me photorealistic faces as a component of images where I wasn't even asking for that, meaning that per the terms and conditions I can't share those images publicly.)

Even *more* interestingly, it seems to specifically alter the appearance of actors even when it clearly "knows" a particular movie or TV show. I asked it for "screenshots from the second season of Firefly", and they were *very recognizably* screenshots from Firefly in terms of lighting, ambiance, scenery, etc., with an actor who looked *almost* like Nathan Fillion – as though cast in a remake that was trying to get it fairly similar – and who looked consistently the same across all 10 images, but was definitely a different person.

There are a couple of specific cases where DALL-E seems to "remember" how human hands work. The ones I've found so far mostly involve a character *doing* some standard activity using their hands, like "playing a musical instrument." Below, I was trying to depict a character from A Song For Two Voices who's a Bard; this round came out shockingly good in a number of ways, but the hands particularly surprised me.

![](https://39669.cdn.cke-cs.com/rQvD3VnunXZu34m86e5f/images/a937b920655256576e9c02df36552e122ade4ef3009cd964.png)![](https://39669.cdn.cke-cs.com/rQvD3VnunXZu34m86e5f/images/3965ae93cd47256938c553aaf486cf0e1d05d23c3d3f593b.png)

### Limitations of the "edit" functionality

DALL-E 2 offers an edit functionality – if you mostly like an image except for one detail, you can highlight an area of it with a cursor, and change the full description as applicable in order to tell it how to modify the selected region.
It sometimes works: this gorgeous dress (didn't save the prompt, sorry) originally had no top, and the edit function successfully added one without changing the rest too much.

![](https://39669.cdn.cke-cs.com/rQvD3VnunXZu34m86e5f/images/dcb3292bbd465ee274c5879b25a92fcac24fa48103e901bc.png)This is how people will dress in the glorious transhumanist future.

It often appears to do nothing. It occasionally *full-on panics* and does... whatever this is.

![](https://39669.cdn.cke-cs.com/rQvD3VnunXZu34m86e5f/images/395326074f3ad028781e921b3281ae5a3a089e123fc764bd.png)I was just trying to give the figure short hair!

There's also a "variations" functionality that lets you select the best image given by a prompt and generate near neighbors of it, but my experience so far is that the variations are almost invariably a worse fit for the original prompt, and very rarely better on specific details (like faces) that I might want to fix.

Some art style observations
---------------------------

DALL-E doesn't seem to hold a sharp delineation between style and content; in other words, adding stylistic prompts actively changes some of what I would consider to be content.

For example, asking for a coffeeshop scene as painted by Alphonse Mucha puts the woman in a long flowing period-style dress, like in [this reference painting](https://uploads8.wikiart.org/images/alphonse-mucha/princess-hyacinth-1911.jpg), and gives us a "coffeeshop" that looks a lot to me like a lady's parlor; in comparison, the Miyazaki anime version mostly has the character in a casual sweatshirt. This makes sense given the way the model was trained; background details are going to be systematically different between Art Nouveau paintings and anime movies.
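Side-by-side comparisons like this are easier to run if you keep the subject and style phrasings as a template and expand them mechanically. A minimal, hypothetical helper sketch – the style phrasings are the ones discussed in this post, and nothing here calls the actual DALL-E service; you'd paste the resulting strings in yourself:

```python
# Hypothetical prompt-templating helper for comparing art styles.
# The style phrasings below are the ones used in this post.

STYLE_TEMPLATES = [
    "{subject}, painting by Alphonse Mucha",
    "{subject}, screenshots from the miyazaki anime movie",
    "{subject}, screenshots from the Pixar movie",
    "{subject}, in the style of Pixar",
]

def prompt_variants(subject, templates=STYLE_TEMPLATES):
    """Expand one subject into every style phrasing, for side-by-side tests."""
    return [t.format(subject=subject) for t in templates]

variants = prompt_variants(
    "A woman at a coffeeshop working on her laptop and wearing headphones"
)
```

Keeping the subject string fixed is the point: any difference in the outputs is then attributable to the style phrasing alone.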
![](https://39669.cdn.cke-cs.com/rQvD3VnunXZu34m86e5f/images/ea6016fa237c354223c3a23a13c2ec9d37aba183bfe56b85.png)"A woman at a coffeeshop working on her laptop and wearing headphones, painting by Alphonse Mucha"

![](https://39669.cdn.cke-cs.com/rQvD3VnunXZu34m86e5f/images/97087b99f5754931d71af5071898cc4dcbf2e70a40980ea2.png)"A woman at a coffeeshop working on her laptop and wearing headphones, screenshots from the miyazaki anime movie"

DALL-E is often sensitive to exact wording, and in particular it's fascinating how "in the style of X" often gets very different results from "screenshot from an X movie". I'm guessing that in the Pixar case, generic "Pixar style" might capture training data from Pixar shorts or illustrations that aren't in their standard recognizable movie style. (Also, sometimes if asked for "anime" it gives me content that either looks like 3D-rendered video game cutscenes, or occasionally what I assume is meant to be people at an anime con in cosplay.)

![](https://39669.cdn.cke-cs.com/rQvD3VnunXZu34m86e5f/images/25471499b89f61b73ba621879749e3b517d22a7e03985126.png)"A woman at a coffeeshop working on her laptop and wearing headphones, screenshots from the Pixar movie"

![](https://39669.cdn.cke-cs.com/rQvD3VnunXZu34m86e5f/images/f1460ac75d5286ac3d3dcba6ac886308772a650830540a8a.png)"A woman at a coffeeshop working on her laptop and wearing headphones, in the style of Pixar"

Conclusions
-----------

How smart is DALL-E?

I would give it an excellent grade in recognizing objects, and most of the time it has a pretty good sense of their purpose and expected context. If I give it just the prompt "a box, a chair, a computer, a ceiling fan, a lamp, a rug, a window, a desk" with no other specification, it consistently includes at least 7 of the 8 requested objects, and places them in reasonable relation to each other – and in a room with walls and a floor, which I did not explicitly ask for.
This "understanding" of objects is a lot of what makes DALL-E so easy to work with, and in some sense seems more impressive than a perfect art style.

The biggest thing I've noticed that looks like a ~conceptual limitation in the model is its inability to consistently track two different characters, unless they differ on exactly one trait (male and female, adult and child, red hair and blue hair, etc.) – in which case the model could be getting this right even if all it's doing is randomizing the traits in its bucket between the characters. It seems to have a similar issue with two non-person objects of the same type, like chairs, though I've explored this less.

It often applies color and texture styling to parts of the image other than the ones specified in the prompt; if you ask for a girl with pink hair, it's likely to make the walls or her clothes pink, and it's given me several Rapunzels wearing a *gown* apparently made of hair. (Not to mention the time it was confused about whether, in "Goldilocks and the three bears", Goldilocks was also supposed to be a bear.)

The deficits with the "edit" mode and "variations" mode also seem to me like they reflect the model failing to neatly track a set of objects-with-assigned-traits. It reliably holds the non-highlighted areas of the image constant and only modifies the selected part, but the modifications often seem like they're pulling in context from the entire prompt – for example, when I took one of my room-with-objects images, selected the computer, and tried to change it to "a computer levitating in midair", DALL-E gave me a levitating fan and a levitating box instead.

Working with DALL-E definitely still feels like attempting to communicate with some kind of alien entity that doesn't quite reason in the same ontology as humans, even if it theoretically understands the English language.
There are concepts it appears to "understand" in natural language without difficulty – including prompts like "advertising poster for the new Marvel's Avengers movie, as a Miyazaki anime, in the style of an Instagram inspirational moodboard", which would take *so long* to explain to aliens, or even just to a human from 1900. And yet, you try to explain what an Otto bicycle is – something which I'm pretty sure a human six-year-old could draw if given a verbal description – and the conceptual gulf is impossible to cross.

![](https://39669.cdn.cke-cs.com/rQvD3VnunXZu34m86e5f/images/7cd70b54193a5af264c22f3a54b99e90066d3503e1df63ad.png)"advertising poster for the new Marvel's Avengers movie, as a Miyazaki anime, in the style of an Instagram inspirational moodboard"
The following is a list (very lightly edited with help from Rob Bensinger) I wrote in July 2017, at Nick Beckstead’s request, as part of a conversation we were having at the time.

From my current vantage point, it strikes me as narrow and obviously generated by one person, listing the first things that came to mind on a particular day. I worry that it’s easy to read the list below as saying that this narrow slice, all clustered in one portion of the neighborhood, is a very big slice of the space of possible ways an AGI group may have to burn down its lead. This is one of my models for how people wind up with really weird pictures of MIRI beliefs. I generate three examples that are clustered together because I'm bad at generating varied examples on the fly, while hoping that people can generalize to see the broader space these are sampled from; then people think I’ve got a fetish for the particular corner of the space spanned by the first few ideas that popped into my head. E.g., they infer that I must have a bunch of other weird beliefs that force reality into that particular corner.

I also worry that the list below doesn’t come with a sufficiently loud disclaimer about how the real issue is earlier and more embarrassing. The real difficulty isn't that you make an AI and find that it's mostly easy to align except that it happens to befall issues b, d, and g. The thing to expect is more like: you just have this big pile of tensors, and the interpretability tools you've managed to scrounge together give you flashes of visualizations of its shallow thoughts, and the thoughts say “yep, I’m trying to kill all humans”, and you are just utterly helpless to do anything about it, because you don't have the sort of mastery of its cognition that you'd need to reach in and fix that, and you wouldn't know how to fix it if you did.
And you have nothing to train against, except the tool that gives you flashes of visualizations (which would just train fairly directly against interpretability, until it was thinking about how to kill all humans somewhere that you couldn't see).

The brainstormed list below is an exercise in how, if you zoom in on any part of the problem, reality is just allowed to say “lol nope” to you from many different angles simultaneously. It's intended to convey some of the difference (that every computer programmer knows) between "I can just code X" and "wow, there is a lot of subtlety to getting X right"; the difference between the optimistic hope, in advance, that everything is going to go smoothly, and the excessively detailed tarpit of reality. This is not to be confused with thinking that these hurdles are a particularly representative sample, much less an attempt to be exhaustive.

Context

The imaginary group DeepAI pushed to get an AGI system as fast as reasonably possible. They now more or less understand how to build something that is very good at generalized learning and cross-domain reasoning and what-not. They rightfully believe that, if they had a reckless desire to increase the capabilities of the system as fast as possible without regard for the consequences, they would be able to have it recursively self-improving within a year. However, their existing system is not yet a superintelligence, and does not yet have the resources to be dangerous in its own right.

For the sake of concreteness, we will imagine that the system came largely from an extension of modern AI techniques: a large amount of end-to-end training, heavy use of neural networks, heavy use of reinforcement learning, and so on. The question is: what sorts of things might they discover about the system that force them to stop and redesign (and/or recode, and/or retrain) large parts of the system?

Brainstorm list

(Note: Bullet points are highly disjunctive.
Also, I’m leaning on the side of telling evocative stories so as to increase the chance of getting the point across; obviously, each specific detail is burdensome, and in each case I’m trying to wave in the direction of a more general class of possible failures. Also, to state the obvious, this list does not feel complete to me, and I find some of these points to be more plausible than others.)

(a) They want to put in alarms that warn them when the system is thinking a class of thought that they don’t want thought, but…

- the system’s analog of “thought processes” is not amenable to programmatic classification, because…
  - the “thoughts” are so opaque that the programmers cannot figure them out for quite some time.
  - the representation / data structure is convoluted, and simple classification systems can’t figure it out (in the same way that a modern narrow AI system can understand the sentiment but not the content of a science paper).
  - the “thoughts” are not centralized; they arise out of interactions between many scattered parts of the system, and an extensive redesign is required to make it possible to collate them and expose them to automated tools.
  - the system has internal control of its own “thought language”, and it changes rapidly enough that narrower automated tools can’t keep up; there is no easy way to slow down the shift to its internal thought-speak without crippling it.
- the system simply wasn’t designed for monitoring of this form, and…
  - the code must be heavily refactored in order to even allow the relevant data about the system’s thoughts to be collected in a useful fashion.
  - the code must be heavily refactored in order to allow live monitors and checks to be attached in a way that does not cause an intolerable slowdown.
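For intuition about what the hoped-for “programmatic classification” looks like when it *does* work, here is a toy linear probe trained to flag one “thought class” from activation vectors – a standard interpretability technique, sketched with entirely synthetic data. The activations, the two classes, and their clean separability are all invented for illustration; nothing here is the imagined DeepAI system:

```python
import numpy as np

# Toy "thought-class alarm": a logistic-regression probe over activation
# vectors. All data is synthetic -- two well-separated Gaussian clusters
# stand in for "benign" vs. "flagged" internal states.
rng = np.random.default_rng(0)
dim = 16
benign = rng.normal(0.0, 1.0, size=(200, dim))
flagged = rng.normal(2.0, 1.0, size=(200, dim))
X = np.vstack([benign, flagged])
y = np.concatenate([np.zeros(200), np.ones(200)])

# Fit the probe with plain gradient descent on logistic loss.
w, b = np.zeros(dim), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-np.clip(X @ w + b, -30, 30)))
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * np.mean(p - y)

def alarm(activations):
    """Probability that each activation vector is a 'flagged' thought."""
    z = np.clip(activations @ w + b, -30, 30)
    return 1.0 / (1.0 + np.exp(-z))

train_acc = float(np.mean((alarm(X) > 0.5) == y))
```

The bullets in (a) describe exactly the conditions under which this recipe fails: if the representations are opaque, decentralized, or in constant flux, there is no stable `X` and no trustworthy `y` to train the probe on in the first place.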
(b) They want to blacklist some domain of reasoning (either for alignment reasons or because the system is getting confused by irrelevant reasoning that they want to cut out), or they want to whitelist a set of reasoning domains; and the system simply was not designed to allow this.

- Simple attempts to blacklist a domain result in nearest-unblocked-strategy problems. Solving the problem at the root requires re-architecting the system and a significant amount of retraining.
- More sophisticated attempts to blacklist a single domain cripple the entire system. For example, it isn’t supposed to think about ways to deceive humans, and this destroys its ability to ask clarifying questions of the programmers. Or, worse, the system is such a mess of spaghetti that when you try to prevent it from thinking too hard about geopolitics, for indecipherable reasons, it stops being able to think at all. (Later it was discovered that some crucial part of the system was figuring out how to manage some crucial internal resource by having some other part of the system think about hypothetical "geopolitics" questions, because what did you expect, your AGI’s internals are a mess.)

(c) The operators realize that the system’s internal objectives are not lining up with their own objectives. This is very difficult for them to fix, because…

- the system achieved its high performance by being walked through a large number of objectives in heavily reward-landscaped environments (generated by large amounts of data). The system now has the world-models and the capabilities to pursue ambitious real-world objectives, but the only interface that the programmers have by which to point at an objective is via reward-landscaped objective functions generated by mountains of data.
This is no longer sufficient, because…

- the tasks at hand are not amenable to the generation of large amounts of data (e.g., we can’t generate a nicely landscaped reward function between here and “nanofabricator”, and we don’t have many examples of not-quite-nanofabricators to provide). The show is stopped.
- the system has no interface through which the programmers can sift through the concepts in its world-model and pick out (or create, in something sufficiently close to the system’s native tongue for this to be fine) the concept corresponding to “nanofabricator”. Exposing that interface requires significant refactoring and some redesign.
- the system’s concept language is opaque, and the programmers keep picking out something that’s not quite the nanofabricator concept, and the system keeps going down wrong paths. Developing translation tools for the system’s internal concept language requires significant time and effort.
- the internal concept language is constantly in flux; causing it to stay fixed long enough for the programmers to pick out a goal requires significant effort.
- the programmers have no mechanism for tying a selected concept into the system’s main optimization procedures. The system is very good at optimizing in a way that causes rewards (or whatever analog of reward it was trained on) to be high, but given a pointer to a certain concept in the system, it is not apparent how to design a reward landscape that makes the system optimize for a chosen concept. And this is exacerbated by the fact that the system has no centralized optimization procedure; it instead has a large collection of internal processes that interact in a way that causes the predicted rewards to be high, but it is very difficult to identify and understand all those internal processes sufficiently well to get them all pointed at something other than optimizing in favor of the reward channel.
Their attempts keep failing because, e.g., subsystem X had a heuristic to put its outputs in location Y, which is where subsystem Z would have been looking for them if subsystem Z had been optimizing the reward channel; but optimization of some other arbitrary concept causes Z’s “look in location Y” heuristic to become invalidated for one reason or another, and that connection stops occurring. And so on and so forth; aligning all the internal subprocesses to pursue something other than the reward channel proves highly difficult.

- the system is having a particularly hard time learning the boundaries of the human concept: its empirically-motivated internal language does not beget short descriptions of value-laden objectives. Significant retraining is required to develop a language in which it can even develop the concept of the goal.

(d) In order to get the system to zero in on the operators’ goals, they decide to have the system ask the humans various questions at certain key junctures. This proves more difficult than expected, because…

- the system wasn’t designed to allow this, and it’s pretty hard to add all the right hooks (for similar reasons to why it might be difficult to add alarms).
- the system vacillates between asking far too many and far too few questions, and a lot of thought and some redesign/retraining is necessary in order to get the question-asking system to the point where the programmers think it might actually provide the desired safety coverage.
- the system does not yet have an understanding of human psychology sufficient for it to be able to ask the right questions in value-laden domains, and significant time is wasted trying to make this work when it can’t.
- relatedly, the system is not yet smart enough to generalize over the human answers in a reasonable fashion, causing it to gain far less from the answers than humans think it should, and solving this would require ramping up the system’s capabilities to an unsafe level.
- the system has no mechanism for translating its more complex / complicated / subtle questions into questions that humans can understand and provide reasonable feedback on. Fixing this requires many months of effort, because…
  - understanding the questions well enough to even figure out how to translate them is hard.
  - building the translation tool is hard.
- the system is bad at describing the likely consequences of its actions in human-comprehensible terms. Fixing this is hard for, e.g., reasons discussed under (c).

(e) The early system is highly goal-directed through and through, and the developers want to switch to something more like “approval direction all the way down”. This requires a large and time-intensive refactor (if it's even reasonably possible at all).

(f) Or, conversely, the system starts out a mess, and the developers want to switch to a “goal-directed all the way down” system, where every single computation in the system is happening for a known purpose (and some other system is monitoring and making sure that every subprocess is pursuing a particular narrow purpose). Making this possible requires a time-intensive refactor.

(g) The programmers want to remove all “argmaxing” (cases of unlimited optimization inside the system, such as “just optimize the memory efficiency as hard as possible”). They find this very difficult for reasons discussed above (the sources of argmaxing behavior are difficult to identify; limiting an argmax in one part of the system breaks some other far-flung part of the system for difficult-to-decipher reasons; etc., etc., etc.).

(h) The programmers want to track how much resource the system is putting towards various different internal subgoals, but this is difficult for reasons discussed above, etc.

(i) The programmers want to add any number of other safety features (limited impact, tripwires, etc.) and find this difficult for reasons listed above, etc.
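One concrete proposal for what “removing argmaxing”, as in (g), could even mean is quantilization (Taylor, 2016): instead of taking the single highest-scoring action under a proxy objective, sample from the top fraction of a trusted base distribution. A toy sketch, with invented actions and an invented proxy score:

```python
import random

def argmax_policy(actions, score):
    # Unlimited optimization: always the most extreme action under the proxy.
    return max(actions, key=score)

def quantilizer(actions, score, q, rng):
    """Sample uniformly from the top q-fraction of actions by proxy score."""
    ranked = sorted(actions, key=score, reverse=True)
    k = max(1, int(len(ranked) * q))
    return rng.choice(ranked[:k])

actions = list(range(100))   # stand-ins for possible plans
proxy = lambda a: a          # made-up proxy objective: bigger looks "better"

extreme = argmax_policy(actions, proxy)                        # always picks 99
milder = quantilizer(actions, proxy, q=0.1, rng=random.Random(0))
```

The item's point survives the sketch: in a real system the “argmax” is not one tidy function call you can swap out, but behavior produced diffusely by many interacting parts.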
(j) The internal dynamics of the system are revealed to implement any one of a bajillion false dichotomies, such as “the system can either develop reasonable beliefs about X, or pursue goal Y, but the more we improve its beliefs about X the worse it gets at pursuing Y, and vice versa.” (There are certainly cases in human psychology where better knowledge of fact X makes the human less able to pursue goal Y, and this seems largely silly.)

(k) Generalizing over a number of points that appeared above, the programmers realize that they need to make the system broadly more…

- transparent. Its concepts/thought patterns are opaque black boxes. They’ve burned time understanding specific types of thought patterns in many specific instances, and now they have some experience with the system, and want to refactor/redesign/retrain such that it’s more transparent across the board. This requires a number of months.
- debuggable. Its internals are interdependent spaghetti, where (e.g.) manually modifying a thought-suggesting system to add basic alarm systems violates assumptions that some other far-flung part of the system was depending on; this is a pain in the ass to debug. After a number of these issues arise, the programmers decide that they cannot safely proceed until they…
  - cleanly separate various submodules by hand, and to hell with end-to-end training. This takes many months of effort.
  - retrain the system end-to-end in a way that causes its internals to be more modular and separable. This takes many months of effort.

(l) Problems crop up when they try to increase the capabilities of the system. In particular, the system…

- finds new clever ways to wirehead.
- starts finding “epistemic feedback loops” such as the Santa Claus sentence (“If this sentence is true, then Santa Claus exists”) that, given its internally hacky (and not completely sound) reasoning style, allow it to come to any conclusion if it thinks the right thoughts in the right pattern.
- is revealed to have undesirable basic drives (such as a basic drive for efficient usage of memory chips), in a fashion similar to how humans have a basic drive for hunger, in a manner that affects its real-world policy suggestions in a sizable way. While the programmers have alarms that notice this and go off, it is very deep-rooted and particularly difficult to remove or ameliorate without destroying the internal balance that causes the system to work at all.
- develops a reflective instability. For example, the system previously managed its internal resources by spawning internal goals for things like scheduling and prioritization, and as the system scales and gets new, higher-level concepts, it regularly spawns internal goals for large-scale self-modifications which it would not be safe to allow. However, preventing these proves quite difficult, because…
  - detecting them is tough.
  - manually messing with the internal goal system breaks everything.
  - nearest-unblocked-strategy problems.
- realizes that it has strong incentives to outsource its compute into the external environment. Removing this is difficult for reasons discussed above.
- has subprocesses that were in delicate balance at capability level X fall out of balance as capabilities are increased, and a single module begins to dominate the entire system. For example, maybe the system uses some sort of internal market economy for allocating credit, and as the resources ramp up, certain cliques start to get a massive concentration of “wealth” that causes the whole system to gum up, and this is difficult to understand / debug / fix because the whole thing was so delicate in the first place.

(m) The system is revealed to have any one of a bajillion cognitive biases often found in humans, and it’s very difficult to track down why or to fix it, but the cognitive bias is sufficient to make the system undeployable.
Example: it commits a variant of the sour-grapes fallacy, where whenever it realizes that a goal is difficult it updates both its model of the world and its preferences about how good it would be to achieve that goal. This is very difficult to patch because the parts of the system that apply updates based on observation were end-to-end trained, and do not factor nicely along "probability vs. utility" lines.

(n) The system can be used to address various issues of this form, but only by giving it the ability to execute unrestricted self-modification. The extent, rapidity, or opacity of the self-modifications is such that humans cannot feasibly review them. The design of the system does not allow the programmers to easily restrict the domain of these self-modifications such that they can be confident that they will be safe. Redesigning the system so that it can fix various issues in itself without giving it the ability to undergo full recursive self-improvement requires significant redesign and retraining.

(o) As the team is working to get the system deployment-ready for some pivotal action, the system's reasoning is revealed to be corrupted by flaws in some very base-level concepts. The system requires significant retraining time and some massaging on the code/design levels in order to change these concepts and propagate some giant updates; this takes a large chunk of time.

(p) The system is very easy to fool, trick, blackmail, or confuse-into-revealing-all-its-secrets, or similar. The original plan that the operators were pursuing requires putting the system out in an environment where adversarial humans may attempt to take control of the system or otherwise shut it down. Hardening the system against this sort of attack requires many months of effort, including extensive redesign/retraining/recoding.
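The sour-grapes failure in (m)'s example can be made concrete with a toy sketch (entirely hypothetical code, not any real system's architecture): an agent that keeps probability and utility factored discounts a hard goal correctly, while an "end-to-end trained" agent that stores a single entangled desirability scalar lets the same difficulty observation erode its preference too, and there is no separate term for a patch to protect.

```python
class CleanAgent:
    """Keeps belief and preference factored; difficulty evidence only lowers P(success)."""
    def __init__(self, p_success, utility):
        self.p, self.u = p_success, utility

    def observe_difficulty(self, strength):
        self.p *= (1 - strength)  # belief update only; utility untouched

    def expected_value(self):
        return self.p * self.u


class EntangledAgent:
    """End-to-end trained: one scalar, so the probability/utility factorization is lost."""
    def __init__(self, p_success, utility):
        self.desirability = p_success * utility

    def observe_difficulty(self, strength):
        # The update that should only touch beliefs also shrinks the
        # preference component -- a sour-grapes update by construction.
        self.desirability *= (1 - strength) ** 2

    def expected_value(self):
        return self.desirability


clean = CleanAgent(p_success=0.5, utility=100.0)
tangled = EntangledAgent(p_success=0.5, utility=100.0)
for agent in (clean, tangled):
    agent.observe_difficulty(0.4)

print(clean.expected_value())    # roughly 30.0
print(tangled.expected_value())  # roughly 18.0: the goal now *seems* less valuable
```

Fixing the tangled agent means re-imposing the factorization itself, which in the story above is a retraining job rather than a local patch.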
(q) The strategy that the operators were aiming for requires cognitive actions that the programmers eventually realize are untenable in the allotted time window or otherwise unsafe, such as deep psychological modeling of humans. The team eventually decides to choose a new pivotal action to target, and this new strategy requires a fair bit of redesign, recoding, and/or retraining.

Asides

My impression is that most catastrophic bugs in the space industry are not due to code crashes or failures; they are instead due to a normally-reliable module producing a wrong but syntactically valid, right-seeming output at an inopportune time. It seems very plausible to me that first-pass AGI systems will be in the category of things that work by dividing labor across a whole bunch of interoperating internal modules; insofar as errors can cascade when a normally-reliable module outputs a wrong-but-right-seeming result at the wrong time, I think we do in fact need to treat "getting the AGI's internals right" as being in the same reference class as "getting the space probe's code right".

Note, as always, that detecting the problem is only half the battle: in all the cases above, I'm not trying to point and say "people might forget to check this and end the world"; rather, I'm saying, "once this sort of error is detected, I expect that the team will need to burn a chunk of time to correct it".

Recall that this is a domain where playing whack-a-mole gets you killed: if you have very good problem-detectors, and you go around removing problem symptoms instead of solving the underlying root problem, then eventually your problem-detectors will stop going off, but this will not be because your AGI is safe to run. In software, removing the symptoms is usually way easier than fixing a problem at its root cause; I worry that fixing these sorts of problems at their root can require quite a bit of time.
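The "Santa Claus sentence" mentioned back in (l) is an instance of Curry's paradox, and it is easy to verify mechanically why an unsound reasoner that accepts such a sentence can conclude anything. The brute-force check below (illustrative code, not any real reasoner) enumerates truth assignments and shows that the only classically consistent reading of S ↔ (S → X) forces the arbitrary claim X to be true:

```python
from itertools import product

def consistent_readings():
    """Enumerate truth values for S ("this sentence") and X ("Santa Claus
    exists") under the constraint S <-> (S -> X), i.e. the sentence
    'If this sentence is true, then Santa Claus exists'."""
    readings = []
    for s, x in product([False, True], repeat=2):
        s_implies_x = (not s) or x   # material implication S -> X
        if s == s_implies_x:         # the self-referential biconditional holds
            readings.append((s, x))
    return readings

# Every consistent reading makes both the sentence and the arbitrary
# claim X true -- so a reasoner that merely accepts the sentence as a
# well-formed constraint is forced to conclude X, whatever X is.
print(consistent_readings())  # [(True, True)]
```

Since nothing in the argument depends on what X actually says, a hacky reasoner that tolerates such self-referential constraints has a fully general route to any conclusion.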
Recall that it’s far harder to add a feature to twitter than it is to add the same feature to a minimalistic twitter clone that you banged out in an afternoon. Similarly, solving an ML problem in a fledgling AGI in a way that integrates with the rest of the system without breaking anything delicate is likely way harder than solving an analogous ML problem in a simplified setting from a clean slate.

Finally, note that this is only intended as a brainstorm of things that might force a leading team to burn a large number of months; it is not intended to be an exhaustive list of reasons that alignment is hard. (That would include various other factors, such as “what sorts of easy temptations will be available that the team has to avoid?” and “how hard is it to find a viable deployment strategy?” and so on.)
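One of the failure modes above involved an internal credit-allocation market gumming up as a few modules concentrate "wealth". A toy simulation (purely illustrative; every parameter here is made up) shows how little it takes: iid multiplicative returns alone, with no collusion at all, concentrate an initially uniform credit budget onto a handful of modules.

```python
import math
import random

def simulate_credit_market(n_modules=50, rounds=200, volatility=0.3, seed=0):
    """Each module's credit share gets an iid lognormal multiplicative
    return per round; shares are then renormalized (fixed total budget)."""
    rng = random.Random(seed)
    shares = [1.0 / n_modules] * n_modules
    for _ in range(rounds):
        shares = [s * math.exp(rng.gauss(0.0, volatility)) for s in shares]
        total = sum(shares)
        shares = [s / total for s in shares]
    return shares

shares = simulate_credit_market()
# The top module ends up with far more than the uniform 2% it started
# with -- the "market" concentrates even without any clique strategizing.
print(f"top share: {max(shares):.1%}, started at {1 / 50:.1%}")
```

The log-wealth variance grows linearly with the number of rounds, so concentration is the default outcome; keeping such a market balanced requires active rebalancing machinery, which is exactly the kind of delicate internal structure the post describes.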
Betting Thread

At Ought, we’ve started making bets on our continuous predictions. We’ve found it a fun way to hold ourselves accountable and to combat overconfidence. Here’s a thread for people to share continuous beliefs they’re willing to bet on, and make bets on other people’s beliefs. You can use Elicit to automatically generate a fair bet with positive expected value based on each person’s beliefs (this post explains more about how we generate the bet).

Make a bet

To share a belief you’re willing to bet on:

1. Go to elicit.org, type in your question, and make a distribution of your beliefs.
2. Take a snapshot.
3. Post the snapshot URL and a screenshot image in a comment on this thread.

To bet on someone else’s belief:

1. Click on their snapshot URL. Click on the title to get a blank distribution.
2. Make your own distribution and take a snapshot.
3. Import your snapshot into theirs by pasting your snapshot URL into the ‘Show more’ dropdown on their snapshot.
4. Go to the Interpret tab to see the bet.

Example Bet

Here is a bet we proposed between 538 and The Economist on Biden's electoral college votes: if the result is between 290.5 and 398.1, 538 pays The Economist $35.98. Otherwise, The Economist pays 538 $64.02.
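The arithmetic behind such a bet can be sketched as follows. This is one standard construction, not necessarily the exact formula Elicit uses, and the probabilities below are invented for illustration: price the bet at the midpoint of the two parties' probabilities for the same event, and both sides get positive expected value under their own beliefs.

```python
def fair_bet(p_a, p_b, stake=100.0):
    """Party A assigns probability p_a to event E, party B assigns p_b < p_a.
    Price the bet at the midpoint q: A receives stake*(1 - q) if E happens
    and pays stake*q if it doesn't."""
    assert p_a > p_b, "A must be the more confident party"
    q = (p_a + p_b) / 2
    a_receives, a_pays = stake * (1 - q), stake * q
    ev_a = p_a * a_receives - (1 - p_a) * a_pays   # = stake * (p_a - q) > 0
    ev_b = (1 - p_b) * a_pays - p_b * a_receives   # = stake * (q - p_b) > 0
    return a_receives, a_pays, ev_a, ev_b

# Illustration: suppose The Economist puts 78% on Biden landing in the
# 290.5-398.1 range and 538 puts 50% on it (invented numbers). Midpoint
# odds of 64% give payouts close to the $35.98 / $64.02 bet above.
a_receives, a_pays, ev_a, ev_b = fair_bet(0.78, 0.50)
print(a_receives, a_pays)      # 36.0 and 64.0, up to float rounding
print(ev_a > 0 and ev_b > 0)   # True: both sides expect to profit
```

The nice property of midpoint pricing is symmetry: each side's expected gain is proportional to how far its own probability sits from the agreed odds.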
Researching and Reasoning About Bardiya

It's both profitable and gratifying to use one's reasoning skills to attempt to figure out answers to complex and murky questions. Even failing the ability to say definitively what happened, we can still make and update our own statistical distributions of what could have possibly happened.

Unfortunately, it's typically questions like "Who killed JFK?" that draw the brunt of this type of speculative research and reasoning. Which is, I think, not a very good place to study and develop better skills for this type of thinking. There are many reasons why, but just one will suffice: it's very helpful to be able to discuss and reason through these problems with another person, and almost any item that's emotionally charged in our culture will lead to less productive discussions.

An alternative for you. I've found it better to train this type of thinking on incidents that are well-sourced and controversial in other cultures, but which are obscure or almost completely unknown in our own. Things like:

(1) Was the 'Bardiya' that Darius the Great overthrew and killed the natural-born son of Cyrus, or not?
(2) Did Uesugi Kenshin die of natural causes?
(3) Did Scipio Africanus's brother misappropriate money?

And so on. While it might be hard at first to get a friend interested in whether Bardiya was the actual Bardiya or a pretender, you can see how it's unlikely someone would have a heated opinion on the matter. It's a fun puzzle to work on with someone.

Many of these questions aren't definitively answerable — we'll probably never know for sure on Bardiya — but every now and then you find a problem that seems like it's unsolvable but for which you eventually find extremely conclusive evidence. This is great when it happens.
I've come across a couple of these, where I was wracking my brain weighing evidence of whether a certain action during, say, the Russian Civil War was just an accident or coordinated — until I found a translation of an almost-certainly-not-fo
The Alignment Community Is Culturally Broken

*Disclaimer: These are entirely my thoughts. I'm posting this before it's fully polished because it never will be.*

Epistemic status: Moderately confident. Deliberately provocative title.

Apparently, the Bay Area rationalist community has a burnout problem. I have no idea if it's worse than base rate, but I've been told it's pretty bad. I suspect that the way burnout manifests in the rationalist community is uniquely screwed up.

I was crying the other night because our light cone is about to get ripped to shreds. I'm gonna do everything I can to do battle against the forces that threaten to destroy us. You've heard this story before. Short timelines. Tick. Tick. I've been taking alignment seriously for about a year now, and I'm ready to get serious. I've thought hard about what my strengths are. I've thought hard about what I'm capable of. I'm dropping out of Stanford, I've got something that looks like a plan, I've got the Rocky theme song playing, and I'm **ready to do this.**

A few days later, I saw [this post](https://www.lesswrong.com/posts/BbM47qBPzdSRruY4z/on-the-margin-more-people-should-focus-on-buy-timing-and). And it reminded me of everything that bothers me about the EA community. Habryka [covered](https://www.lesswrong.com/posts/BbM47qBPzdSRruY4z/on-the-margin-more-people-should-focus-on-buy-timing-and?commentId=fEfLqfaLtnwDPYstf) the object level problems pretty well, but I need to communicate something a little more... *delicate*.

I understand that everyone is totally depressed because qualia is doomed. I understand that we really want to creatively reprioritize. I completely sympathize with this.

I want to address the central flaw of Akash+Olivia+Thomas's argument in the Buying Time post, which is that **actually, people can improve at things.** There's something deeply discouraging about being told "you're an X% researcher, and if X>Y, then you should stay in alignment.
Otherwise, do a different intervention." No other effective/productive community does this. I don't know how to put this, but the vibes are deeply off.

The appropriate level of confidence to have about a statement like "I can tell how good of an alignment researcher you will be after a year of you doing alignment research" feels like it should be pretty low. At a year in, there are almost certainly ways to improve that haven't been tried. Especially in a community so memetically allergic to the idea of malleable human potential.

Here's a hypothesis. I in no way mean to imply that this is the only mechanism by which burnout happens in our community, but I think it's probably a pretty big one.

It's not nice to be in a community that constantly hints that you might just not be good enough and that you can't get good enough. Our community seems to love treating people like mass-produced automatons with a fixed and easily assessable "ability" attribute. (Maybe you flippantly read that sentence and went "yeah it's called g factor lulz." In that case, maybe reflect on how good of a correlate g is in *absolute* terms for the things you care about.)

If we want to actually accomplish anything, we need to encourage people to make bigger bets, and to stop [stacking up credentials](https://www.lesswrong.com/posts/dhj9dhiwhq3DX6W8z/hero-licensing) so that fellow EAs think they have a chance. It's not hubris to believe in yourself.