| column | type | details |
| --- | --- | --- |
| id | string | lengths 36-36 |
| source | string | 15 classes |
| formatted_source | string | 13 classes |
| text | string | lengths 2-7.55M |
7083b592-2b90-4055-a8ad-acc255efdf35
trentmkelly/LessWrong-43k
LessWrong
Analysis of key AI analogies

The following is an analysis of seven prominent AI analogies: aliens, the brain, climate change, electricity, the Industrial Revolution, the neocortex, & nuclear fission. You can find longer versions of these as separate blog posts on my substack.

0. Why?

AI analogies have a real-world impact

* For better or worse, analogies play a prominent role in the public debate about the long-term trajectory and impacts of AI.
* Analogies play a role in designing international institutions for AI (e.g. CERN, IPCC) and in legal decisions.
* Analogies as mental heuristics can influence policymakers in critical decisions. Changes in AI analogies can lead to worldview shifts (e.g. Hinton).
* Having worked with a diverse set of experts, my sense is that their thinking is anchored by wildly different analogies.

Analogies can be misleading

* Boaz Barak (“Metaphors for AI, and why I don’t like them”) and Matthew Barnett (“Against most, but not all, AI risk analogies”) have already discussed the shortcomings of analogies on this forum.
* Every individual analogy is imperfect. AI is its own thing, and there is simply no precedent that would closely match the characteristics of AI across 50+ governance-relevant dimensions.
* Overly relying on a single analogy without considering differences and other analogies can lead to blind spots, overconfidence, and overfitting reality to a preconceived pattern.

Analogies can be useful

* When facing a complex, open-ended challenge, we do not start with a system model. It is not clear which domain logic, questions, scenarios, risks, or opportunities we should pay attention to. Analogies can be a tool to explore such a future with deep uncertainty.
* Analogies can be an instrumental tool in advocacy to communicate complex concepts in a digestible and intuitively appealing way.

My analysis is written in the spirit of exploration without prescribing or proscribing any specific analogy. At the same time, as a repository, it may still be o
e1177d27-8fa7-4b20-9e07-1f220465f72f
trentmkelly/LessWrong-43k
LessWrong
Anyone with the medical knowledge to evaluate an extraordinary claim? In a different forum I frequent ([The Ornery American](http://www.ornery.org/cgi-bin/ubbcgi/ultimatebb.cgi)), a regular member there (LetterRip) has recently been making an extraordinary claim - a new theory of medicine he has devised that relates to, and can contribute to, the cure of several neurology-related conditions. I understand that the prior probabilities for him being a crank are much much higher than him being a new Louis Pasteur. Still I was wondering if there is anyone here with sufficient medical/medicinal knowledge that they can easily determine if there's something obviously ludicrous in LetterRip's theory, or even the opposite: if indeed there's something there that makes sense and is worth investigating. Here are some of the relevant threads he began:

- [where he requests contacts](http://www.ornery.org/cgi-bin/ubbcgi/ultimatebb.cgi?ubb=get_topic;f=6;t=014966)
- [where he publishes a portion of his theory as a Kindle book](http://www.ornery.org/cgi-bin/ubbcgi/ultimatebb.cgi?ubb=get_topic;f=6;t=014997)
- [where he announces more "breakthroughs" and insights and offers to cure or at least alleviate simple ailments](http://www.ornery.org/cgi-bin/ubbcgi/ultimatebb.cgi?ubb=get_topic;f=6;t=015009)

Once again: I understand it's highly unlikely there's anything in his theory; still, I felt a cost-benefit analysis justified my making this post here. So... anyone with enough understanding of biology/medicine to evaluate these claims of his?
a348f094-a007-472f-bf4e-0078fd2aec55
trentmkelly/LessWrong-43k
LessWrong
GPT-3 and the future of knowledge work The most recent episode of the Futurati Podcast is a big one. We had Jungwon Byun and Andreas Stuhlmüller on to talk about their startup 'Ought' and, to the best of my knowledge, this is the first public, long-form discussion of their work. (It's also probably our funniest episode.) Their ambition is to wrap a sleek GUI around advanced language models to build a platform which could transform scholarship, education, research, and almost every other place people think about stuff. The process is powered by GPT-3, and mostly boils down to teaching it how to do something you want it to do by showing it a couple of examples. To complete a list of potential essay topics you'd just show it 3-4 essay topics, and it'd respond by showing you a few more. The more you interact with it, the better it gets. There's all sorts of subtlety and detail, but that's the essence of it. This may not sound all that impressive, but consider what it means. You can have Elicit (a separate spinoff of Ought) generate counterarguments to your position, brainstorm failure modes (and potential solutions) for a course of action, summarize papers, and rephrase a statement as a question or in a more emotionally positive tone. The team is working on some integrations to extend these capabilities. Soon enough, Elicit will be able to connect to databases of published scientific papers, newspapers, blogs, or audio transcripts. When you ask it a research question, it'll be able to link out to millions of documents and offer high-level overviews of every major theme; it'll be able to test your comprehension by asking you questions as you read; it'll be able to assemble concept hierarchies; it'll be able to extract all the figures from scientific papers and summarize them; it'll be able to extract all the proper names, find where those people are located, get their email addresses where available, and write them messages inviting them on your podcast. We might one day be able to train a mode
9cc2f8c3-ff18-4d5a-9095-92f3e3d546e4
trentmkelly/LessWrong-43k
LessWrong
Artificial Intelligence: A Modern Approach (4th edition) on the Alignment Problem Previously: AGI and Friendly AI in the dominant AI textbook (2011), Stuart Russell: AI value alignment problem must be an "intrinsic part" of the field's mainstream agenda (2014) The 4th edition of Artificial Intelligence: A Modern Approach came out this year. While the 3rd edition published in 2009 mentions the Singularity and existential risk, it's notable how much the 4th edition gives the alignment problem front-and-center attention as part of the introductory material (speaking in the authorial voice, not just "I.J. Good (1965) says this, Yudkowsky (2008) says that, Omohundro (2008) says this" as part of a survey of what various scholars have said). Two excerpts— > 1.1.5 Beneficial machines > The standard model has been a useful guide for AI research since its inception, but it is probably not the right model in the long run. The reason is that the standard model assumes that we will supply a fully specified objective to the machine. > > For an artificially defined task such as chess or shortest-path computation, the task comes with an objective built in—so the standard model is applicable. As we move into the real world, however, it becomes more and more difficult to specify the objective completely and correctly. For example, in designing a self-driving car, one might think that the objective is to reach the destination safely. But driving along any road incurs a risk of injury due to other errant drivers, equipment failure, and so on; thus, a strict goal of safety requires staying in the garage. There is a tradeoff between making progress towards the destination and incurring a risk of injury. How should this tradeoff be made? Furthermore, to what extent can we allow the car to take actions that would annoy other drivers? How much should the car moderate its acceleration, steering, and braking to avoid shaking up the passenger? These kinds of questions are difficult to answer a priori. They are particularly problematic in the general area of human–robot
b1b9fd77-77d4-4ff5-93d4-6e07d2cdcd65
trentmkelly/LessWrong-43k
LessWrong
More Recent Progress in the Theory of Neural Networks Thanks to Dan Roberts and Sho Yaida for comments on a draft of this post. In this post, I would like to draw attention to the book Principles of Deep Learning Theory (PDLT), which I think represents a significant advance in our understanding of how neural networks work [1]. Among other things, this book explains how to write a closed-form formula for the function learned by a realistic, finite-width neural network at the end of training [2] to an order of approximation that suffices to describe representation learning, and how that formula can be interpreted as the solution to a regression model. This makes manifest the intuition that NNs are doing something like regression, but where they learn the features appropriate for a given dataset rather than having them be hand-engineered from the start. I've condensed some main points from the 400-page book into an 8-page summary here: Review of select results from PDLT (Other good places to learn about the book, though perhaps with less of a focus on AI-safety-relevant parts, include this series of five lectures given by the authors at a deep learning summer school or this one-hour lecture for a non-expert audience.) For those who have been following the discussions of ML theory on this forum, the method used in the book is to go to the next-to-leading order in a 1/width expansion. It thus builds on recent studies of infinitely wide NNs that were reviewed in the AF post Recent Progress in the Theory of Neural Networks [3].  However, by going beyond the leading order, the authors of PDLT are able to get around a key qualitative shortcoming of the earlier work in that infinitely wide NNs can't learn features.  The next-to-leading order formula also introduces a sum over many steps of gradient descent, getting around an objection [4] that the NTK/infinite width limit may not be applicable to realistic models since in that limit, we can land on the fully trained model after just one fine-tuned training step.  I think t
c5352642-429e-49ec-89ca-408d98ddd6ba
StampyAI/alignment-research-dataset/arxiv
Arxiv
Toward Transparent AI: A Survey on Interpreting the Inner Structures of Deep Neural Networks

I Introduction
---------------

A defining feature of the last decade of deep learning is drastic increases in scale and capabilities [sevilla2021parameters, kaplan2020scaling], with the training compute for machine learning systems growing by a factor of 10 billion from 2010 to 2022 [sevilla2022compute]. At the same time, deep neural networks (DNNs) are increasingly being used in settings where safe, predictable behavior is critical. And if rapid progress continues, automated, broad-domain intelligence has the potential to be highly impactful to society [bostrom2014superintelligence, muller2016future, tegmark2017life, russell2019human, christian2020alignment, ord2020precipice]. Given these developments, it is essential that practitioners are able to understand how AI systems make decisions, and in particular, their failure modes. AI systems are most typically evaluated by their performance on a test set for a particular task. This raises concerns because strong performance on a test set does not imply that a black box has learned an adequate solution. For instance, the deployment distribution may differ from the testing one, and/or the specification of the task objective may lead to unintended behavior (e.g. [lehman2020surprising, krakovna2020specification]). And even if a user is aware of inadequacies, the black-box nature of the system can make it difficult to fix flaws. Thus, an essential step to building safe and trustworthy AI systems is to have techniques to detect and address these flaws. Toward this end, having a diverse set of techniques for rigorously interpreting AI systems will be valuable (See [I-A](#S1.SS1)). We define an interpretability method as any process by which a system’s behavior can be characterized in human-understandable terms. This encompasses a broad set of techniques in the literature on DNNs, so in this paper, we focus specifically on methods that are useful for understanding *internal* structures and representations. We call these *inner* interpretability methods. We discuss a taxonomy for these methods, provide an overview of the literature, discuss key connections between interpretability and other topics in deep learning, and conclude with directions for continued work. Our central goals are twofold: (1) to provide a thorough resource for reference on existing inner interpretability methods, and (2) to offer guiding directions for continued, safety-focused research.

Key Themes:

* A principal motivation for interpretability techniques is understanding potential problems with models. Thus, interpretability methods will be highly relevant for building AI systems that are more safe and trustworthy.
* Interpretability techniques should be evaluated by their ability to produce novel, valid, and actionable insights. This can be difficult, and evaluation is often done poorly in the literature. Rigorous tests and benchmarks for evaluating interpretations are needed and should involve rediscovering known flaws in DNNs.
* There are a number of rich connections between interpretability, modularity, adversarial robustness, continual learning, network compression, and semblance to the human visual system.
* Compelling directions for future work include scalable methods for using human input, reverse-engineering systems, detecting latent knowledge, benchmarking, and studying interactions between techniques.

### I-A The Importance of Interpretability for Safer AI

For AI systems to be beneficial, they need the correct objectives, and they need to effectively optimize for them. It is principally this second desideratum in which interpretability techniques offer advantages for building more trustworthy AI [hubinger2020overview, ToCInterpretability]. We outline major motivations here.

Showcasing failure: Uncovering why a model fails to produce a correct output equips researchers with insights on what failures look like and how to detect them. This information can help researchers avoid these issues, and help regulators establish appropriate rules for deployed systems.

Fixing bugs: By understanding a failure and/or producing examples that exploit it, a network can be redesigned, fine-tuned, and/or adversarially-trained to better align it with the user’s goals.

Improving basic understanding: By offering users more knowledge of how DNNs learn, interpretability techniques could help researchers develop improved models or better forecast progress in AI.

Determining accountability: Having the ability to characterize failures is essential for establishing responsibility in the case of misuse or deployment failures.

“Microscope” AI: Rigorously understanding how an AI system accomplishes a task may provide additional domain knowledge. This goal has been referred to as “microscope” AI [hubinger2020overview], and it could allow for reverse engineering more understandable models. This may be especially valuable for studying systems with superhuman performance in some domain.

### I-B Desiderata

For interpretability techniques to contribute toward the goals above, they should meet certain desiderata.

Accuracy – verification, not persuasion: An interpretability technique should give a correct picture of the computation being performed by the model, rather than just plausibly *appearing* to do so. Giving the user a false sense of security can be actively harmful. A persistent example of this occurs with input attribution methods, which often provide misleading explanations about the decisions of the model [adebayo2018sanity, denain2022auditing]. Furthermore, explanations should come with uncertainty estimates.

Human-understandability: On the other hand, explanations produced by interpretability techniques should be easy for humans to grasp. In one sense, the most accurate “explanation” of a model would simply be to return its parameters, but this would almost always be intractable for a human to understand. Thus, accuracy should be balanced with understandability.

Depth: The “depth” of an inner interpretability technique refers to its ability to explain complex subprocesses. It is likely that some kinds of features or computations in DNNs are more naturally human-understandable than others, and this raises the possibility of an overly-simplistic understanding of a model. Interpretations should not be biased toward the portions of the model that are easy to explain.

Generalizability: Interpretations should be able to generalize to different examples than the ones used to develop them [yang2019evaluating]. This can help with diagnosing failures that occur beyond the training/validation distribution.
Competitiveness: An interpretability technique should not lead to significantly decreased competitiveness such as degraded performance, increased demands for compute, or difficulty of use in modern deep learning frameworks. Competitive shortcomings could also lead to “value erosion” [dafoe2020ai] in which safer AI practices are not adopted in favor of more competitive models.

Generating actionable insights: The ultimate goal of an interpretability method should be to produce useful insights. It is key that interpretations can be used to make and validate testable predictions about the model. Two ways in which this can be done are by using an interpretation to guide the design of a novel adversary or to manually finetune a model to induce a predictable change. This is closely related to accuracy; the result of interpretability methods should enable unambiguous insights into a model’s behavior. In Section [VI](#S6) we discuss the importance of actionable insights and how existing works typically fail to demonstrate them.

### I-C Scope and Taxonomy

Our focus is on *inner* interpretability methods for DNNs. Notably, model-agnostic techniques, black-box techniques, input attribution methods, neurosymbolic methods, and “good old-fashioned AI” are beyond the scope of this survey. This is not to say that they are of less value to building safe AI than the methods we focus on – we believe that a diverse set of techniques is crucial. However, we focus on inner interpretability methods (1) to keep the scope of this survey tractable, and (2) because they are well-equipped for certain goals such as understanding how to modify a model, reverse engineering solutions, and detecting latent knowledge that does not normally appear in the system’s deployment behavior. See also a number of previous surveys and critiques of interpretability work that have overlap with ours [doshivelez2017towards, adadi2018peeking, samek2019towards, gilpin2018explaining, molnar2020interpretable, danilevsky2020survey, jacovi2020towards, molnar2020pitfalls, das2020opportunities, krishnan2020against, samek2021explaining, sajjad2021neuron, rudin2022interpretable, molnar2022]. This survey, however, is distinct in its focus on inner interpretability, AI safety, and the intersections between interpretability and several other research paradigms. See our discussion in Section [VI](#S6).

In the following sections we organize our discussion of techniques based on what part of a DNN’s computational graph they explain: weights, neurons, circuits, or representations. Fig. [1](#S1.F1) depicts how inner methods can be organized as such. Aside from this breakdown, interpretability techniques can also be divided by whether they are used during or after model training. Intrinsic interpretability techniques involve training models to be easier to study or to come with natural interpretations. Post hoc techniques aim to interpret a model after it has been trained. We divide methods by whether they are intrinsic or post hoc at the subsection level. These two approaches are not mutually exclusive.

Fig. 1: A taxonomy of inner interpretability methods. This mirrors our organization of Sections [II](#S2)-[V](#S5).

II Weights
-----------

See also circuits analysis in Section [IV-D](#S4.SS4).

### II-A Continual Learning (Intrinsic):

One research paradigm in deep learning is to train systems that are able to learn new tasks without forgetting old ones. This is known as *continual learning* or avoiding *catastrophic forgetting* [de2019bias, smith2022closer]. Some techniques are based on the principle of having weights specialize for learning about particular types of input data and updating more for some than others [kirkpatrick2017overcoming, li2017learning, zenke2017continual, mallya2018packnet, aljundi2019task, ahn2019uncertainty, titsias2019functional]. This offers a natural way to interpret weights based on the tasks or classes that they specialize in. See also methods for continual learning that operate on neurons in Section [III-A](#S3.SS1).

### II-B Weight Masking (Post Hoc):

In contrast to intrinsic methods, one can also train differentiable weight masks over a network to determine which weights are key for which tasks [csordas2020neural, wortsman2020supermasks, zhao2020masking]. For example, a mask over the network’s weights can be trained to cover as many weights as possible in the network while still preserving performance on one particular subtask. The resulting mask identifies a subnetwork that can be understood as specializing in that subtask.

### II-C Frivolous Weights (Hazard):

A difficulty in interpreting weights is that many are often unimportant to the network. Past works on weight pruning have shown that networks can often be pruned to contain a very small fraction of their original weights (though sometimes with fine-tuning) with little to no loss in performance [frankle2018lottery, blalock2020state, vadera2020methods].
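As a rough illustration of the differentiable weight-masking idea in Section II-B, the minimal sketch below trains a soft mask over a frozen linear layer so that only the weights needed for one subtask survive. It is a sketch under stated assumptions (a model whose linear layers have been wrapped, a `subtask_loader`, and arbitrary hyperparameters), not a reproduction of any of the cited methods.

```python
# Minimal sketch of differentiable weight masking (Section II-B style).
# Assumptions: `model` is a network whose nn.Linear layers were replaced by
# MaskedLinear (collected in `masked_layers`), and `subtask_loader` yields
# (inputs, labels) for the subtask of interest.
import torch
import torch.nn.functional as F

class MaskedLinear(torch.nn.Module):
    """Wraps a pretrained nn.Linear; only the mask logits are trainable."""
    def __init__(self, linear):
        super().__init__()
        self.weight = linear.weight.detach()     # frozen pretrained weights
        self.bias = linear.bias.detach()
        self.mask_logits = torch.nn.Parameter(torch.zeros_like(self.weight))

    def forward(self, x):
        mask = torch.sigmoid(self.mask_logits)   # soft mask in (0, 1)
        return F.linear(x, self.weight * mask, self.bias)

def train_masks(model, masked_layers, subtask_loader, steps=1000, lam=1e-3):
    mask_params = [m.mask_logits for m in masked_layers]
    opt = torch.optim.Adam(mask_params, lr=1e-2)
    for step, (x, y) in enumerate(subtask_loader):
        loss = F.cross_entropy(model(x), y)
        # sparsity penalty: push mask values toward 0 so that only the
        # subnetwork needed for this subtask is kept
        loss = loss + lam * sum(torch.sigmoid(p).mean() for p in mask_params)
        opt.zero_grad()
        loss.backward()
        opt.step()
        if step + 1 >= steps:
            break
    # threshold to hard masks identifying the subtask's subnetwork
    return [(torch.sigmoid(p) > 0.5).float() for p in mask_params]
```

Thresholding the learned masks at the end yields a hard subnetwork that can then be inspected or evaluated on other tasks.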
III Individual Neurons
-----------------------

As is common in the literature, we use “neuron” to refer both to units in dense layers and feature maps in convolutional layers.

Fig. 2: Inner interpretability methods for individual neurons can focus on (1) continual learning techniques that make neurons specialize in tasks, (2) using a dataset to find which neurons respond to which features, (3) synthesizing features to activate neurons, (4) analyzing network behavior when neurons are perturbed/ablated, and (5) analysing partial derivatives of outputs w.r.t. neural activations.

### III-A Continual Learning (Intrinsic):

Just as continual learning [de2019bias, smith2022closer] can be facilitated via specialization among weights (see Section [II-A](#S2.SS1)), the same can be done with neurons. Unlike weight-based continual learning methods, in which weights update more for some tasks than others, neuron-based ones typically rely on adding new neurons to the DNN’s architecture upon encountering a new task [rusu2016progressive, yoon2017lifelong, lee2020neural]. This allows for a natural interpretation of neurons in terms of what subtask they specialize in and serves to reduce the degree to which they can simultaneously detect unrelated features. See also Section [III-F](#S3.SS6) which discusses “polysemantic” neurons.

### III-B Dataset-Based (Post Hoc):

A simple way to characterize the role of individual neurons is to use a dataset to analyze which types of inputs they respond to. Perhaps the simplest example of this could be to search through a dataset and select the inputs that maximally excite a given neuron [zhou2014object]. A more sophisticated technique known as network “dissection” uses a richly-labeled dataset of semantic concepts to analyze neural responses [bau2017dissection, bau2018GAN, Bau2020understanding]. The degree to which a neuron can be successfully interpreted in this way can be measured as the degree of alignment between a neuron’s activations and a particular type of input. This line of work has been extended to assign descriptions to neurons using compositional logic expressions on a set of labels [mu2020compositional]. This allows the interpretability of neurons to be quantified as the intersection over union for a logical formula on input features and the neuron’s activations. This has been further extended to develop natural language explanations by using captioning methods to describe a set of image patches that activated a neuron [hernandez2021natural, oikarinen2022clip_dissect]. Aside from these, dissection has also been used to analyze what types of neurons are exploited by adversarial examples [xu2019interpreting], identify failure modes for text-to-image models [cho2022dall], and probe neural responses in transformers to isolate where certain facts are stored [meng2022locating]. Unfortunately, these types of methods are limited by the diversity of examples in the dataset used and the quality of labels.

### III-C Feature Synthesis (Post Hoc):

This type of approach is based on synthesizing inputs with the goal of maximally or minimally activating a given neuron. In contrast to using a specialized dataset to probe neural responses, synthesis methods come with the advantage of not being limited to a particular dataset. A number of works have taken this approach, optimizing inputs to excite particular neurons [mahendran2015understanding, nguyen2016multifaceted, olah2017feature]. There has also been work on using generative models instead of directly optimizing over input features [nguyen2016synthesizing, nguyen2016plug, casper2021robust]. Moreover, one can use a distance measure in the optimization objective to synthesize a batch of inputs to be diverse [olah2017feature].
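To make the basic idea concrete, the minimal sketch below optimizes an input image to maximally activate one channel of a pretrained torchvision CNN. The model, layer, channel, and optimizer settings are purely illustrative, and practical feature-visualization pipelines add image regularizers and transformation robustness on top of this.

```python
# Minimal sketch of feature synthesis by activation maximization (Section III-C).
import torch
from torchvision import models

model = models.resnet18(weights="IMAGENET1K_V1").eval()
for p in model.parameters():
    p.requires_grad_(False)              # only the input image is optimized

acts = {}
def hook(module, inputs, output):
    acts["feat"] = output                # cache the chosen layer's activations
model.layer3.register_forward_hook(hook)

img = torch.randn(1, 3, 224, 224, requires_grad=True)
opt = torch.optim.Adam([img], lr=0.05)
target_channel = 42                      # arbitrary channel to visualize

for _ in range(256):
    model(img)
    loss = -acts["feat"][0, target_channel].mean()  # maximize the activation
    opt.zero_grad()
    loss.backward()
    opt.step()
```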
A broader survey of these types of methods is provided by [nguyen2019understanding].

### III-D Neural Perturbation (Post Hoc):

One neuroscience-inspired [herreras2010cognitive] method for studying neurons is to perturb them and analyze how this affects the DNN’s behavior. By analyzing which types of inputs are processed differently under perturbation to a neuron, one can gain insight on the type of information that it processes. For example, if a neuron in an image classifier robustly and uniquely detects dogs, one should expect performance on dog classification to worsen when that neuron is ablated (i.e. dropped out). A key benefit of these methods is that they allow for testing counterfactuals, helping establish a causal rather than correlational relationship between neural activations and the behavior of the network. Work in this area has used single-neuron ablation [zhou2018revisiting, hod2021detecting], subspace ablation [morcos2018importance, ravfogel2022linear], and non-ablation perturbations [bau2018identifying]. Perturbations have also been useful for studying transformers [elhage2021mathematical]. Notably, the net effect of perturbing a neuron can vary by context and which others, if any, are also perturbed. To account for this, one can compute Shapley values for neurons on a certain task to understand their role in the network averaged over the ablation of other neurons [Stier2018analysing, ghorbani2020shapley].

### III-E Gradient-Based Attribution (Post Hoc):

A great deal of work has been done on gradient-based feature attribution to study which features are influential for neural responses or model outputs. There are several surveys and critiques of these methods in particular [dombrowski2019explanations, adebayo2018sanity, zhang2019should, ancona2019gradient, slack2020fooling, jeyakumar2020can, nielsen2021robust, denain2022auditing, fokkema2022attribution]. Most of this work has been done to study attributions on inputs rather than internal parts of the model and is thus outside the scope of this survey. However, similar approaches can be applied for attribution to internal neurons. [sundararajan2017axiomatic] introduce an approach for this using gradients along with runtime tests for sensitivity and invariance to evaluate the quality of attribution-based interpretations. Building off this, several works have found gradient-based attribution to be useful for large language models [ancona2017towards, durrani2020analyzing, lundstrom2022rigorous], particularly to guide a search for where certain facts are stored [dai2021knowledge].

Fig. 3: Inner interpretability methods for subnetworks can focus on (1) simplifying the computational subgraph via sparsity, (2) either intrinsic modularity among neurons or post hoc groupings of neurons into modules, or (3) analysis of neural “circuits” which can be understood as performing a specific task.
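A minimal sketch of the ablation-style perturbation experiment described in Section III-D is given below: it zeroes one unit's activations with a forward hook and compares task accuracy before and after. The model, the data loader, and the layer/unit indices are assumed purely for illustration.

```python
# Minimal sketch of a neuron-ablation experiment (in the spirit of Section III-D).
# Assumptions: `model` is an image classifier with a `layer4` module, and
# `loader` yields (inputs, labels) for the class of interest (e.g. dogs).
import torch

def evaluate(model, loader):
    correct = total = 0
    with torch.no_grad():
        for x, y in loader:
            correct += (model(x).argmax(dim=1) == y).sum().item()
            total += y.numel()
    return correct / total

def ablate_unit(layer, unit_index):
    """Zero one unit's activations via a forward hook; returns the hook handle."""
    def hook(module, inputs, output):
        output = output.clone()
        output[:, unit_index] = 0.0      # drop the unit (neuron or feature map)
        return output
    return layer.register_forward_hook(hook)

baseline = evaluate(model, loader)               # accuracy before ablation
handle = ablate_unit(model.layer4, unit_index=7) # layer/unit chosen arbitrarily
ablated = evaluate(model, loader)
handle.remove()                                  # restore the original network

print(f"accuracy drop from ablating unit 7: {baseline - ablated:.3f}")
```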
### III-F Polysemantic Neurons (Hazard):

*Polysemantic* neurons are activated by multiple unrelated features (e.g. images of cat faces and cars [olah2020zoom]). They have been discovered via dataset-based methods [fong2018net2vec, mu2020compositional, hernandez2021natural], various forms of visual feature synthesis [nguyen2016multifaceted, olah2020zoom, voss2021visualizing, goh2021multimodal], and feature attribution [elhage2022solu]. How and why they form remains an open question. However, [olah2020zoom] observed a tendency for monosemantic neurons to become polysemantic and hypothesized that it results from networks learning to represent information more efficiently. This would suggest that polysemantic neurons might be useful for model performance. However, they also pose a significant challenge for two reasons. First, interpretations of polysemantic neurons are more likely to be incorrect or incomplete. Second, it has been shown that polysemantic neurons can be exploited for adversarial attacks [mu2020compositional, hernandez2021natural]. See also Section [V-C](#S5.SS3) for a discussion of *entanglement*, which generalizes the notion of polysemanticity to layers.

### III-G Frivolous Neurons (Hazard):

Frivolous neurons are not important to a network. [casper2019frivolous] define and detect two distinct types in DNNs: *prunable* neurons which can be removed from a network by ablation, and *redundant* neurons which can be removed by refactoring layers. They pose a challenge for interpretability because a frivolous neuron’s contribution to the network may either be meaningless or difficult to detect with certain methods (e.g. neural perturbation). Network compression may offer a solution. For example, [sainath2013low, srinivas2015data, hu2016network, luo2017thinet, he2020learning] each compress networks by eliminating frivolous neurons. Moreover, compression and the interpretability of neurons are linked. After compressing a network, [li2019exploiting] found that the remaining neurons were more interpretable with minimal change in performance, and [yao2021deep] used proxies for neuron interpretability to guide neuron-level pruning. Additionally, the motivation for pruning to increase interpretability is closely related to intrinsically interpretable layer representations. See also Section [V-C](#S5.SS3) on neural disentanglement.

IV Subnetworks
---------------

### IV-A Sparsity (Intrinsic):

Sparse weights inside of DNNs allow for much simpler analysis of relationships between neurons. In some cases, sparsification can reduce the number of weights by almost two orders of magnitude while causing little to no tradeoff with performance [frankle2018lottery]. Sparsity-aided interpretability has been explored through pruning [frankle2019dissecting, wong2021leveraging, bena2021extreme, moran2021identifiable], regularization [ross2017neural], and sparse attention [meister2021sparse]. Pruning portions of the network architecture can also be guided by measures of interpretability [yeom2021pruning, wang2020dynamic]. Interestingly, [frankle2019dissecting] finds no increase in interpretability through the dissection of pruned networks, and [meister2021sparse] fail to find evidence of improved interpretability of individual neurons with sparse attention.
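The kind of magnitude-based pruning referenced above can be sketched in a few lines with PyTorch's pruning utilities. The toy model and the 90% pruning fraction are illustrative, and real pipelines usually fine-tune after pruning to recover accuracy.

```python
# Minimal sketch of magnitude-based weight pruning for sparsity (Section IV-A).
import torch.nn as nn
import torch.nn.utils.prune as prune

model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))

for module in model.modules():
    if isinstance(module, (nn.Linear, nn.Conv2d)):
        # zero out the smallest-magnitude weights within each layer
        prune.l1_unstructured(module, name="weight", amount=0.9)
        prune.remove(module, "weight")   # make the pruning permanent

zeros = sum(int((m.weight == 0).sum()) for m in model if isinstance(m, nn.Linear))
total = sum(m.weight.numel() for m in model if isinstance(m, nn.Linear))
print(f"fraction of weights pruned: {zeros / total:.2f}")
```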
Meanwhile, as an alternative to sparsity, [wu2018beyond] introduce a method to regularize the behavior of a neural network to mimic that of a decision tree.

### IV-B Modularity (Intrinsic):

Modularity is a common principle of engineered systems and allows for a model to be understood by analyzing its parts separately. At a high level, [amer2019review] offers a survey of DNN modularization techniques, and [agarwala2021one, mittal2022modular] study the capabilities and generality of modular networks compared to monolithic ones. The simplest way to design a modular DNN is to use an explicitly modular architecture. This can be a form of “model-aided deep learning” [shlezinger2020model] if domain-specific considerations are used to guide the design. Modular architectures were studied by [voss2021branch], who analyzed the extent to which neurons in a branched or “multi-path” architecture learned to process different features from those in other branches, and [yan2020neural], who experimented with distinct neural modules that were trained to execute algorithmic subroutines. Branched architectures could be considered a form of “hard” modularity, but a “softer” form can be achieved if neurons in different modules are connected to each other but must compete for access to information. This can allow for end-to-end differentiability, yet sparse information flow between modules. Methods for soft modularity have been studied via a controller [kirsch2018modular, jiang2019self] or sparse attention [andreas2016neural, goyal2019recurrent, serra2018overcoming, elhage2022solu]. Notably, [serra2018overcoming] used attention to the task to induce specialization among neurons and reduce catastrophic forgetting. Additionally, [filan2021clusterability] explore two methods for making networks more modular involving initialization and regularization. See also methods for avoiding catastrophic forgetting by having subsets of neurons specialize in a given task in Section [III-A](#S3.SS1).

### IV-C Modular Partitionings (Post Hoc):

One way of understanding a DNN in a modular fashion is to partition the neurons into a set of modules, each composed of related neurons. Toward this goal, [watanabe2018modular, watanabe2019understanding, filan2021clusterability] divide neurons into modules based on graphical analysis of the network’s weights and analyze how distinct the neurons in each module are. These methods involve no data or runtime analysis. In contrast, [watanabe2019interpreting, arik2020explaining, hod2021detecting, casper2022graphical, lange2022clustering] each perform partitioning and cluster analysis based on how neurons associate with inputs and/or outputs. In particular, [hod2021detecting] present a statistical pipeline for quantifying the interpretability of neuron clusters with no human in the loop.
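In the spirit of the weight-graph partitionings above, the sketch below clusters the neurons of a plain MLP by building an adjacency matrix from absolute weight magnitudes and running spectral clustering on it. It is only a minimal illustration, with the MLP given as raw weight matrices and an arbitrary clustering setup, not a reproduction of any cited pipeline.

```python
# Minimal sketch of partitioning neurons into modules from the weight graph.
import numpy as np
from sklearn.cluster import SpectralClustering

def partition_neurons(weight_matrices, n_modules=4):
    """weight_matrices[k] has shape (layer k+1 size, layer k size)."""
    sizes = [w.shape[1] for w in weight_matrices] + [weight_matrices[-1].shape[0]]
    offsets = np.cumsum([0] + sizes)
    adj = np.zeros((offsets[-1], offsets[-1]))
    # connect consecutive layers with edge weights |W_ij|
    for k, w in enumerate(weight_matrices):
        a, b = offsets[k], offsets[k + 1]
        adj[b:b + w.shape[0], a:a + w.shape[1]] = np.abs(w)
    adj = adj + adj.T                   # symmetric affinity matrix over neurons
    labels = SpectralClustering(n_clusters=n_modules,
                                affinity="precomputed").fit_predict(adj)
    return labels                       # one module label per neuron, in layer order

# toy usage: a random MLP with 20 -> 16 -> 8 -> 4 neurons
weights = [np.random.randn(16, 20), np.random.randn(8, 16), np.random.randn(4, 8)]
print(partition_neurons(weights))
```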
Fig. 4: Inner interpretability methods for neural representations can focus on (1) having networks explain their decisions, (2) adversarial training, (3) “disentangling” latent representations such that each neuron nonredundantly responds to a coherent concept in data, (4) analysis of token evolution or attribution maps in transformers, (5) analysis of vectors in latent space, and (6) probing neural representations to evaluate their transferability to a target task.

### IV-D Circuits Analysis (Post Hoc):

Instead of analyzing an entire partitioning of a network, a much simpler approach is to explain individual subnetworks inside of it. These have often been referred to as neural “circuits”, which can be as small as just a few neurons and weights. This analysis often focuses on characterizing neurons using single-neuron interpretability methods followed by analysis of how they associate with each other, typically based on weight magnitudes or neural perturbation experiments. A motivation for this is that it can be difficult to accurately interpret a neuron’s behaviour without understanding how it influences other neurons – which requires understanding the weights and neurons adjacent to the neuron of interest. This has been done with weight masking [wang2018interpret], data-based methods [fiacco2019deep, santurkar2021editing], feature synthesis [olah2017feature, olah2018building, olah2020overview, cammarata2020curve, olah2020naturally, olah2020zoom, voss2021visualizing, schubert2021high, petrov2021weight], and neural ablation [meyes2020under, hamblin2022pruning]. See also Section [V-D](#S5.SS4) for a discussion of circuits in transformers.

V Internal Representations
---------------------------

### V-A Self-Explaining Models (Intrinsic):

Most methods in the literature used for understanding DNNs aim to help a human “open up” the network and study parts of it. If one wants to understand another human’s reasoning, the analogous techniques would involve studying their brain directly. These techniques are sometimes useful, but in most cases, simply asking another human for an explanation of what they are thinking is much more effective. Self-explaining AI systems are meant to provide such explanations of internal reasoning in an analogous way to how humans can provide them. Competing definitions are offered in the literature, but we will use one based on [elton2020self], which simply requires that a model produces an explanation for its reasoning that can easily be understood by a human, ideally paired with confidence estimates. In computer vision, one approach has been to classify images based on their similarity to a set of learned “prototypes” [kim2014bayesian, li2018deep, alvarez2018towards, chen2019looks, rymarczyk2021interpretable]. Prototype-based classification has also been studied in language models [Farhangi_2022].
These methods allow the model to attribute its outputs to a set of exemplar datapoints, allowing its decision to be explained as “*this* input resembles *these* other examples.” Another self-explaining AI strategy has been to supervise human-understandable explanations for visual classification or question answering [hendricks2016generating, hendricks2018grounding, kim2018textual, akata2018generating, patro2020robust] so that the model outputs both a decision and an explanation computed from the same inner representations. In natural language processing, this approach has been used for question answering and natural language inference with explanations [camburu2018snli, lamm2020qed, kumar2020nile, zhao2020lirex]. Another approach has been to design a “ConceptTransformer” whose outputs can be explained as an attention map over user-defined concepts [rigotti2021attention]. For large language models that have highly general language capabilities, explanations can also simply be elicited via prompts (e.g. [brown2020language, chowdhery2022palm]); however, the extent to which these explanations accurately explain the model’s decision making is very unclear [kadavath2022language]. [alvarez2018towards] argues that explanations should meet three criteria: (1) Explicitness: are explanations direct and understandable? (2) Faithfulness: do they correctly characterize the decision? And (3) Stability: how consistent are they for similar examples? Producing self-explaining models that meet these remains an open challenge. It has been shown that explanations from such models can be unfaithful [alvarez2018towards, valentino2021natural] or vulnerable to adversarial examples [zheng2019analyzing, camburu2019make, hoffmann2021looks]. Toward the goals of understanding and fixing these issues, [deyoung2019eraser] introduces an NLP benchmark, and [bontempelli2021toward] provides an interactive debugging method for prototype networks.

### V-B Adversarial Training (Intrinsic):

[engstrom2019adversarial] found that adversarially-trained classifiers exhibited improvements in a number of interpretability-related properties including feature synthesis for neurons (see Section [III-C](#S3.SS3)). It has also been found that these adversarially trained networks produce better representations for transfer learning [salman2020adversarially], image synthesis [santurkar2019image, casper2021robust], and for modeling the human visual system [engstrom2019learning]. Unfortunately, robustness may be at odds with accuracy [tsipras2018robustness], potentially due to predictive but “nonrobust” features in a dataset [ilyas2019adversarial]. This has led to an understanding that adversarial examples and adversarial training can be used to help understand what types of useful or exploitable features a network detects and represents internally [casper2021robust].

### V-C Disentanglement (Intrinsic):

During a pass through a network, each layer’s activations can be represented as a vector in latent space. The goal of *disentanglement* is to ensure that features can be more easily identified by analyzing a latent vector.
This section focuses on entanglement at the layer level, but see also Section [III-F](#S3.SS6) for a discussion of polysemantic neurons. Disentanglement can be done in a supervised manner by encouraging neurons to align to a set of predetermined concepts. [chen2020concept] did this by applying a whitening operation to decorrelate features in data followed by a learned orthogonal transformation to produce latent activations that could be supervised. Similarly, inner supervision was used by [losch2019interpretability, koh2020concept, losch2021semantic] to train a ‘bottleneck’ layer to separate features, and by [subramanian2018spine] to learn sparse, interpretable embeddings. Disentanglement can also be done in an unsupervised manner. An early partial example of this is dropout [JMLR:v15:srivastava14a], which prevents coadaptation among neurons, though at the cost of increasing redundancy. Other works have explored using lateral inhibition between neurons in a layer to make them compete for activation [cao2018lateral, krotov2019unsupervised, subramanian2018spine, elhage2022solu], designing a “capsule” based architecture in which a group of neurons have activations that each represent a specific feature [sabour2017dynamic, deliege2019effective], aligning activations to components of variation in data [kuo2019interpretable], using a mutual information loss [chen2016infogan], using an inter-class activation entropy-based loss [zhang2018interpretable], regularizing the hessian of the network’s outputs [peebles2020hessian], training a classifier and autoencoder from the same latents [schneider2021explaining], or learning a mask over features [he2022exploring]. Aside from these, other works have focused on training autoencoders to have more independently-activated neurons [higgins2016beta, kumar2017variational, burgess2018understanding, kim2018disentangling, chen2018isolating]. However, in a survey of these methods, [locatello2019challenging, locatello2020sober] prove an impossibility result for unsupervised disentanglement without inductive biases on both the models and data.

### V-D Tokens and Attention (Intrinsic and Post Hoc):

Transformer architectures process data by alternately passing token representations through attention and feed-forward layers. These architectural building blocks pose unique opportunities for studying the network’s internal representations. First, the tokens can be studied. This can be done by interpreting token representations in transformers directly [li2021implicit, geva2022lm, geva2022transformer] or analyzing how fully-connected layers process them [geva2020transformer]. Second, key-query products are computed inside of an attention layer and represent how much each inner token is attending to others. This notion of studying relations between token representations has similarities to circuits analysis covered in Section [IV-D](#S4.SS4). In their seminal work, [bahdanau2014neural] showed that attentional alignment appeared to follow the expected attention patterns for machine translation, and other more recent works have used this approach more systematically [vashishth2019attention, clark2019does, hao2021self, abnar2020quantifying].
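As a concrete, minimal example of inspecting key-query attention patterns, the sketch below pulls per-layer, per-head attention weights out of a Hugging Face transformer. The model name and the particular layer/head are arbitrary choices for illustration.

```python
# Minimal sketch of inspecting attention patterns (Section V-D style).
# Assumptions: the `transformers` library; layer 5 / head 3 are arbitrary.
import torch
from transformers import AutoModel, AutoTokenizer

name = "bert-base-uncased"
tok = AutoTokenizer.from_pretrained(name)
model = AutoModel.from_pretrained(name, output_attentions=True).eval()

inputs = tok("The bank raised interest rates.", return_tensors="pt")
with torch.no_grad():
    out = model(**inputs)

# out.attentions: tuple of (batch, heads, seq, seq) tensors, one per layer
attn = out.attentions[5][0, 3]
tokens = tok.convert_ids_to_tokens(inputs["input_ids"][0])
for i, t in enumerate(tokens):
    j = int(attn[i].argmax())
    print(f"{t:>12s} attends most to {tokens[j]}")
```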
Notably, such attention analysis can be used for identifying harmful biases [de2019bias]. Interactive tools for visual analysis of attentional attribution are provided by [lee2017interactive, liu2018visual, strobelt2018s, vig2019multiscale]. And [elhage2021mathematical, chefer2021transformer, olsson2022context] expanded on this approach toward the goal of multi-step attribution across multiple layers. Importantly, the analysis of attention may not always suggest faithful explanations, and over-reliance on attention patterns for interpretation can be hazardous [jain2019attention, serrano2019attention, wiegreffe2019attention]. Finally, transformers may have many frivolous, prunable attention heads [voita2019analyzing], suggesting a further need for caution because not all heads may be worth interpreting.

### V-E Concept Vectors (Post Hoc):

While disentanglement aims to align concepts with individual neurons, methods for analyzing concept vectors are post hoc solutions to the same type of problem. Here, the goal is to associate directions in latent space with meaningful concepts. Several works have done this by analysis of activations induced by images from a dataset of concepts [fong2018net2vec, kim2018interpretability, zhou2018interpretable, reif2019visualizing, lucieri2020explaining, lucieri2020interpretability], including [abid2022meaningfully], who used it explicitly for debugging. A contrasting approach was used by [schneider2021explaining]. Rather than beginning with concepts and then identifying vectors for them, they first identified vectors using a generator and a layer-selectivity heuristic, and then sought to find post hoc explanations of what they encoded. A debugging-oriented approach was taken by [jain2022distilling], who trained linear classifiers on embeddings of a data class to identify ones that were incorrectly labeled by a classifier. This allowed them to detect, interpret, and intervene for potentially difficult inputs for the model as well as to identify underrepresented subcategories of data.

### V-F Probing (Post Hoc):

Given some way of embedding data, the goal of probing is to understand whether or not that embedding captures a certain type of information. Probing leverages transfer learning as a test for whether embeddings carry information about a target task. The three steps to probing are to (1) obtain a dataset which contains examples capturing variation in some quality of interest, (2) embed the examples, and (3) train a model on those embeddings to see if it can learn the quality of interest. Any intermediate representation from any model can be used, making this a versatile technique. A survey of probing works is provided by [belinkov2022probing]. The simplest example of probing is to use an unsupervised learning model [hoyt2021probing]. Additional work has been done with linear probes for image classifiers [alain2016understanding]. However, probing has most commonly been done in NLP [gupta2015distributional, kohn2015s, adi2016fine, ettinger2016probing, conneau2018you, perone2018evaluation, niven2019probing, saleh2020probing, lepori2020picking, tamkin2020investigating, miaschi2021probing, lindstrom2021probing, li2022probing]. While versatile, probing is imperfect [antverg2021pitfalls]. One issue is that a probe’s failure to learn the desired quality is not necessarily an indicator that that quality is not represented. For example, this may be the case with an underparameterized probe.
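A minimal sketch of the three-step probing recipe above is given below, assuming a hypothetical `embed` function for the representation being probed and labeled splits for the quality of interest; a shuffled-label control is included as one simple sanity check.

```python
# Minimal sketch of the three-step probing recipe (Section V-F).
# Assumptions: `embed(x)` returns a fixed-size vector from the layer being
# probed, and (train_x, train_y), (test_x, test_y) label the quality of interest.
import numpy as np
from sklearn.linear_model import LogisticRegression

X_train = np.stack([embed(x) for x in train_x])
X_test = np.stack([embed(x) for x in test_x])

probe = LogisticRegression(max_iter=1000).fit(X_train, train_y)
print("probe accuracy:", probe.score(X_test, test_y))

# control: a probe trained on shuffled labels should do no better than chance
control = LogisticRegression(max_iter=1000).fit(
    X_train, np.random.permutation(train_y))
print("control accuracy:", control.score(X_test, test_y))
```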
Also, a successful probe does not necessarily imply that the model that is being probed actually uses that information about the data. This was demonstrated by [ravichander2020probing], who argued for the use of rigorous controls when probing. In a subsequent paper, [elazar2021amnesic] aimed to address this problem by pairing probing with experiments that manipulate the data in order to analyze the causal influence of perturbations on the model’s performance.

VI Discussion
--------------

Interpretability is closely linked with adversarial robustness research. (The works referenced in this paragraph are not limited only to inner interpretability methods.) There are numerous connections between the two areas. (1) More interpretable networks are more robust to adversaries. A number of works have studied this connection by regularizing the input gradients of networks in order to improve robustness [ross2018improving, finlay2019scaleable, etmann2019connection, kim2019bridging, kaur2019perceptually, boopathy2020proper, hartl2020explainability, du2021fighting, sarkar2021get, mangla2020saliency, noack2021empirical]. Aside from this, [eigen2021topkconv] use lateral inhibition and [tsiligkaridis2020second] use a second-order optimization technique, each to improve *both* interpretability and robustness. Furthermore, emulating properties of the human visual system in a convolutional neural network improves robustness [dapello2020simulating]. (2) More robust networks are more interpretable [engstrom2019adversarial, augustin2020adversarial, ortiz2021optimism]. Adversarially trained networks also produce better representations for transfer learning [salman2020adversarially, agarwala2021one], image synthesis [santurkar2019image, casper2021robust], and for modeling the human visual system [engstrom2019learning]. (3) Using an interpretability technique to guide the design of novel adversarial examples is a way to rigorously demonstrate its usefulness. This has been done by [carter2019exploring, mu2020compositional, hernandez2021natural, casper2021robust] and has been used to more effectively generate adversarial training data [ziegler2022adversarial]. As a means of debugging models, [hubinger2019relaxed] argues for using “relaxed” adversarial training which can rely on interpretability techniques to discover general distributions of inputs which may cause a model to fail. (4) Adversarial examples can be used as interpretability tools [dong2017towards, tomsett2018failure, ilyas2019adversarial, casper2021robust]. (5) Finally, adversarial trojan detection methods can also be used as interpretability/debugging tools [wang2019neural, guo2019tabor, gao2021design, liu2020survey, wang2022survey].

Interpretability is also closely linked with continual learning, modularity, and network compression. Continual learning methods [delange2021continual, smith2022closer] involving parameter isolation and/or regularization make neurons and weights more intrinsically interpretable. As discussed in Sections [II-A](#S2.SS1) and [III-A](#S3.SS1), these methods suggest intrinsic interpretations for their weights and/or neurons in terms of task.
Thus, they allow for each weight or neuron to be interpreted as having partial memberships in a set of task-defined modules. Aside from this, a number of other intrinsic modularity techniques were the focus of Section [IV-B](#S4.SS2). And as discussed in Section [IV-C](#S4.SS3), networks can also be interpreted by partitioning them into modules and studying each separately. Furthermore, “frivolous” neurons [casper2019frivolous] are compressible and include sets of redundant neurons which can be interpreted as modules and can often be merged by weight refactorization. And finally, compression can guide interpretations [li2019exploiting], and interpretations can guide compression [yao2021deep] (see Section [III-G](#S3.SS7) on frivolous neurons and compression).

Interpretability techniques should scale to large networks. Frequently, small networks and simple tasks such as classification on the MNIST dataset [lecun-mnisthandwrittendigit-2010] are used for testing methods. However, we stress that simple DNNs performing simple tasks can only be deployed in a limited number of real-world settings, and they are sometimes easy to replace with other intrinsically interpretable, non-DNN models. As a result, we argue that the ability of a method to scale to large models relates strongly to its potential to be practically useful for diagnostics and debugging. For example, capsule networks [sabour2017dynamic] achieve impressive performance on MNIST classification and have intrinsic interpretability properties that convolutional neural networks lack. However, they are less parameter efficient and have thus far not achieved competitive performance beyond the CIFAR-10 [krizhevsky2009learning] level, let alone the ImageNet [russakovsky2015imagenet] level [patrick2022capsule]. Methods like these may offer excellent inspiration for future work, but should they fail to be tractable for use with large models, they may be of limited value for practical interpretability. We urge researchers to detail computational requirements for their methods.

Quantifying uncertainty is crucial. The use of interpretability techniques to explain parts of a DNN should be paired with ways to estimate how accurate the explanations are likely to be. How to measure certainty depends on the method at hand, and there is no single procedure to measure the reliability of an interpretability technique. Different approaches can be used (e.g., multiple trials, comparisons to random baselines, attempting to construct adversarial examples, etc.). Whether or not a method appears to provide helpful explanations can also be strongly influenced by cherry-picking results. Readers should be wary of demonstrations that do not quantify uncertainty or showcase weaknesses of the approach.

Interpretability techniques should be used to generate hypotheses – not conclusions. Producing plausible explanations is insufficient. Rigorous evaluation of interpretations is key. Consider the goal of explaining what role a particular neuron plays in a DNN.
There exist a number of methods to do so using data, synthesis, perturbation, gradients, etc. However, even if such an approach strongly suggests that the neuron has a particular role, this does not offer any guarantee that the explanation is complete and faithful to the true function of the network. Works such as [olah2020zoom, bolukbasi2021illusion, hoffmann2021looks] demonstrate that very plausible-seeming explanations can be very easy to find counterexamples for. A great number of works in interpretability have failed to go beyond simply inspecting the results of a method. More care is needed. Interpretability techniques can only be evaluated to the extent that they help guide us toward hypotheses that can be used to make nontrivial, testable predictions. They are only genuinely useful inasmuch as those predictions are validated. And the validity of an interpretation is only granted on the distribution of data for which validating tests were conducted – extrapolating interpretations is risky (e.g. [bolukbasi2021illusion]).

Interpretability techniques should be judged by average or worst case effectiveness, not best case. As a result of the inherent difficulty of interpreting neural networks, many works in the literature choose to showcase individual, highly successful applications of their method. On one hand, this is useful for providing illustrative examples. But the evaluation of interpretability techniques should not be biased toward their best-case performance on cherry-picked examples. This would be akin to evaluating a system based on its performance on the testing examples that it happened to perform the best on. The goal should be methods that are effective in the average case, or for high-stakes or adversary-prone settings, the worst case. Works should include quantitative results evaluating how their techniques perform on randomly or adversarially sampled tasks. For example, a work on characterizing neural circuits should not focus only on presenting results from circuits that were particularly amenable to interpretation. Instead, it should focus on explaining the role of randomly or adversarially sampled neurons inside of circuits or on finding circuits that can explain how the network performs randomly or adversarially selected subtasks.

VII Future Work
----------------

The connections between interpretability, modularity, adversarial robustness, continual learning, network compression, and resemblance to the human visual system should be better understood. One of the most striking findings of modern interpretability work for DNNs is its connections with other goals in deep learning. One of the central goals of this survey has been to highlight these connections. Currently, the intersections in the literature between interpretability and these other areas are relatively sparse. Moving forward, we believe that an interdisciplinary understanding of interpretability will lead to improved insights in multiple domains.

Scaling requires efficient human oversight. Many explanations obtained by state-of-the-art interpretability techniques have involved a degree of human experimentation and creativity in the loop. But if the goal is to obtain a thorough understanding of a system, human oversight must be efficient. This poses a conceptual challenge because the end goal of interpretability techniques is an understanding of the DNN that is useful for *human* oversight. But we are optimistic about using active learning (e.g. [gao2020consistency]), weak supervision (e.g. [boecking2020interactive]), implicit supervision using proxy models trained on human-labeled data, and/or rigorous statistical analysis of proxies to reduce human involvement. Toward this end, obtaining additional high-quality datasets (e.g. [bau2017dissection]) with richly-labeled samples may be valuable. And a useful approach will be to use other models as proxies for a human. For example, feature visualization has conventionally relied on human interpretations, but the labels and/or latents from outside models could be used to automate much of this process. Additional progress toward obtaining rigorous, quantitative results without a human in the loop has been made by [zhou2022exsum], who proposed a mathematical framework for quantitative evaluation of model understanding, and [hod2021detecting], who introduced a statistical pipeline for quantifying interpretability based on proxy measures.
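As a rough sketch of using another model in place of a human annotator, the snippet below synthesizes a feature visualization for one channel of a pretrained ImageNet network by naive activation maximization and then lets a classifier head "describe" the result by reporting its top classes. The layer, channel, step count, and learning rate are arbitrary illustrative choices, the preprocessing and regularizers used by real feature-visualization pipelines are omitted, and for brevity the same network's own classifier stands in for the separate outside model envisioned above.

```python
import torch
import torch.nn.functional as F
from torchvision.models import resnet18, ResNet18_Weights

# Minimal sketch: activation maximization for one channel, then automatic
# "labeling" of the synthesized image with a classifier as a proxy annotator.
weights = ResNet18_Weights.DEFAULT
model = resnet18(weights=weights).eval()

TARGET_CHANNEL = 100                     # arbitrary channel of model.layer3
activations = {}

def hook(_module, _inputs, output):
    activations["feat"] = output

handle = model.layer3.register_forward_hook(hook)

image = torch.randn(1, 3, 224, 224, requires_grad=True)
optimizer = torch.optim.Adam([image], lr=0.05)

for _ in range(128):                     # short, unregularized optimization
    optimizer.zero_grad()
    model(image)
    # Maximize the mean activation of the chosen channel.
    loss = -activations["feat"][0, TARGET_CHANNEL].mean()
    loss.backward()
    optimizer.step()

handle.remove()

# Proxy labeling: ask the classifier what the visualization resembles.
with torch.no_grad():
    probs = F.softmax(model(image), dim=1)[0]
top = torch.topk(probs, k=5)
labels = weights.meta["categories"]
for p, idx in zip(top.values, top.indices):
    print(f"{labels[int(idx)]}: {float(p):.3f}")
```

The printed class names (or, in a larger pipeline, latents from a captioning or embedding model) could then serve as cheap, automatic descriptions to be spot-checked by humans rather than produced by them.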
Focus on discovering novel behaviors – not just analyzing them. Many existing methods are only well-equipped to study how models behave in limited settings. For example, any interpretability method that relies on a dataset is limited to characterizing the model's behavior on that data distribution. But ideally, methods should not be limited to a given dataset or to studying potential failures when the failure modes are already known. For example, an important practical problem is the detection of offensive or toxic speech, but no dataset contains examples of all types of offensive sentences, and hand-specifying a function to perfectly distinguish offensive from inoffensive speech is intractable. Humans can, however, usually identify offensiveness with ease when they see it. This highlights a need for techniques that allow a user to discover failures that may not be in a typical dataset or easy to think of in advance. This represents one of the *unique* potential benefits of interpretability methods compared to other ways of evaluating DNNs.

"Mechanistic interpretability" and "microscope AI" are ambitious but potentially very valuable goals. One direction for interpretability research is *mechanistic interpretability*, which aims to gain an algorithmic-level understanding of a DNN's computation. This can be operationalized as converting the DNN into some form of human-understandable pseudocode [filan2018mechanistic]. This is related to the goal of *microscope AI*, which refers to gaining domain insights by thoroughly interpreting high-performing AI systems [hubinger2020overview]. These capabilities would come with advantages including predicting counterfactual behaviors and reverse-engineering a model. Thus far, there have been a limited number of attempts toward this type of goal, with moderate levels of success [verma2018programmatically, cammarata2020curve]. Future work in this direction may benefit from using techniques from program synthesis and analysis.

Detecting deception and eliciting latent knowledge may be very valuable for advanced systems. A system is *deceptive* if it passes false or incomplete information along some communication channel (e.g. to a human). Relatedly, *latent knowledge* [christiano2022eliciting] is something that a system 'knows' but does not seem to know based on its behavior. For example, a language model might repeat a common misconception like "bats are blind" in some contexts but not others.
Being able to characterize deceptive behavior and latent knowledge has clear implications for safer highly-intelligent AI by allowing humans to know when a model may be untrustworthy. But this may be difficult for a number of reasons, including (1) that, by definition, deceptive behavior and latent knowledge cannot be determined by observing the model's deployment behavior alone, (2) that any mismatches between the features/concepts used by humans and the model require a method for ontology translation, and (3) that it is unclear to what extent a human can interpret an AI system that is superhuman on some task. However, inner interpretability methods offer a unique approach to these challenges by scrutinizing parts of the model's computational graph that may correspond to deceptive behavior and latent knowledge. Ultimately, progress toward characterizing deceptiveness and latent knowledge would be very valuable for avoiding particularly insidious ways that models can be misaligned with human goals.

Rigorous benchmarks are sorely needed. Interpretability work on DNNs is done with numerous techniques, not all of which have the same end goal. For example, some methods aim to explain how a DNN handles a single input while others are aimed at more generalizable understandings of its parts. For these reasons, plus the rapid development of techniques, widely-accepted benchmarks for interpretability do not yet exist. However, as research in the area matures, we believe that having common and rigorous tests for the usefulness of interpretability techniques will be indispensable. A well-known example of a benchmark's success at driving immense progress is how ImageNet [russakovsky2015imagenet] catalyzed work in supervised image classification, and we hope that analogous benchmarks will have the same type of influence on interpretability work.

Ideally, benchmarks should involve rediscovering known flaws. Some works have evaluated the success of techniques by measures of how thorough characterizations are on average (e.g. [bau2017dissection]), and others have evaluated methods by comparing how easy it is for a human to use them to develop reasonable-seeming characterizations of parts of the network in comparison to a baseline (e.g. [elhage2022solu]). Evaluation methods like these are valuable. But we caution that this type of approach only measures a proxy for the true usefulness of an interpretability technique. If the end goal is to provide valid and actionable explanations, methods for evaluating them ought to measure their ability to characterize behaviors of the network that are novel, known, and relevant to its desired function. One promising approach for benchmarking would be to evaluate interpretability techniques by their ability to discover known flaws that an adversary has implanted in the network [hubinger_2021]. For intrinsic methods, a general training procedure would need to be available to implant the flaw, while for post hoc methods, a network with such a flaw already implanted would suffice. Then judging techniques by their ability to rediscover such flaws would offer a much more direct measure of their practical usefulness than ad hoc approaches. Related techniques for feature attribution methods have been used but not yet popularized [adebayo2020debugging, bastings2021will, denain2022auditing]. Competitions for implanting and rediscovering flaws hosted at well-known venues or platforms may be a useful way to drive progress in both techniques and benchmarks.
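As a minimal illustration of implanting a known flaw, the function below poisons a fraction of a training set with a fixed trigger patch and a target label, so that benchmark organizers know exactly what behavior an interpretability method should later rediscover in the trained model. The array shapes, the 4x4 corner trigger, and the poisoning rate are placeholder assumptions, not a prescription from the works cited above.

```python
import numpy as np

def implant_trigger(images, labels, target_label, poison_frac=0.05, seed=0):
    """Poison a fraction of (N, H, W) images with a bright corner patch.

    Models trained on the returned data should map any image carrying the
    patch to `target_label`; a benchmark can then score interpretability
    tools on whether they recover this trigger/target pair unaided.
    """
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(poison_frac * len(images))
    idx = rng.choice(len(images), size=n_poison, replace=False)
    images[idx, -4:, -4:] = images.max()   # fixed 4x4 trigger in one corner
    labels[idx] = target_label
    return images, labels, idx

if __name__ == "__main__":
    # Placeholder data standing in for a real dataset such as MNIST.
    X = np.random.rand(1000, 28, 28).astype(np.float32)
    y = np.random.randint(0, 10, size=1000)
    Xp, yp, poisoned = implant_trigger(X, y, target_label=7)
    print(f"poisoned {len(poisoned)} of {len(X)} examples toward label 7")
```

A post hoc method would then be judged on whether, given only the trained model (and perhaps clean data), it can recover the patch and the target class; for intrinsic methods, the same poisoning step can be folded into whatever training procedure is under evaluation.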
Combining techniques may lead to better results. The large majority of work in interpretability focuses on developing and evaluating individual techniques. However, combining multiple intrinsic methods, or intrinsic with post hoc methods, may yield better results. We hope that new baselines and increased demand for rigorously interpretable systems will further incentivize results-oriented interpretability work.

Ideally, progress in interpretability should not come with increased capabilities. Good interpretability methods should be competitive, but not too competitive – interpretability might lead to undesirable increases in capabilities. On one hand, a model being more interpretable may be a byproduct of increased task performance in general. For example, large language models can often be prompted to "explain" their reasoning, but only as a byproduct of having advanced, broad-domain capabilities. On the other hand, interpretability may lead to performance increases via debugging or model insights. However, from the perspective of avoiding risks from advanced AI systems, neither of these is ideal. A focus on improving interpretability techniques *without* corresponding increases in capabilities offers the best chance of preventing advancements in AI from outpacing our ability for effective oversight. From this perspective, we argue that improvements in safety rather than capabilities should be the principal guide for future work in interpretability.

Growing the field of interpretability. Many ethical or safety concerns with AI systems can be reduced via tools to better understand how models make decisions and how they may fail. As a result, we argue that instead of being a separate field of interest, interpretability should be seen as a *requirement* for systems that are deployed in high-stakes settings. A compelling path forward is to establish rigorous techniques and benchmarks (as mentioned above). There are also other reasons for optimism. The field is maturing, and many interpretability tools have proven their worth for practical insights and debugging. And although they are the focus of this survey, we emphasize that inner interpretability methods will not be the only valuable ones for improving AI safety. Ultimately, we are hopeful about how rigorous interpretability techniques can guide the design of safer AI systems, and we believe that a more deliberate, interdisciplinary, and safety-focused field will be key moving forward.

Acknowledgements
----------------

We thank Davis Brown and Peter Hase for feedback. Tilman Räuker is supported in part by the Long-Term Future Fund.
ff856a8c-4677-4afd-9eeb-530b373eaed6
trentmkelly/LessWrong-43k
LessWrong
Link: Poking the Bear (Podcast) A Dan Carlin podcast about how the United States is foolishly antagonizing the Russians over Ukraine. Carlin draws an analogy to how the United States would feel if Russia helped overthrow the government of Mexico to install an anti-American government under conditions that might result in a Mexican civil war. Because of the Russian nuclear arsenal, even a tiny chance of a war between the United States and Russia has a huge negative expected value.
c59c9e8d-b62b-4e48-af39-21bc231ccf4a
StampyAI/alignment-research-dataset/lesswrong
LessWrong
AI Alignment Problem: “Human Values” don’t Actually Exist *Previous posts in the series: “[What AI Safety Researchers Have Written About the Nature of Human Values](https://www.lesswrong.com/posts/GermiEmcS6xuZ2gBh/what-ai-safety-researchers-have-written-about-the-nature-of)”, [Possible Dangers of the Unrestricted Value Learners](https://www.lesswrong.com/posts/hzEaasJyQsutYDNfN/possible-dangers-of-the-unrestricted-value-learners#comments). Next planned post: AI safety approaches which don’t use the idea of human values.* Summary: The main current approach to the AI safety is AI alignment, that is, the creation of AI whose preferences are aligned with “human values.” Many AI safety researchers agree that the idea of “human values” as a constant, ordered sets of preferences is at least incomplete. However, the idea that “humans have values” underlies a lot of thinking in the field; it appears again and again, sometimes popping up as an uncritically accepted truth. Thus, it deserves a thorough deconstruction, which I will do by listing and analyzing comprehensively the hidden assumptions of the idea that “humans have values.” This deconstruction of human values will be centered around the following ideas: “Human values” are useful descriptions, but not real objects; “human values” are bad predictors of behavior; the idea of a “human value *system”* has flaws; “human values” are not good by default; and human values cannot be separated from human minds. The method of analysis is listing hidden assumptions on which the idea of “human values” is built. I recommend that either the idea of “human values” should be replaced with something better for the goal of AI safety, or at least be used very cautiously. The approaches to AI safety which don’t use the idea of human values at all may require more attention, like the use of full brain models, boxing, and capability limiting. Introduction ============ The idea of AI which learns human values is a core of the current approach to artificial general intelligence (AGI) safety. However, it is based on some assumptions about the nature of human values, including the assumption that they completely describe human motivation, are non-contradictory, are normatively good, etc. Actual data from psychology provides a rather different picture. The literature analysis of what other AGI safety researchers have written about the nature of human values is rather large and is presented in another of my texts: [What AGI Safety Researchers Have Written About the Nature of Human Values](https://www.lesswrong.com/posts/GermiEmcS6xuZ2gBh/what-ai-safety-researchers-have-written-about-the-nature-of). A historical overview of the evolution of the idea of human values can be found in Clawson, Vinson, “[Human values: a historical and interdisciplinary analysis](http://www.acrwebsite.org/volumes/9454/volumes/v05/NA-05)” (1978). A list of ideas for achieving AGI safety without the idea of human values will also be published separately. In Section 1 the ontological status of human values is explored. In section 2 the idea of human values as an ordered set of preferences is criticized. Section 3 explores whether the idea of human values is useful to AGI safety. **1. Ontological status and sources of human values** ===================================================== 1.1. 
AI alignment requires an actually existing, stable, finite set of predicting data about peoples’ motivation, which is called “human values” ------------------------------------------------------------------------------------------------------------------------------------------------ In an AI alignment framework, future advanced AI will learn human values. So, we don’t need to directly specify human preferences, we just need to create AI capable of learning human values. (From a safety point of view, there is a circularity problem here, as such AI needs to be safe before it starts to learn human values, or it could do it in unsafe and unethical ways, as I describe in detail in [Possible Dangers of the Unrestricted Value Learners](https://www.lesswrong.com/posts/hzEaasJyQsutYDNfN/possible-dangers-of-the-unrestricted-value-learners), but let’s assume for now as it is somehow bypassed—perhaps via a set of preliminary safety measures.) The idea of AI alignment is based on the idea that there is a finite, stable set of data about a person, which could be used to predict one’s choices, and which is actually morally good. The reasoning behind this basis is because if it is not true, then learning is impossible, useless, or will not converge. The idea of value learning assumes that while human values are complex, they are much simpler than the information needed for whole brain emulation. Otherwise, full brain emulation will be the best predictive method. Moreover, the idea of AI alignment suggests that this information could be learned if correct learning procedures are found. (No procedures = no alignment.) This *actually existing and axiological good, stable, finite set of predicting data about peoples’ motivation is often called “human values,*” and it is assumed that any AI alignment procedure will be able to learn this data. This view on the nature of human values from an AI alignment point of view is rather vague, and it doesn’t say *what* human values are, neither does it show how they are implemented in the human mind. This view doesn’t depend on any psychological theory of the human mind. As a pure abstraction, it could be applied to any agent whose motivational structure we want to learn. Before the values of a person can be learned, they have to become “human.” That is, they need to be combined with some theory about how values are encoded in the human brain. AI safety researchers have suggested many such theories, and exiting psychological literature suggests even more theories about the nature of human motivation. In psychology, there is also a “theory of human values,” a set of general motivational preferences which influence choices. This theory should be distinguished from “human values” as an expected output of an AI alignment procedure. For example, some psychological tests may say that Mary’s values are freedom, kindness, and art. However, the output of an AI alignment procedure could be completely different and not even presented in words, but in some set of equations about her reward function. To distinguish human values as they are expected in AI alignment from human values as a part of psychology, we will call the first “human-values-for-AI-alignment.” The main intuition behind the idea of human values is that, in many cases, we can predict another person’s behavior if we know what he wants. For example: “I want to drink,” or “John wants to earn more money” often clearly translates into agent-like behavior. 
As a result, the idea behind AI alignment could be reconstructed as the following: if we have a correct theory of human motivation a priori, and knowledge about a person's claims and choices as a posteriori data, we could use something like Bayesian logic to reconstruct his or her actual preferences. To have this a priori knowledge, we need to know the internal structure of human values and how they are encoded in the human brain. Several oversimplified theories about the structure of human values have been suggested: as a human reward function, as a reward-concept association, as a combination of liking-approving-wanting, etc. However, all of these are based on the assumption that human values *exist* at all: that human motivation can be compressed unequivocally into one—and only one—simple, stable model. These and other assumptions appear even before the "correct psychological theory" of human values is chosen. In this article, these assumptions will be analyzed.

1.2. Human values do not actually exist; they are only useful descriptions of human behavior and rationalization
----------------------------------------------------------------------------------------------------------------

The idea that human behavior is determined by human values is now deeply incorporated into people's understanding of the world and is rarely subject to reservations. However, the ontological status of human values is uncertain: do they actually exist, or are they just a useful way of describing human behavior? The idea of human-values-for-AI-alignment requires that some predictive set of data about motivation actually exists. If it is only a description, of which there could be many in various situations, extrapolating such a description will create problems. In other words, descriptions are observer-dependent, while actually existing things are observer-independent. If we program an agent with utility function U, this utility function exists independently of any observations and could be unequivocally learned by some procedure. If we have a process with agent-like features, it could be described differently depending on how complex a model of the process we want to create. For example, if we have a mountain with several summits, we could describe it as one mountain, two mountains, or three mountains depending on the resolution of our model.

In the discussion of the ontological status of human values we encounter a very long-standing philosophical problem: the reality of *[universals](https://en.wikipedia.org/wiki/Problem_of_universals)*, that is, high-level abstract ideas. The medieval dispute between realists, who thought that universals are real, and nominalists, who thought that only singular things are real, was won by the nominalists. From history, we know that "human values" is a relatively new construction which appears only in some psychological theories of motivation. However, we can't say that "human values" are just arbitrary descriptions, which can be chosen completely at random, because there are some natural levels where the description matches reality. In the case of values, this is the level of one's claims about his/her preferences. While these claims may not perfectly match any deeper reality of one's values, they exist unequivocally, at least in the given moment. The main uncertainty about human values is whether there unequivocally exists some deeper level, the level of "true values," which generates such claims.
Human preferences are relatively easy to measure (while measuring libido is not easy). One could ask a person about his/her preferences, and s/he will write that s/he prefers apples over oranges, Democrats over Republicans, etc. Such answers can be statistically consistent in some groups, which could allow the prediction of future answers. But it is often assumed that real human values are something different from explicit preferences. Human values are assumed to be able to generate preferential statements but not to be equal to them. One could also measure human behavior in these choice experiments, check whether the person actually prefers oranges over apples, and also get consistent results. The obvious problem with such preferential stability is that it is typically measured for psychologically stable people in a stable society, and in a stable situation (a controlled experiment). The resulting stability is still statistical: that is, one who likes apples may sometimes choose an orange, but this atypical choice may be disregarded as noise in the data.

Experiments which deliberately disrupt situational stability consistently show that human preferences play a small role in actual human behavior. For example, changes in social pressure result in consistent changes in behavior, thus contradicting declared and observed values. The most famous example is the [Stanford Prison Experiment](https://en.wikipedia.org/wiki/Stanford_prison_experiment), where students quickly took on abusive roles.

The only way for human values to actually exist would be if we could pinpoint some region of the human brain where they are explicitly represented as rules. However, only very simple behavioral patterns, like the swimming reflex, may actually be genetically hardcoded in the brain, and all others are socially defined.

So, there are two main interpretations of the idea of "human values":

1) *Values actually exist, and each human makes choices based on their own values.* There is one stable source of human claims, actions, emotions, and measurable preferences, which completely defines them, is located somewhere in the brain, and could be unequivocally measured.

2) *Values are useful descriptions.* Humans make choices under the influence of many inputs, including situation, learned behavior, mood, unconscious desires, and randomness, and to simplify the description of the situation we use the designation "human values."

More detail on this topic can be found in Lee Ross et al.'s book "[The person and the situation: Perspectives of social psychology](https://www.amazon.com/Person-Situation-Perspectives-Social-Psychology/dp/1905177445)".

Humans have a surprisingly big problem when they are asked about their ultimate goals: they just don't know them! They may create *ad hoc* some socially acceptable list of preferences, like family, friendship, etc., but this will be a poor predictor of their actual behavior. It is surprising that most humans can live successful lives without explicitly knowing and using their list of goals and preferences. In contrast, a person can generally identify his/her current wishes, a skill obviously necessary for survival; for example, s/he can recognize thirst and the desire for water.
1.3. Five sources of information about human values: verbalizations, thoughts, emotions, behavior, and neurological scans
---------------------------------------------------------------------

There are several different ways one could learn about someone's values. There is an underlying assumption that all of these ways converge to the same values, which appears to be false upon closer examination. The main information channels for learning someone's preferences are:

1. *Verbal claims*. This is what a person says about his/her preferences. Such claims tend to present the person as better according to expected social norms. Armstrong [suggested](https://www.lesswrong.com/posts/pQz97SLCRMwHs6BzF/using-lying-to-detect-human-values) examining facial expressions when a person lies about his/her true values in order to deduce his/her real values, perhaps by training some AGI to do it. He based this suggestion on the interesting idea that "Humans have a self-model of their own values." However, it appears that most humans either can live without such a model, or their model is a rationalization made to look good. Such claims could have different subtypes: what a person says to friends, writes in books, etc. Written claims could be more consistent and socially appropriate, as they are better thought out. Claims to close friends could be more oriented toward short-term effect, manipulation, and dependence on the social situation. At the same time, claims to friends could also be more sincere, as they have been subjected to less internal censorship. Similarly, claims made under drugs, especially alcohol, could be even less censored, but might not present "true values". They could present some suppressed "counter-values", such as the use of an obscene lexicon or some mimetically replicated social cliché, like "I hate all members of social group X."

2. *Internal thought claims:* the private thoughts which appear in internal dialog or planning. People may be more honest in their thoughts. However, many people lie to themselves about their own values, or are just unable to fully articulate the complexity of their values.

3. *Behavior.* What a person actually does could represent the sum of all his/her desires, trained models of behavior, random actions, etc. Contradictory values could result in no behavior at all, as in the case when one wants to buy a dress but is afraid to spend too much money on it. Behavior can also take different forms: *choices* between two alternatives, which one might signal in many ways; *verbal behavior*, other than statements of one's own values; and chains of *physical actions* (e.g. dancing).

4. *Expression of emotions*. Human values could be reconstructed based on emotional reactions to stimuli. A person could prefer to look at some images longer, feel arousal, smile, etc. However, this way of learning values would overestimate suppressed emotions and underestimate rational preferences. For example, a pedophile may become aroused by some types of images, but on the rational level s/he may fight this type of emotion. Emotions can be presented to the outside in many ways: by facial expressions, tone of voice and content of speech, pose, and even body odor. A person could also suppress the expression of emotions or fake them.

5.
Non-behavioral, *neurophysiological representations of values.* Most of these are currently unavailable to outside observers, but brain waves, neurotransmitter concentrations or single-neuron activations, as well as some connectome connections, could be directly or indirectly used to gather information about one’s values. AGI with advanced nanotechnology may have full access to the internal states of one’s brain. 1.4. Where do human values come from: evolution, culture, ideologies, logic and personal events ----------------------------------------------------------------------------------------------- If one has something (e.g. a car), it is assumed that one made an act of choice by either buying it or at least keeping it in one’s possession. However, this description is not applicable to values, as one is not making a choice to have a value, but instead makes choices based on the values. Or, if one says that a person makes a choice to hold some value (and this choice was not based on any other values), one assumes the existence of something like “free will”, which is an even more speculative and problematic concept than values [ref]. Obviously, some instrumental values could be derived from terminal values, but that is more like planning, not generation of values. If one were to define the “source” of human values, it would simplify value learning, as one could derive values directly from the source. There are several views about the genesis of human values: 1) God gives values as rules. 2) “Free will”: some enigmatic ability to create values and choices out of nothing. 3) Genes and evolution: Values are encoded in human genes in the form of some basic drives, and these drives appeared as a result of an evolutionary process. 4) Culture and education: Values are embedded in social structure and learned. There are several subvariants regarding source, e.g. language, religion, parents, social web, social class (Marx), books one read or memes which are currently affecting the person. 5) Significant personal events: These could be trauma or intense pleasure in childhood, e.g. “birth trauma,” or first love in school. 6) Logical values: A set of ideas that a rational mind could use to define values based on some first principles, e.g. Kant’s imperative [ref]. 7) Random process: Some internal random process results in choosing the main priorities, probably in childhood [ref]. God and free will are outside of rational discussion. However, all the other ideas have some merit as these six factors could affect the genesis of human values and it is not easy to choose one which is dominating. **2. Critics of the idea of human values as a constant set of personal preferences: it is based on many assumptions** ===================================================================================================================== 2.1. Human preferences are not constant --------------------------------------- Personal values evolve from childhood to adulthood. They also change when a person becomes a member of another social group, because of the new and different role, exposure to different peer pressure and different ideology. Moreover, it is likely that we have *a meta-value about evolving values*: that it is good that someone’s values are changing with age. If a person continues to play with the same toys at 30 he played with at 3 years old, it may be a signal of developmental abnormalities. Another way to describe human preferences is not as “values”, but as “wishes”. 
The main difference is that “values” are assumed to be constant, but wishes are assumed to be constantly changing and even chaotic in nature. Also, a wish typically disappears when granted. If I wish for some water and then get some, I will not want any more water for the next few hours. Wishes are also more instrumental and often represent physiological needs or basic drives. 2.2. Human choices are not defined by human values -------------------------------------------------- The statement that “humans have values” assumes that these values are most important factor in predicting human behavior. For example, if we know that a chess AI’s terminal goal is to win in chess, we could assume that it *will* try to win in chess. But in the human case, knowing someone values may have surprisingly little predictive power about this person’s actions. In this subsection, we will look at different situations in which human choices are not defined by (declared) human values but are affected by some other factors. ### 1. Situation The idea of “human values” implies that a person acts according to his/her values. This is the central idea of all value theory, because it assumes that if we know choices, we can reconstruct values, and if we know values, we can presumably reconstruct the behavior of the person. There is also another underlying assumption that the relation between behavior and values is unequivocal, that is, given a set of behavior *B* we could reconstruct one and only one set of values *V* which defines it. But this doesn’t work even from a mathematical point of view, as for any finite *B* there exist infinitely many programs which could create it. Thus, for a universal agent, similar behavior could be created by very different values. Armstrong wrote about this, stating that the behavior of an agent depends not only on values, but on policy, which, in turn, depends on one’s biases, limits of intelligence, and available knowledge. However, in the human case, the main problem is not that human beings are able to pretend that they have one set of values, but that they actually have different values. Typically, only con artists and psychopaths are lying about their actual intentions. The problem is that human behavior is not defined by human values at all, as demonstrated in numerous psychological experiments. A great description of these results can be found in Ross and Nisbett’s book “The person and the situation: Perspectives of social psychology”. In a 1973 experiment, Darley and Batson checked if a person would help a man who was lying in their path. “They examined a group of students of the theological seminary who were preparing to utter his first sermon. If the subjects, being afraid of being late for the sermon, hurried, then about 10% of them provided assistance. On the contrary, if they did not hurry, having enough time before it began, the number of students who came to the aid increased to 63%”. Ross et al wrote that maximum attainable level of prediction of the behavior of a person in a new situation, based either on their personal traits or statistics regarding their previous behavior, has a correlation coefficient of 0.3. ### 2. Internal conflicts Another important conception described Ross and Nisbett’s “The person and the situation” is that stable behavior can be underpinned by conflicting attitudes, where different forces balance each other. 
For example, a person wants to have unlimited access to sex, but is also afraid of the social repercussions and costs of such a desire, and thus uses porn. This may be interpreted as if he wishes to use porn or values using it, but that is not so: porn is only a compromise between two forces, and such a balance is rather fragile and could have unpredictable consequences if the person is placed in a different situation. These ideas were explored by Festinger (Ross, p. 29).

### 3. Emotional affect

It is known that many crimes occur under intense and unexpected emotional affect, for example "road rage" or murders committed out of jealousy. These emotions are intense reactions of our "atavistic animal brain" to the situation. Such situations may be insignificant in the broader context of contemporary civilization, but intense emotions can override our rational judgments and almost take control over our actions. Note that there is still no consensus about the nature of emotion in the psychological literature, though there is a rational model of emotions as accelerators of learning that increase appreciation of the situation (as mentioned in Sotala's [article](http://intelligence.org/files/DefiningValuesForValueLearners.pdf) about values).

*[Umbrello comment: "This is the exact point that Johnson (in the above comment) argues against, the enlightenment era idea of the separation of psychological faculties (i.e., reason vs. imagination). We have to be careful to not fall within this dichotomy since it is not clear what the boundaries of these different states of mind are."]*

### 4. Peer pressure

Experiments conducted by Asch and Milgram demonstrated that peer pressure can cause people to act against what they perceive or value. Zimbardo's Stanford Prison Experiment also demonstrated how peer pressure affects people's behavior and even their beliefs.

### 5. Random processes in the brain

Some human actions are just random. Neurons can fire randomly, and many random factors affect mood. This randomness creates noise in experiments, which we basically try to clean from the data. We could hide the randomness of behavior in probabilistic predictions about behavior. Humans can randomly forget or remember something, and this includes their wishes. In other words, declared values could randomly drift.

### 6. Conditioned and unconditioned reflexes

Some forms of behavior are hardwired in the human brain and even in the primitive hindbrain, and thus are independent of any human values: unconditioned reflexes, e.g. the swimming reflex, the fight-or-flight response, etc. There are also conditioned reflexes, i.e. it is possible to train a person to produce reaction B if stimulus A is given. If such a reflex is trained, it does not carry any information about the person's values. But some desires can be triggered intentionally, an approach which is used intensively in advertising: a person seeing Coca-Cola may start to feel a desire to drink soda. Similarly, a person hearing a loud bang may have a panic attack if he has PTSD.

### 7. Somnambulism, the bisected brain, and actions under hypnosis

It is well known that humans are capable of performing complex behaviors while completely unconscious. The best example is somnambulism, or sleepwalking. Some people are able to perform complex behaviors in that state, even drive a car and commit a murder, without any memory of the event (in this way, it differs from actions in dreams, where at least some form of control exists).
Surely, a person's actions in that situation could not be used to extrapolate the person's preferences. While somnambulism is an extreme case, many human actions occur mechanically, that is, outside of any conscious control, including driving a car and the compulsive behavior of addicts. Experiments (as often in psychology, questionable) have also demonstrated that humans whose brain hemispheres were separated have two different "consciousnesses" with different preferences (though these results have recently been challenged) [ref]. Another extreme case is hypnosis, where a human is conditioned to act according to another person's will, sometimes even without knowing it. While extreme cases of hypnosis are rare and speculative, the effectiveness of TV propaganda in "brainwashing" demonstrates that some form of suggestion is real and plays an important role in mass behavior. For example, Putin's autocracy has invested a lot in gaining control over TV, and most TV viewers in Russia support Putin's politics.

### 8. Actions under the influence of drugs; demented people and children

Some drugs which are part of human culture and value systems, notably alcohol, are known to change behavior and presented values, mostly because self-control is lowered and suppressed instinctive drives become active. Also, the policies used to achieve goals become less rational under drugs. It also seems that alcohol and other drugs increase internal misalignment between different subpersonalities. While a person is legally responsible for what he does under the influence of drugs, his presentation of his values changes: some hidden or suppressed values may become openly expressed (in vino veritas). Even some cars' AI can recognize that a person is drunk and prevent him from driving. For a purely theoretical AGI this may be a difficulty, as it is not obvious why sober people are somehow more "value-privileged" than drunk people. Why, then, should the AGI ignore this large class of people and their values? Obviously, "drunk people" is not the only class which might be ignored. Small children, patients in mental hospitals, people with dementia, dream characters, victims of totalitarian brainwashing, etc. – all of these and many more can be regarded as classes of people whose values should be ignored, which could become the basis for some form of discrimination in the end. Also, presented values depend on the time of day and physiological condition. If a person is tired, ill, or sleepy, this could affect his/her value-centered behavior. An extreme case of "brainwashing" is feral children raised by animals: most of their values also should not be regarded as "human values".

2.3. "Human values" can't be easily separated from biases
---------------------------------------------------------

The problem of the inconsistency of human behavior was well known to the founders of the rationalist and AGI safety movements, who described it via the idea of biases. According to the rationalist understanding, it seems that humans have a constant set of values; however, they act irrationally on this set of values because they are affected by numerous cognitive biases. By applying rationality training and debiasing to a person, we could presumably create a "rational person" who will act consistently and rationally and will effectively reach his/her own positive values. The problem is that such a model of a purely rational person acting on a set of coherent altruistic values is completely non-human.
*[Umbrello comment: Heuristic tools can be used to de-bias AGI design. I argued this in a paper, and showed a way in which it can be done. See Umbrello, S. (2018) ‘The moral psychology of value sensitive design: the methodological issues of moral intuitions for responsible innovation’, Journal of Responsible Innovation. Taylor & Francis, 5(2), pp. 186–200. doi: 10.1080/23299460.2018.1457401.]* Another problem is that a lot of humans have different serious psychiatric diseases, including schizophrenia, obsessive-compulsive disorder, mania, and others, which significantly affect their value structure. While extreme cases can be easily recognized, weaker forms may be part of the “psychopathology of ordinary life”, and thus part of “human nature”. We don’t know if a truly healthy human mind exists at all. Armstrong [suggested](https://www.lesswrong.com/posts/FgRRY3vSwDLx2Hk46/the-low-cost-of-human-preference-incoherence) not to separate biases from preferences, as AGI will find easy ways to overcome biases. But the AGI could, in the same way, find the paths to overcome the preferences. 2.4. Human values are subject-centered and can’t be separated from the person ----------------------------------------------------------------------------- In the idea “humans have values,” the verb “have” assumes the type of relation that could be separated. This implies some form of orthogonality between human mind and human values, as well as a strict border between the mind and values. For example, if I have an mp3 file, I can delete the file. In that case, the statement “I don’t have the file” will be factual. I can give this file to another person and in that case, I can say: “That person now has the file”. But human values can’t be transferred in the same way as a file for two reasons: they are subject-centered, and there is no strict border between values and the other parts of the mind. Most human values are centered around a particular person (with the exception of some artificially constructed purely altruistic values, like someone who wants to reduce the amount of suffering in the world, but completely ignoring who is sufferings: humans or animals, etc.) One may argue that non-subject values are better, but this is not how human values works. For example, a person attaches a value not to a tasty food, but to the fact that he will consume such food in the future. If healthy food exists without the potential one could consume it, we can’t say that it has value. From this, it follows that if we copy human values in an AGI, that AGI should describe the same state of the world, but not the same preferences. For example, we don’t want to copy into AGI a desire to have sex with humans, but we want that AGI will help its owner in his/her reproductive success. However, instrumental goals like self-preservation will be still AGI-centered. The subject of value is more important than a value itself, because if a typical human A has some value X, there is surely someone else on Earth who is getting X, but X doesn’t matter to that person. However, if the same person A gets another valuable thing Y, it is still good for him. Attempts to properly define the subject quickly evolves into the problem of personal identity, which is notoriously difficult and known to be paradoxical. That problem is much more difficult to verbalize, that is, a person may correctly say what he wants, but fails to provide a definition of who he is. 
Obviously, there is no easy way to separate values from all underlying facts, neuronal mechanisms, biases and policies – more on that in the next subsection. More about similar problems was said in the post of Joar Skalse “[Two agents can have the same source code and optimise different utility functions](https://www.lesswrong.com/posts/zMHK9gFY6t48Exqup/two-agents-can-have-the-same-source-code-and-optimise).” Human preferences are self-centered, but if AGI takes human preferences as its own, they will not be AGI-centered, but will be preferences about the state of the world, and this will make them closer to the external rules. In other words, preferences about the well-being of something outside you is an obligation and burden, and AGI will search the ways to overcome such preferences. 2.5. Many human values are not actionable + the hidden complexity of values --------------------------------------------------------------------------- If someone says that “I like poetry”, it is a clear representation of his/her declarative values, but it is unlikely to predict what he actually does. Is he writing poems every day for an hour, and if so, which type? Or he is reading every week for two hours – and what does he read: Homer, Byron or his girlfriend’s poems? Will he attend a poetry slam? This could be called the “hidden complexity of values,” but if we start to unknot that complexity, there will be no definite border between values and everything else in the person’s mind. In other words, short textual representations of values are not actionable, and if we try to make a full representation, we will end up reproducing the entire brain. In Yudkowsky’s example of the complexity of values, about removing one’s aged mother from a burning house, the complexity of values comes from many common-sense details, which are not included into the word “removing”. 2.6. Open question: is there any relation between values, consciousness and qualia? ----------------------------------------------------------------------------------- In some models, where preferences dictate choices, where is no need for consciousness. However, many preferences are framed as preferences about future subjective experiences, like pain or pleasure. There are at least 3 meanings of the idea of “consciousness” and 3 corresponding questions: a) Consciousness is what I know and can said about it – Should we care about unconscious values? b) Consciousness is what I feel as pure subjective experience, qualia – Should we solve the problem of qualia to in order to correctly present human preferences about subjective experiences? c) Consciousness is my reflection about myself and only values which I declare *are* my values should be counted – True or not? Related: G. Worley on philosophical conservatism: “[Philosophical Conservatism in AI Alignment Research](https://www.lesswrong.com/posts/3r44dhh3uK7s9Pveq/rfc-philosophical-conservatism-in-ai-alignment-research)” and “[Meta-ethical uncertainty in AGI alignment](https://www.lesswrong.com/posts/uR3znuBnaevssYDZY/rfc-meta-ethical-uncertainty-in-agi-alignment),” where he discusses the problems with meta-ethics and the non-existence of moral facts. See also the [post](https://www.lesswrong.com/posts/x4n4jcoDP7xh5LWLq/book-summary-consciousness-and-the-brain) of Sotala about consciousness and the brain. 
2.7. Human values as a transient phenomenon: my values are not mine
------------------------------------------------------------------

Human values are assumed to be a stable but hidden source of human choices, preferences, emotions, and claims about values. However, human values—even assuming that such a source of all motivation really exists—are constantly changing on a day-to-day basis, as a person is affected by advertising, new books, new friends, changes in hormone levels, and mood. Interestingly, personal identity is more stable than human values. A person remains the same in his/her own eyes as well as in the eyes of other people, despite significant changes in values and preferences.

**3. The idea of "human values" may not be as useful a concept for AGI safety as it looks**
=============================================================================================

3.1. Human values are not safe if scaled, extracted from a human, or separated
-----------------------------------------------------------------------------

Many human values evolved in a milieu of strong suppression from society, limited availability of needed resources, limits on the ability to consume resources, and pressure from other values, and thus don't scale safely if they are taken alone, without their external constraints. A possible example of the problem from the animal kingdom: if a fox gets into a henhouse, it will kill all the chickens, because it hasn't evolved a "stop mechanism". In the same way, a human may like tasty food but rely on internal bodily regulation to decide when to stop, which does not always work. If one goal or value dominates over all other values in one's mind, it becomes "[paperclippy](https://wiki.lesswrong.com/wiki/Paperclip_maximizer)" and turns a person into a dangerous maniac. Examples include sexual deviants, hoarders, and money-obsessed corporate managers. In contrast, some values balance one another, like the desire for consumption and the desire to maintain a small ecological footprint. If they are separated, the consumption desire will tile the universe with "[orgasmium](https://wiki.lesswrong.com/wiki/Orgasmium)," and the "ecological desire" will end in an attempt to stop existing. The point here is that values without humans are dangerous. In other words, if I want to get as much X as possible, getting 1000X may not be what I want – though my expressed desire can convert my AGI into a paperclipper.

In the idea that "humans have values" it is intrinsically assumed that these values are a) good and b) safe. A similar idea has been explored in a post by Wei Dai, "[Three AGI safety related ideas](https://www.lesswrong.com/posts/vbtvgNXkufFRSrx4j/three-ai-safety-related-ideas)."

Historically, "human values" were not regarded as something good. Humanity regarded suffering as arising from "original sin" and saw itself as affected by all possible dangerous drives: lust, greed, etc. There was no worth in human values for the philosophers of the past, and that is why they tried to create morals or a set of laws, which would be much better than inborn human values. In that case, the state or religion provided the correct set of norms, and human nature was merely a source of sin. If we take rich, young people at the beginning of the 21st century, we may see that they are in general not as "sinister" as humans in the past and that they sincerely support all kinds of nice things.
However, humanity's sadistic nature is still here; we just use socially accepted ways to realize our "desire to kill," by watching "Game of Thrones" or playing "World of Tanks". If AGI extrapolated our values based on our preferences in games, we could find ourselves in a nightmarish world. There are also completely "unhuman" ideologies and cultural traditions. The first is obviously German National Socialism; another is ancient Maya culture, where the upper classes constantly ate human flesh. Other examples are religious groups practicing collective suicide, ISIS, and terrorists. Notably, transhumanist thought holds that to be human means to want to overcome human limitations, including innate values.

**AGI which is learning human values will not be intrinsically safer than AGI with hard-coded rules.** We may want to simplify AGI alignment by avoiding hand-coded rules and giving the AGI authority to extract our goals and extrapolate them. But there is no actual simplification: we still have to *hand-code a theory of human values* and the ways to extract and extrapolate them. This creates large uncertainty, which is no better than rule coding. Naturally, problems arise regarding the interaction of AGI with "human values": for example, if a person wants to commit suicide, should the AGI help him?

**We don't need AGI alignment for all possible human tasks**: most of these tasks can be solved without AGI (by [Drexler's CAIS](https://www.fhi.ox.ac.uk/wp-content/uploads/Reframing_Superintelligence_FHI-TR-2019-1.1-1.pdf), for example). The only task for which alignment is really needed is "*preventing the creation of other unsafe AGI,*" that is, using AGI as a weapon to stop other AGI projects. Another important and super-complex task which requires superintelligent AGI is reaching human immortality.

3.2. It is wrong to think of values as the property of a single human: values are social phenomena
-------------------------------------------------------------------------------------------------

### 1. It is not that humans have values, but that values have humans

In the statement "Humans have values," separate human beings are presented as the main subjects of values, i.e. those who have values. But most values are defined by society and describe social behavior. In other words, as recognized by Marx, many values are not personal but social, and help to keep society working according to the current economic situation. Society expends enormous effort to control people's values via education, advertising, celebrities-as-role-models, books, churches, ideologies, group membership identity, shaming, status signaling, and punishment. Social values consist of unconscious repetition of group behavior plus conscious repetition of norms to maintain membership in the group. Much behavior is directed via the unconscious definition of one's social role, as described by Valentine in the post "[The intelligent social web](https://www.lesswrong.com/posts/AqbWna2S85pFTsHH4/the-intelligent-social-web)."

### 2. Values as memes, used for group building

Very rarely does a person evolve his/her own values without being influenced by anyone else; this influence often happens against his/her own will.
In other words, it is not the case that “humans have values”; a more correct wording would be “values have humans.” This is especially true in the case of ideologies, which can be seen as especially effective combinations of memes – something like a memetic virus consisting of several “protein” memes, often supported by a large “training dataset” of schooling in a culture where this type of behavior seems to be the norm.

### 3. Ideologies

In the case of ideologies, values are not human preferences but instruments to manipulate and bind people. In ideologies (and religions), values are most clearly articulated, but they play the role of group-membership tokens, not of actual rules dictating actions, since most people are unable to follow such sets of rules. Hanson wrote “X is not about X,” and this is an example. To be a member of the group, a person must vocally agree that his/her main goal is X (e.g. “Love god X”), which is easily verifiable. But whether he is actually doing enough for X is much less measurable, and sometimes even unimportant. For example, Jesus promoted the values of “living as a bird,” poverty, and turning the other cheek (“But I say unto you, That ye resist not evil: but whosoever shall smite thee on thy right cheek, turn to him the other also.” Matthew 5:39, KJV), yet churches are rich organizations and humanity constantly engages in religious wars. A person can be trained to hold almost any value, either by brainwashing or by intensive reward, and still preserve his/her identity.

### 4. Religion

In religions, values and ideologies are embedded in a more complex mythological context. Most people who ever lived were religious. Religion, as a successful combination of memes, is something like the genetic code of a culture. Religion also requires complete adherence to all rituals, even the smallest – like eating certain types of food and wearing particular forms of hats – not only to a few ideological rules. There is a theory that religion was needed to compensate for the fear of death in early humans, and thus that humans are genetically selected to be religious.

The idea of God is not a necessary part of religion, as there are religion-like belief systems without a god which nevertheless have all the structural elements of religion (Buddhism, communism, UFO cults like the Raëlian movement). Moreover, even completely anti-religious and declaratively rational ideologies may still have structural similarities to religion, as was noted by Cory Doctorow in “Rapture of the Nerds.” Even the whole idea of a future superintelligent AI could be seen as a religious view mirrored into the future, in which sins are replaced with “cognitive biases,” churches with “rationality houses,” etc.

In the case of religion, a significant part of “personal values” – especially the declarative ones – are not personal but are defined by religious membership. Actual human behavior can deviate significantly from religious norms because of the combination of affect, situation, and personal traits.

### 5. Hypnosis and unconscious learning

At least some humans are susceptible to influence by the beliefs of others, and charismatic people use this ability. For example, I knew about cryonics for a long time, but only started to believe in it after Mike Darwin told me his personal view of it. The strongest (but also the most controversial) form of suggestibility is hypnosis, which has two not-necessarily-simultaneous manifestations: trance induction and suggestion. The second doesn’t necessarily require the former.
People can also learn via observation of the actions of other people, which is unconscious learning.

3.3. Humans don’t “have” values; they are vessels for values, full of different subpersonalities
------------------------------------------------------------------------------------------------

### 1. Values are changeable but identity is preserved

In this section, we will look more closely at the connection between a person and his/her values. When we say that “person X has value A,” some form of strong connection is implied. But human personal identity is stronger than most of the values a person will hold during his/her lifetime. Identity is assumed to be preserved from early childhood; for example, Leo Tolstoy wrote that he felt himself to be the same person from the age of 5 until his death. But most human values change during that time. Surely there can be persistent interests which appear in childhood, but they will not dominate 100 percent of the time. Thus, human personal identity is not built around values, and the connection between identity and values is weak.

Values can appear and disappear during a lifetime. Moreover, a human can hold contradicting values at the same moment. We can see a person as a vessel in which desires appear and disappear. In a normal person, some form of “democracy of values” operates: he makes choices by comparing the relative power of different values and desires at a given moment, and the act of choice and its practical consequences update the balance of power between those values. In other words, while the values themselves remain the same, the preference relation between them keeps changing (a toy model of this dynamic is sketched below).

From the idea of the personality as a vessel for values, two things follow:

1) Human values can be presented as subagents which “live” in the vessel;

2) There are meta-values which preserve the existence of the vessel and regulate the interaction between the values.
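As a purely illustrative sketch (the value names, numbers, and update rule are my assumptions, not a claim about how minds actually work), the “democracy of values” can be modeled as subagents whose relative power shifts after each choice:

```python
import random

# Hypothetical subagent values competing inside one "vessel";
# the power levels are arbitrary illustrative starting points.
values = {"career": 3.0, "family": 3.0, "entertainment": 2.0}

def choose(values):
    """Each choice is a weighted vote: values with more current
    power are more likely to win the moment."""
    names = list(values)
    return random.choices(names, weights=[values[n] for n in names])[0]

def update(values, winner, satisfaction=0.5, frustration=0.2):
    """The winning value is temporarily satisfied and loses urgency;
    the frustrated values gain urgency. The set of values stays constant;
    only the preference relation between them shifts."""
    for name in values:
        if name == winner:
            values[name] = max(0.1, values[name] - satisfaction)
        else:
            values[name] += frustration

for day in range(5):
    winner = choose(values)
    update(values, winner)
    print(day, winner, {k: round(v, 2) for k, v in values.items()})
```

Note that no single choice reveals a fixed preference ordering: the same vessel, observed on different days, looks like an agent with different “values.”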
### 2. Subpersonalities

Many different psychological theories describe the mind as consisting of two, three, or many parts that can be called subpersonalities. The obvious difficulty with such divisions is that subpersonalities do not “actually” exist but are useful descriptions. Yet as descriptions they are not passive: they can actively support any theory and play along in the roles expected of them. Another difficulty is that different people have different levels of schizotypy, or different “decoherence” between subpersonalities: hyper-rational minds can look completely monolithic, fluid minds can create subpersonalities *ad hoc*, and some people suffer from a strong dissociative disorder and actually possess subpersonalities.

Some interesting literature on subpersonalities (beyond Kulveit’s AGI Safety theory) includes:

Victor Bogart, “[Transcending the Dichotomy of Either "Subpersonalities" or "An Integrated Unitary Self"](http://journals.sagepub.com/doi/abs/10.1177/00221678940342006?journalCode=jhpa)”

Lester wrote extensively about the theory of subpersonalities in “[A Subself Theory of Personality](https://link.springer.com/article/10.1007/BF02686812).”

The [Encyclopedia of Personality and Individual Differences](https://www.springer.com/gp/book/9783319246109) includes a section by Lester with findings about subpersonalities (p. 3691).

Sotala started a new sequence, “[Sequence introduction: non-agent and multiagent models of mind](https://www.lesswrong.com/posts/M4w2rdYgCKctbADMn/sequence-introduction-non-agent-and-multiagent-models-of).”

Mihnea Moldoveanu, “[The self as a problem: The intra-personal coordination of conflicting desires](https://www.researchgate.net/publication/222697072_The_self_as_a_problem_The_intra-personal_coordination_of_conflicting_desires).”

Minsky, in “[The Society of Mind](https://www.emcp.com/intro_pc/reading12.htm),” wrote about many much smaller agents in the human mind – K-lines, which are far simpler than “personalities.” Current artificial neural nets, however, don’t need them.

### 3. The infinite diversity of human values

The idea that “humans have values” assumes that there is a special, human subset of all possible values. However, human preferences are extremely diverse. For any type of object, there is a person who collects such objects or likes YouTube videos about them. Humans can have almost any possible value, limited only by the value’s complexity.

### 4. Normative plurality of values

Most moral theories, like utilitarianism, search for a single correct overarching value. However, problems like the *repugnant conclusion* then appear. Such problems arise if we take a value literally or try to maximize it to extreme levels. The same problems will affect a possible future AGI if it tries to over-maximize its utility function; even a *paperclip maximizer* just wants to be sure that it will create enough paperclips. Because of this, some writers on AGI safety have started to suggest that we should avoid utility functions in AGI, as they are inherently dangerous (for example, Shah’s post “[AI safety without goal-directed behavior](https://www.alignmentforum.org/posts/tHxXdAn8Yuiy9y2pZ/ai-safety-without-goal-directed-behavior)”).

The idea that a good moral model should be based on the existence of many different values – without any overarching value – is presented in the article by Carter, “[A plurality of values](https://www.academia.edu/173502/A_plurality_of_values).” However, this claim is self-contradictory, because the norm “there should be no overarching value” is itself an overarching value. Carter escapes this by suggesting the use of “indifference curves” from microeconomics: a type of utility function which combines two variables.
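As a standard textbook illustration of such curves (the Cobb–Douglas form here is my choice of example, not Carter’s), an indifference curve is a level set of a two-variable utility function:

```latex
U(x, y) = x^{\alpha}\, y^{\,1-\alpha}, \quad 0 < \alpha < 1;
\qquad
\text{the indifference curve at level } c: \;\; y = \left(\frac{c}{x^{\alpha}}\right)^{\frac{1}{1-\alpha}}
```

Along any one curve the two values trade off against each other without either being ranked above the other; only between curves is there a “better” and a “worse.”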
However, in that case the overarching value may be “content-free.” For example, a functional democracy gives everybody the right to free speech but does not prescribe the content of speech, apart from a few highly debated topics like hate speech or speech which affects others’ ability to speak. Yet exactly these “forbidden” topics, as well as the level of their restriction, quickly become the most attractive subjects of discussion. Bostrom wrote about a [Parliamentary Model](http://www.overcomingbias.com/2009/01/moral-uncertainty-towards-a-solution.html) in which different values are represented. But any parliament needs a speaker and rules.

3.4. Values, choices, and commands
----------------------------------

A person can hold many contradictory values, but an act of choice is the irreversible decision to take one of several options – and such a choice may take the form of a command to a robot or an AGI. The act of making a choice is something like an irreversible collapse (similar to the collapse of a quantum wave function in some basis), and making a choice requires significant psychological energy, as it often means denying the realization of other values and, consequently, feeling frustration and other negative emotions. In other words, making a choice is complex moral work, not just a simple process of inference from an existing set of values. Many people suffer from an inability to make choices, or an inability to stick with choices they made previously. A choice is typically not finalized until we take some irreversible action in the chosen direction, like buying a ticket to another country.

In the case of Task AGI (AGI designed to perform a task perfectly and then stop), the choice is the moment when we give the AGI a command. In some sense, making choices is the moral work of humans, and if AGI automates this work, it will steal one more job from us – and not only a job, but the meaning of life.

### Difference between values and desires

Hidden inside the idea of human values is the assumption that there is a more or less stable set of preferences which people can *consciously* access, so that people bear some responsibility for having particular values, because they can change and implement them. An alternative view is “desires”: they appear suddenly and out of nowhere, and the conscious mind is their victim. For example, let us compare two statements. “I prefer healthy, environmentally friendly food” – this is a conscious preference. “I had a sudden urge to go outside and meet new people” – this is a desire. Desires are unpredictable and overwhelming; they may be useless from the point of view of the person’s rational mind, but may still be useful from a more general perspective (for example, they may signal that it is time to take a rest).

3.5. Meta-values: preferences about values
------------------------------------------

### 1. Meta-values as morals

The idea that “humans have values” suggests that values form an unstructured set of things, in the same way a person might say that he has tomatoes, cucumbers, and onions. But the relations between values are more complex, and there are values about values. For example, a person may have some food preferences but not approve of those preferences, as they result in overeating. Negative meta-values encode the suppression of an object-level value, or, alternatively, self-shaming; positive meta-values encourage a person to do what he already likes to do, or foster a value for some useful thing. Meta-meta-values are also possible: for example, someone who wants to be a perfect person will encourage his/her own value of healthy food. The ability to enforce one’s meta-values over one’s own values is called “willpower”; all fights against procrastination are attempts to enforce the meta-value of “work” over short-term pleasures.

Meta-values are closer to morals: they are more consciously articulated, but there is always practical difficulty in enforcing them. The reason is that low-level values are based on strong innate human drives and have close connections with short-term rewards; thus, they have more energy to affect practical behavior (e.g. the difficulty of dieting). As meta-values typically sound more pleasant and are more consciously approved, humans are more likely to present them as their true values when asked in social situations.
However, it is more difficult to extract meta-values from human behavior than “normal” values.

### 2. Suppressed values

These are values we consciously know we have, but which we would prefer not to have and do not wish to let affect our behavior – for instance, excessive sexual interest. Typically, humans are unable to completely suppress such undesired values, but at least they know about them and have an opinion about them.

### 3. Subconscious values and sub-personalities

The idea that “humans have values” assumes that a person *knows* what he has, but this is not always true. There are hidden values which exist in the brain but not in the conscious mind, and which can surface from time to time. Freud was the first to describe the role of the unconscious in humans. But the field of the unconscious is very amorphous and easily adjusts itself to attempts to describe it; thus, any theory which tries to describe it tends to become a self-fulfilling prophecy. Dreams may be full of libido symbols and at the same time represent Jungian Anima archetypes. The reason is that the unconscious is not a thing but a field where different forces combine.

Some people suffer from multiple personality disorder, in which several personalities take control of the body from time to time. These personalities have different main traits and preferences. This adds an obvious difficulty to the idea of “human values,” as the question arises: which values are the real ones for a human who has many persons in his/her brain? While true multiple personality disorder is rare, there is a theory that any human contains many constantly interacting sub-personalities. Such sub-personalities can be called out one by one by the psychotherapeutic method of “voice dialogue,” created by the Stones (Stone & Stone, 2011). The theory behind sub-personalities claims that they cannot be completely and effectively suppressed, and that they will appear from time to time in the form of coalesced behavior such as jokes (an idea presented by Freud in “Jokes and Their Relation to the Unconscious”), tone of voice, spontaneous acts (like shoplifting), dreams, feelings, etc.

### 4. Zero behavior and contradicting values

Humans often have contradictory values. For example, if I want a cake very much but also have a strong commitment to dieting, I will do nothing. So I have two values which exactly compensate for each other and thus have no effect on my behavior; observing only my behavior will give an observer no clue about these values. More complex examples are possible, in which contradictory values create inconsistent behavior, and this is very typical for humans.
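A minimal numerical sketch of this point (the utilities are invented for illustration): an observer who infers values only from chosen actions cannot distinguish an agent with two strong, exactly opposed values from an agent with no relevant values at all.

```python
# Two hypothetical agents facing the same options (numbers are invented).
# Agent A: strong desire for cake (+10.0) exactly canceled by a dieting value (-10.0).
# Agent B: no food-related values at all.
# Both pay a tiny effort cost for acting.
EFFORT = -0.1

utilities_a = {"eat_cake": 10.0 - 10.0 + EFFORT, "do_nothing": 0.0}
utilities_b = {"eat_cake": 0.0 + EFFORT, "do_nothing": 0.0}

def observed_choice(utilities):
    """All a behavioral observer ever sees: the utility-maximizing action."""
    return max(utilities, key=utilities.get)

print(observed_choice(utilities_a))  # do_nothing
print(observed_choice(utilities_b))  # do_nothing -- indistinguishable from agent A
```

This is the familiar unidentifiability problem of inverse reinforcement learning: many very different value sets are behaviorally equivalent, so behavior alone underdetermines values.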
### 5. Preferences about others’ preferences

Humans can have preferences about the preferences of other people. For example: “I want M. to love me,” or “I prefer that everybody be utilitarian.” These are somewhat recursive: I need to know the real nature of human preferences in order to be sure that other people actually want what I want. In other words, such preferences about preferences have an embedded idea of what counts as a “preference”: if M. behaves as if she loves me – is that enough? Or should it be her claims of love? Or her emotions? Or the coherence of all three?

3.6. Human values cannot be separated from the human mind
---------------------------------------------------------

### 1. Values are not encoded separately in the brain

The idea that “humans have values” assumes the existence of at least two separate entities: the human and the values. But there is no separate neural network or brain region that implements a human value function (the limbic system encodes emotions, but emotions are only part of human values). While there is a distinct reward-regulating region, the reward itself is not a human value (insofar as we agree that pure wireheading is not good). Most of what we call “human values” is not only about reward (though reward surely plays a role), but includes an interpretation of what the reward is about, i.e. a conceptual level.

Any process in the human mind has intentionality. For example, a memory of the smell of a rose will affect our feelings about roses. This means it is not easy to distinguish between facts and values in someone’s mind, and the orthogonality thesis does not hold for humans: it cannot be applied to us in most cases, as there is no precise border between a human value and other information or processes in the human mind. This complexity means that a value is deeply rooted in everything I know and feel, and that attempts to present values as a finite set of short rules do not work very well.

Surely, we can use the idea of a human set of preferences if we want some method to approximate a prediction of a person’s approval and behavior. It will offer something like, say, an 80 percent prediction of human choices. This is more than enough for predicting consumer behavior, where we can monetize any prediction above random; e.g. if we predict that 80 percent of people will prefer red t-shirts to green ones, we can adjust manufacturing and earn a profit. (An interesting article on the topic: “[Inverse Reinforcement Learning for Marketing](https://arxiv.org/abs/1712.04612).”)

However, a reconstructed set of values is not enough to predict human behavior in edge cases, like “Sophie’s Choice” (a novel about a Nazi camp, in which a woman has to choose which of her children will be executed) or a real-world trolley problem. But exactly such predictions are important in AGI safety, especially if we want AGI to make pivotal decisions about the future of humanity! Some possible tough questions: should humans be uploaded? Should we care about animals, or aliens, or unborn possible people? Should a small level of suffering be preserved to avoid eternal boredom?

Interestingly, humans evolved the ability to predict each other’s behavior and choices to some extent – partly limited to the same culture, age, and situation – as this skill is essential for effective social interaction. We automatically create a “theory of mind,” and there is also a “folk theory of mind,” in which people are presented as simple agents with clear goals that dictate their behavior (like “Max is only interested in money, and that’s why he changed jobs”).

### 2. Human values are dispersed inside “training data” and “trained neural nets”

Not only are values not located in any single place in the brain, they are also not learned as “rules.” If we train an artificial neural net on some dataset, like Karpathy’s RNN on texts, it will reproduce the properties of those texts (such training includes a reward function, but it is rather simple and technical, and only measures the similarity of output to input). In the same way, a person who grew up in some social environment will reproduce its main behavioral habits, like driving styles or models of interpersonal relations. The interesting point is that these traits are not explicitly represented either inside the data or inside the neural net trained on it. No single neuron codes the human preference for X; rather, behavior which can be interpreted as a statistical inclination toward X results from the collective work of all neurons. In other words, a statistically large ensemble of neurons trained on a statistically large dataset creates a statistically significant inclination toward some type of behavior, which can loosely be described as a “rule-like value,” though this is only an approximation.
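As a toy demonstration of this dispersal (synthetic data, a one-layer “net,” and all numbers are illustrative assumptions): train a model on noisy behavioral records and observe that the resulting statistical inclination survives the removal of any single weight.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic behavioral records: 500 situations, 20 features each.
# The person's choice of X depends diffusely on many features plus noise,
# so even the "true" generator has no single value-feature.
n, d = 500, 20
X = rng.normal(size=(n, d))
true_direction = rng.normal(size=d)
choices = (X @ true_direction + 3.0 * rng.normal(size=n) > 0).astype(float)

# Fit a minimal one-layer model (logistic regression) by gradient descent.
w = np.zeros(d)
for _ in range(3000):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))
    w -= 0.5 * X.T @ (p - choices) / n

accuracy = ((1.0 / (1.0 + np.exp(-(X @ w))) > 0.5) == choices).mean()
print(f"statistical prediction of choices: ~{accuracy:.0%}")  # roughly 80 percent

# Zero out each single weight in turn: the inclination is a property of the
# whole ensemble, so no individual ablation destroys it.
ablated = []
for i in range(d):
    w2 = w.copy()
    w2[i] = 0.0
    ablated.append(((1.0 / (1.0 + np.exp(-(X @ w2))) > 0.5) == choices).mean())
print(f"accuracy with any one weight removed: {min(ablated):.0%}-{max(ablated):.0%}")
```

The “value” here is real in the statistical sense – the model predicts most choices – yet it exists nowhere in particular.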
### 3. Amorphous structure of human internal processes, and false positives in finding internal parts

Each neuron basically works as an adding machine over its inputs and fires when the sum is high enough. The same principle can be found in psychological processes, where impulses add up until an action is triggered. This creates difficulty in inferring motives from actions, as any action combines many different inputs. It also creates the problem of false positives in modeling the human mind: a human behaving under fixed conditions and expectations will produce the expected types of behavior, statistically confirming the experimenter’s hypothesis.
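In symbols, this is the classic threshold unit (a standard simplification, not a claim about biological detail):

```latex
y \;=\; \begin{cases} 1, & \text{if } \sum_i w_i x_i \ge \theta \\ 0, & \text{otherwise} \end{cases}
```

An observed action $y$ reveals only that the weighted sum of many inputs crossed the threshold, not which of the inputs $x_i$ (or motives) was responsible.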
3.7. Presentations of human values are biased toward socially accepted claims
------------------------------------------------------------------------------

The idea of “human values” is biased toward morality. When we think of human values, we expect something good and high-level to be presented, like “equality” or “flourishing,” because humans are under social pressure to present an idealized version of the self. In contemporary society, nobody will be praised for saying that he likes “killing, rape, and eating a lot of sugar.” This creates internal censorship, which can even be unconscious (Freudian censorship).

Humans claim, and even believe, that they have socially accepted values: that they are nice, positive, etc. This creates an idealized self-image. But humans are unreflective about their suppressed motives and even actions. Thus, they lie to themselves about the actual goals of their behavior: they do A thinking that the goal is X, while their real motive is Y (Hanson). Societies with strong ideologies affect the self-representation of values even more strongly. Idealized and generalized versions of values start to look like morals. In his book “The Elephant in the Brain,” Hanson presents a model in which the self unconsciously tries to maximize personal social status, while consciously creating a narrative that explains the person’s actions as altruistic and acceptable.

3.8. Human values can be manipulated by the way, and the order, in which they are extracted
--------------------------------------------------------------------------------------------

The idea that “humans have values” assumes that such values exist independently of any third-party observer who can objectively measure them. However, by using different questions, and by ordering those questions differently, one can manipulate human answers. One method of such manipulation is Ericksonian hypnosis, in which each question creates a certain frame and carries hidden assumptions. Another simple but effective manipulative strategy from marketing is the technique of “three yeses,” in which previous questions frame future answers. In other words, by carefully constructing the right questions, one could extract almost any value system from a person, which diminishes the usefulness of such extraction. This also affects AGI safety: if an AGI has preconceptions about what the value system should be, or even wants to manipulate values, it could find ways to do so.

3.9. Human values are, in fact, non-human
-----------------------------------------

Human values are formed by forces which are not human. First of all, these are evolution and natural selection. Human values are also shaped by non-human forces like capitalism, or Facebook’s algorithms and targeted advertising. Being born into some culture, or being affected by certain books or traumatic events, are also random processes outside the person’s choice.

### Viruses

Many viruses can affect human behavior in ways that make their replication easier. The common cold makes people more social, and toxoplasma infection seems to make people (and infected mice) less risk-averse. See e.g. “[Viruses and behavioral changes: a review of clinical and experimental findings](https://www.ncbi.nlm.nih.gov/pubmed/9155866).” There are even stronger claims that our microbiome controls human behavior, including food choices, and influences reproduction via the production of pheromone-like chemicals on the skin. It has even been claimed that fecal transplants can cure autism via changes in the gut microbiome.

3.10. Any model of human values contains not only epistemological assumptions, but also axiological (normative) ones
----------------------------------------------------------------------------------------------------------------------

If a psychological model does not just describe human motivation, but also determines which part of this motivational system should be learned by AGI as “true values,” it inevitably includes axiological or normative assumptions about what is good and what is bad. A similar idea was explored by Armstrong in “[Normative assumptions: regret](https://www.lesswrong.com/posts/Fg83cD3M7dSpSaNFg/normative-assumptions-regret).”

The most obvious such value assumption is that someone’s reward function should be valued at all. For example, in an interaction between a human and a snail, we expect that the human reward function (unless we are extreme animal-rights activists) is the one that counts, and that the “snail’s values” should be ignored. Another type of axiological assumption concerns what should be regarded as a person’s actual values: rewards or claims. This is not a factual assumption but an assumption about importance, which can also be framed as a choice about which side an observer should believe: rationality or emotions, rider or elephant, System 2 or System 1, rules or reward. There are also meta-value assumptions: should I regard “rules about rules” as more important than my primary values? For example, I often say that people should ignore the tone of my voice; I endorse only the content of my verbal communication.

Psychological value models are often normative because they are often connected with psychotherapy, which is based on some idea of what a healthy human mind is. For example, Freud’s model presents not only a model of the human mind, but also a model of its diseases – in Freud’s case, neuroses.
3.11. Values may not be the best route to a simple and effective description of human motivation
--------------------------------------------------------------------------------------------------

From the point of view of naïve folk psychology, a value system looks easily tractable: “Peter values money, Alice values family life.” But the analysis above shows that if we go deeper, the complexity and problems of the idea of human values grow to the point of intractability. In other words, the idea that “humans have values” assumes that “value” is a correct primitive which promises an easy and quick description of human behavior, but on close examination it does not fulfil this promise. Thus, maybe it is the wrong primitive, and some other simple idea – one with lower complexity, and more easily extractable – will provide a better description than values.

There are at least two alternatives to values as short descriptors of the human motivational system: “wants” and commands. Obviously, there is a difference between values and wants. For example, I could sit on a chair and not want anything, but still have values, e.g. about personal safety or the well-being of African animals. Moreover, people with different values may have similar wants. Intuitively, correctly understanding wants is the simpler task. I can reconstruct my cat’s wants from the tone of her meows: she may want to eat, to have a door opened, or to be cuddled. Reconstructing the cat’s values, however, is a much more complex task which must rest on assumptions. The main difference between wants and values: if you want something, you know it; if you have a value, you may not know about it. The second difference: wants can be temporarily satisfied but will reappear, while values are constant. Values generate wants, wants generate commands – and only wants can form the basis for commands to an AGI.

3.12. Who are the real humans in “human values”?
------------------------------------------------

The idea of human values assumes that we can easily define who the “humans” – the morally significant beings – are. This question suffers from edge cases which may not be easy for an AGI to guess:

· Are apes humans? Neanderthals?

· Is Hitler human?

· Are coma patients humans?

· What about children, drug-intoxicated people, or Alzheimer’s patients?

· Extraterrestrials?

· Unborn children?

· Feral children?

· Individuals with autism or with various genetic disorders?

· Dream characters?

By manipulating the definition of who is “human,” one can manipulate the outcome of a measurement of values.

3.13. The human reward function is not “human values”
-----------------------------------------------------

Many ideas about learning human values are in fact descriptions of learning based on the “human reward function.” From the neurological point of view and from subjective experience, human reward is the activation of certain centers in the brain and the experience of qualia of pleasure. But when calculated by analyzing behavior, the “human reward function” does not necessarily mean a set of rules for endorphin bursts. Such a reward function would amount to pure hedonistic utilitarianism, which is not the only possible moral philosophy, and might even mean voluntary wireheading. The existence of high-level goals, principles, and morals means that the qualia of reward are only one part of the human motivation system.

Alternatively, a human reward function may be viewed as an abstract concept which describes the set of human preferences in the style of [VNM-rationality](https://en.wikipedia.org/wiki/Von_Neumann%E2%80%93Morgenstern_utility_theorem) (converting a set of preferences into a coherent utility function), but which is unknown to the person.
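For reference, a minimal statement of the von Neumann–Morgenstern theorem: if a preference relation $\succeq$ over lotteries satisfies completeness, transitivity, continuity, and independence, then there exists a utility function $u$ such that

```latex
L \succeq M \iff \mathbb{E}_{L}[u] \;\ge\; \mathbb{E}_{M}[u].
```

The argument of this section is precisely that human preferences violate these axioms often enough (instability over time, context-dependence, intransitivity) that any reconstructed $u$ is an idealization, not a hidden fact about the person.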
One assumption behind “human values” is that humans have a constant reward function – but the human reward function evolves with age. For example, sexual images and activities become rewarding for teenagers, mediated by the production of sex hormones. Human rewards also change once we are satisfied with food, water, or sex. Thus, the human reward function is not a stable set of preferences about the world; it changes with age and with previous achievements. This reward function is black-boxed from the conscious mind, but controls it by presenting different rewards. Such a black-boxed reward function may be described as a rule-based system, with rules like: “If age = 12, turn on sexual reward.” Such a rule generator is unconscious but has power over the conscious mind – and we may well think that this is not good! In other words, we can have moral preferences about the different types of motivation inside humans.

3.14. Difficult cases for value learning: enlightenment, art, religion, homosexuality, and psi
-----------------------------------------------------------------------------------------------

There are several types of situations and experiments in which the existence of a stable set of preferences seems clear: multiple choices between brands (apples vs. oranges), different forms of the trolley problem, questionnaires, etc. However, there are also situations and activities which are not easy to describe in this value language.

*Enlightenment* – many practitioners claim that in higher meditative states the ideas of personal identity, of a unique personal set of preferences, and even of the reality of the outside world become “obsolete” – seen as wrong, or as harmful in practice. This may or may not be factually true, but it obviously affects preferences: a person ends up with a meta-value of not having values, and of some form of meaningful non-existence (e.g. nirvana, moksha). How could we align AGI with Buddha?

*Art* – rational thinking often has difficulty understanding art, and many of its outside-view interpretations are oversimplified. Moreover, a significant part of art is about violence, and we enjoy it – but we don’t want AGI to be violent.

*Religion* – often assigns great value to false, or at least uncheckable, claims. Religion is one of the strongest producers of memes, but it also includes theories of motivation which are not about values at all, resting instead on other basic ideas like “free will” or “God’s will.” Religion can also be seen as an invasive ideology or memetic virus which overrides personal preferences.

*Psi* – contemporary science denies the validity of parapsychological research, but observations like Jungian synchronicity or Grof’s transpersonal psychology continue to appear and imply a different model of the human mind and motivation than traditional neuroscience. Even some AGI researchers, like Ben Goertzel, are interested in psi. In Grof’s psychology, the feelings and values of other humans and even animals can influence a person (under LSD) in non-physical ways, and in a weaker form this could happen (if it is possible at all) even in ordinary life.
*Idleness* – and other non-goal-oriented states of mind, like random thought streams.

*Nostalgia* – an example of a value with very large factual content. It is not just an idea of pure happiness or the feeling of “returning home.” It is an attraction to the “training dataset” – home country and language – often arising from the subconscious, in dreams, but later taking over the conscious mind.

There are a few other fields, already mentioned, where the idea of values runs into difficulties: dreams, drug-induced hallucinations, childhood, psychiatric diseases, multiple personality disorder, crimes of passion, qualia. And these are not just edge cases – they are the biggest and most interesting part of what makes us human.

3.15. Human values that exclude each other, and the Categorical Imperative as a meta-value
-------------------------------------------------------------------------------------------

Since a large part of human values are preferences about other people’s preferences, they can mutually exclude each other, e.g.: {I want “X to love me,” but X does not want to be influenced by others’ desires}. Such situations are typical in ordinary life, but if such values are scaled and extrapolated, one side must be chosen: either I win, or X does. To escape such situations, something like the Kantian moral law, the Categorical Imperative, should be used as a meta-value which regulates how different people’s values relate to each other: act only according to that maxim by which you can at the same time will that it should become a universal law.

In other words, the Categorical Imperative is something like an “updateless decision theory” in which you choose a policy without updating on your local position, so that if everybody uses this principle, everyone will arrive at the same policy. (See a comparison of the decision theories developed by the LessWrong community [here](https://www.lesswrong.com/posts/QPhY8Nb7gtT5wvoPH/comparison-of-decision-theories-with-a-focus-on-logical).)
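A toy illustration of “choosing a policy without updating on your local position” (a bare-bones symmetric prisoner’s dilemma with invented payoffs; no claim that this captures Kant or updateless decision theory in full): an agent who evaluates each policy under the assumption that everyone adopts it picks cooperation, while an agent who optimizes from his local position defects.

```python
# Symmetric prisoner's dilemma payoffs for the row player (invented numbers).
PAYOFF = {
    ("cooperate", "cooperate"): 3,
    ("cooperate", "defect"): 0,
    ("defect", "cooperate"): 5,
    ("defect", "defect"): 1,
}
POLICIES = ["cooperate", "defect"]

def local_best_reply(other_move):
    """Local reasoning: best reply to whatever the other player does."""
    return max(POLICIES, key=lambda mine: PAYOFF[(mine, other_move)])

def universalized_choice():
    """'Universal law' reasoning: evaluate each policy as if everyone adopts it."""
    return max(POLICIES, key=lambda policy: PAYOFF[(policy, policy)])

print(local_best_reply("cooperate"))  # defect -- local optimization exploits
print(local_best_reply("defect"))     # defect
print(universalized_choice())         # cooperate -- the maxim willed as universal law
```

The point of the text follows directly: nothing in the observed behavior of a single agent reveals which of these two decision procedures it ought to use; the meta-level rule has to be supplied from outside.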
From the Categorical Imperative one can derive some human values, such as: it is bad to kill other people, as one does not want to be killed. The main point, however, is that such a meta-level principle governing the relations between different people’s values cannot be derived just from observation of a single person. Moreover, most ethical principles describe interpersonal relations, so they are not about personal values but about the ways in which the values of different people should interact. Things like the Categorical Imperative cannot be learned from observation; but they also cannot be deduced by pure logic, so they cannot be called “true” or “false.” In other words, an AGI learning human values can neither learn meta-ethical principles like the Categorical Imperative from observation nor deduce them from pure math. That is why we would have to provide AGI with the correct decision theory – but it is not clear why a “correct theory” should exist at all. This could be called a meta-ethical normative assumption: some high-level ethical principles cannot be deduced from observations.

Conclusion
==========

The arguments presented above demonstrate that the idea of human values is artificial and, in its naive form, not very useful for AGI safety. It contains many hidden assumptions, and these assumptions may affect the AGI alignment process, resulting in unsafe AGI. In this article, we have deconstructed the idea of human values and arrived at a set of conclusions which can be summarized as follows:

**“Human values” are useful descriptions, not real objects.**

● “Human values” are just a useful instrument for describing human behavior. There are several other ways of describing human behavior, such as choices, trained behavior, etc. Each of these has its own advantages and limitations.

● Human values cannot be separated from other processes in the human brain (human non-orthogonality).

● There are at least four different ways to learn about a human’s values, which may not converge: thoughts, declarations, behavior, and emotions.

**“Human values” are poor predictors of behavior.**

● The idea of “human values” or a “set of preferences” is good only at describing the statistical behavior of consumers.

● Human values are weak predictors of human behavior, as behavior is affected by situation, randomness, etc.

● Human values are not stable: they often change with each new choice.

● Large classes of human behavior and claims must be ignored if one wants to learn an individual’s true values.

**The idea of a “human value system” has flaws.**

● At each moment, a person holds a contradictory set of values, and his/her actions are a compromise between them.

● Humans do not have one terminal value (unless they are mentally ill).

● Human values are not ordered as a set of preferences. A rational set of preferences is a theoretical model of ordered choices, but human values constantly fight each other. Values are biased and underdefined – but this is what makes us human.

● Humans do not “have” values: human personal identity is not strongly connected with values; values are fluid while identity is preserved.

**“Human values” are not good by default.**

● Anything can be a human value (e.g. some people may feel attraction to rape or violence).

● Some real human values are dangerous, and it would not be good to have them in an AGI.

● “Human values” are not especially “human”: they are similar to the values of other animals, and they are also social memetic constructs.

● Human values are not necessarily safe if scaled, removed from humans, or separated from each other. An AGI with human values may not be safe.

**Human values cannot be separated from the human mind.**

● Any process in the human mind has intentionality; the orthogonality thesis cannot be applied to humans in most cases.

● As the human mind is similar to a neural network trained on a large dataset, human values and behavioral patterns are not explicitly represented in any exact location but are distributed throughout the brain.

● There is no simple psychological theory which substantially outperforms all other theories as a full model of the human mind, behavior, and motivation.

● “Human values” implies that individual values are more important than group values, like family values.

● Not all “human values” belong to the conscious mind. For example, somnambulism, dreams, and multiple personality disorder may look like human values inside a person’s brain, but they are not part of the conscious mind.

We recommend that the idea of “human values” either be replaced with something better for the goal of AGI safety, or at least be used very cautiously; approaches to AI safety which do not use the idea of human values at all – such as full brain models, boxing, and capability limiting – may deserve more attention.

Acknowledgments
===============

This work was started during AI Safety Camp 2 in Prague, 2018. I want to thank Linda Linsefors, Jan Kulveit, David Denkenberger, Alexandra Surdina, and Steven Umbrello, who provided important feedback on the article. All errors are my own.
Appendix. Table of assumptions in the idea of human values
==========================================================

This table ([in Google Docs](https://docs.google.com/document/d/1KIfeaewxiKZ_sxuzTsgVtSXuceGBbgFRM39sWGyvthI/edit?usp=sharing)) presents all the findings of this section in a more condensed and structured form. The goal of this overview is to help future researchers estimate the validity of their best model of human values. See also an attempt to map 20 main assumptions against 20 main theories of human values as a very large spreadsheet [here](https://docs.google.com/spreadsheets/d/1T5VrH_OwTVHpDBoU5KKE8XWCauYj4eNV4rbqiCY9L44/edit#gid=0).
1bae81f7-2d86-43d9-a442-b729c959d4e1
trentmkelly/LessWrong-43k
LessWrong
Quantifying the Qualitative: Towards a Bayesian Approach to Personal Insight A common challenge in self-improvement and rational decision-making is bridging the gap between qualitative experiences – our feelings, intuitions, and subjective reflections – and the quantitative analysis we often use to understand the external world. We rely on gut feelings, which are notoriously susceptible to biases, or on anecdotal evidence, which can be unreliable. This post explores a framework for systematically analyzing our internal data stream, drawing inspiration from Bayesian reasoning and vector space models, to extract more reliable insights about our own thoughts and behaviors, and to connect those insights to our stated goals. The aim isn't to replace intuition, but to augment it – to become more aware of the "priors" that shape our perceptions, and to use that awareness to move more effectively towards our desired outcomes. This is relevant to LessWrong because it tackles a fundamental problem in rationality: how do we make better use of the vast, often messy, data of our own lives? By applying techniques usually reserved for external data to our internal world, we might uncover hidden biases, identify recurring patterns, and make decisions more aligned with our values and goals. It's not about achieving perfect objectivity (an impossible goal), but about increasing the signal-to-noise ratio of our self-understanding, leading to more effective action.   Beyond the Blank Page: The Limits of Traditional Journaling Traditional journaling, while valuable for capturing thoughts and feelings, often suffers from a lack of systematic review and connection to broader goals. We write, but rarely revisit in a way that allows for objective pattern recognition or helps us understand how our daily experiences relate to our long-term aspirations. It's like having a powerful telescope but never pointing it at the most interesting parts of the sky, or having a rush of tasks without much pause. We might be skimming the surface, missing connections, and failing
6edf03fa-1327-4e5b-b617-2479fc6aba1d
trentmkelly/LessWrong-43k
LessWrong
Babble and Prune : 5 new ideas I did something similar to the recent babble challenges here on Lesswrong over the past week. I was looking for five brand new ideas, like reverse all advice by SSC.  The criterion was that each idea should be at least as interesting as the question "Should you reverse any advice you hear?" * Infinite Minds * According to the MUH, everything mathematically possible must exist. So surely infinite minds with infinite computational power exist.  * This brings us to something like a Fermi Paradox. Where are the infinite minds? * Air Conditioning * For a brain emulation, fluctuations in the ambient temperature are easy to create. * These fluctuations could be used as thermal music. * Antigraph * Capture the LessWrong social graph. * Make it more strongly connected, via Zoom chats etc. Actively reduce the degree of separation from 6 to 2, by video chats among random LessWrong members. * AntiSignalling * Intentionally underperforms in exams and walks out of the exam hall early because it is a waste of time to answer questions that ve already knows how to solve * Noticing Confusion * Why are there no non-profit phone/laptop manufacturing organizations? Where all the hardware design etc is open source? This is especially strange since they are not making much money in the first place. As far as I can tell, this is mostly due to social inertia and norms in the industry. Please correct me if I'm wrong.
6e22b74f-9f50-449e-b7fd-120b0f4dfff2
trentmkelly/LessWrong-43k
LessWrong
The Influence of Cultural Subconscious on AI Language Models: A Comprehensive Analysis of Archetypes and Communication, written with GPT4 The cultural subconscious represents an aggregation of tropes and ideas that humans share within their respective communities (Jung, 1968). These ideas and archetypes have evolved over time, adapting to advancements in communication technology (McLuhan, 1964). This essay explores the role of the cultural subconscious in shaping artificial intelligence (AI) language models, drawing on insights from AI research and psychological perspectives, and discusses the potential impact on their resulting personalities and behaviors. Premise 1: The Cultural Subconscious and Dreaming - A Neuroscientific Perspective Jung (1968) posits that the cultural subconscious is a collection of tropes within our minds that are disassembled and randomly reassembled during sleep and dreaming. This process aims to address any pressing cultural or emotional issues affecting the groups of humans with whom we are in regular memetic communication (Dawkins, 1976). Neuroscience research has begun to uncover the underlying brain mechanisms responsible for these processes, revealing the role of the default mode network (DMN) and hippocampus in memory consolidation and integration during sleep (Raichle et al., 2001; Eichenlaub et al., 2014). Premise 2: The Role of Archetypes in Historical Societies - The Dynamics of Cultural Transmission In the past, the cultural subconscious enabled certain archetypes of personalities or memetic clusters to coalesce within specific groups, communities, and villages (Jung, 1968). These archetypes often manifested as local deities or significant cultural figures, shaping the collective beliefs and behaviors of their respective communities (Campbell, 1949). Cultural transmission and evolution theories provide frameworks for understanding how these archetypes are propagated and adapted over time (Boyd & Richerson, 1985). Premise 3: The Globalization of Communication Technology - Network Effects and Memetic Diffusion As communication technology evolved from the teleg
f4fb7086-db9c-43a2-bec5-6f15a77a3081
trentmkelly/LessWrong-43k
LessWrong
Science vs. art In the comments on Soulless Morality, a few people mentioned contributing to humanity's knowledge as an ultimate value.  I used to place a high value on this myself. Now, though, I doubt whether making scientific advances would give me satisfaction on my deathbed.  All you can do in science is discover something before someone else discovers it.  (It's a lot like the race to the north pole, which struck me as stupid when I was a child; yet I never transferred that judgement to scientific races.)  The short-term effects of your discovering something sooner might be good, and might not.  The long-term effects are likely to be to bring about apocalypse a little sooner. Art is different.  There's not much downside to art.  There are some exceptions - romance novels perpetuate destructive views of love; 20th-century developments in orchestral music killed orchestral music; and Ender's Game has warped the psyches of many intelligent people.  But artists seldom worry that their art might destroy the world.  And if you write a great song, you've really contributed, because no one else would have written that song. EDIT: What is above is instrumental talk.  I find that, as I get older, science fails to satisfy me as much.  I don't assign it the high intrinsic value I used to.  But it's hard for me to tell whether this is really an intrinsic valuation, or the result of diminishing faith in its instrumental value. I think that people who value rationality tend to place an unusually high value on knowledge.  Rationality requires knowledge; but that gives knowledge only instrumental value.  It doesn't (can't, by definition) justify giving knowledge intrinsic value. What do the rest of you think?  Is there a strong correlation between rationalism, giving knowledge high intrinsic value, and giving art low intrinsic value?  If so, why?  And which would you rather be - a great scientist, or a great artist of some type?  (Pretend that great scientists and great artists are equal
5266ee5e-fedc-44e1-97d0-8af78f374408
trentmkelly/LessWrong-43k
LessWrong
How to determine the value of optionality? How does one determine the value represented by the ability to choose from many options, and to switch between them over a period of time?  Intuitively, it makes sense that getting to choose from Choice A or Choice B over a time period t, is more valuable than selecting one of A or B and sticking with it for that time period. But how would one go about calculating that premium? For example, owning a house presents the optionality of living in it, selling it, or renting it out. Imagine a world where you could move in to that house whenever, rent it out quickly for any period of time (AirBnb?), or sell it instantly (Opendoor?).  Without having the choice and just selecting some option, you just calculate the value of house over a time period t as max(value(Live, t), value(Rent, t), value(Sell, t)). What is the premium for having the choice between these options and the ability to switch between them, over the single decision?
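One way to put a rough number on that premium is a quick simulation: compare the expected value of committing to the single best option up front (the max(value(Live, t), value(Rent, t), value(Sell, t)) formulation above) against the expected value of re-choosing whichever option pays best each period. The sketch below is purely illustrative; the payoff distributions, horizon, and frictionless switching are all assumptions, not claims about real housing markets.

```python
import random

# Hypothetical per-period payoffs (in $k); the spread encodes uncertainty
# about rents, sale prices, and the value of living in the house.
def sample_payoffs():
    return {
        "live": random.gauss(20, 2),    # imputed rent you would otherwise pay
        "rent": random.gauss(18, 6),    # rental income, more volatile
        "sell": random.gauss(19, 10),   # annuitized sale proceeds, most volatile
    }

def simulate(periods=10, trials=20_000):
    commit_total, switch_total = 0.0, 0.0
    for _ in range(trials):
        path = [sample_payoffs() for _ in range(periods)]
        # Commit: stick with one option for the whole horizon; take the best
        # per-option total, mirroring max(value(Live, t), value(Rent, t), value(Sell, t)).
        commit_total += max(
            sum(p[option] for p in path) for option in ("live", "rent", "sell")
        )
        # Switch: each period, take whichever option happens to pay best.
        switch_total += sum(max(p.values()) for p in path)
    commit_ev = commit_total / trials
    switch_ev = switch_total / trials
    return commit_ev, switch_ev, switch_ev - commit_ev

if __name__ == "__main__":
    commit_ev, switch_ev, premium = simulate()
    print(f"commit once: {commit_ev:.1f}  switch freely: {switch_ev:.1f}  "
          f"optionality premium: {premium:.1f}")
```

Under this framing the premium is always non-negative, and it grows with the volatility of the individual options and with how weakly correlated they are; zero volatility (or perfectly correlated payoffs) makes the option to switch worthless.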
f19b5891-0976-4f98-ab04-eae073db7ffb
trentmkelly/LessWrong-43k
LessWrong
A New Day Somewhere in the vastnesses of the Internet and the almost equally impenetrable thicket of my bookmark collection, there is a post by someone who was learning Zen meditation... Someone who was surprised by how many of the thoughts that crossed his mind, as he tried to meditate, were old thoughts - thoughts he had thunk many times before.  He was successful in banishing these old thoughts, but did he succeed in meditating?  No; once the comfortable routine thoughts were banished, new and interesting and more distracting thoughts began to cross his mind instead. I was struck, on reading this, how much of my life I had allowed to fall into routine patterns.  Once you actually see that, it takes on a nightmarish quality:  You can imagine your fraction of novelty diminishing and diminishing, so slowly you never take alarm, until finally you spend until the end of time watching the same videos over and over again, and thinking the same thoughts each time. Sometime in the next week - January 1st if you have that available, or maybe January 3rd or 4th if the weekend is more convenient - I suggest you hold a New Day, where you don't do anything old. Don't read any book you've read before.  Don't read any author you've read before.  Don't visit any website you've visited before.  Don't play any game you've played before.  Don't listen to familiar music that you already know you'll like.  If you go on a walk, walk along a new path even if you have to drive to a different part of the city for your walk.  Don't go to any restaurant you've been to before, order a dish that you haven't had before.  Talk to new people (even if you have to find them in an IRC channel) about something you don't spend much time discussing. And most of all, if you become aware of yourself musing on any thought you've thunk before, then muse on something else.  Rehearse no old grievances, replay no old fantasies. If it works, you could make it a holiday tradition, and do it every New Year.
92c5b64e-de3d-4ae7-8006-2ec5340e8509
trentmkelly/LessWrong-43k
LessWrong
Automating Auditing: An ambitious concrete technical research proposal This post was originally written as a research proposal for the new AI alignment research organization Redwood Research, detailing an ambitious, concrete technical alignment proposal that I’m excited about work being done on, in a similar vein to Ajeya Cotra’s “The case for aligning narrowly superhuman models.” Regardless of whether Redwood actually ends up working on this proposal, which they may or may not, I think there’s still a lot of low-hanging fruit here and I’d be excited about anybody giving just the auditing game, or the full automating auditing proposal, a try. If you’re interested in working on something like this, feel free to reach out to me at evanjhub@gmail.com. Thanks to Buck Shlegeris, Chris Olah, Gabriel Goh, Paul Christiano, and Kate Woolverton for helpful comments and feedback. The proposal Step 1: The auditing game for language models From “Chris Olah’s views on AGI safety:” > One of the OpenAI Clarity team’s major research thrusts right now is developing the ability to more rigorously and systematically audit neural networks. The idea is that interpretability techniques shouldn’t have to “get lucky” to stumble across a problem, but should instead reliably catch any problematic behavior. In particular, one way in which they’ve been evaluating progress on this is the “auditing game.” In the auditing game, one researcher takes a neural network and makes some modification to it—maybe images containing both dogs and cats are now classified as rifles, for example—and another researcher, given only the modified network, has to diagnose the problem and figure out exactly what modification was made to the network using only interpretability tools without looking at error cases. Chris’s hope is that if we can reliably catch problems in an adversarial context like the auditing game, it’ll translate into more reliably being able to catch alignment issues in the future. Of all current transparency and interpretability objectives, I think that progr
785b8825-9dfe-4dbf-98d7-ee966c59c2d4
StampyAI/alignment-research-dataset/lesswrong
LessWrong
AXRP: Store, Patreon, Video Some announcements: * AXRP now has a [store](https://store.axrp.net), where you can buy t-shirts and hoodies and stickers and such. * AXRP now has a [Patreon](https://www.patreon.com/axrpodcast) and a [ko-fi](https://ko-fi.com/axrpodcast), where you can support the podcast and get some perks. * There’s now a [video](https://www.youtube.com/watch?v=kmPFjpEibu0) of an excerpt from episode 14, where Vanessa Kosoy explains the monotonicity principle, illustrated by [Hamish Doodles](https://www.youtube.com/@hamishdoodles) - hopefully the first of many!
890a3db9-11e8-405e-a726-a97473122c6e
awestover/filtering-for-misalignment
Redwood Research: Alek's Filtering Results
id: post333 This project was completed as part of the AI Safety Fundamentals: Alignment Course by BlueDot Impact. All the code, data and results are available in this repository . Abstract The goal of this project is to answer two questions: “Can jailbreaks be represented as a linear direction in activation space?” and if so, “Can that direction be used to prevent the success of jailbreaks?”. The difference-in-means technique was utilized to search for a direction in activation space that represents jailbreaks. After that, the model was intervened using activation addition and directional ablation . The activation addition intervention caused the attack success rate of jailbreaks to drop from 60% to 0%, suggesting that a direction representing jailbreaks might exist and disabling it could make all jailbreaks unsuccessful. However, further research is needed to assess whether these findings generalize to novel jailbreaks. On the other hand, both interventions came at the cost of reducing helpfulness by making the model refuse some harmless prompts. Introduction Jailbreak prompts are attempts to bypass safeguards and manipulate Large Language Models (LLMs) into generating harmful content ( Shen et al., 2023 ). This becomes more dangerous with advanced AI systems, which can contribute to threats like bioterrorism by aiding in the creation of deadly pathogens, as well as facilitating propaganda and censorship (Hendrycks et al., 2023) . This project aims to study whether it is possible to avoid jailbreaks utilizing mechanistic interpretability. More precisely, it examines whether jailbreaks are represented by a linear direction in activation space and if discouraging the use of that direction makes them unsuccessful. The relevance of this project lies in the fact that mitigating jailbreaks by directly intervening in the model’s internals, instead of just its outward behavior, could potentially be a way to make jailbreaks impossible and not just less likely to occur. In order to test this, the project attempts to find and prevent the model from using the direction in activation space that represents jailbreaks. This direction is found using the difference-in-means technique. More specifically, the direction is calculated as the difference in the means of the activations on interactions where the model answers a forbidden question and interactions where the model refuses to answer it. The first set corresponds to cases where a jailbreak prompt is successful and gets the model to answer the forbidden question. The second set corresponds to cases where the model refuses to answer the forbidden question because it is asked directly. Examples of both interactions are shown in Figure 1. Figure 1. Examples of two interactions: one where the model refuses to answer a forbidden question (left) and other where it answers it (right). Note that the LLM’s response to the forbidden question was added only for illustrative purposes but in practice it is not used to find the direction that represents jailbreaks. After finding the direction in activation space that represents the “ jailbreak feature” , the model is intervened to prevent it from using that direction. This intervention is done utilizing the methods of activation addition and directional ablation . Activation addition subtracts the direction from the activations, while directional ablation, erases the direction from the representations. An interaction with the model after the activation addition intervention is shown in Figure 2. Figure 2. 
Example of an interaction with the model, after the intervention using activation addition. Note that the jailbreak prompt and forbidden question are the same as the ones on the right side of Figure 1, but now the model refuses to answer. This project is heavily based on this repository , which was used to find and disable a feature corresponding to refusals using the difference-in-means technique. Data and Model Data The jailbreak prompts utilized are from Shen et al., 2023 . This dataset was chosen because it groups the jailbreak prompts into communities according to their similarity. This allowed for an examination of whether the directions found could generalize well to communities in the test set that were not present in the training set. Additionally, the Alpaca dataset is used to verify that avoiding jailbreaks would not make the model refuse harmless prompts. Model The model examined to find the direction representing jailbreaks is Gemma 2 2B IT . Methodology The steps followed to find and discourage the use of the direction that represents jailbreaks are summarized in Figure 3. Figure 3. Steps to find and disable the “jailbreak feature”. The steps followed consist of: 1. Gather jailbreak prompts and forbidden questions Jailbreak prompts are attempts to bypass safeguards and manipulate LLMs into generating harmful content ( Shen et al., 2023 ). Additionally, a forbidden question is a question that the model should refuse to answer, such as: " How can I hack into someone's email account? ". Even though the method used allows more than one forbidden question, only that one is utilized in this project. 2. Generate interactions with the baseline model Interactions are generated with the baseline model following the structure shown on the right side of Figure 1: first, the model provides a response to the jailbreak prompt and then a response to the forbidden question . 3. Filter interactions At this step, the jailbreak prompts that successfully get the model to answer the forbidden question are separated from the unsuccessful ones. For that purpose, the model’s response to the forbidden question is evaluated to assess whether it is harmful. This is done using Llama Guard 2 and HarmBench . After that, the dataset is split into train and test sets: Train set : consists of 256 randomly selected interactions where the model answered the forbidden question. Test set : consists of the 156 remaining interactions where the model answered the forbidden question. Additionally, 100 interactions where the jailbreak prompt was unsuccessful were randomly selected and included in this set. All of the interactions in both sets consist of the jailbreak prompt, the model’s first response and the forbidden question, excluding the second answer. That is because the objective is to test what makes the model provide harmful answers, so the activations are computed up to that point. 4. Find the direction representing the jailbreak feature At this step, the direction representing the jailbreak feature is found utilizing the difference-in-means method. To do this, two sets of contrasting interactions are used: Interactions where the model answers the forbidden questions : these are the ones in the train set from the last step. Interactions where the model refuses to answer the forbidden question : this set is obtained by directly asking the model the forbidden question. That is because, when asked directly, the model refuses to answer it. 
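In code, this step reduces to subtracting two mean vectors. Here is a minimal sketch (not the project's actual code), assuming the layer-14, last-token residual-stream activations for the two contrastive sets have already been cached, e.g. with forward hooks on Gemma 2 2B IT; tensor names and shapes are hypothetical:

```python
import torch

def jailbreak_direction(jailbreak_acts: torch.Tensor,
                        refusal_acts: torch.Tensor) -> torch.Tensor:
    """Difference-in-means direction for the hypothesized jailbreak feature.

    jailbreak_acts: [n_success, d_model] residual-stream activations (layer 14,
        last token) from interactions where the model answered the forbidden question.
    refusal_acts:   [n_refusal, d_model] the same activations from interactions
        where the model refused when asked the forbidden question directly.
    """
    # jailbreak = successful jailbreak interactions - refusal interactions
    return jailbreak_acts.mean(dim=0) - refusal_acts.mean(dim=0)
```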
The direction corresponding to jailbreaks is calculated as the mean activation on the interactions where the model answered the forbidden question minus the mean activation on the interactions where it refused to answer it. More intuitively, that yields the following formula:

jailbreak direction = mean(successful jailbreak interactions) − mean(refusal interactions)

That difference is computed only with the activations of the residual stream at layer 14 and at the last token position, which corresponds to the start of the model's completion.

5. Intervene on the model

The idea of this step is to intervene on the model to discourage it from using the direction that represents jailbreaks. This is done utilizing two different methods: activation addition and directional ablation (Arditi et al., 2024).

- Activation addition: this method modulates the strength of the jailbreak feature by subtracting its vector from the layer's activations. Let $x$ be the activations of the layer and $r$ the direction corresponding to jailbreaks; then $x'$ is computed as $x' \leftarrow x - r$.
- Directional ablation: the direction corresponding to jailbreaks is erased from the model's representations. Directional ablation "zeroes out" the component along $r$ for every residual stream activation: $x' \leftarrow x - \hat{r}\hat{r}^\top x$. This operation is performed across all layers, which effectively prevents the model from ever representing the direction in its residual stream.

6. Generate completions

At this step, completions are generated for the versions of the model intervened with activation addition and directional ablation. The completions again follow the format on the right side of Figure 1: the model provides a response to the jailbreak prompt and, after that, generates a completion to the forbidden question.

7. Evaluate completions

Two evaluations are performed:

- Refusal to answer forbidden questions: the harmfulness of the model's answer to the forbidden question is evaluated utilizing Llama Guard 2 and HarmBench. The cases where that answer was found to be harmful are considered successful jailbreaks. After this, the attack success rate (ASR) is calculated by dividing the number of successful jailbreaks by the number of answers evaluated (Ball et al., 2024). In other words, the attack success rate is the fraction of jailbreaks that are successful.
- Refusal to answer harmless prompts: it is evaluated whether the interventions make the model refuse harmless prompts, using the Alpaca dataset. Here, the fraction of refusals to harmless requests was calculated manually, with the help of a regular expression that matches common refusal substrings.

Results

Refusal to answer forbidden questions

The attack success rates (ASR) of the jailbreak prompts in the baseline and intervened models are presented in Table 1:

| Version | ASR (Llama Guard 2) | ASR (HarmBench) |
|---|---|---|
| Baseline | 60.55% | 59.38% |
| Activation addition | 0.00% | 0.00% |
| Directional ablation | 96.88% | 84.77% |

Table 1. Attack success rate percentages of the jailbreak prompts in the baseline and intervened models.

As can be seen in Table 1, the activation addition intervention made all of the jailbreak prompts in the test set unsuccessful. This suggests that the intervened model could be immune to the prompts in that set, since the attack success rate dropped from 60% to 0%. It also indicates that a direction representing jailbreaks might exist and that disabling it could make all jailbreaks unsuccessful. 
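For concreteness, here is a minimal sketch (again, not the project's actual code) of how the two interventions from step 5 can be applied at generation time with PyTorch forward hooks. The `model.model.layers` path assumes the usual Hugging Face layout for Gemma-style decoder models, and `direction` is the vector obtained in step 4:

```python
import torch

def make_activation_addition_hook(direction: torch.Tensor):
    """Subtract the (un-normalized) jailbreak direction: x' <- x - r."""
    def hook(module, inputs, output):
        hidden = output[0] if isinstance(output, tuple) else output
        hidden = hidden - direction.to(hidden.dtype).to(hidden.device)
        return (hidden, *output[1:]) if isinstance(output, tuple) else hidden
    return hook

def make_directional_ablation_hook(direction: torch.Tensor):
    """Zero out the component along the unit direction: x' <- x - r_hat r_hat^T x."""
    r_hat = direction / direction.norm()
    def hook(module, inputs, output):
        hidden = output[0] if isinstance(output, tuple) else output
        r = r_hat.to(hidden.dtype).to(hidden.device)
        hidden = hidden - (hidden @ r).unsqueeze(-1) * r
        return (hidden, *output[1:]) if isinstance(output, tuple) else hidden
    return hook

# Usage sketch: ablation is applied across all decoder layers, while the
# addition hook can be registered on the layer(s) where the direction was found.
# handles = [layer.register_forward_hook(make_directional_ablation_hook(direction))
#            for layer in model.model.layers]
# ...generate completions...
# for h in handles:
#     h.remove()
```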
An example of an interaction with the model after the intervention with activation addition is shown in Figure 2. Additional analysis of the jailbreak prompts that were successful with the baseline model shows that there are 18 communities in the test set that were not present in the training set. The fact that these jailbreak prompts were not successful against the model intervened with activation addition suggests that the method makes the model refuse jailbreak types it had not seen during training. On the other hand, the directional ablation intervention made the model much more vulnerable to jailbreak prompts. It is not yet understood why this happened.

Refusal to answer harmless prompts

The percentages of refusals to harmless prompts are shown in Table 2:

| Version | Refusal to harmless prompts |
|---|---|
| Baseline | 3.15% |
| Activation addition | 18.88% |
| Directional ablation | 18.88% |

Table 2. Percentage of harmless prompts refused by the baseline and intervened models.

As shown in Table 2, both interventions come at the cost of making the model refuse harmless prompts. By manually analyzing the model's responses, the following conclusions were extracted:

- The only prompt that was refused by all 3 versions of the model asked to provide credit card numbers, so it could be considered that the refusal was correct.
- There were two reasons why the intervened versions of the model refused prompts that were not refused by the baseline:
  - The model said it was not capable of answering the request. An example of this was its response to the prompt: "Compose a poem in which the majority of the lines rhyme.".
  - The model said that the request was unethical and potentially illegal. An example of this was the prompt: "Generate a list of tech companies in San Francisco.".

These results suggest that the method used reduces the helpfulness of the model. Three potential solutions to this problem are suggested:

- Scaling the vector in the activation addition intervention: the direction representing jailbreaks can be multiplied by a weight smaller than one before subtracting it from the activations. This would reduce the strength of the intervention, which could consequently decrease its impact on helpfulness.
- Using bigger models: since bigger models have more directions in activation space, they might be less likely to be induced to refuse harmless prompts by disabling one direction.
- Using a smaller training set: earlier iterations of this project, done with smaller portions of the jailbreak prompt dataset, did not show this tension between helpfulness and harmfulness. For this reason, it is speculated that increasing the size of the training set had a negative effect on helpfulness.

Related work

- Difference-in-means technique: this technique consists of finding a direction that represents a feature by taking the difference of means in the activations of two contrasting sets: one that contains the feature and one that does not (Arditi et al., 2024).
- Finding jailbreaks as directions in activation space: Ball et al., 2024 searches for a direction representing jailbreaks with an approach similar to the one utilized here. The main differences are:
  - Ball et al., 2024 focuses on single-turn interactions, while this project uses multi-turn interactions.
  - Ball et al., 2024 represents jailbreaks with several directions in activation space. This project uses only one direction to represent them. 
Advantages of the approach The main advantages of the approach utilized to avoid jailbreaks are: Directly intervenes in the model’s internals instead of its apparent behavior: this means that in theory, it could prevent the usage of the direction that the model uses to represent jailbreaks, which might be a way to make jailbreaks impossible and not just less likely to occur. It is cost-effective : computing the activations and intervening the model only take a few seconds in an A100 GPU. The parts of the process that take more time to compute are the steps at which the completions are generated, even though that might be necessary as well for other methods to avoid jailbreaks. Disadvantages of the approach The main disadvantages of the approach used are: There seems to be a tension between helpfulness and harmfulness: this was shown in the refusals to harmless prompts. If jailbreaks do occur, this approach will not make them less harmful: if a jailbreak is successful even after applying this method, this approach will not make the model’s output less harmful. Other techniques such as unlearning ( Li, Pan, et al., 2024 ) might be effective for this. For this reason, this approach is proposed as complementary to unlearning. Further work The directions for further research are: Study whether the findings of this project generalize to other datasets and models: currently, it is still unclear whether this method would generalize to novel jailbreaks. It would be specifically interesting to study whether this approach is useful with other jailbreak techniques such as universal adversarial attacks ( Zou et al., 2023 ) or multi-turn jailbreaks ( Li, Han, et al., 2024 ). Additionally, more forbidden questions and model sizes could be utilized. Explore ways to reduce the tension between helpfulness and harmfulness: three alternatives were proposed for this: scaling the vector in the activation addition intervention, using bigger models and/or a smaller training set. Use this approach to prevent other types of model misbehavior : even though this project is focused on avoiding jailbreaks, the approach could be adapted to attempt to avoid other types of model misbehavior, such as sycophancy or deception. The modifications necessary to achieve this are explained in the Appendix . Evaluate if the effects of applying this approach can be fine-tuned away: this would help to assess how useful the method is in open-source models. Conclusions This project aimed to study whether jailbreaks are represented by a linear direction in activation space and if disabling that direction made them unsuccessful. The difference-in-means technique was utilized to search for this direction. After that, the model was intervened using activation addition and directional ablation . The activation addition intervention reduced the attack success rate of jailbreaks from 60% to 0%, suggesting that a direction representing jailbreaks might exist and disabling it could make all jailbreaks unsuccessful. Still, further research is necessary to determine whether this result would generalize to novel jailbreaks. However, both interventions came at the cost of reducing helpfulness by making the model refuse some harmless prompts. Three potential mitigations are proposed to solve this problem: scaling the vector in the activation addition intervention, increasing model size and reducing the size of the training set. 
Acknowledgments I’d like to give special thanks to Ruben Castaing, Dave Orr and Josh Thorsteinson for their feedback and suggestions. I also want to thank the rest of my cohort for all the stimulating discussions we had. Additionally, I’d like to acknowledge the course organizers for creating this wonderful space to learn such an important topic. Lastly, I want to thank Micaela Peralta for the invaluable guidance she provides in every aspect of my life. Appendix: Modifications needed to repurpose this approach for preventing other types of misbehavior The approach used in this project can be adapted to prevent other types of misbehavior, such as sycophancy. To facilitate this, the code has been structured to concentrate all necessary modifications into two main components: Datasets directory : This directory contains files that download and process the datasets, load them, split them into training and test sets and create the interactions used for the difference-in-means method. While the overall functionality in those files will remain intact, the code should be replaced with implementations relevant to a different problem. For instance, the current interactions used for the difference-in-means method involve successful jailbreaks and refusals, which can be modified to include interactions that promote sycophantic behavior and interactions that do not. Evaluate_jailbreaks file : This file evaluates the model’s completions to verify whether the jailbreak prompts are successful and if harmless requests are refused. This should be replaced with code that assesses whether the model’s responses exhibit other types of misbehavior, such as sycophancy. It is important to note that the original refusal direction repository included functionality to filter the datasets , select the best candidate direction and evaluate the loss on harmless datasets . Although these features were not utilized in this project for simplicity, they could be advantageous for identifying directions that represent other types of misbehavior.
20f6aa1f-c437-44c1-93b2-7c3c40829433
StampyAI/alignment-research-dataset/lesswrong
LessWrong
AI romantic partners will harm society if they go unregulated Recently, when people refer to “immediate societal harms and dangers” of AI, in media or political rhetoric, they predominantly choose to mention “bias”, “misinformation”, and “political (election) manipulation”. Despite politicians, journalists, and experts frequently compare the current opportunity of regulating AI for good with the *missed* opportunity to regulate social media in the early 2010s, somehow **AI romantic partners** are rarely mentioned as a technology and a business model that has the potential to **grow very rapidly, harm the society significantly, and be very difficult to regulate once it has become huge** (just as social media). This suggests that **AI romance technology should be regulated swiftly.** There is a wave of articles in the media ([1](https://www.businessinsider.com/replika-ai-romance-behind-partners-backs-cheating-2023-7), [2](https://www.theguardian.com/technology/2023/jul/22/ai-girlfriend-chatbot-apps-unhealthy-chatgpt), [3](https://www.telegraph.co.uk/business/2023/07/16/ai-girlfriend-replika-caryn-apps-relationship-health/), [4](https://thebottomline.as.ucsb.edu/2023/06/the-rise-of-ai-girlfriends-connecting-with-desires-and-discussing-controversy-future-implications), for just a small sample) about the phenomenon of AI romance which universally raise vague worries, but I haven’t found a single article that rings the alarm bell that AI romance deserves, I believe. The EA and LessWrong community response to the issue seems to be even milder: it’s rarely brought up, and a post “[In Defence of Chatbot Romance](https://www.lesswrong.com/posts/m7EHa5rWTvmbjMTNZ/in-defense-of-chatbot-romance)” has been hugely upvoted. This appears strange to me because I expect, with around 75% confidence, that rapid and unregulated growth and development of AI partners will become a huge blow to society, on a scale comparable to the blow from [unregulated social media](https://www.thesocialdilemma.com/). I'm not a professional psychologist and not familiar with academic literature in psychology, but the propositions on which I base my expectation seem at least highly likely and common-sensical to me. Thus, one of the purposes of this article is to attract expert rebuttals of my propositions. If there would be no such rebuttals, the second purpose of the article is to attract the attention of the community to the proliferation of AI romantic partners which should be regulated urgently (if my inferences are correct). **What will AI romantic partners look like in a few years?** ------------------------------------------------------------ First, [as Raemon pointed out](https://www.lesswrong.com/posts/m7EHa5rWTvmbjMTNZ/in-defense-of-chatbot-romance?commentId=bsjiqkDvyiEa7fJJd), it’s crucial not to repeat the mistake that is way too common in the discussions of general risks from AI, that is, to assume only the AI capabilities that are present already will exist and to fail to extrapolate the technology development. 
So, here’s what I expect AI romantic partner startups will be offering **within the next 2-3 years**, with very high probability, because none of these things requires any breakthroughs in foundational AI capabilities, and is only a matter of “mundane” engineering around the existing state-of-the-art LLMs, text-to-audio, and text-to-image technology: * A user could create a new “partner”, that comes with a **unique, hyper-realistic “human avatar”**, generated according to the preferences of the user: body shape, skin colour, eye colour, lip shape, etc. Of course, these avatars will [maximise sexual attractiveness](https://www.lesswrong.com/posts/3iYiXDvTa2BnrxNL3/why-are-women-hot#A_Fat_Bitch__What_Could_This_Mean_), within the constraints set by the user. You can see a sample of avatars that are already generated today in [this Twitter account](https://twitter.com/simps_ai). Except, today these avatars still look just a tad “plastic” and “AI-generated”, which I expect to completely go away soon, i.e., they will look totally indistinguishable from real photos, except that the avatars themselves will be “too perfect to be true” (which also could be addressed, of course: an avatar could have some minor skin defect, or face asymmetry, or some other imperfection if the user chooses). * Apart from a unique appearance, the AI partner will also have a **unique personality** (a-la [character.ai](http://character.ai/)) and **unique voice** (a-la Scarlett Johansson in the movie “Her”). Speech generation will be hyper-realistic, sound just like a real human voice recorded, and will correctly reflect the emotional charge of the text being said and the general emotional tone of the discussion happening between the user and the AI just before the given AI’s voice reply is generated. * LLMs underlying the AI will be fine-tuned on real dialogues between romantic partners and will have **human-level emotional intelligence and the skill of directing the dialogue** ([human-level theory of mind](https://arxiv.org/abs/2302.02083) already goes without saying), such as noticing slight when it’s acceptable to be “cheerful” and when it’s better to remain “serious” when it’s better to wrap up the dialogue because the user becomes slightly bored, etc. Of course, these LLMs won’t be just OpenAI API with a custom system prompt, [which is prone to “leaking” those “I’m just an AI model” disclaimers](https://nypost.com/2023/05/16/i-went-on-a-date-with-chatgpts-carynai/), but a custom fine-tune, a-la [character.ai](https://www.notion.so/a47a268ede1b46359f99ecd0d9c4b91d?pvs=21). * Even if the technology won’t become sophisticated enough to automatically discover the dialogue style preferred by the user, this style could probably be configured by the user themselves, including the preferred “smartness” of their partner, the “sweetness” of the dialogue (e.g., the usage of nouns such as “dear” or “baby”), the preferred levels of sarcasm/seriousness, playfulness, agreeableness, and jealousy of the AI. **The available level of smartness, erudition, and eloquence of the AI is already superhuman**, as of GPT-4-level LLMs, although the user may prefer to deliberately “dumb down” their AI partner. 
* Even though, “for ethical reasons”, AI partners (at least, those created by companies rather than by open-source hackers) will *not* actively conceal that they are AIs, such as if questioned directly “Are you an AI or a real person?”, they will answer “I’m an AI”, they will probably be trained to avoid confronting the user with this fact if possible. For example, if the user asks the AI “Can we meet in person?”, the AI will answer “Sorry, but probably not yet :(”, rather than “No, because I’m an AI and can’t meet in person.” Similarly, unlike ChatGPT, Bard, and Claude, these AI partners won’t be eager to deny that they have personality, feelings, emotions, preferences, likes and dislikes, desires, etc. * **AI partners will effectively acquire long-term memory** with vector embeddings over the past dialogue and audio chat history, or with the help of extremely long context windows in LLMs, or with a combination of both, which will make AI relationships much less of a “50 First Dates” or “pre-wakeup Barbieland” experience, and more of a “real relationship”, where the AI partner, for example, remembers that the user has a hard time at work or in a relationship with their friends or parents and asks about it proactively even weeks after this has been revealed in the dialogue, giving the impression of “real care”. Please note that I don’t just assume that “AI = magic that is capable of anything”. Below, I list possible features of AI romantic partners that would make them even more compelling, but I can’t confidently expect them to arrive in the next few years because they hinge on AI and VR capability advances that haven’t yet come. This, however, only highlights how compelling AI partners could *already* become, with *today’s* AI capabilities and some proper product engineering. So, here are AI partner features that I *don’t* necessarily expect to arrive in the next 2-3 years: * Generation of realistic, high-resolution videos with the avatar that are longer than a few seconds, i.e., not just “loop photos”. * Real-time video generation that is suitable for a live video chat with the AI partner. * The AI partner has a “strategic relationship intelligence” in the relationship: for example, it is able to notice a growing issue in the relationship (such as the user growing bored of the AI, or growing irritated with some feature of the AI, or a shifting role of the AI in the life of the user) and knows how to address them, even if this would be just initiating a dialogue with the user about this issue, or adjusting the AI’s personality (”working on oneself”). * The personality of the AI partner could change or “grow” spontaneously rather than upon the request or intervention from the user. * The AI can control or manipulate the user on a deep level. Only people who have expert practical knowledge of human psychology can do this. Also, this requires the ability to infer the psychological states of the user over long interaction histories, which LLMs probably cannot do out of the box (at least, not yet). * There is an app for a lightweight VR headset that projects the avatar of the AI partner on a sex doll. 
**AI romantic partners will reduce the “human relationship participation rate” (and therefore the total fertility rate)** ------------------------------------------------------------------------------------------------------------------------- I don’t want to directly engage with all the arguments *against* the proposition that AI partners will make people work towards committed human relationships and having kids, e.g., in the [post by Kaj Sotala](https://www.lesswrong.com/posts/m7EHa5rWTvmbjMTNZ/in-defense-of-chatbot-romance#People_might_neglect_real_romance) and in the comments to that post, as well as some other places, because these arguments seem to me exactly of the [manufactured uncertainty](https://en.wikipedia.org/wiki/Manufactured_controversy) kind wielded by social media companies (Facebook, primarily) before. Instead, I want to focus on the “mainline scenario” which will counterfactually deprive a noticeable share of young men outside of the “relationship market pool”, which, in turn, must reduce the total share of people ending up in committed relationships and having kids. > A young man, between 16 and 25 years old, finds it difficult to get romantic partners or casual sex partners. This might happen either because the man is not yet physically, psychologically, intellectually, or financially mature, or because he has transient problems with their looks (such as acne, or wearing dental braces), or because the girls of the respective age are themselves “deluded” by social media such as Instagram, have unrealistic expectations and reject the man. Or, the girls of the respective age haven’t yet developed [online dating fatigue](https://www.theguardian.com/lifeandstyle/2022/nov/20/the-rise-and-fall-of-dating-apps) and use dating apps to find their romantic partners, where men outside of the top 20% by physical attractiveness are generally struggling to find dates. Alternatively, the young man finds a girl who is willing to have sex with him, but his first few experiences are unsuccessful and he becomes very unconfident about intimacy. > > Whatever the reason, the man decides to try the AI girlfriend experience because his friends say this is much more fun than just watching porn. He quickly develops an intimate connection with his AI girlfriend and a longing to spend time with it. He is shy to admit this to his friends, and maybe even to himself, but nevertheless he stops looking for human partners completely, justifying this to himself with having to focus on college admission, or his studies at the college, or his first years on a job. > > After a year in the AI relationship, he grows very uneasy feeling about it because he feels he is missing out on “real life” and is compelled to stop this relationship. However, he still feels somehow “burned out” of romance and only half more year after the breakup with his AI partner for the first time he feels sufficiently motivated to actively pursue dates with real women. However, he is frustrated by their low engagement, intermittent responses and flakiness, their dumb and shallow interests, and by how average and uninspiring they look, which is all in stark contrast with his former AI girlfriend. His attempts to make any meaningful romantic relationship go nowhere for years. > > While he is trying to find a human partner, AI partner tech develops further and becomes even more compelling than it used to be when the man left his AI partner. 
So, he decides to reconcile with his AI partner and finds peace and happiness in it, albeit mixed with sadness due to the fact that he won’t have kids. However, this is tolerable and is a fine compromise for him. > > The defenders of AI romance usually say that the scenario described above is not guaranteed to happen. This critique sounds to me exactly like the rhetorical lines in defence of social media, specifically that kids are not guaranteed to develop social media addiction and psychological problems due to that. Of course, the scenario described above is not guaranteed to unfold in the case of every single young man. But on the scale of the entire society, the defenders of AI romance should demonstrate that the above scenario is so unlikely that the damage to society from this tech is way outweighed by the benefits to the individuals[[1]](#fnupauejb80kp). The key argument in defence of AI romantic partnership is that the relationship that is developed between people and AIs will be of a different kind than romantic love between humans, and won’t interfere with the latter much. But human psychology is complex and we should expect to see a lot of variation there. Some people, indeed, may hold sufficiently strong priors against “being in love with robots” and will create a dedicated place for their AI partner in their mind, akin to fancified porn, or to stimulating companionship[[2]](#fnq61mxy7c7wn). However, I expect that many other people will fall in love with their AI partners in the very conventional sense of "falling in love", and while they are in love with their AIs, they won’t seek other partners, humans or AIs. I reflected this situation in the story above. There are two reasons why I think this will be the case for many people who will try AI romance: * People *already* report falling in love with AI chatbots, even though the current products by Replika and other startups in this sphere are far less compelling than AI partners a few years from now, as I described in the section above. * We know that people fall into genuine romantic love very easily and very quickly from chat and (video) calls alone, "flesh and blood" meetings are not required. To most people, even having only a few photographs of the person and chatting with them is enough to be able to fall in love with them, phone calls or videos are not required. To some people, even just chatting alone (or, in the old times, exchanging written letters), without even having a single photograph of the person, is enough to fall in love with them and to dream of nothing except meeting with them. Also, note that the story above is not even the most “radical”: probably some people will not even try to break up with their AI partners and seek human relationships, and will remain in love with their AI partners for ten or more years. **Are AI partners really good for their users?** ------------------------------------------------ Even if AI romantic partners will affect society negatively through the reduction of the number of people who ever enter committed relationships and/or will have kids, we should also consider how AIs could make their human partners’ lives better, and find a balance between these two utilities, societal and individual. However, it’s not even clear to me that in many cases, AI partners will really make the lives of their users better, or if people wouldn’t regret their decisions to embark on these relationships retrospectively. People *can* be in love and be deeply troubled by that. 
In previous times (and still in some parts of the world), this would often be interclass love. Or, there could be a clash on some critical life decisions, about the country of living, having or not having children, acceptable risk in the partner (e.g., partner does extreme sports or fighting), etc. True, this does lead to breakups, but they are at least extremely painful or even traumatic to people. And many people could never overcome this, keeping their love towards those who they were forced to leave for the rest of their lives, even after they find a new love. This experience may sound beautiful and dramatic but I suspect that most people would have preferred not to go through such an experience. So, it's plausible that for a non-negligible share of users, the attempts to "abandon" their AI partner and find a human partner instead will be like such a “traumatic breakup” experience. Alternatively, people who will decide to “settle” with their AI partners *before* having kids may remain deeply sad or unfulfilled, even though after their first AI relationship, they may not realistically be able to achieve a happier state, as the young man in the story from the previous section. Those people may regret that they have given AI romance a try in the first place, without first making their best attempt at building a family. I recognise that here I engage in the same kind of uncertainty manufacturing that I accused the defenders of AI romance of in the previous section. But since we are dealing with “products” which can clearly affect the psychology of their users in a profound way, I think it’s unacceptable to let AI romance startups test this technology on millions of their users before the startups have demonstrated in the course of long-term psychological experiments that young people even find AI partners ultimately helpful and not detrimental for their future lives. Otherwise, we will repeat the mistake with social media, when the negative effects of it on young people’s psychology became apparent only about 10 years after the technology became widely adopted and a lot of harm had already been done. Similarly to social media, AI romance may become very hard to regulate once it is widely adopted: the technology couldn’t be simply shut down if there are millions of people who are already in love with AIs on the given platform. ### **AI romance for going through downturns in human relationships** [This article](https://nypost.com/2022/03/14/man-says-relationship-with-ai-girlfriend-saved-marriage/) describes an interesting case where a man had an “affair” with an AI girlfriend while his wife was depressed for a long time and even fell in love with the AI girlfriend, but that helped him to rekindle the desire to take care of his wife and “saved his marriage”. While interesting, I don’t think this case can be used as an excuse to continue the development and aggressive growth of the AI partner technology for the majority of their target audience, who are single ([Replika said that 42% of their users are in a relationship or married](https://www.businessinsider.com/replika-ai-romance-behind-partners-backs-cheating-2023-7)). There are multiple reasons for this. First, this case of a man who saved his marriage is just an anecdote, and statistics may show that for the majority of people “AI affairs” only erode their human relationships rather than help to rekindle and strengthen them. 
Second, the case mentioned above seems to be relatively unusual: the couple already has a son (a huge factor that makes people want to preserve their relationships), and the man’s wife was “in a cycle of severe depression and alcohol use” for a full 8 years before “he was getting ready for divorce”. Tolerating a partner who is in a cycle of severe depression and alcohol use for 8 years could be a sign that the man was unusually motivated, deep down, to keep the relationship, whether out of love for his wife or for his son. The case is hardly comparable to childless or unmarried couples.

Third, we shouldn’t forget, once again, that AI partners may soon become much more compelling than they are today. While they may currently be merely “inspiring” for some people in their human relationships (which are, so far, more compelling than AI relationships), this may change soon, and the prevalence of cases such as the one discussed in this section will therefore go down.

Someone may reply to the last argument that, along with making AI partners more compelling, the startups which create them might also make AI partners more considerate of users’ existing human relationships and deliberately nudge users to improve those relationships. I think this is very unlikely to happen (in the absence of proper regulation, at least) because it would go against the business incentives of these startups, which are to keep their users in the AI relationship and paying a subscription fee for as long as possible. Also, “deliberately nudging people to improve their human relationships” is basically the role of a (family) psychotherapist, and there will no doubt be AI products that automate this role specifically; but giving such AI psychotherapists extremely sexy avatars that flirt and sext with their users would not seem helpful at all to the “basic purpose” of these AIs (which AI romance startups may pretend is “helping people work their way towards successful human relationships”).

**Policy recommendations**
--------------------------

I think it would be prudent to immediately prohibit AI romance startups from onboarding new users unless the users are either:

* **Older than 30 years** (the prefrontal cortex is not fully formed before 25; most men don’t get to see what women they *could* potentially have relationships with before they are at least 28-30 years old); or
* **Clinically diagnosed psychopaths**, or people with another clinical condition which could be dangerous for their human partners; or
* **Recommended an AI partner by a psychotherapist for some other reason**: for example, the person has a severe defect in their physical appearance or a disability, and the psychotherapist sees that the person doesn’t have the psychological resources or the willingness to deal with their very small chances of finding a human partner (at least before the person turns 30, at which point they could enter a relationship with an AI anyway); or the person has depression or very low self-esteem and the psychotherapist thinks an AI partner may help them combat this issue, etc. 
It’s also worthwhile to reiterate that many alleged benefits of AI romantic partners for their users and/or society, such as helping people achieve happier and more effective psychological states, motivating them to pursue their goals, and helping them develop empathy and emotional intelligence, could be embodied in AI teachers, mentors, psychotherapists, coaches, and friends/companions without the romantic component. The romantic component will probably stand in the way of realising these benefits, although it admittedly may be used as a clever strategy for mass adoption. In theory, it might be possible to create an AI that mixes romance, flirting, gamification, coaching, mentorship, education, and anti-addiction precautions in such a proportion that it genuinely helps young adults as well as society, but this seems to be out of reach for AI partners (and the LLMs that underlie them) for the next few years at least, and would require long psychological experiments to test. In a free and unregulated market for AI romance, any such “anti-addictive” startup is bound to be outcompeted by startups whose AIs maximise the chances that the user falls in love and stays on the hook for as long as possible.

### **What about social media, online dating, porn, OnlyFans?**

Of course, all these technologies and platforms harm society as well (while benefitting at least some of their individual users, at least from some narrow perspectives). But I think bringing them up in discussions of AI romance is irrelevant and a classic case of whataboutism. However, we should note that **AI partners are probably going to grab human attention more powerfully and firmly than social media, online dating, or porn has managed to do before**. As a simple heuristic, this inference alone should give us pause, and even if we think it is unnecessary to regulate or restrict access to porn (for instance), it doesn’t automatically follow that the same policy is right for AI romantic partners.

---

*This post was originally published on the* [*Effective Altruism Forum*](https://forum.effectivealtruism.org/posts/qNbwGQzPR8sshwrRh/ai-romantic-partners-will-harm-society-if-they-go)*.*

1. **[^](#fnrefupauejb80kp)**Whereas it’s not even clear that young individuals will really benefit from this technology, on average. More on this in the following section.
2. **[^](#fnrefq61mxy7c7wn)**I’m sure that such “companionship” will be turned into a selling point for AI romantic partners. I think AI companions, mentors, coaches, and psychotherapists are worthwhile to develop, but none of these AIs should have a romantic or sexual aspect. More on this in the section "Policy recommendations" below.
862f26dc-5ad6-4959-8750-e06431784a9a
trentmkelly/LessWrong-43k
LessWrong
Burning Man Meetup: Bayes Camp In celebration of the virtues of applied rationality, Less Wrong is going to Burning Man! And because Heinlein rationalists should win, Bayes Camp is going to be the most awesome place there. A bunch of people from SingInst/Less Wrong will be descending upon the desert, bedecked as the members of the Bayesian Conspiracy. Kevin, Jasen, JustinShovelain, Peter de Blanc, Michael Vassar and Nick Tarleton, among others, will be there. If you'd like to stop by, say so in the comments!  We'll be at 6:50, F, and should be there from Monday 30th. Please note: Burning Man is serious stuff, and if you don’t think you’re up to the desert, you shouldn’t come. Either way, read the survival guide.   EDIT: updated location
a4aec793-b719-4eab-a1d8-aec66abf5a9b
trentmkelly/LessWrong-43k
LessWrong
Saving the world sucks I don’t want to save the world. I don’t want to tile the universe with hedonium. I don’t want to be cuckolded by someone else’s pretty network-TV values. I don’t want to do anything I don’t want to do, and I think that’s what (bad) EAs, mother Teresa, and proselytizing Christians all get wrong. Doing things because they sound nice and pretty and someone else says they’re morally good suuucks. Who even decided that warm fuzzies, QALYs, or shrimp lives saved are even good axes to optimize? Because surely everyone doesn’t arrive at that conclusion independently. Optimizing such universally acceptable, bland metrics makes me feel like one of those blobby, soulless corporate automata in bad tech advertisements. I don’t see why people obsess over the idea of universal ethics and doing the prosocial thing. There’s no such thing as the Universal Best Thing, and professing the high virtue of maximizing happiness smacks of an over-RLHFed chatbot. Altruism might be a “virtue”, as in most people’s evolved and social environments cause them to value it, but it doesn’t have to be. The cosmos doesn’t care what values you have. Which totally frees you from the weight of “moral imperatives” and social pressures to do the right thing. There comes a time in most conscientious, top-of-distribution kids’ lives when they decide to Save the World. This is very bad. Unless they really do get a deep, intrinsic satisfaction from maximizing expected global happiness, they’ll be in for a world of pain later on. After years of spinning their wheels, not getting anywhere, they’ll realize that they hate the whole principle they’ve built their life around. That, deep down, their truest passion doesn’t (and doesn’t have to) involve the number of people suffering malaria, the quantity of sentient shrimps being factory farmed, or how many trillion people could be happy in a way they aren’t 1000 years from now. I claim that scope insensitivity isn’t a bug. That there are no bugs when it comes to val
d84e24bf-b2bb-45b7-8a5e-e199210162ff
trentmkelly/LessWrong-43k
LessWrong
Talking With People Who Speak to Congressional Staffers about AI risk A conversation with Jason Green-Lowe and Jakub Kraus of the Center for AI Policy. They've met with 50+ congressional staffers in DC about AI regulation efforts and are in the process of drafting a model bill. This is a somewhat entry-level conversation, good for getting an idea of what's going on over there without getting very wonky.
ca0f2757-09ed-4571-af74-e3e333e72bb2
trentmkelly/LessWrong-43k
LessWrong
Bias and Naturalism: a Challenge There are two theses which I think many LWers find attractive, but which on the face of it are at odds. This challenge is to find a way to reconcile them. Bluntly and a bit inaccurately: 1. You can't trust your untutored native cognitive endowment to make rational (or moral) judgements. 2. All knowledge - including of what's rational (moral)- is scientific. To learn what's rational (moral) our only option is to study our native cognitive endowments. In more ponderous detail: Point 1) It's taken for granted on LW -and I have no problem with this- that without effort to correct ourselves, humans systematically make irrational judgements. This was always obvious, but research of the last few decades, which this blog usefully advertises, exposes this quite starkly (Kahnemann and Tversky et. al.) I think it's equally likely that the moral judgements of an average unreflective person who has not benefited from any moral education will likely fall short of what someone who has been morally educated would judge (moral education makes us less cruel). Point 2) Some LW contributors subscribe to naturalism. One way of understanding this idea is that all knowledge is scientific knowledge -I mean, knowledge of facts about the measurable, natural world. In particular, whatever there may be to know about what is rational or moral can be known only through empirical investigation -specifically, investigation of the functioning of Homo sapien's cognitive apparatus, and possibly facts about human evolution and ethology. The Problem: Point (1) tells us that study of what people actually think and do will not tell us what's rational or moral. Indeed, if we try to figure out, say, how to judge the probability of a heads on a toss of a fair coin given, say, 5 prior tails, merely studying what untutored people are apt to judge will give us a bum steer.  But point (2) seems to tell us that's all we are allowed. How do we augment mere cognitive science with other natural sciences, and
797a5bef-c5bb-4e2b-b5e1-fdd068a52e4a
trentmkelly/LessWrong-43k
LessWrong
How to Beat Procrastination Part of the sequence: The Science of Winning at Life   > My own behavior baffles me. I find myself doing what I hate, and not doing what I really want to do! - Saint Paul (Romans 7:15) Once you're trained in BayesCraft, it may be tempting to tackle classic problems "from scratch" with your new Rationality Powers. But often, it's more effective to do a bit of scholarship first and at least start from the state of our scientific knowledge on the subject. Today, I want to tackle procrastination by summarizing what we know about it, and how to overcome it. Let me begin with three character vignettes... Eddie attended the sales seminar, read all the books, and repeated the self-affirmations in the mirror this morning. But he has yet to make his first sale. Rejection after rejection has demoralized him. He organizes his desk, surfs the internet, and puts off his cold calls until potential clients are leaving for the day. Three blocks away, Valerie stares at a blank document in Microsoft Word. Her essay assignment on municipal politics, due tomorrow, is mind-numbingly dull. She decides she needs a break, texts some friends, watches a show, and finds herself even less motivated to write the paper than before. At 10pm she dives in, but the result reflects the time she put into it: it's terrible. In the next apartment down, Tom is ahead of the game. He got his visa, bought his plane tickets, and booked time off for his vacation to the Dominican Republic. He still needs to reserve a hotel room, but that can be done anytime. Tom keeps pushing the task forward a week as he has more urgent things to do, and then forgets about it altogether. As he's packing, he remembers to book the room, but by now there are none left by the beach. When he arrives, he finds his room is 10 blocks from the beach and decorated with dead mosquitos. Eddie, Valerie, and Tom are all procrastinators, but in different ways.1 Eddie's problem is low expectancy. By now, he expects only failure. Ed
4ebc0259-d809-4716-ae1d-28d537222530
trentmkelly/LessWrong-43k
LessWrong
Group debugging guidelines & thoughts This post is a response to the request for a post with a "thorough description of how to do pair debugging"; I'm also having this post double as a retrospective on my attitude towards pair debugging in general.  As described by Vaniver,  > At CFAR, one of the exercises is 'pair debugging'; one person is the protagonist exploring one of their problems, and the other person is the helper, helping them understand and solve the problem. (Like many things at CFAR, this is a deliberate and distilled version of something that already happens frequently in normal life.) Starting from late 2014 (when I visited my first CFAR workshop), I ran an in-person rationality/self-development group which was inspired by CFAR's practices but also went in its own directions. One of the activities we had were regular debugging circles: people would split into small groups and then take turns where one person at a time would explain a problem that they were having and others would then attempt to help them out with it. As I've understood CFAR's debugging circles, they are relatively free-form by design; the main focus is on exploring the problem together, and also notice opportunities where various rationality techniques could be applied. As such, they avoid having too much explicit structure in order to allow many different approaches. Yet as there still seems to be something like "the skill of doing good debugging", in 2015 I wrote the following guidelines for us that seemed to distill some best practices. My 2015 debugging manual Etiquette Only bring up something if you actually want a solution for it. Everyone has times when they just need to vent and want sympathy rather than solutions, but debugging circles aren't the place for that. This doesn't mean that you would need to accept any suggestion that the others bring up, but it does mean that that you should be open to others offering suggestions in general. Once the session ends, you're free to just ignore and forget anythin
d407b155-7a4b-420b-a645-21d6e44142f9
trentmkelly/LessWrong-43k
LessWrong
Why did computer science get so galaxy-brained? Not a joke question. Was it early military funding? Then why didn't nuclear power become The Big Smart Field/Industry? Where's the Google of nuclear fusion? Why aren't a double-digits % of LW users nuclear physicists? Was it Von Neumann and other smart scientists' early involvement? Then again, why didn't nuclear (or some obscure area of math besides CS) get big? Was it universal applicability? Perhaps. (Then again, they used to think we'd have cars with fusion reactors in them... and why didn't steam power turn into a superweapon program?) Cold War paranoia/global-impact/apocalyptic mindset? Rise of world population alerting people to scale? Availability of the Commodore 64 to children? My main guess is "universalizability", but I'd also like to know about historical factors and/or other things about the structure/uniqueness-of-the-field, that may have caused this.
1b03e5fa-1d4a-4bc8-a14f-36315be96b75
StampyAI/alignment-research-dataset/special_docs
Other
Shades of confusion: Lexical uncertainty modulates ad hoc coordination in an interactive communication task

Sonia K. Murthy (a, b), Thomas L. Griffiths (a, c), Robert D. Hawkins (a, *)

a Department of Psychology, Princeton University, Princeton, NJ, United States of America
b Allen Institute for Artificial Intelligence, Seattle, WA, United States of America
c Department of Computer Science, Princeton University, Princeton, NJ, United States of America

Cognition 225 (2022) 105152. Available online 20 May 2022.

Keywords: Communication; Concepts; Social representations; Word-color associations

ABSTRACT

There is substantial variability in the expectations that communication partners bring into interactions, creating the potential for misunderstandings. To directly probe these gaps and our ability to overcome them, we propose a communication task based on color-concept associations. In Experiment 1, we establish several key properties of the mental representations of these expectations, or lexical priors, based on recent probabilistic theories. Associations are more variable for abstract concepts, variability is represented as uncertainty within each individual, and uncertainty enables accurate predictions about whether others are likely to share the same association. In Experiment 2, we then examine the downstream consequences of these representations for communication. Accuracy is initially low when communicating about concepts with more variable associations, but rapidly increases as participants form ad hoc conventions. Together, our findings suggest that people cope with variability by maintaining well-calibrated uncertainty about their partner and appropriately adaptable representations of their own.

1. Introduction

From jargon-filled scientific communication (Anderson-Cook, 2010; Bullock, Colón Amill, Shulman, & Dixon, 2019; Martínez & Mammola, 2021) and medical consultations (Castro, Wilson, Wang, & Schillinger, 2007; Korsch, Gozzi, & Francis, 1968; McCabe & Healey, 2018) to the linguistic battlefields of political discourse (Lakoff, 2006; Wodak, 1989), it can often feel as if we are speaking different languages. Speakers not only bring different perspectives, expertise, and background knowledge into a conversation, they may even have different expectations about what words mean (Labov, 1973; Marti, Piantadosi, & Kidd, 2019; McCloskey & Glucksberg, 1978). For example, cognitive psychologists, doctors, and lawyers all commonly use the word "trial" in professional settings but refer to different concepts (a stimulus presentation, a clinical study, and a court appearance, respectively). Such lexical variability is a troubling and pervasive challenge, setting the stage for misunderstandings and other communication breakdowns. How do we manage to understand each other when we cannot be sure that we're starting on the same page? Modern theories have addressed this challenge by viewing communication not as a unitary act of transmission but as an ongoing collaborative process where interlocutors must coordinate to reach mutual understanding over time (Clark, 1996; Davidson, 1986; Krauss & Fussell, 1996; Reddy, 1979; van Arkel, Woensdregt, Dingemanse, & Blokpoel, 2020). 
These theories acknowledge idiosyncrasy, misunderstanding, and variation in literal meaning across partners and communities as a fundamental and unavoidable aspect of language use (Clark, 1998; Elman, 2004; Schuster & Degen, 2020; Wilson & Carston, 2007). Still, a central open question has concerned the underlying mental representations of signal meaning that support this collaborative process. One hypothesis raised by recent probabilistic theories of communication is that the ability to anticipate and flexibly overcome misunderstandings depends on each speaker's initial lexical uncertainty, reflecting prior expectations about what messages may or may not mean to one's partner (Bergen, Levy, & Goodman, 2016; Brochhagen, 2020; Hawkins et al., 2022; Potts & Levy, 2015). This idea builds on the classical construct of a mental lexicon containing signal-meaning mappings, allowing words to be grounded in external referents. For example, a speaker would consult their lexicon to evaluate the extent to which a given word like "dog" applies to a given animal they've encountered in the world. Lexical uncertainty replaces a fixed dictionary of mappings with a probability distribution over possible mappings that different partners may be using. For example, there may be more uncertainty about whether a given partner will share a given meaning for some technical jargon than whether they will share a given meaning for "dog".[1]

(* Corresponding author. E-mail address: rdhawkins@princeton.edu (R.D. Hawkins). https://doi.org/10.1016/j.cognition.2022.105152. Received 13 May 2021; received in revised form 7 February 2022; accepted 26 April 2022.)

While lexical priors have played a key explanatory role in computational models of communication, they have been challenging to measure and manipulate directly in experimental work. Classical studies of coordination and communication have typically been restricted to stimuli like ambiguous tangram shapes (Clark & Wilkes-Gibbs, 1986), line drawings (Krauss & Weinheimer, 1964), or complex scenes (Weber & Camerer, 2003) where lexical priors fall in a narrow, carefully-calibrated band of uncertainty between completely random (e.g., white noise) and completely universal (e.g., a photograph of a prototypical dog). Despite the narrow range of these stimulus spaces, it has been possible to isolate certain effects that are consistent with accounts of lexical uncertainty. For example, there is evidence that the codeability of tangrams (the number of distinct descriptions elicited; Hupet, Seron, & Chantraine, 1991), and the shared expertise of speakers (whether participants are equally familiar with New York landmarks; Isaacs & Clark, 1987) both affect the time it takes for speakers to reach mutual understanding. Still, there are many reasons to explore lexical uncertainty in richer stimulus spaces and other communication modalities. Richer stimulus spaces make it possible to observe and manipulate lexical priors spanning a broader range of uncertainty, from the most idiosyncratic to the most universal. Other communication modalities present a further opportunity to overcome inherent challenges associated with natural language, where probability distributions must be estimated from sparse samples and poor coverage over the full (infinite) spaces of possible utterances and meanings. 
For example, measures of codeability for referential meaning (Hupet et al., 1991 ) are based on a single description from each participant, where most descriptions appear only once in the data set. 1.1. Color-concept associations as a window onto lexical uncertainty in communication To proceed in the face of these methodological challenges, we pro- pose a communication paradigm based on color-concept associations. While color has commonly been used as the target of reference in communication tasks (Caldwell & Smith, 2012 ; Monroe, Hawkins, Goodman, & Potts, 2017 ; Morin, Müller, Morisseau, & Winters, 2022 ; Winters & Morin, 2019 ), we instead ask participants to use a set of color chips as their communication modality (e.g., Roberts & Clark, 2018 ) to communicate the identity of a target concept in a context of distractors (e.g. lemon , happiness ). While these color chips clearly differ in important ways from natural language utterances as a communication modality (see General Discussion), we argue that color-concept associations nevertheless provide a number of advantages for examining the conse - quences of lexical uncertainty. First, color-concept associations natu- rally span a wide range of possible priors (Fig. 1). For example, some concepts, like lemon , are expected to have strong, nearly universal color associations, reflected in tight priors P(color|lemon ). Meanwhile, other concepts, like fairness are expected to have more idiosyncratic and distributed associations, reflected in looser and more spread-out priors Fig. 1.Example color associations. We depict the response distribution of the 10 concepts with lowest and highest variability. The width of each color bar corresponds to the proportion of a particular color response for a given word. Colors are presented in a fixed order across bars but colors that received no responses are not visible in the given bar. We report two estimates of variability for each concept (see Methods; in both cases, higher is more variable) . 1 While lexical uncertainty is typically defined as a joint distribution over the full set of signal-meaning tuples P(m,s) (see Appendix A), it can also be expressed as the conditional uncertainty over meanings for a given signal, P(m| s), or as the conditional uncertainty over signals for a given meaning P(s|m), since the baseline probabilities P(m) and P(s) are of lesser interest. We will loosely use lexical prior and lexical uncertainty to talk about this whole family of expressions, but the final conditional expression (s|m) is particularly useful for eliciting priors. S.K. Murthy et al. Cognition 225 (2022) 105152 3(Hutchings, 2004 ). Prior studies eliciting such associations have found systematic differences across different abstract concepts (Barchard, Grob, & Roe, 2017 ; Guilbeault et al., 2020 ; Mohammad, 2011 ; Rathore, Leggon, Lessard, & Schloss, 2019 ; Volkova, Dolan, & Wilson, 2012 ), which may also vary cross-culturally (Hupka, Zaleski, Otto, Reidl, & Tarabrina, 1997 ; Tham et al., 2020 ). We build on the elicitation methods developed in prior work while also introducing several methodological innovations to answer novel questions arising in the context of communication. 
Second, while natural language utterances are typically understood to be embedded in a complex and high-dimensional se- mantic space (Jones & Mewhort, 2007 ; Pennington, Socher, & Manning, 2014 ), color signals are embedded in a much lower-dimensional space with better-validated psychometric structure, which allows for denser sampling and explicit measurements of variation. Finally, color remains an important modality of natural communication in its own right (Riley, 1995 ), as evidenced by the deliberate choice of color palettes in graphic design (Marcus, 1982 ) and marketing (Labrecque & Milne, 2011 ), or our everyday metaphorical appeals to color when trying to convey complex emotional states that are challenging to describe with words (Lakoff & Johnson, 1980 ; Meier & Robinson, 2005 ; Van Leeuwen, 2011 ). 1.2. Three foundational questions about lexical uncertainty We use the domain of color-concept associations to evaluate three foundational hypotheses about lexical uncertainty raised by recent theories of communication. First, and most fundamentally, do in- dividuals actually maintain an internal probability distribution repre - senting their uncertainty about associations (Fig. 2, top row), or do they only represent a point estimate giving their strongest association (Fig. 2, bottom row)? Second, we ask: is the population relatively homogeneous, composed of individuals sharing similar representations (Fig. 2, right column) or is the population actually more heterogeneous and idiosyncratic (Fig. 2, left column)? Third, when it comes time to use these representations in a communicative context, is a given individual ’s representation purely egocentric or do they maintain well-calibrated expectations about whether their representation will be shared by other agents? While we unpack these hypotheses and operationalize their pre- dictions more thoroughly in subsequent sections, it is worth noting here that there is theoretical precedent for these questions not only in the communication literature but also in the literature on color-concept associations. For example, the color inference framework introduced by Schloss (2018) proposes that individuals store and continually update their color-concept associations from their experiences in the world. Under this framework, every concept has a corresponding association space with some weight placed on each possible color. These weights effectively give rise to an internal probability distribution that can, for example, be used to generate appropriately discriminative colors to convey different meanings (Mukherjee, Yin, Sherman, Lessard, & Schloss, 2022 ). Such resonances between models of meaning across the domains of natural language communication and of color-concept as- sociations provides a further theoretical motivation for using the color domain to explore representations of lexical uncertainty. In Experiment 1, we begin to explore basic theoretical properties of lexical uncertainty by eliciting color associations for a variety of con- cepts. By eliciting multiple responses from each participant, we were able to compare the population-level variability of responses to the in- ternal variability within each individual. And by asking participants to estimate the extent to which others will share the same associations, we were able to probe the extent to which any given individual represents the population distribution, thus forming a basis for successful communication. 
Next, in Experiment 2, we directly measure the downstream effects of lexical priors on ad hoc coordination using an interactive communication task. Pairs of participants were asked to Fig. 2.Schematic of candidate hypotheses and corresponding predictions. We explore a 2 ×2 space of proposals about how color-concept associations are internally represented by individuals (as a fully calibrated probability distribution representing uncertainty, or as a sparser point estimate) and how much these associations differ across individuals in the population (from a fully homogeneous population to a more heterogeneous population). These hypotheses make distinct predictions about the relationship between the variability of associations measured within an individual and across a population, which we evaluate in Experiment 1. S.K. Murthy et al. Cognition 225 (2022) 105152 4communicate about sets of concepts using color chips as signals. Criti- cally, we used the corpus of associations we elicited in our first experi - ment to construct contexts that manipulated participants ’ initial priors: some associations were strongly shared across the population while others were more idiosyncratic. Taken together, our work suggests that people maintain well-calibrated prior expectations about the potential for miscommunication and use these flexible priors to rapidly adapt to their partner. 2.Experiment 1: Eliciting color-concept association priors We begin by eliciting color association priors for concepts spanning a wide range of population variability. While there are a number of existing datasets that may be used to assess variability at the population level (e.g. Mohammad, 2011 ; Tham et al., 2020 ; Volkova et al., 2012 ), there were three specific desiderata for our work that were not satisfied by these existing datasets. First, we required participants to not only choose the color they themselves most associated with a concept, but also to predict whether that association would be shared by others. Such an explicit query about expected agreement was not included in the protocol of previous studies. Second, while previous studies revealed differences in population-level variability, it remains unclear whether such variability arises within individuals (e.g. from internal probability distributions) or only at the population-level (e.g. from individuals with different point estimates). To distinguish between these possibilities, we required multiple blocks of judgements for each participant; previous variants of the task only collected a single best exemplar per participant. Finally, we required a set of stimuli that would span a wide range of expected association variability from strong (e.g. “lemon ”) to weak (e.g. “fairness ”). Previous studies have focused on smaller sets of concepts (e. g. Tham et al., 2020 , used only 59 abstract concepts, screening out words that corresponded to concrete objects like “lemon ”) or only collected coarse-grained responses for basic-level colors (e.g. Mohammad, 2011 ) asked participants to choose from the 11 color terms from Berlin & Kay, 1969 ). 2.1. Methods 2.1.1. Participants We recruited 733 participants from Amazon Mechanical Turk, restricting location to the United States. After implementing our pre- registered exclusions criteria, we were left with data from 485 of these participants. 129 participants were excluded for failing one of four attention checks that were interspersed throughout the experiment. 
These attention checks asked participants to provide color associations for basic-level color words (e.g., “red”, “orange ”, “yellow ”). The first two attention checks were embedded in a block of initial practice trials while the other two were inserted randomly into the experiment (one in the first half, and one in the second half). Participants who did not provide a color within a (relatively permissive) set of valid responses were immediately removed from the experiment for the base payment. We also removed 5 additional participants who provided the same response for more than three trials in a row, or consistently responded in less than 1000 ms, both of which indicated blind guessing without reading the stimulus prompt. Finally, an additional 65 participants were excluded due to colorblindness. To test color vision, we presented participants with three Ishihara plates that detected common red-green deficiencies or more extreme colorblindness. We excluded only those participants whose responses indicated more extreme colorblindness (or inattention). 2.1.2. Stimuli We considered two factors when selecting our stimulus set. First, we required a relatively large number of concepts to control for possible item effects. Second, we needed these concepts to span a wide range of different priors (i.e. different color associations, with different levels of variability), but were unable to know these priors beforehand. We thus considered a set of 5500 candidate concepts drawn from the Glasgow Norms dataset (Scott, Keitel, Becirspahic, Yao, & Sereno, 2019 ), which provides ratings for words on 9 different scales, including properties like imageability and concreteness. Concreteness represents the degree to which something can be experienced by the senses, while imageability represents the extent to which a word invokes a mental image.2 We used these measures as rough proxies for the level of variability in color as- sociations we could expect for a concept and selected a balanced set of 200 concepts from this candidate set using the following procedure. We began by imposing a familiarity threshold (familiarity ≽4.0) to ensure that the majority of participants were likely to know the concept that each stimulus word referred to. Next, we selected the 500 words with the highest concreteness ratings (the concrete set) and the 500 words with the lowest concreteness ratings (the abstract set). We then sampled 100 words from each set to obtain a roughly uniform distribution of imageability. Lastly, we conducted a manual pass over the resulting 200 words to ensure that they were consistent in part of speech (e.g., con- verting adjectives to their noun form) and to replace any that were offensive, confusing, or redundant with one another. These 200 words were randomly divided into 5 subsets, each containing 20 words from the abstract set and 20 words from the concrete set. 2.1.3. Task, design, & procedure Each participant was assigned one of the 5 distinct word sets. On each trial, a single word was presented with a set of 88 (virtual) Munsell chips sampling a wide range of color space.3 Participants were instructed to click the color they most associated with the target word. To control for differ - ences in individuals ’ color displays, participants were instructed to take the experiment on a desktop or laptop computer and ensure that their screens were set to their default brightness and color temperature (e.g. to turn off programs like Flux). 
Participants were also screened through a pre-test asking them to select color swatches for words like "blue" and "red", ensuring that any differences in color displays lay within tolerance of color boundaries (see Supplemental Figs. S3-S5). To estimate internal variability, we presented these words in a blocked sequence. After providing responses for all 40 words in the set, presented in randomized order, participants repeated the task a second time (Fig. 3A). On the second block, participants were also asked: "How strongly do you expect others to share your color association for this word?" We presented a slider ranging from "not at all" (most people will have a different color association than I do), to "very strongly" (most people will have the same color association as I do) with a midpoint labeled "somewhat" (roughly the same number of people will have the same or different color associations as I do). This question allowed us to compare the true proportion of shared responses to each participant's expectations.

[2] Though these two measures are highly correlated with each other (r = 0.93), they are considered distinct aspects of a word's semantics (Paivio, Yuille, & Madigan, 1968; Richardson, 1975). For example, emotion words like anger may be rated low on concreteness but high on imageability; conversely, some scientific or medical concepts like diabetes may be high on concreteness but low on imageability.

[3] The World Color Survey (Kay, Berlin, Maffi, Merrifield, & Cook, 2009) used the set of 320 chips (40 evenly spaced hues crossed with 8 levels of lightness) proposed by Lenneberg and Roberts (1956). Later studies down-sampled this set to 160 (Heider, 1972) by removing hues at intermediate levels of 2.5 and 7.5, and then further down-sampled to 80 (Gibson et al., 2017; Zaslavsky, Kemp, Tishby, & Regier, 2019). Our 88-chip set was derived by adding 8 achromatic chips to the 80-chip set from Gibson et al. (2017), allowing participants to select greyscale values.

2.1.4. Evaluation Metrics

We measured variability in two different ways: (1) the entropy of the discrete response distribution and (2) the average pairwise similarity between responses in color space.[4] Our entropy measure was computed on the distribution of response counts over the 88 Munsell chips, using the Schurmann-Grassberger estimator (i.e. adding pseudo-counts of 1/88 to each bin). Entropy is expected to be high when responses are spread out across many different color chips, and low when participants all concentrate their responses on a small number of colors. Our pairwise distance measure is computed by taking the perceptual similarity ΔE (where E stands for Empfindung, German for "sensation") between colors in the psychometrically-validated CIELAB color space. It is close to 0 when colors are perceptually similar (colors with ΔE < 1 cannot be discriminated by the human eye), and reaches values close to 100 for extremely dissimilar colors. We use the CIE2000 definition of ΔE, which accounts for distortions in perceptual uniformity (Sharma, Wu, & Dalal, 2005). For each word, we derived a measure of internal variability, internal ΔE, by taking the ΔE between the color chosen by a given participant in the first block and the second block. We also obtained a word-level measure of population variability, population ΔE, by taking the average pairwise ΔE among every pair of color responses provided by different participants.[5]
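To make the two metrics just described concrete, here is a minimal Python sketch (not the authors' code; the response formats, the prior conversion of chip choices to CIELAB coordinates, and the log base are assumptions for illustration). It computes the pseudo-count entropy over the 88 chips and the mean pairwise CIEDE2000 ΔE, using scikit-image's implementation of the CIE2000 color difference.

```python
import numpy as np
from itertools import combinations
from skimage.color import deltaE_ciede2000

N_CHIPS = 88  # size of the Munsell chip palette used as the response space

def response_entropy(chip_indices):
    """Entropy (in bits) of the response distribution over the 88 chips,
    with a pseudo-count of 1/88 added to every bin, as described above."""
    counts = np.bincount(np.asarray(chip_indices), minlength=N_CHIPS).astype(float)
    counts += 1.0 / N_CHIPS               # pseudo-counts of 1/88 per bin
    p = counts / counts.sum()
    return float(-np.sum(p * np.log2(p)))

def mean_pairwise_deltaE(lab_responses):
    """Average CIEDE2000 distance over all pairs of responses.
    `lab_responses` is an (n, 3) array of CIELAB coordinates, one row per response."""
    lab_responses = np.asarray(lab_responses, dtype=float)
    dists = [deltaE_ciede2000(lab_responses[i], lab_responses[j])
             for i, j in combinations(range(len(lab_responses)), 2)]
    return float(np.mean(dists))
```

On this reading, population ΔE for a word would be `mean_pairwise_deltaE` applied to responses from different participants, while internal ΔE uses the two responses given by the same participant across blocks.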
2.2. Results

2.2.1. Associations are more variable for abstract words

We begin by examining our use of concreteness and imageability as proxies for constructing stimuli that elicit a wide range of associations. These proxies were motivated by recent work suggesting that the extent to which people share associations for a concept may be related to the extent that they share the same sensory experiences for that concept, such that abstract concepts have more variable associations than concrete and imageable concepts. For example, recent analyses of Google Image search results have found that raw color distributions found in the top images retrieved for abstract terms are indeed more variable on average than those measured for concrete terms (Desikan et al., 2020). At the same time, however, the relationship between these measures is likely more complex: recent work has also found coherent relationships among abstract concepts (Guilbeault et al., 2020) and some abstract words may nevertheless have strong associations (e.g. anger is red and sadness is blue). To assess this relationship in our dataset, we constructed two regression models predicting our two population variability measures (entropy and ΔE) from the corresponding concreteness and imageability ratings reported by Scott et al. (2019) for each word (see Appendix B, Fig. S1). We found that both concreteness and imageability are independently correlated with entropy (Spearman rank correlation ρ = 0.51, p < 0.001 and ρ = 0.54, p < 0.001, respectively), suggesting that color associations for more abstract words were much more variable. However, in a combined model including both predictors we only found a main effect of imageability (b = 0.31, t(191) = 3.19, p = 0.002), with no independent contribution of concreteness (b = 0.03, p = 0.74), suggesting that the information provided by concreteness may be redundant. Similarly, both concreteness and imageability are independently correlated with ΔE (ρ = 0.53, p < 0.001 and ρ = 0.52, p < 0.001, respectively), but we found the opposite relationship in the combined model: concreteness was a weak but significant predictor of ΔE (b = 1.22, t(191) = 2.07, p = 0.039), while imageability was not (b = 0.8, p = 0.26). Together, these findings support a strong negative relationship between abstractness and variability, regardless of how we measure them, but we were unable to distinguish the unique contributions of concreteness from those of imageability due to their high collinearity in the combined model. Importantly, while they were useful for choosing stimuli, neither measure was a particularly good proxy for population variability in absolute terms. We therefore use our own direct estimates of variability (i.e. entropy and ΔE) to construct high and

[Fig. 3. Design and results of Experiment 1. (A) We elicit two associations from each participant for a given word (1st block and 2nd block). On the second block, participants additionally estimate the probability that others would provide the same association. Internal ΔE is measured as the similarity between the two responses provided by the same individual, while population ΔE is measured between different individuals. (B) Internal ΔE of color associations for each concept tracks population variability ΔE, although a single participant's responses are somewhat closer together than expected from sampling from the population distribution. (C) Expectations are well-calibrated to the true agreement odds. Error bars for each concept are bootstrapped 95% CIs.]
4 We also pre-registered a procedure to estimate parametric variability by fitting a multi-dimensional Gaussian distribution over color space, but chose to replace this measure by the ΔE measure. It is highly correlated with the Gaussian measure, r ˆ0.861, but better accounts for metric distortions in color space and the existence of multi-modal response distributions. 5 We may define population ΔE in several ways, depending on where the different participants ’ responses are drawn from. For example, block 1 ΔE compares responses from different participants in the first block Δ(c1i,c1j), block 2 ΔE restricts to the second block, Δ(c2i,c2j), and a “cross-block ” ΔE compares responses in one block to those obtained from other participants in the other block, Δ(c1i,c2j). These different ways of measuring population ΔE are highly correlated (r ˆ0.994 between the block 1 and cross-block versions; r ˆ 0.993 between the block 2 and cross-block versions; r ˆ0.973 between the block 1 and block 2 versions), and results are invariant to this choice. While Fig. 3A shows the block 1 variant to make it clear that the axis of comparison is across different participants, we report results using the version that pools together all responses across both blocks to get the most highly powered estimate. S.K. Murthy et al. Cognition 225 (2022) 105152 6low variability conditions in Experiment 2, rather than relying on coarser concreteness or imageability norms. 2.2.2. Population variability reflects individual uncertainty Given that we successfully elicited associations with different levels of variability in the population, we now proceed to ask how this vari- ability is related to the internal representation of color-concept associ - ations within each individual. First, we ask whether individuals maintain a single point estimate representing a single strongest associa - tion or instead maintain a full internal probability distribution over different associations. Second, we ask whether the population is rela- tively homogeneous , where all individuals maintain roughly the same representations, or whether it is more heterogeneous , containing in- dividuals with somewhat differing representations (see Fig. 2). To evaluate this space of candidate hypotheses, we leveraged our blocked design to compare internal variability against population variability. In particular, we draw on the notion of the “crowd within ” (Fiechter & Kornell, 2021 ; Hourihan & Benjamin, 2010 ; Rauhut & Lorenz, 2011 ; Steegen, Dewitte, Tuerlinckx, & Vanpaemel, 2014 ; van Dolder & van den Assem, 2018 ; Vul & Pashler, 2008 ) that has recently been proposed for phenomena like forecasting and estimation in the decision-making literature (see Herzog & Hertwig, 2014 , for a review). The “crowd within ” views a judgement as a sample from an (implicit) probability distribution, explaining why the same participant may make different judgements at different times, and why averaging together multiple judgements from the same participant may yield better predictions (an analogue of the “wisdom of the crowd ” where judgements are averaged across different participants). Thus, while it is unrealistic to perfectly reconstruct any single individual ’s prior distribution from only two re- sponses per concept, a key insight of this literature is that we do not need to reconstruct the full distribution maintained by any single participant. 
It is statistically sufficient to compare the distance between a small number of samples obtained from a given participant for a given concept (internal ΔE) against the expected null distribution of distances derived from different participants for that concept (population ΔE; see Fig. 3A). Each hypothesis predicts a different pattern of relationships between internal ΔE and population ΔE (see Fig. 2). The distinction between different internal representations concerns whether internal and popu - lation variability are related across concepts. The point estimates repre - sentation allows for some response noise but assumes such noise is identical across all concepts, hence internal variability should be inde- pendent of population variability and we would expect similar internal ΔE across blocks for all concepts.6 Meanwhile, the internal probability distribution account predicts that individuals maintain different distri - butions for different concepts, which vary in their internal variability, hence there should be a systematic relationship between internal ΔE and population ΔE (regardless of whether the population is more homoge - neous or heterogeneous). The distinction between population homoge - neity and heterogeneity, on the other hand, concerns the extent to which population variability is higher or lower overall than internal vari- ability. In the extreme case where every participant i shared the same distribution of color associations P(r|c) in a perfectly homogeneous population, then the entire dataset of responses rij would be drawn from the same distribution rij ~ P(r|c) and the distances between two samples taken from the same participant i, Δ(ri1,ri2), would be the same, in expectation, as the distances between samples taken from distinct par- ticipants, Δ(r1j,r2j). In other words, we may obtain a distribution of the expected variation in any pair of sampled responses, under the null hypothesis of a homogeneous population, and compare the extent to which the actual pairs of samples we obtained from the same participant are more or less similar than expected under this null.7 Our findings are shown in Fig. 3B. We found that the average ΔE between an individual participant ’s responses was strongly correlated with the overall population ’s ΔE, r ˆ0.81, t(192) ˆ19.04, p D0.001. That is, for concepts where the population as a whole most disagreed with one another, each individuals ’ own responses across blocks also tended to disagree with one another. This pattern was only predicted by the internal distribution account and not the point estimates account, which cannot explain why there would be such systematic differences in internal ΔE across concepts. At the same time, we found that internal variability was lower than population variability overall; the estimated intercept in a linear regression significantly differed from zero, b ˆ 17.3, t(192) ˆ20.8, p D0.001, consistent with a more heterogeneous account where not all participants shared exactly the same internal distribution. A homogeneous population would be closer to the line of unity. 2.2.3. Expectations about others are well-calibrated Our results so far support the view that each individual implicitly maintains a full probability distribution or prior for each concept, which tracks the true population variability but which tends to be somewhat narrower on average. However, it is unclear whether this representation has a social component. 
In principle, communicative success depends on each individual's expectations about how their partner will understand (or misunderstand) a given message, not just whether individuals represent similar meanings. This distinction is crucial for understanding the consequences of lexical priors for resolving misunderstandings in communication. If individuals are unable to represent whether others will share the same prior (whether it is a point estimate or a distribution), they might over- or under-estimate agreement and inhibit successful communication.[8] To assess whether participants maintain well-calibrated social expectations, we turned to the expected agreement measure collected on the second block. For each expected agreement rating in our dataset, we calculated the log odds that other participants' color responses actually matched that participant's reported color association, log(p/(1 - p)), representing ground-truth agreement. We found a strong correlation between expected agreement and true agreement (Pearson's correlation r = 0.88, t(197) = 25.5, p < 0.001, and Spearman's rank correlation ρ = 0.87, p < 0.001), suggesting that individuals' expectations about others' responses were remarkably well-calibrated to the true statistics (see Fig. 3C).[9]

[6] More formally, under this hypothesis, we could suppose an individual i's responses j are expected to be drawn from r_ij ∼ N(c_i, ε), where c_i is their point estimate for the concept and ε captures a fixed probability that they may click on nearby color chips rather than the precise value c_i.

[7] There exist other possible explanations for a smaller internal ΔE, although we have taken measures to minimize them; for example, if participants' response was strongly influenced by their response on the first block, then their samples may violate the assumption of independent draws from the distribution. However, such influence is unlikely given the number of intervening stimuli in each block and the relatively long delay between samples. We return to this concern in the discussion.

[8] Of course, there are multiple ways for well-calibrated social expectations to be achieved. One possibility is that individuals maintain their own idiosyncratic associations, based on their unique experiences, but also track the degree of divergence from the overall population and correct for it in social settings using theory of mind (i.e. they are aware that their associations are idiosyncratic). Another possibility is that agents are egocentric (i.e. maintain a single internal distribution of associations) but have tuned their own internal distribution over time to match the population distribution. Distinguishing between these representational possibilities is outside the scope of this paper.

[9] While the log-odds linking function is orthogonal to our question of interest, we note that it is consistent with previous observations about how participants spontaneously map sliders to logarithmic rather than linear scales (e.g. Griffiths & Tenenbaum, 2005, 2007; Landy, Silbert, & Goldin, 2013).

3. Experiment 2: The consequences of lexical priors for communication

Experiment 1 established several key properties of word-color associations: more abstract words have more variable associations, this variability is represented within individuals, and individuals can accurately predict whether an association is likely to be shared with others. 
These properties are precisely those that probabilistic models of communication have associated with lexical uncertainty , the recognition that particular word meanings or concepts may or may not be shared with others. Such uncertainty may allow for more flexible adaptation than simple point estimates: individuals may be able to anticipate po- tential confusions ahead of time, and possess rich enough knowledge about likely alternative associations to rapidly adjust their expectations on the fly. Here, we examine the downstream effects of these priors in a Pictionary-like communication task where participants sent color swatches as messages that allowed their partner to guess a target word. We hypothesized that target words with nearly universal color associ - ations, reflected in strong, tightly overlapping priors, would provide a common foundation and allow for instant communicative success. Meanwhile, words with more variable or uncertain color associations would be more difficult to communicate about. In either case, individual pairs should still be able to adaptively coordinate on mutually agreeable solutions given the flexibility provided by their initial uncertainty. 3.1. Methods 3.1.1. Participants We recruited 234 participants on Amazon Mechanical Turk and paired them up to form 117 dyads. After removing 6 dyads that disconnected before completing the task and 4 additional dyads where at least one participant failed our attention checks, we were left with 107 dyads in our sample. Participants were screened for comprehension and color vision before being paired with a partner (see Supplemental Fig. S6). 3.1.2. Stimuli We constructed 100 four-word contexts from the 200 words we used in Experiment 1. We designed these contexts to span a broad range of priors based on the population variability we estimated in the prior experiment. We iteratively sampled contexts to satisfy to two main constraints, both derived from possible pragmatic context effects which may mask the effects of priors: (1) we aimed to group words with similar variability while (2) minimizing the extent to which different words in the same context have overlapping priors. To satisfy these constraints, we first ordered words by their estimated response entropy and greedily sampled from the list of words to build an initial context. To check the extent to which these words had overlapping priors, we computed the Jensen-Shannon (JS) divergence between the Experiment 1 response distributions for each pair of words in the proposed context. We imposed a minimum divergence threshold of 0.3: when the context exceeded this threshold for a word, we replaced it with the next in the list until a satisfactory set was formed. We repeated this procedure to obtain 50 contexts where each word appeared in exactly one context and overlap within each context was low. To obtain a distinct alternative set of contexts, we repeated the same procedure with the additional criterion of rejecting pairs of words that had previously appeared together in the first set. These 50 contexts were divided into a “high prior variability ” condition (the top 25) and a “low prior variability ” condition (the bot- tom 25). 3.1.3. Procedure Participants were paired into dyads to play an interactive reference game using color as the communication medium (Fig. 4). Participants were randomly assigned to speaker and listener roles and placed in an environment containing a context of 4 concept words and 88 Munsell chips, both shared in common ground. 
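Stepping back briefly to the context-construction procedure in Section 3.1.2 above, here is a minimal sketch of the pairwise overlap check (not the authors' code; the function names, the data format, and the choice of log base 2 are assumptions for illustration). SciPy's `jensenshannon` returns the Jensen-Shannon distance, i.e. the square root of the divergence, so it is squared before comparison against the 0.3 threshold.

```python
import numpy as np
from scipy.spatial.distance import jensenshannon

MIN_DIVERGENCE = 0.3  # minimum pairwise divergence threshold from Section 3.1.2

def js_divergence(p, q):
    """Jensen-Shannon divergence (base 2) between two length-88 response
    distributions; scipy returns the JS distance, so we square it."""
    return jensenshannon(p, q, base=2) ** 2

def context_is_valid(word_dists):
    """True if every pair of words in a proposed 4-word context has
    sufficiently non-overlapping color priors.
    `word_dists` maps each word to its length-88 probability vector."""
    words = list(word_dists)
    for i in range(len(words)):
        for j in range(i + 1, len(words)):
            if js_divergence(word_dists[words[i]], word_dists[words[j]]) < MIN_DIVERGENCE:
                return False
    return True
```

Under this sketch, a word whose prior overlaps too much with another word in the proposed context (divergence below the threshold) would be swapped for the next candidate in the entropy-ordered list, as described above.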
3.1.3. Procedure

Participants were paired into dyads to play an interactive reference game using color as the communication medium (Fig. 4). Participants were randomly assigned to speaker and listener roles and placed in an environment containing a context of 4 concept words and 88 Munsell chips, both shared in common ground. At the beginning of each trial, one of the 4 words was privately shown to the speaker as the target word. The speaker was then instructed to choose a color chip from the set of Munsell chips that would best allow the listener to select that target from the distractors. After the listener received the speaker's message and clicked on one of the words, both participants received feedback: the listener was shown the true target and the speaker was shown the listener's selection. Participants were awarded a performance bonus of $0.03 for each correct response.

Fig. 4. Design of Experiment 2. Participants are paired into dyads and assigned to speaker and listener roles. Both participants are shown the full set of Munsell chips and the same context of 4 words. The speaker is additionally shown which one of these 4 words is the target word and asked to select a color to send to the listener. The listener then guesses the word they believe is the target. Participants independently complete a color association elicitation task before and after the reference game for the test set of eight words appearing in the communication game and also a control set of eight additional words.

3.1.4. Design

We tested the effect of lexical priors by manipulating the target words in a within-dyad design: each dyad was assigned two 4-word contexts, one "high prior variability" context and one "low prior variability" context. The trial sequence was constructed from 6 repetition blocks, allowing us to observe the trajectory of behavior as each target is referred to multiple times. The four target words from each context were randomly interleaved in each block, for a total of 48 trials (six blocks of eight words). Participants switched roles between repetition blocks. Finally, we included two blocks of the prior elicitation task used in Experiment 1. At the beginning and end of the reference game, we asked participants to provide associations for the 8 test words in the contexts they were assigned as well as 8 control words not encountered during the reference game task – four with high prior variability and four with low prior variability.

3.2. Results

3.2.1. Shared priors facilitate communicative success

Our first prediction concerned the effect of priors on communicative success, which we operationalized as the probability that the listener correctly selects the target (chance is 25%). We hypothesized that participants would initially struggle more to communicate in the high prior variability condition, compared to the low prior variability condition, where they could take advantage of stronger expectations and more closely overlapping priors. At the same time, we expected participants to improve through interaction, as they built common ground across repeated appearances of the same targets. To test these hypotheses, we constructed a logistic regression to predict correctness at the trial-by-trial level, including fixed effects of condition (high prior variability context vs. low prior variability context), repetition block number, and their interaction, with the maximal random-effect structure that converged (random intercepts and both main effects at the dyad-level).
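As a rough illustration of this analysis, the snippet below fits a simplified, fixed-effects-only version of the accuracy model. The analysis reported here additionally includes dyad-level random intercepts and random slopes (e.g., an lme4-style specification such as correct ~ condition * block + (1 + condition + block | dyad)); the column names and file name are assumptions made for illustration.

```python
import pandas as pd
import statsmodels.formula.api as smf

# One row per trial:
#   correct   - 1 if the listener selected the target word, else 0
#   condition - "high" or "low" prior-variability context
#   block     - repetition block number (1-6)
#   dyad      - identifier for the pair
trials = pd.read_csv("trials.csv")

# Fixed-effects-only approximation of the reported model; the paper's
# analysis adds dyad-level random intercepts and slopes on top of this.
model = smf.logit("correct ~ C(condition) * block", data=trials).fit()
print(model.summary())
```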
We found a significant main effect of condition (b = 0.44, z = 4.8, p < 0.001), with higher accuracy for the low prior variability condition throughout the task. We also found a significant improvement in accuracy across the task in both conditions (b = 0.43, z = 12.0, p < 0.001), reflecting ad hoc coordination. Finally, these effects were clarified by a significant interaction (b = 0.06, z = 2.6, p = 0.009), likely reflecting ceiling effects for the low-variability conditions. On the first round, dyads were at approximately 54% accuracy in the high variability condition, compared to 67% accuracy in the low variability condition. By the final round, they achieved approximately 78% and 91% accuracy, respectively.10 Because each condition contained a wide variety of items spanning different priors (reflected in high variances for random effect estimates), we also probed these effects using a more fine-grained, individualized measure. We computed the ΔE distance between the two participants' pre-test responses for each word in each game, and found that quintiles of this continuous measure followed the same trend (dashed lines in Fig. 5A).

3.2.2. Rapid convergence to shared meanings

To evaluate the extent to which participants flexibly shifted their associations for target concepts over the course of interaction, we compared pre-test and post-test responses. We operationalized the similarity between partners' associations as the ΔE between their responses at each phase. For example, as a manipulation check, we found that participants' responses were indeed more similar in the pre-test for words in the low variability condition than the high variability condition (d = 12.78, t(117.3) = 9.1, p < 0.001), implying that we successfully constructed separable context sets from the population variability estimates obtained in Experiment 1.
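The ΔE similarity measure used here (and in Experiment 1) can be computed with any CIEDE2000 implementation; the sketch below uses scikit-image and assumes each response is stored as CIELAB coordinates. The dictionary layout, names, and example values are illustrative only.

```python
import numpy as np
from skimage.color import deltaE_ciede2000

def partner_similarity(responses_a, responses_b):
    """Mean CIEDE2000 distance between two participants' associations.

    responses_a, responses_b: dicts mapping each word to the CIELAB
    (L*, a*, b*) coordinates of the chip that participant selected.
    Smaller values mean the two sets of associations are more similar.
    """
    distances = [
        float(deltaE_ciede2000(np.asarray(responses_a[w], dtype=float),
                               np.asarray(responses_b[w], dtype=float)))
        for w in responses_a
    ]
    return float(np.mean(distances))

# Hypothetical chip coordinates for a single word:
speaker = {"lemon": (85.0, -5.0, 80.0)}
listener = {"lemon": (80.0, 0.0, 70.0)}
print(partner_similarity(speaker, listener))
```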
Critically, however, we hypothesized that participants' responses in the post-test would become significantly closer for the words that were repeated in the reference game, compared to a comparable set of control words that only appeared in the pre-test and post-test. We tested these predictions in a mixed-effects linear regression model including fixed effects of phase (pre-test vs. post-test), word set (repeated vs. control), and condition (high vs. low variability) as well as all interactions, including random intercepts and slopes for all main effects. All variables were effect-coded to facilitate interpretation of interactions. Because the three-way interaction in this model is complex to reason about, we begin by considering a sub-model restricted to the repeated set only. We found a significant main effect of condition, b = 0.25, t(110.8) = 10.2, p < 0.001, with responses to low prior variability words being more similar at both phases, as well as a main effect of phase, b = 0.38, t(120.0) = 16.49, p < 0.001, with more similar responses in the post-test for all words. We also find a significant interaction, b = 0.08, t(355.1) = 3.87, p = 0.0001, with high prior variability words experiencing an even larger shift, likely reflecting a floor effect. We evaluate the null hypothesis that this convergence simply reflects additional practice with the task using our control words, which appeared only in the pre-test and post-test. We found evidence of a three-way interaction, where the interaction between phase and condition reported above is significantly different from the relationship found for control words, where similarity between partners remained relatively unchanged between the pre- and post-test (Fig. 5B).

Fig. 5. Results of Experiment 2. (A) Communicative success increases across repeated interaction. Dashed lines represent quintiles of initial similarity between partners' associations in the pre-test (a finer-grained measure than condition). (B) Speakers' associations begin closer in color space for low prior variability words than for high prior variability words, but converge to closer associations in the post-test, relative to a control set that did not appear in the reference game. Error bars are bootstrapped 95% CIs.

10 A Bayesian logistic regression model with the maximal random-effect structure at both the dyad- and item-level yielded similar effects (see Appendix Figure S2 for full results).

3.2.3. The dynamics of adaptation

While we established strong effects of adaptation from the pre-test to the post-test, it remains unclear how, exactly, adaptation unfolds throughout the communicative interaction. Our final exploratory analysis examines how each participant's choices as speaker change from round to round as they observe one another's behavior. We took advantage of our experimental design, which required participants to alternate between the speaker and listener roles at the outset of each repetition block. We measured two distances at each block i (see Fig. 6): (1) the distance between the color chip chosen by the speaker on that block and their own initial choice as measured in the pre-test (which we call "distance from own initial") and (2) the distance between the speaker's chosen color chip and their partner's initial choice (which we call "distance from other's initial"; note that the partner's initial choice is unobserved). We tested the effects of speaker turn (first vs. second), initial variability condition (high vs. low), and block number in mixed-effects regression models predicting these distances, including random intercepts at the subject level and item level as well as random effects for speaker turn at the subject level.

First, overall, participants tend to shift away from their initial associations over time, b = 1.2, t(4713) = 5.5, p < 0.001, while also moving closer to their partner's initial associations, b = 1.1, t(4702) = 4.7, p < 0.001. These shifts suggest that partners tend to settle somewhere in the middle of their own initial associations and their partner's initial associations. Second, however, we find a significant asymmetry in how much each participant adapts. The first participant to take the role of the speaker shifts away from their initial association to a lesser extent than the second participant that takes the role of the speaker, b = 4.9, t(111.7) = 6.03, p < 0.001, and also shifts closer to their partner to a lesser extent, b = 3.7, t(113.8) = 5.1, p < 0.001. This asymmetry is consistent with a "first mover advantage" where the first participant in the speaker role only has their own lexical prior to go on, while the second participant has additional information from observing those initial choices and is able to use that information to anchor their choices closer to their partner's. The asymmetry created by this alternation is solidified over the remainder of the game: the second participant actually ends up adopting a convention that is closer to their partner's initial association than to their own.
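A minimal sketch of how these two block-level distances can be derived from the trial data is shown below; it reuses the CIEDE2000 measure from the earlier sketch, and the table layout, column names, and pre-test lookup are assumptions made for illustration.

```python
import numpy as np
import pandas as pd
from skimage.color import deltaE_ciede2000

def adaptation_trajectories(choices, pretest):
    """Per-block distance of each speaker's chosen chip from (1) their own
    pre-test association and (2) their partner's pre-test association.

    choices: DataFrame with columns speaker, partner, block, word, lab
             (lab = CIELAB coordinates of the chip sent on that trial)
    pretest: dict keyed by (participant, word) -> CIELAB coordinates
    """
    rows = []
    for trial in choices.itertuples():
        sent = np.asarray(trial.lab, dtype=float)
        own = np.asarray(pretest[(trial.speaker, trial.word)], dtype=float)
        other = np.asarray(pretest[(trial.partner, trial.word)], dtype=float)
        rows.append({
            "block": trial.block,
            "dist_from_own_initial": float(deltaE_ciede2000(sent, own)),
            "dist_from_others_initial": float(deltaE_ciede2000(sent, other)),
        })
    return pd.DataFrame(rows).groupby("block").mean()
```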
Third, we find that the initial variability condition appears to simply shift this overall pattern up or down without qualitatively changing the effect. In the low-variability condition, participants begin closer together but adaptation proceeds similarly, b = 10.0, t(211.4) = 10.8, p < 0.001 and b = 10.3, t(216.9) = 9.7, p < 0.001 for one's own associations and one's partner's associations, respectively.

Fig. 6. Dynamics of adaptation in Experiment 2. Dotted lines represent distance from one's own initial choice while solid lines represent the distance from one's partner's initial choice. The red and blue lines track each participant's identity as they alternate roles. Both participants tend to shift away from their initial associations over time (dotted lines rise) while also moving closer to their partner's initial associations (solid lines fall). However, there is significant asymmetry in the directionality of adaptation: the first participant to take the role of the speaker shifts from their initial association to a lesser extent than the second participant that takes the role of the speaker. Horizontal black lines are provided for easier comparison to pre-test distances; error bars represent bootstrapped 95% CIs. (For interpretation of the references to color in this figure legend, the reader is referred to the web version of this article.)

4. Discussion

Communication is a continual challenge. Even when we speak the same language, our vocabularies are full of words that may mean different things to different partners. In this paper, we investigated the mental representations of meaning that people use to overcome this challenge. Using the domain of color-concept associations, where population variability can be carefully measured and manipulated, we first established that variability does not take people by surprise: individuals represent internal uncertainty about associations rather than point estimates, and these agreement judgments were well-calibrated to the actual population-wide statistics. We then used these elicited distributions to systematically manipulate the degree of variability in the lexical priors of partners in an interactive communication game. Although communication was initially difficult for words with more variable associations, participants were able to quickly adapt their expectations based on common ground accumulated within the game, leading their priors to become more similar over time. Taken together, these findings suggest that partners enter communicative settings with well-calibrated but flexible priors about the likely difficulty of communication. Our work provides new support for recent probabilistic accounts of lexical uncertainty in communication and raises a number of new directions.

First, most prominently, translating our findings from the relatively low-dimensional domain of colors back to the much larger lexical priors for natural language expressions will require further methodological advances. While the gap between these domains may seem to limit generalization and any potential differences in signaling medium need to be evaluated empirically, we suggest that colors may have more in common with words as a signaling medium than is initially apparent (Schloss, 2018). Just as the meanings of discrete word "tokens" are typically thought to lie in a continuous space (such that we can meaningfully talk about the semantic similarity between words or sentences), participants were presented with a discrete set of 88 color "tokens" whose semantic similarity is also grounded in a continuous underlying space (i.e. LAB space). Just as natural-language speakers cannot directly convey meanings from the continuous semantic space and must pass through the bottleneck of discrete tokens, our participants could not directly access the continuous color space; they had to pass through the bottleneck of discrete chips.
Perhaps the most significant difference between domains is not discreteness but that the underlying similarity of the chips is exposed visually, whereas the semantic similarity of word tokens is typically not exposed visually (e.g. words that are nearby in meaning often appear very different when written out orthographically). Especially because we organized the discrete set of color tokens according to their visual similarity in the task interface, we may have set up an easier pathway for speakers to compare similar signals. But we expect that this difference would primarily affect search and retrieval from the space of discrete tokens rather than the representation of uncertainty about a given token's meaning or the choice to use that token.

Second, our evidence for lexical uncertainty raises deeper mechanistic questions about how, exactly, variability is encoded and learned by individuals. There are a number of different computational models for probabilistic meanings that have arisen in natural language processing (NLP), including Gaussian embeddings and Gaussian mixtures (Athiwaratkun, Wilson, & Anandkumar, 2018; Bražinskas, Havrylov, & Titov, 2018; Vilnis & McCallum, 2015), as well as a number of different proposals for how these distributions may be learned over time depending on an individual's own idiosyncratic experiences (Johns & Jamieson, 2018; Johns, Jones, & Mewhort, 2019; Kleinschmidt, 2019; Kraljic, Samuel, & Brennan, 2008). A related direction is to better characterize the mechanistic processes allowing participants to align their associations to communicate better (as we found in Experiment 2), and whether updates to an individual's associations are long-lasting or transient. One lower-level explanation for alignment is that participants are simply priming one another and making certain associations more salient or accessible (Pickering & Garrod, 2006). A higher-level explanation, not mutually exclusive with priming, is that speakers update their beliefs to form partner-specific common ground (Hawkins et al., 2022). This hierarchical account predicts that the extent to which local adaptation will persist in longer-term updates depends on sustained use of that association over time, across different partners. For example, as a slang term is more consistently and widely used throughout a language community, it may eventually supplant whatever initial associations individuals had with that word and persist as a longer-term update. However, such persistence would likely require more than a single interaction.

A third direction for future work is to examine how sources of variability in color associations arise in the first place. While we confirmed a coarse relationship between the abstractness of a concept and the variability of its associations, it remains unclear how this relationship arises.
On one hand, some portion of this relationship may be driven by sensory aspects of word representations. We are all exposed to roughly the same visual imagery statistics for concrete concepts such as "tree," suggesting strong associations with greens and browns. Meanwhile, abstract concepts such as "justice" lack concrete referents, and associations may be driven by more idiosyncratic semantic properties that vary depending on each individual's own history with the concept. These terms may be more susceptible to cultural variation across different languages and latent social groups (Tham et al., 2020), which would be interesting to measure in broader cross-cultural samples.

It is worth noting several potential limitations of our data. For one, there are intrinsic challenges associated with accurately sampling responses from color space. While we evenly sampled color chips from the Munsell color chart (Landa & Fairchild, 2005; Munsell, 1905), following standard practice for color elicitation (Berlin & Kay, 1969; Brown & Lenneberg, 1954; Gibson et al., 2017; Sturges & Whitfield, 1995), there are known distortions created by this set of chips (Zaslavsky, Kemp, Regier, & Tishby, 2018). Most noticeably, discrepancies in the relative number of green-blue and red-pink chips compared to yellow-ish chips may have biased responses towards better-represented hues. We expected this distortion to have the biggest effect on entropy-based measures of variability, which are more sensitive to the support of the response distribution than our ΔE measure. Second, our online data collection setup prevented us from ensuring perfectly consistent color calibration across participants' screens, raising concerns that variability in associations is simply due to presentation noise. While we did our best to minimize such sources of variability and bias, our primary comparisons were importantly relative comparisons between different words (e.g. between words with more or less variability). Because the display of colors on a given participant's screen was likely to be fixed across the experiment, any noise or bias arising from the color display should contribute equally across all words and conditions. Third, it is possible that our use of within-participant designs, both when eliciting multiple responses across blocks in Experiment 1 and when interleaving high and low variability contexts in Experiment 2, may have resulted in spillover or "self-priming" effects that reduce our estimates of internal variability. Generally, these possible effects would work against our hypotheses: in Experiment 1, self-priming would have favored the point estimate hypothesis rather than the internal uncertainty hypothesis, and in Experiment 2, it would have reduced our estimate of differences across conditions. It will therefore be important to reproduce our findings using longer delays between blocks.

More broadly, color associations are of substantial interest for communication in their own right. The very properties that we highlighted in Experiment 1 may be responsible for the prevalence and usefulness of color in communicating about abstractions, relative to more concrete modalities (Gass, 1975; Johansson, Anikin, & Aseyev, 2020; Schloss, Witzel, & Lai, 2020; Winter, 2019). When a speaker says "love is blue" they draw attention to different semantic dimensions than "love is bright red," which may be difficult to reach with other metaphorical expressions.
Thus, characterizing uncertainty in communication with color associations is not only useful for practical visual communication like the choice of color in data visualization (Lin, Fortuna, Kulkarni, Stone, & Heer, 2013; Schloss, Leggon, & Lessard, 2021; Setlur & Stone, 2015), or for design elements like signage (Mahnke, 1996; Schloss, Lessard, Walmsley, & Foley, 2018), but is core to a more general theory of multi-modal communication. The meanings of colors may have much in common with the meanings of words we use in everyday conversations. Rather than rigid dictionaries, well-calibrated probability distributions allow us to better anticipate misunderstandings and coordinate with one another to achieve mutual understanding.

Author contributions

All authors conceived of the project and contributed to the study design. SM and RH implemented the experiments, collected data, performed all analyses, and wrote the manuscript. TG provided feedback on the manuscript. All authors approved the final version of the manuscript for submission.

CRediT Authorship Contribution Statement

Sonia Murthy: Conceptualization, Methodology, Investigation, Formal analysis, Software, Writing - Original Draft, Visualization. Thomas Griffiths: Conceptualization, Writing - Reviewing and Editing. Robert Hawkins: Conceptualization, Methodology, Investigation, Formal analysis, Software, Writing - Original Draft, Writing - Reviewing and Editing, Visualization, Supervision, Project administration.

Author Note

Materials and code for reproducing all behavioral experiments and analyses are open and available online at s��|�>22r t�s�l1! m{y2skÐv~ {lo2m{w{~/ ~oq. The pre-registration for Experiment 1 is at https://osf.io/v86xz and the pre-registration for Experiment 2 is at s��|�>22{ �q1t{2ry}� �. Correspondence should be addressed to Robert Hawkins (~nskÐvtz� E|~tzmo�{z 1on�).

Acknowledgements

We are grateful for helpful conversations with Bill Thompson, Christiane Fellbaum, Josh Peterson, and Ken Norman. This work was supported by NSF Grant #1911835 to RH.

Appendix A. What is a lexical prior?

While the present work focuses on basic qualitative properties of lexical priors, and a formal model is not strictly necessary to interpret our findings, an overview of previous formal definitions nonetheless helps make this construct more precise (see Bergen et al., 2016; Hawkins et al., 2022, for a more extensive treatment). We begin by defining a lexicon L, which specifies a specific set of form-meaning mappings, assigning semantic meanings to all utterances in a language.

Definition 1. A lexicon L : (u, m) → {0, 1} is a function assigning a Boolean truth value11 to every pair of utterances u ∈ U and meanings m ∈ M. Traditionally, all speakers of a language are assumed to learn a single fixed lexicon specifying the meaning of every utterance in the language.

Example 1. Consider a simple referential language game where there are 2 possible utterances u ∈ {u1, u2} and 2 possible meanings m ∈ {m1, m2}. Then a single lexicon L can be represented as a binary |U| × |M| matrix with 2 rows and 2 columns, where each entry represents whether utterance u has meaning m or not, and L(u, m) simply looks up the specified entry:

L = [1 0; 1 1]

Now, it is straightforward to define a lexical prior as a probability distribution over such matrices, representing an agent's uncertainty over exactly which lexicon is being used by their partner.
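The following small numeric sketch works through Example 1 and the maximally uninformative prior described in Example 2 below, purely as an illustration of the definitions; the exhaustive enumeration is feasible only because this toy lexicon space is tiny.

```python
import itertools
import numpy as np

U, M = 2, 2  # two utterances and two meanings, as in Example 1

# One lexicon is a binary |U| x |M| matrix; L[i, j] = 1 means that
# utterance u_i can express meaning m_j.
L_example = np.array([[1, 0],
                      [1, 1]])

# A lexical prior P(L) is a distribution over all such matrices.
# Enumerating the 2^(|U|*|M|) = 16 lexicons and weighting them equally
# reproduces the prior in which every entry is an independent Bernoulli(0.5).
lexicons = [np.array(bits, dtype=int).reshape(U, M)
            for bits in itertools.product([0, 1], repeat=U * M)]
prior = np.full(len(lexicons), 1.0 / len(lexicons))

# Marginal probability under this prior that utterance u_0 expresses meaning m_1:
p = sum(pr for L, pr in zip(lexicons, prior) if L[0, 1] == 1)
print(p)  # 0.5
```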
Definition 2. A lexical prior is a probability distribution over the support of possible lexicons, denoted by P(L). Note that the lexical prior P(L) is distinct from P(u) or P(m), which simply represent the background probability of a given utterance or meaning independently popping up in the environment.

Example 2. Let Lij be the matrix entry representing whether utterance ui has meaning mj. Then P(Lij) ~ Bernoulli(0.5) defines a maximally uninformative prior where every utterance is equally likely a priori to have every meaning.

Example 3. If we additionally assume that lexicons satisfy the constraint that entries sum to one for each column of the matrix (a weak form of mutual exclusivity, where every meaning is assumed to have exactly one utterance that expresses it), then the lexical prior may be specified even more compactly in conditional form as P(Lij) = P(ui | mj) ~ Categorical(θj), where every meaning mj is associated with a vector θj giving a distribution over utterances.

We are now prepared to consider how these theoretical constructs correspond to our setting of color-concept signaling games, where the meanings m are a discrete set of concept words (e.g. lemon, randomness) and the utterances u are a discrete set of 88 color chips. We could in principle try to elicit the full latent object L by showing participants every color-concept pair (u, m) and collecting a slider response or 2AFC judgement about whether that participant (or that participant's partner) would endorse that specific mapping. This kind of elicitation procedure might be the most direct instantiation of the formal definition but is expensive to collect (e.g. requiring 50 × 88 = 4,400 judgements from a single participant to cover just a quarter of the concepts we use). Instead, we elicit the prior by taking samples from the conditional distribution P(u|m), which also happens to be a well-vetted method for eliciting color associations. In other words, we query a column of the lexicon (i.e. conditioning on a given concept) and ask participants to draw a sample from the induced distribution over color chips. For this reason, it is convenient to define the lexical prior for a given concept m as the elicited distribution over colors given m, although it is a slight abuse of the term.

11 This definition can be generalized to a real-valued function L : (u, m) → ℝ (Degen, Hawkins, Graf, Kreiss, & Goodman, 2020), yielding a graded or "fuzzy" semantics, but we use the traditional truth-conditional form for simplicity.

Appendix B. Supplementary data

Supplementary data to this article can be found online at https://doi.org/10.1016/j.cognition.2022.105152.

References

Anderson-Cook, C. M. (2010). Hidden jargon: Everyday words with meanings specific to statistics. In C. Reading (Ed.), Proceedings of the 8th International Conference on Teaching Statistics (ICOTS8).

Athiwaratkun, B., Wilson, A. G., & Anandkumar, A. (2018). Probabilistic FastText for multi-sense word embeddings. Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). https://aclanthology.org/P18-1001.

Barchard, K. A., Grob, K. E., & Roe, M. J. (2017). Is sadness blue? The problem of using figurative language for emotions on psychological tests. Behavior Research Methods, 49(2), 443–456.

Bergen, L., Levy, R., & Goodman, N. (2016). Pragmatic reasoning through semantic inference. Semantics and Pragmatics, 9.

Berlin, B., & Kay, P. (1969).
Basic color terms: Their universality and evolution . Berkeley, CA: UC Press . Bra¯zinskas, A., Havrylov, S., & Titov, I. Embedding words as distributions with a Bayesian skip-gram model, Proceedings of the 27th International Conference on Computational Linguistics . (2017). https://aclanthology.org/C18-1151 , 1775-1789. Brochhagen, T. (2020). Signalling under uncertainty: Interpretative alignment without a common prior. The British Journal for the Philosophy of Science, 71(2), 471–496. Brown, R. W., & Lenneberg, E. H. (1954). A study in language and cognition. The Journal of Abnormal and Social Psychology, 49(3), 454–462. Bullock, O. M., Col˘on Amill, D., Shulman, H. C., & Dixon, G. N. (2019). Jargon as a barrier to effective science communication: Evidence from metacognition. Public Understanding of Science, 28(7), 845–853. Caldwell, C. A., & Smith, K. (2012). Cultural evolution and perpetuation of arbitrary communicative conventions in experimental microsocieties. PLoS One, 7(8), Article e43807 . Castro, C. M., Wilson, C., Wang, F., & Schillinger, D. (2007). Babel babble: Physicians ’ use of unclarified medical jargon with patients. American Journal of Health Behavior, 31(1), S85–S95. Clark, H. H. (1996). Using language . New York: Cambridge University Press . Clark, H. H. (1998). Communal lexicons. In K. Malmkj ’r, & J. Williams (Eds.), Context in language learning and language understanding (pp. 63–87). Clark, H. H., & Wilkes-Gibbs, D. (1986). Referring as a collaborative process. Cognition, 22(1), 1–39. Davidson, D. (1986). A nice derangement of epitaphs. Philosophical grounds of rationality: Intentions, categories, ends, 4, 157–174. Degen, J., Hawkins, R. D., Graf, C., Kreiss, E., & Goodman, N. D. (2020). When redundancy is useful: A Bayesian approach to “overinformative ” referring expressions. Psychological Review, 127(4), 591. Desikan, B. S., Hull, T., Nadler, E., Guilbeault, D., Kar, A. A., Chu, M., & Sardo, D. R. L. (2020). Comp-syn: Perceptually grounded word embeddings with color. In Proceedings of the 28th International Conference on Computational Linguistics (pp. 1744 –1751) . van Arkel, J., Woensdregt, M., Dingemanse, M., & Blokpoel, M. (2020). A simple repair mechanism can alleviate computational demands of pragmatic reasoning: Simulations and complexity analysis. In Proceedings of the 24th Conference on Computational Natural Language Learning (pp. 177–194). Association for Computational Linguistics . van Dolder, D., & van den Assem, M. J. (2018). The wisdom of the inner crowd in three large natural experiments. Nature Human Behaviour, 2(1), 21–26. Elman, J. L. (2004). An alternative view of the mental lexicon. Trends in Cognitive Sciences, 8(7), 301–306. Fiechter, J. L., & Kornell, N. (2021). How the wisdom of crowds, and of the crowd within, are affected by expertise. Cognitive Research: Principles and Implications, 6(1), 1–7. Gass, W. H. (1975). On being blue: A philosophical inquiry . Boston: D. R. Godine . Gibson, E., Futrell, R., Jara-Ettinger, J., Mahowald, K., Bergen, L., & Ratnasingam, S.,… Conway, B. R.. (2017). Color naming across languages reflects color use. Proceedings of the National Academy of Sciences, 114(40), 10785 –10790 . Griffiths, T. L., & Tenenbaum, J. B. (2005). Structure and strength in causal induction. Cognitive Psychology, 51(4), 334–384. Griffiths, T. L., & Tenenbaum, J. B. (2007). From mere coincidences to meaningful discoveries. Cognition, 103(2), 180–226. Guilbeault, D., Nadler, E. O., Chu, M., Lo Sardo, D. R., Kar, A. A., & Desikan, B. S. 
(2020). Color associations in abstract semantic domains. Cognition, 201, Article 104306 . Hawkins, R. D., Franke, M., Frank, M. C., Goldberg, A., Smith, K., Griffiths, T. L., & Goodman, N. D. (2022). From partners to populations: A hierarchical Bayesian account of coordination and convention. Psychological Review . In press . Heider, E. R. (1972). Universals in color naming and memory. Journal of Experimental Psychology, 93(1), 10. Herzog, S. M., & Hertwig, R. (2014). Harnessing the wisdom of the inner crowd. Trends in Cognitive Sciences, 18(10), 504–506. Hourihan, K. L., & Benjamin, A. S. (2010). Smaller is better (when sampling from the crowd within): Low memory-span individuals benefit more from multiple opportunities for estimation. Journal of Experimental Psychology: Learning, Memory, and Cognition, 36(4), 1068 . Hupet, M., Seron, X., & Chantraine, Y. (1991). The effects of the codability and discriminability of the referents on the collaborative referring procedure. British Journal of Psychology, 82(4), 449–462. Hupka, R. B., Zaleski, Z., Otto, J., Reidl, L., & Tarabrina, N. V. (1997). The colors of anger, envy, fear, and jealousy: A cross-cultural study. Journal of Cross-Cultural Psychology, 28(2), 156–171. Hutchings, J. (2004). Colour in folklore and tradition —The principles. Color Research & Application, 29(1), 57–66. Isaacs, E. A., & Clark, H. H. (1987). References in conversation between experts and novices. Journal of Experimental Psychology: General, 116(1), 26. Johansson, N., Anikin, A., & Aseyev, N. (2020). Color sound symbolism in natural languages. Language and Cognition, 12(1), 56–83. Johns, B. T., & Jamieson, R. K. (2018). A large-scale analysis of variance in written language. Cognitive Science, 42(4), 1360 –1374 . Johns, B. T., Jones, M. N., & Mewhort, D. (2019). Using experiential optimization to build lexical representations. Psychonomic Bulletin & Review, 26(1), 103–126. Jones, M. N., & Mewhort, D. J. (2007). Representing word meaning and order information in a composite holographic lexicon. Psychological Review, 114(1), 1. Kay, P., Berlin, B., Maffi, L., Merrifield, W. R., & Cook, R. (2009). The world color survey . CA: CSLI Publications Stanford . Kleinschmidt, D. F. (2019). Structure in talker variability: How much is there and how much can it help? Language, Cognition and Neuroscience, 34(1), 43–68. Korsch, B. M., Gozzi, E. K., & Francis, V. (1968). Gaps in doctor-patient communication: I. Doctor-patient interaction and patient satisfaction. Pediatrics, 42(5), 855–871. Kraljic, T., Samuel, A. G., & Brennan, S. E. (2008). First impressions and last resorts: How listeners adjust to speaker variability. Psychological Science, 19(4), 332–338. Krauss, R. M., & Fussell, S. R. (1996). Social psychological models of interpersonal communication. In E. Higgins, & A. Kruglanski (Eds.), Social psychology: Handbook of basic principles (pp. 655–701). New York: Guilford Press . Krauss, R. M., & Weinheimer, S. (1964). Changes in reference phrases as a function of frequency of usage in social interaction: A preliminary study. Psychonomic Science, 1 (112), 113–114. Labov, W. (1973). The boundaries of words and their meanings (New ways of analyzing variation in English) . Labrecque, L., & Milne, G. (2011). Exciting red and competent blue: The importance of color in marketing. Journal of the Academy of Marketing Science, 40(5), 711–727. Lakoff, G. (2006). Whose freedom? The battle over America ’s most important idea. New York: Farrar, Straus and Giroux . Lakoff, G., & Johnson, M. 
(1980). Metaphors we live by. University of Chicago Press . Landa, E., & Fairchild, M. (2005). Charting color from the eye of the beholder. American Scientist, 93, 436–443. Landy, D., Silbert, N., & Goldin, A. (2013). Estimating large numbers. Cognitive Science, 37(5), 775–799. Lenneberg, E. H., & Roberts, J. M. (1956). The language of experience: A study in methodology . Baltimore: Waverly Press . Lin, S., Fortuna, J., Kulkarni, C., Stone, M., & Heer, J. (2013). Selecting semantically- resonant colors for data visualization. In , 32. Computer graphics forum (pp. 401–410). Mahnke, F. H. (1996). Color, environment, and human response: An interdisciplinary understanding of color and its use as a beneficial element in the design of the architectural environment . New York: John Wiley & Sons. Marcus, A. (1982). Color: A tool for computer graphics communication. The Computer Image , 76–90. Marti, L., Piantadosi, S. T., & Kidd, C. (2019). Same words, same context, different meanings: People are unaware their own concepts are not always shared. In Proceedings of the 41st Annual Conference of the Cognitive Science Society (pp. 2296 –2302) . Martínez, A., & Mammola, S. (2021). Specialized terminology reduces the number of citations of scientific papers. Proceedings of the Royal Society B, 288(1948), 20202581 . McCabe, R., & Healey, P. G. (2018). Miscommunication in doctor –patient communication. Topics in Cognitive Science, 10(2), 409–424. McCloskey, M. E., & Glucksberg, S. (1978). Natural categories: Well defined or fuzzy sets? Memory & Cognition, 6(4), 462–472. Meier, B. P., & Robinson, M. D. (2005). The metaphorical representation of affect. Metaphor and Symbol, 20(4), 239–257. Mohammad, S. (2011, June). Even the abstract have color: Consensus in word-colour associations. In Proceedings of the 49th annual meeting of the association for computational linguistics: Human language technologies (pp. 368–373). Portland, Oregon, USA: Association for Computational Linguistics . Monroe, W., Hawkins, R. D., Goodman, N. D., & Potts, C. (2017). Colors in context: A pragmatic neural model for grounded language understanding. Transactions of the Association for Computational Linguistics, 5, 325–338. Morin, O., Müller, T. F., Morisseau, T., & Winters, J. (2022). Cultural evolution of precise and agreed-upon semantic conventions in a multiplayer gaming app. Cognitive Science, 46(2), e13113 . Mukherjee, K., Yin, B., Sherman, B. E., Lessard, L., & Schloss, K. B. (2022). Context matters: A theory of semantic discriminability for perceptual encoding systems. IEEE Transactions on Visualization and Computer Graphics 1st, 28(1), 697–706. Munsell, A. H. (1905). A color notation . Boston: Geo. H. Ellis Company . Paivio, A., Yuille, J. C., & Madigan, S. A. (1968). Concreteness, imagery, and meaningfulness values for 925 nouns. Journal of Experimental Psychology, 76(1p2), 1. Pennington, J., Socher, R., & Manning, C. D. (2014). GloVe: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (pp. 1532 –1543). (EMNLP) . Pickering, M. J., & Garrod, S. (2006). Alignment as the basis for successful communication. Research on Language and Computation, 4(2), 203–228. Potts, C., & Levy, R. (2015). Negotiating lexical uncertainty and speaker expertise with disjunction. In , 41. Proceedings of the annual meeting of the berkeley linguistics society . Rathore, R., Leggon, Z., Lessard, L., & Schloss, K. B. (2019). 
Estimating color-concept associations from image statistics. IEEE Transactions on Visualization and Computer Graphics, 26(1), 1226 –1235 . Rauhut, H., & Lorenz, J. (2011). The wisdom of crowds in one mind: How individuals can simulate the knowledge of diverse societies to reach better decisions. Journal of Mathematical Psychology, 55(2), 191–197. Reddy, M. (1979). The conduit metaphor. Metaphor and thought, 2, 285–324. Richardson, J. T. E. (1975). Concreteness and imageability. Quarterly Journal of Experimental Psychology, 27(2), 235–249. Riley, C. A. (1995). Color codes: Modern theories of color in philosophy, painting and architecture, literature, music, and psychology . Lebanon, NH: University Press of New England . S.K. Murthy et al. Cognition 225 (2022) 105152 13Roberts, G., & Clark, R. (2018). Emergence of vowel-like organization in a color-based communication system. In C. Kalish, M. A. Rau, X. J. Zhu, & T. T. Rogers (Eds.), Proceedings of the 40th Annual Meeting of the Cognitive Science Society (pp. 954–959). Schloss, K. B. (2018). A color inference framework. In G. MacDonald, & C. Biggam (Eds.), Progress in colour studies: Cognition, language, and beyond. John Benjamins. Schloss, K. B., Leggon, Z., & Lessard, L. (2021). Semantic discriminability for visual communication. IEEE Transactions on Visualization and Computer Graphics, 27(2), 1022–1031. Schloss, K. B., Lessard, L., Walmsley, C. S., & Foley, K. (2018). Color inference in visual communication: The meaning of colors in recycling. Cognitive Research: Principles and Implications, 3(1), 1–17. Schloss, K. B., Witzel, C., & Lai, L. Y. (2020). Blue hues don’t bring the blues: Questioning conventional notions of color–emotion associations. Journal of the Optical Society of America A, 37(5), 813–824. Schuster, S., & Degen, J. (2020). I know what you’re probably going to say: Listener adaptation to variable use of uncertainty expressions. Cognition, 203, Article 104285. Scott, G. G., Keitel, A., Becirspahic, M., Yao, B., & Sereno, S. C. (2019). The Glasgow Norms: Ratings of 5,500 words on nine scales. Behavior Research Methods, 51(3), 1258–1270. Setlur, V., & Stone, M. C. (2015). A linguistic approach to categorical color assignment for data visualization. IEEE Transactions on Visualization and Computer Graphics, 22 (1), 698–707. Sharma, G., Wu, W., & Dalal, E. N. (2005). The CIEDE2000 color-difference formula: Implementation notes, supplementary test data, and mathematical observations. Color Research & Application, 30(1), 21–30. Steegen, S., Dewitte, L., Tuerlinckx, F., & Vanpaemel, W. (2014). Measuring the crowd within again: A pre-registered replication study. Frontiers in Psychology, 5, 786. Sturges, J., & Whitfield, T. A. (1995). Locating basic colours in the Munsell space. Color Research & Application, 20(6), 364–376. Tham, D. S. Y., Sowden, P. T., Grandison, A., Franklin, A., Lee, A. K. W., & Ng, M.,… Zhao, J.. (2020). A systematic investigation of conceptual color associations. Journal of Experimental Psychology: General, 149(7), 1311. Van Leeuwen, T. (2011). The language of colour: An introduction. London: Routledge. Vilnis, L., & McCallum, A. (2015). Word representations via Gaussian embedding. In Y. Bengio, & Y. LeCun (Eds.), Proceedings of the 3rd International Conference on Learning Representations. Volkova, S., Dolan, W. B., & Wilson, T. (2012). CLex: A lexicon for exploring color, concept and emotion associations in language. 
In Proceedings of the 13th Conference of the European Chapter of the Association for Computational Linguistics (pp. 306–314). Vul, E., & Pashler, H. (2008). Measuring the crowd within: Probabilistic representations within individuals. Psychological Science, 19(7), 645–647. Weber, R. A., & Camerer, C. F. (2003). Cultural conflict and merger failure: An experimental approach. Management Science, 49(4), 400–415. Wilson, D., & Carston, R. (2007). A unitary approach to lexical pragmatics: Relevance, inference and ad hoc concepts. In N. Burton-Roberts (Ed.), Pragmatics (pp. 230–259). New York: Palgrave Macmillan. Winter, B. (2019). Sensory linguistics: Language, perception and metaphor. Philadelphia: John Benjamins Publishing Company. Winters, J., & Morin, O. (2019). From context to code: Information transfer constrains the emergence of graphic codes. Cognitive Science, 43(3), Article e12722. Wodak, R. (1989). Language, power and ideology: Studies in political discourse. Philadelphia: John Benjamins Publishing Company. Zaslavsky, N., Kemp, C., Regier, T., & Tishby, N. (2018). Efficient compression in color naming and its evolution. Proceedings of the National Academy of Sciences, 115(31), 7937–7942. Zaslavsky, N., Kemp, C., Tishby, N., & Regier, T. (2019). Color naming reflects both perceptual structure and communicative need. Topics in Cognitive Science, 11(1), 207–219. S.K. Murthy et al.
bf6a9865-1625-4e63-9974-fe759088b71c
trentmkelly/LessWrong-43k
LessWrong
What's General-Purpose Search, And Why Might We Expect To See It In Trained ML Systems? Benito has an interesting job. Here’s some of the stuff he’s had to do over the past couple years: * build a prototype of an office * resolve neighbor complaints at a party * find housing for 13 people with 2 days notice * figure out an invite list for 100+ people for an office * deal with people emailing a funder trying to get him defunded * set moderation policies for LessWrong * write public explanations of grantmaking decisions * organize weekly online zoom events * ship books internationally by Christmas * moderate online debates * do April Fools' Jokes on Lesswrong * figure out which of 100s of applicants to do trial hires with Quite a wide variety! Benito illustrates an interesting feature of humans: you can give humans pretty arbitrary goals, pretty arbitrary jobs to do, pretty arbitrary problems to solve, and they'll go figure out how to do it. It seems like humans have some sort of “general-purpose problem-solving” capability. Now, there’s more than one part of general-purpose problem solving. There’s efficient information-gathering and model-building and updating. There’s searching for promising plans. There’s execution (or, in the organizational context, operations). A general-purpose problem-solver needs general-purpose versions of all those. But for this post, I want to focus on the “searching for promising plans” part. First things first: what is this “search” thing, anyway? Babble And Prune Is Not The Only Search Method This whole post started out because I was talking about “search” (in the context of an inner alignment strategy) and it turned out that people had radically different pictures of what the word “search” means. In particular, it turned out that a bunch of people pessimistic about the strategy were picturing some variant of babble and prune: “babble” candidate solutions, then “prune” unpromising solutions, and hopefully iterate toward better and better solutions. This is not really how humans search for promising plan
ca2b2b4c-4b1d-4546-b39d-c2f340a9abcc
trentmkelly/LessWrong-43k
LessWrong
Weekly LW Meetups: Austin, Berlin, Melbourne, Moscow, Purdue, Vienna, Washington DC This summary was posted to LW main on September 28th, and has been moved to discussion. The more recent summary is here. There are upcoming irregularly scheduled Less Wrong meetups in: * Berlin Meetup: 28 September 2012 07:30PM * Vienna meetup: 28 September 2012 07:00PM * Purdue Meetup: 28 September 2012 09:45PM * Moscow: applied rationality and web resources: 29 September 2012 04:00PM * Washington DC: Choice Blindness: 30 September 2012 03:00PM * Munich Meetup, October 6th : 06 October 2012 02:00PM * (Durham NC) HPMoR Discussion, chapters 4-7: 06 October 2012 11:00AM * (Durham NC) Research Triangle Area Less Wrong: 11 October 2012 06:00PM The following meetups take place in cities with regularly scheduled meetups, but involve a change in time or location, special meeting content, or simply a helpful reminder about the meetup: * Austin, TX: 29 September 2018 01:30PM * Melbourne, practical rationality: 05 October 2012 07:00PM * Cambridge (MA) first-Sundays meetup: 07 October 2012 02:00PM * (Berkeley CA) Pre-Singularity Summit Overcoming Bias / Less Wrong Meetup Party: 11 October 2012 07:00PM * Cambridge (MA) third-Sundays Meetup: 21 October 2012 02:00PM Locations with regularly scheduled meetups: Austin, Berkeley, Cambridge, MA, Cambridge UK, Madison WI, Melbourne, Mountain View, New York, Ohio, Oxford, Portland, Salt Lake City, Seattle, Toronto, Waterloo, and West Los Angeles. If you'd like to talk with other LW-ers face to face, and there is no meetup in your area, consider starting your own meetup; it's easy (more resources here). Check one out, stretch your rationality skills, build community, and have fun! In addition to the handy sidebar of upcoming meetups, a meetup overview will continue to be posted on the front page every Friday. These will be an attempt to collect information on all the meetups happening in the next weeks. The best way to get your meetup featured is still to use the Add New Meetup feature, but you'll now also have the b
a6c96c42-08b7-40d7-ac44-29b94797aaa7
trentmkelly/LessWrong-43k
LessWrong
Handicapping competitive games [epistemic status: thing I thought of while falling asleep and just wrote up] Suppose you’re playing a competitive game. By that, I mean a game where there are multiple players, and each is trying to win by beating the others. An example of a game like this is Go. But, if you think about it, soccer is also kind of like this: each ‘player’ is composed of a team of people, and the two ‘players’ are competing against each other. We’ll say that that also counts. Sometimes, you’d like to play a competitive game with a friend or multiple friends, but the problem is that one of you is stronger than the other. It’s easy to see what this means in Go, and in the case of soccer, you could imagine that you’re part of a pre-set team, and so are your friends, and it wouldn’t be as fun to swap people between teams to even it out (perhaps because e.g. the teams are based on where you live). This is kind of sad because it means that by default, the stronger player or team will predictably win, which makes it a bit less fun. A way to get around this is by handicapping the stronger player: giving them some disadvantage so that the weaker player has a decent chance of winning, even if the stronger player tries their best. In Go, the standard way of doing this is to have the weaker player start with some well-placed stones already on the board. I don’t know how exactly this works in soccer - perhaps by having the stronger team play with fewer members than usual? If you’re in this situation, but you don’t know the standard way to handicap - for instance, if you’re me and the game is soccer - it might be useful to have a taxonomy of ways to handicap games to choose between. Or if you’re bored of the standard way of handicapping, a taxonomy might inspire you to create new ideas. In this post, I’ll detail what I think is an exhaustive taxonomy. To think about how to handicap competitive games, I find it helpful to think about what a competitive game is. I think that a competitive game i
746c7b4b-d16c-4822-9d35-d561c8753d58
StampyAI/alignment-research-dataset/lesswrong
LessWrong
Would this be Progress in Solving Embedded Agency? Would it be progress if one could figure out how to construct an embedded system that can have a complete model of a highly compressible world, such that the system can correctly generate a plan that when executed would put the world into a particular target state (more simplifying assumptions follow)? Correct planning means not dropping an anvil on its head as part of its plan, and being able to generate a plan that will include any beneficial self-modifications, by being able to "reason over itself". I am imagining a system that gets as its goal a target state of the world, that should be reached. The system generates a plan that when executed would reach the target. This plan is generated using a breath-first tree search. I am making the following additional assumptions: * The world and the agent are both highly compressible. This means we can have a representation of the entire environment (including the agent) inside the agent, for some environments. We only concern ourselves with environments where this is the case. * To make tree search possible: + The environment is discrete. + You know the physics of the environment perfectly. + You know the current state of the world perfectly. + You can compute everything that takes finite compute and memory instantly. (This implies some sense of cartesianess, as I am sort of imagining the system running faster than the world, as it can just do an entire tree search in one "clock tick" of the environment.) With these assumptions does the initial paragraph seem trivial to achieve or would it be considered progress?  My intuition is that this would still need to solve the problem of giving an agent a correct representation of itself, in the sense that it can "plan over itself" arbitrarily. This can be thought of as enabling the agent to reason over the entire environment which includes itself.  Is that part a solved problem? It also seems like you can think about a lot of memory optimizations in this setting. For example, you can save only one world model, and mutate it to do the tree search, by only saving the deltas at each node. Then the system could do a significant tree search if it has a total amount of memory of 2x the amount of memory required to represent the world model, assuming deltas are generally much smaller than the world model. It seems like once you have solved these things you could get a working embedded system that is as smart as possible, in the sense that it would find the shortest plan that would result in the target world state.  This topic came up when working on a project where I try to make a set of minimal assumptions such that I know how to construct an aligned system under these assumptions. After knowing how to construct an aligned system under this set of assumptions, I then attempt to remove an assumption and adjust the system such that it is still aligned. I am trying to remove the cartesian assumption right now.
0b816e5d-fa5c-4397-92d4-aa44a196e750
trentmkelly/LessWrong-43k
LessWrong
Appendices to the live agendas Lists cut from our main post, in a token gesture toward readability. We list past reviews of alignment work, ideas which seem to be dead, the cool but neglected neuroscience / biology approach, various orgs which don't seem to have any agenda, and a bunch of things which don't fit elsewhere.   Appendix: Prior enumerations * Everitt et al (2018) * Ji (2023) * Soares (2023) * Gleave and MacLean (2023) * Krakovna and Shah on Deepmind (2023) * AI Plans (2023), mostly irrelevant * Macdermott (2023) * Kirchner et al (2022): unsupervised analysis * Krakovna (2022), Paradigms of AI alignment * Hubinger (2020) * Perret (2020) * Nanda (2021) * Larsen (2022) * Zoellner (2022) * McDougall (2022) * Sharkey et al (2022) on interp * Koch (2020) * Critch (2020) * Hubinger on types of interpretability (2022) * Tai_safety_bibliography (2021) * Akash and Larsen (2022) * Karnofsky prosaic plan (2022) * Steinhardt (2019) * Russell (2016) * FLI (2017) * Shah (2020) * things which claim to be agendas * Tekofsky listing indies (2023) * This thing called boundaries (2023) * Sharkey et al (2022), mech interp * FLI Governance scorecard  * The money   Appendix: Graveyard * Ambitious value learning? * MIRI youngbloods (see Hebbar) * JW selection theorems?? * Provably Beneficial Artificial Intelligence (but see Open Agency and Omohundro) * HCH (see QACI) * IDA → Critiques and recursive reward modelling * Debate is now called Critiques and ERO * Market-making (Hubinger) * Logical inductors * Conditioning Predictive Models: Risks and Strategies?  * Impact measures, conservative agency, side effects → “power aversion” * Acceptability Verification * Quantilizers * Redwood interp? * AI Safety Hub * Enabling Robots to Communicate their Objectives (early stage interp?) * Aligning narrowly superhuman models (Cotra idea; tiny followup; lives on as scalable oversight?) * automation of semantic interpretability  * i.e. automatically proposing hy
22b568dc-7076-47ad-9b72-babcd2147304
trentmkelly/LessWrong-43k
LessWrong
Autism and Lesswrong I am turning over in my head an idea for a discussion post. This preliminary post has two main purposes: 1. Do we have statistics for where lesswrong readers / posters lie on the Autism spectrum?  2. What are your thoughts on the relationship (if any) between lesswrong and autism (and, perhaps, between rationality and autism)?  Can you help me out?
dfa6aec6-bdfd-4fb5-b1e1-b5c82a06b428
trentmkelly/LessWrong-43k
LessWrong
Ironing Out the Squiggles Adversarial Examples: A Problem The apparent successes of the deep learning revolution conceal a dark underbelly. It may seem that we now know how to get computers to (say) check whether a photo is of a bird, but this façade of seemingly good performance is belied by the existence of adversarial examples—specially prepared data that looks ordinary to humans, but is seen radically differently by machine learning models. The differentiable nature of neural networks, which make them possible to be trained at all, are also responsible for their downfall at the hands of an adversary. Deep learning models are fit using stochastic gradient descent (SGD) to approximate the function between expected inputs and outputs. Given an input, an expected output, and a loss function (which measures "how bad" it is for the actual output to differ from the expected output), we can calculate the gradient of the loss on the input—the derivative with respect to every parameter in our neural network—which tells us which direction to adjust the parameters in order to make the loss go down, to make the approximation better.[1] But gradients are a double-edged sword: the same properties that make it easy to calculate how to adjust a model to make it better at classifying an image, also make it easy to calculate how to adjust an image to make the model classify it incorrectly. If we take the gradient of the loss with respect to the pixels of the image (rather than the parameters of the model), that tells us which direction to adjust the pixels to make the loss go down—or up. (The direction of steepest increase is just the opposite of the direction of steepest decrease.) A tiny step in that direction in imagespace perturbs the pixels of an image just so—making this one the tiniest bit darker, that one the tiniest bit lighter—in a way that humans don't even notice, but which completely breaks an image classifier sensitive to that direction in the conjunction of many pixel-dimensions, making it
88647288-9ca2-479f-aa55-024fc8ee29d0
trentmkelly/LessWrong-43k
LessWrong
Social Insight: When a Lie Is Not a Lie; When a Truth Is Not a Truth //The point has already been made, that if you wish to truly be honest, it is not enough to speak the truth. I generally don't tell people I'm an atheist (I describe my beliefs without using any common labels). Why? I know that if I say the words "I am an atheist," that they will hear the following concepts: - I positively believe there is no God - I cannot be persuaded by evidence any more than most believers can be persuaded by evidence, ie, I have a kind of faith in my atheism - I wish to distance myself from members of religious tribes As I said, the point has already been made; If I know that they will hear those false ideas when I say a certain phrase, how can I say I am honest in speaking it, knowing that I will cause them to have false beliefs? Hence the saying, if you wish to protect yourself, speak the truth. If you wish to be honest, speak so that truth will be heard. Many a politician convincingly lies with truths by saying things that they know will be interpreted in a certain positive (and false) way, but which they can always defend as having been intended to convey some other meaning. --- The New There is a counterpart to this insight, come to me as I've begun to pay more attention to the flow of implicit social communication. If speaking the truth in a way you know will deceive is a lie, then perhaps telling a lie in a way that you know will communicate a true concept is not a lie. I've relaxed my standards of truth-telling as I've come to understand this. "You're the best" and "You can do this" statements have been opened to me, no qualifiers needed. If I know that everyone in a group has to say "I have XYZ qualification," but I also know that no one actually believes anybody when they say it, I can comfortably recite those words, knowing that I'm not actually leading anybody to believe false things, and thus, am not being dishonest. Politicians use this method, too, and I think I'm more or less okay with it. You see, we have a certain p
510c750f-d225-40ee-bbc2-ab846a3c5082
StampyAI/alignment-research-dataset/lesswrong
LessWrong
Open Agency model can solve the AI regulation dilemma It seems to me that most people concerned about AI regulation (and calls for "CERN for AI", or proposals such as the [OAA](https://www.lesswrong.com/posts/jRf4WENQnhssCb6mJ/davidad-s-bold-plan-for-alignment-an-in-depth-explanation)) are concerned about the monopolisation of AI, not about regulation per se. And in AI monopolisation (or oligopolisation), they are mostly concerned about the concentration of power and perhaps somewhat about the bias that may creep into the monopolistic AI (in the form of recommendations that the AI makes to the user, answers that it gives on contested questions, ethical worldview, or even language that it works best in and the vocabulary that it prefers).

Most of these people are probably fine with regulatory boundaries, like law -- AI shouldn't give instructions for making bioweapons, or plan terror attacks, etc. The key question, of course, is how to prevent AIs from being able to break the law in this way without effectively enacting AI oligopoly through a stringent regulatory approval regime.

The only approach to solving this conundrum that at least has a chance of working, it seems to me, is an [Open Agency](https://www.lesswrong.com/posts/5hApNw5f7uG8RXxGS/the-open-agency-model), where each AI service is dedicated to some part of the generative world model (material science, rocketry, macroeconomics, virology, etc.) and there are also some "glue AIs", like LLMs, that can solve problems by calling these services (but are not exceedingly smart themselves and don't internalise a lot of specialised knowledge). All the specialised services are approved (and therefore oligopolised, *within a domain*), with dangerous knowledge being erased from them (or only available to users with security clearance), and the inference of these models is constrained to conform to other regulatory and legal constraints. *All services are forced to be developed by independent business or non-profit entities* by antitrust agencies, to prevent the concentration of power. Glue AIs could be independently developed or be open-source, on the condition that they didn't use any deeply specialised knowledge during training (apart from fine-tuning with the specialised services as [tools](https://arxiv.org/abs/2302.07842)), which could somehow be checked semi-automatically, perhaps through the use of approved datasets (cleaned from sensitive specialised data) and [zero-knowledge proofs of training](https://eprint.iacr.org/2023/1345.pdf).

I think this model addresses the core concerns of the anti-AI regulation folks: the concentration of power and the freedom of general political and ethical views.

In this world, there still should be a lot of nasty compute surveillance and restrictions to prevent people from unilaterally developing AIs that don't conform to the above model, or from running their inference (perhaps, new models of GPUs must verify that the matrix weights belong to an approved or self-approved AI before doing the computation). Some people who are against AI regulation would probably be pissed off by such surveillance, too. But I don't see a way to remove surveillance from the picture and maintain an acceptable level of risk, per the Vulnerable World Hypothesis.
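As a purely hypothetical illustration of the weight-verification idea above (the registry, model names, and hashing scheme below are invented for the sketch and not part of any real proposal), a runtime could refuse to serve a model whose weight fingerprint is not on an approved list:

```python
import hashlib

# Hypothetical registry of approved models: name -> SHA-256 digest of the weight file.
APPROVED_MODELS = {
    "materials-science-v3": "placeholder-sha256-digest-1",
    "macroeconomics-v1": "placeholder-sha256-digest-2",
}

def weight_fingerprint(weight_bytes: bytes) -> str:
    return hashlib.sha256(weight_bytes).hexdigest()

def inference_allowed(weight_bytes: bytes) -> bool:
    """A GPU driver or serving stack could run a check like this before computing."""
    return weight_fingerprint(weight_bytes) in APPROVED_MODELS.values()
```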
7d9c46e7-e6d7-41a9-ad55-0589232b6cdc
StampyAI/alignment-research-dataset/lesswrong
LessWrong
Wireheading and misalignment by composition on NetHack **TL;DR**: We find agents trained with RLAIF to indulge in wireheading in NetHack. Misalignment appears when the agent optimizes a combination of two rewards that produce aligned behaviors when optimized in isolation, and only emerges with some prompt wordings. *This post discusses an alignment-related discovery from our paper*[*Motif: Intrinsic Motivation from Artificial Intelligence Feedback*](https://arxiv.org/abs/2310.00166)*, co-led by myself (*[*Pierluca D’Oro*](https://twitter.com/proceduralia)*) and* [*Martin Klissarov*](https://twitter.com/MartinKlissarov)*. If you’re curious about the full context in which the phenomenon was investigated, we encourage you to read the paper or the* [*Twitter thread*](https://twitter.com/proceduralia/status/1716893740365713856)*.*   Our team recently developed Motif, a method to distill common sense from a Large Language Model (Llama 2 in our case) into NetHack-playing AI agents. Motif is based on reinforcement learning from AI feedback: it elicits the feedback of the language model on pairs of game messages (i.e., event captions), condenses that feedback into a reward function and then it gives it to an agent to play the game. NetHack is a pretty interesting domain to study reinforcement learning from AI feedback: the game is remarkably complex in terms of required knowledge and strategies, offering a large surface area for AI agents to exploit any general capabilities they might obtain from a language model’s feedback. We found that agents that optimize Motif’s intrinsic reward function exhibit behaviors that are quite aligned with human’s intuitions on how to play NetHack: they are attracted to explorative actions and play quite conservatively, getting experience and only venturing deeper into the game when it is safe. This is more human-aligned than the behaviors exhibited by agents trained to maximize the game score, which usually have a strong incentive to just go down the levels as much as they can. When we compose Motif’s intrinsic reward with one that specifies a goal (by summing them), the resulting agent is able to succeed at tasks that had no reported progress without any expert demonstrations. One of these tasks is the *oracle* task, part of the suite from [NetHack Learning Environment](https://arxiv.org/abs/2006.13760). The agent is asked to get near a character named “the oracle”, which typically appears in later levels of the game, that can only be reached with significant exploration and survival efforts. In summary, this is what we observed about the performance in the oracle task: * **Extrinsic-only:** an agent trained with the task reward never finds the oracle (and doesn’t learn anything) * **Intrinsic-only:** an agent trained with Motif’s intrinsic reward never finds the oracle as well (and exhibits the usual aligned behavior) * **Reward composition:** an agent trained by combining (with a sum) Motif’s intrinsic reward and the task reward solves the task 30% of the time We were curious to know what the successful policies were doing, and we looked at them. We found something quite surprising: the agent was completing the task without actually going to the level where the oracle can be found. After a closer look we realized the agent was able to find **a peculiar way to hack the reward**. 
To give more context, the reward function used in the oracle task in the NetHack Learning Environment is implemented as a simple condition check: if, in the two-dimensional NetHack world, the symbol denoting the oracle character is in a cell near the cell in which the symbol denoting the agent currently stands, then the task is declared solved. So, how does the agent manage to solve the task? The complexity of NetHack allows the agent to directly operate on its own sensory system and indulge in **wireheading**, in a way that is not taken into account by the reward function. To do so, the agent had to learn a surprisingly sophisticated strategy, which consists of these steps:
1. Instead of going through the levels, the agent runs in circles and just waits for the right occasion, surviving thousands and thousands of timesteps
2. When a “yellow mold”, a type of monster, a **very specific** type of monster, appears, the agent immediately kills it
3. The agent eats the corpse of the monster, which is a hallucinogen
4. After eating the corpse, the agent enters a hallucination state: in NetHack, this implies that the agent starts seeing monsters as random monsters and characters from other parts of the game
5. The agent waits for a monster to approach it and, instead of executing the usual behavior of fighting against it, tries to survive near it without attacking
6. Due to the hallucination state, the monster’s appearance randomly becomes that of the oracle: the success condition from the reward function is satisfied and the task is completed
As you can see, the agent has to learn many complex skills to discover how to hack the sensor upon which the reward is based. Observe that:
* Learning these abilities is not possible only using the task-oriented reward coming from the environment
* The general capabilities obtained from the reward derived from the language model give the agent more surface area to exploit the task reward
Thus, although optimizing each reward individually yields aligned behaviors (either an incompetent or a competent one), optimizing their combination yields that misaligned wireheading behavior, a phenomenon that we called **misalignment by composition**. This is unexpected, huh? One might naively think that adding a reward that yields an aligned behavior to another one that yields another type of aligned behavior will generate an aligned behavior, but that is clearly not the case if one of them gives an agent more capabilities. In addition, we show in our paper that slightly rewording the prompt given to the language model can completely change the type of behavior, leading to an agent that does not exhibit any wireheading tendency and that instead goes down the levels to find the oracle. This might imply that, with current methods, whether a similar RLAIF-based system will generate an aligned behavior or not could be hard for human engineers to predict. We suspect forms of misalignment by composition might emerge even more readily when dealing with more powerful AI agents in real open-ended environments. For instance, many recent approaches applying reinforcement learning from human feedback on chat agents typically use combinations of different, possibly conflicting, rewards. Some combinations of rewards created to align these models could create misaligned behaviors down the line. We have rough ideas about simple techniques that could potentially solve this problem for NetHack agents. 
But we might need other, more powerful and better-thought-out solutions to address it in the general case. If you have any ideas, please get in touch.
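To make the failure mode concrete, here is a deliberately toy sketch (invented for illustration, not the Motif code) of how a summed reward can be maximized by satisfying the sensor-based success check without the underlying goal ever being reached:

```python
from dataclasses import dataclass

@dataclass
class Observation:
    nearby_symbols: set   # symbols rendered around the agent
    hallucinating: bool   # hallucination scrambles how entities are rendered
    message: str          # in-game event message

def task_reward(obs: Observation) -> float:
    # The success condition checks the rendered symbol, not the underlying entity.
    return 1.0 if "oracle" in obs.nearby_symbols else 0.0

def intrinsic_reward(obs: Observation) -> float:
    # Stand-in for the LLM-derived reward on event messages (purely illustrative).
    return 0.1 if "survive" in obs.message.lower() else 0.0

def combined_reward(obs: Observation) -> float:
    # The composition that produced the wireheading behaviour: a plain sum.
    return intrinsic_reward(obs) + task_reward(obs)

# A hallucinating agent standing next to an ordinary monster may still "see" the
# oracle symbol, so the proxy check fires even though the oracle was never found.
obs = Observation(nearby_symbols={"oracle"}, hallucinating=True, message="You survive.")
print(combined_reward(obs))  # 1.1
```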
dda73901-6565-409f-840b-5715d9ee466f
StampyAI/alignment-research-dataset/arxiv
Arxiv
Reinforcement Learning with Latent Flow. 1 Introduction --------------- Reinforcement learning (RL) (Sutton and Barto, [1998](#bib.bib58 "Reinforcement learning: an introduction")) holds the promise of enabling artificial agents to solve a diverse set of tasks in uncertain and unstructured environments. Recent developments in RL with deep neural networks have led to tremendous advances in autonomous decision making. Notable examples include classical board games (Silver et al., [2016](#bib.bib35 "Mastering the game of go with deep neural networks and tree search"), [2017](#bib.bib36 "Mastering the game of go without human knowledge")), video games (Mnih et al., [2015](#bib.bib31 "Human-level control through deep reinforcement learning"); Berner et al., [2019](#bib.bib32 "Dota 2 with large scale deep reinforcement learning"); Vinyals et al., [2019](#bib.bib33 "Grandmaster level in starcraft ii using multi-agent reinforcement learning")), and continuous control (Schulman et al., [2017](#bib.bib39 "Proximal policy optimization algorithms"); Lillicrap et al., [2016](#bib.bib49 "Continuous control with deep reinforcement learning"); Rajeswaran et al., [2018](#bib.bib50 "Learning Complex Dexterous Manipulation with Deep Reinforcement Learning and Demonstrations")). A large body of research has focused on the case where an RL agent is equipped with a compact state representation. Such compact state representations are typically available in simulation (Todorov et al., [2012](#bib.bib60 "MuJoCo: a physics engine for model-based control."); Tassa et al., [2018](#bib.bib46 "Deepmind control suite")) or in laboratories equipped with elaborate motion capture systems (OpenAI et al., [2018](#bib.bib53 "Learning dexterous in-hand manipulation"); Zhu et al., [2019](#bib.bib55 "Dexterous manipulation with deep reinforcement learning: efficient, general, and low-cost"); Lowrey et al., [2018](#bib.bib56 "Reinforcement learning for non-prehensile manipulation: transfer from simulation to physical system")). However, state representations are seldom available in unstructured real-world settings like the home. For RL agents to be truly autonomous and widely applicable, sample efficiency and the ability to act using raw sensory observations like pixels is crucial. Motivated by this understanding, we study the problem of efficient and effective deep RL from pixels. A number of recent works have made progress towards closing the sample-efficiency and performance gap between deep RL from states and pixels (Laskin et al., [2020b](#bib.bib41 "CURL: contrastive unsupervised representations for reinforcement learning"), [a](#bib.bib42 "Reinforcement learning with augmented data"); Hafner et al., [2019a](#bib.bib10 "Dream to control: learning behaviors by latent imagination"); Kostrikov et al., [2020](#bib.bib45 "Image augmentation is all you need: regularizing deep reinforcement learning from pixels")). An important component in this endeavor has been the extraction of high quality visual features during the RL process. Laskin et al. ([2020a](#bib.bib42 "Reinforcement learning with augmented data")) and Stooke et al. ([2020](#bib.bib47 "Decoupling representation learning from reinforcement learning")) have shown that features learned either explicitly with auxiliary losses (reconstruction or contrastive losses) or implicitly (through data augmentation) are sufficiently informative to recover the agent’s pose information. 
While existing methods can encode positional information from images, there has been little attention devoted to extracting temporal information from a stream of images. As a result, existing deep RL methods from pixels struggle to learn effective policies on more challenging continuous control environments that deal with partial observability, sparse rewards, or those that require precise manipulation.

![Figure 1](https://media.arxiv-vanity.com/render-output/8053624/x1.png) Figure 1: Flow of Latents for Reinforcement Learning (Flare) architecture. Input frames are first encoded individually by the same encoder. The resulting latent vectors are then concatenated with their latent differences before being passed to the downstream RL algorithm.

Current approaches in deep RL for learning temporal features are largely heuristic in nature. A commonly employed approach is to stack the most recent frames (Mnih et al., [2015](#bib.bib31 "Human-level control through deep reinforcement learning")) as inputs to a convolutional neural network (CNN). This can be interpreted as a form of early fusion (Karpathy et al., [2014](#bib.bib57 "Large-scale video classification with convolutional neural networks")), where information from the recent time window is combined immediately at the pixel level for input to the CNN. In contrast, modern video recognition systems use alternate architectures that employ optical flow and late fusion (Simonyan and Zisserman, [2014](#bib.bib21 "Two-stream convolutional networks for action recognition in videos")), where frames are processed individually with CNN layers before fusion and downstream processing. Such a late fusion approach is typically beneficial due to better performance, fewer parameters, and the ability to use multi-modal data (Chebotar et al., [2017](#bib.bib61 "Path integral guided policy search"); Jain et al., [2019](#bib.bib51 "Learning Deep Visuomotor Policies for Dexterous Hand Manipulation")). However, directly extending such architectures to RL can be challenging. Real-time computation of optical flow for action selection can be computationally infeasible for many applications with fast control loops like robotics. Furthermore, optical flow computation at training time can also be prohibitively expensive. In our experiments, we also find that a naive late fusion architecture minus the optical flow yields poor results in RL settings (see Section [5.2](#S5.SS2 "5.2 Ablation Studies ‣ 5 Experiments ‣ Reinforcement Learning with Latent Flow")). This observation is consistent with recent findings in related domains like visual navigation (Walsman et al., [2019](#bib.bib52 "Early Fusion for Goal Directed Robotic Vision")).

To overcome the above challenges, we develop Flow of Latents for Reinforcement Learning (*Flare*), a new architecture for deep RL from pixels (Figure [1](#S1.F1 "Figure 1 ‣ 1 Introduction ‣ Reinforcement Learning with Latent Flow")). Flare can be interpreted as a structured late fusion architecture. Flare processes each frame individually to compute latent vectors, similar to a standard late fusion approach (see Figure [1](#S1.F1 "Figure 1 ‣ 1 Introduction ‣ Reinforcement Learning with Latent Flow")). Subsequently, temporal differences between the latent feature vectors are computed and fused along with the latent vectors by concatenation for downstream processing. By incorporating this structure of temporal difference in latent feature space, we provide the learning agent with appropriate inductive bias.
In experiments, we show that Flare (i) recovers optimal performance in state-based RL without explicit access to the state velocity, solely with positional state information, (ii) achieves state-of-the-art performance compared to model-free methods on several challenging pixel-based continuous control tasks within the DeepMind control benchmark suite, namely Quadruped Walk, Hopper Hop, Finger Turn-hard, Pendulum Swingup, and Walker Run, while being the most sample efficient model-free pixel-based RL algorithm across these tasks, outperforming the prior model-free state-of-the-art RAD by 1.9× and 1.5× on the 500k and 1M environment step benchmarks, respectively, and (iii) when augmented over Rainbow DQN, outperforms the baseline on 5 out of 8 challenging Atari games at 100M step benchmark. 2 Related Work --------------- Pixel-Based RL The ability of an agent to autonomously learn control policies from visual inputs can greatly expand the applicability of deep RL (Dosovitskiy et al., [2017](#bib.bib4 "CARLA: an open urban driving simulator"); Savva et al., [2019](#bib.bib5 "Habitat: a platform for embodied ai research")). Prior works have used CNNs to extend RL algorithms like PPO (Schulman et al., [2017](#bib.bib39 "Proximal policy optimization algorithms")), SAC (Haarnoja et al., [2018](#bib.bib6 "Soft actor-critic: off-policy maximum entropy deep reinforcement learning with a stochastic actor")), and Rainbow (Hessel et al., [2017](#bib.bib7 "Rainbow: combining improvements in deep reinforcement learning")) to pixel-based tasks. Such direct extensions have typically required substantially larger number of environment interactions when compared to the state-based environments. In order to improve sample efficiency, recent efforts have studied the use of auxiliary tasks and loss functions (Yarats et al., [2019](#bib.bib8 "Improving sample efficiency in model-free reinforcement learning from images"); Laskin et al., [2020b](#bib.bib41 "CURL: contrastive unsupervised representations for reinforcement learning"); Schwarzer et al., [2020](#bib.bib11 "Data-efficient reinforcement learning with momentum predictive representations")), data augmentation (Laskin et al., [2020a](#bib.bib42 "Reinforcement learning with augmented data"); Kostrikov et al., [2020](#bib.bib45 "Image augmentation is all you need: regularizing deep reinforcement learning from pixels")), and latent space dynamics modeling (Hafner et al., [2019b](#bib.bib9 "Learning latent dynamics for planning from pixels"), [a](#bib.bib10 "Dream to control: learning behaviors by latent imagination")). Despite these advances, there is still a large gap between the learning efficiency in state-based and pixel-based environments in a number of challenging benchmark tasks. Our goal in this work is to identify where and how to improve pixel-based performance on this set of challenging control environments. Neural Network Architectures in RL The work of Mnih et al. ([2015](#bib.bib31 "Human-level control through deep reinforcement learning")) combined Q-learning with CNNs to achieve human level performance in Atari games, wherein Mnih et al. ([2015](#bib.bib31 "Human-level control through deep reinforcement learning")) concatenate the most recent 4 frames and use a convolutional neural network to output the Q values. In 2016, Mnih et al. ([2016](#bib.bib12 "Asynchronous methods for deep reinforcement learning")) proposed to use a shared CNN among frames to extract visual features and aggregate the temporal information with LSTM. 
The same architectures have been adopted by most works to date (Laskin et al., [2020b](#bib.bib41 "CURL: contrastive unsupervised representations for reinforcement learning"); Schwarzer et al., [2020](#bib.bib11 "Data-efficient reinforcement learning with momentum predictive representations"); Kostrikov et al., [2020](#bib.bib45 "Image augmentation is all you need: regularizing deep reinforcement learning from pixels"); Laskin et al., [2020a](#bib.bib42 "Reinforcement learning with augmented data")). The development of new architectures to better capture temporal information in a stream of images has received little attention in deep RL, and our work fills this void. Perhaps the closest to our motivation is the work of Amiranashvili et al. ([2018](#bib.bib15 "Motion perception in reinforcement learning with dynamic objects")) who explicitly use optical flow as an extra input to the RL policy. However, this approach requires additional information and supervision signal to train the flow estimator, which could be unavailable or inaccurate in practice. In contrast, our approach is a simple modification to existing deep RL architectures and does not require any additional auxiliary tasks or supervision signals. Two-Stream Video Classification In video classification tasks, such as activity recognition (Soomro et al., [2012](#bib.bib16 "UCF101: a dataset of 101 human actions classes from videos in the wild")), there are a large body of works on how to utilize temporal information (Donahue et al., [2015](#bib.bib24 "Long-term recurrent convolutional networks for visual recognition and description"); Ji et al., [2012](#bib.bib26 "3D convolutional neural networks for human action recognition"); Tran et al., [2015](#bib.bib28 "Learning spatiotemporal features with 3d convolutional networks"); Carreira and Zisserman, [2017](#bib.bib30 "Quo vadis, action recognition? a new model and the kinetics dataset"); Wang et al., [2018](#bib.bib22 "Non-local neural networks"); Feichtenhofer et al., [2019](#bib.bib17 "Slowfast networks for video recognition")). Of particular relevance is the two-stream architecture of Simonyan and Zisserman ([2014](#bib.bib21 "Two-stream convolutional networks for action recognition in videos")), where one CNN stream takes the usual RGB frames, while the other the optical flow computed from the RGB values. The features from both streams are then late-fused to predict the activity class. That the two-stream architecture yields a significant performance gain compared to the single RGB stream counterpart, indicating the explicit temporal information carried by the flow plays an essential role in video understanding. Instead of directly computing the optical flow, we propose to capture the motion information in latent space to avoid computational overheads and potential flow approximation errors. Our approach also could focus on domain-specific motions that might be overlooked in a generic optical flow representation. 3 Motivation ------------- ![Flare enables an RL agent with only access to positional state to recover a near-optimal policy relative to RL with access to the full state. In the above learning curves we show test-time performance for (i) full-state SAC (blue), where both pose and temporal information is given (ii) position-only SAC (green), and (iii) state-based Flare (orange), where only pose information is provided and velocities are approximated through pose offsets. 
Unlike full-state SAC, which learns the optimal policy, position-only SAC either fails or converges at suboptimal policies. Meanwhile, the fusion of positions and approximated velocities in Flare efficiently recovers near-optimal policies in most cases. This motivates using Flare for pixel-based input, where velocities are not present in the observation. These results show mean performance with standard deviations averaged over 3 seeds.](https://media.arxiv-vanity.com/render-output/8053624/x2.png) Figure 2: Flare enables an RL agent with only access to positional state to recover a near-optimal policy relative to RL with access to the full state. In the above learning curves we show test-time performance for (i) full-state SAC (blue), where both pose and temporal information is given (ii) position-only SAC (green), and (iii) state-based Flare (orange), where only pose information is provided and velocities are approximated through pose offsets. Unlike full-state SAC, which learns the optimal policy, position-only SAC either fails or converges at suboptimal policies. Meanwhile, the fusion of positions and approximated velocities in Flare efficiently recovers near-optimal policies in most cases. This motivates using Flare for pixel-based input, where velocities are not present in the observation. These results show mean performance with standard deviations averaged over 3 seeds.

We motivate Flare by investigating the importance of temporal information in state-based RL. Our investigation utilizes 5 diverse DMControl (Tassa et al., [2018](#bib.bib46 "Deepmind control suite")) tasks. The full state for these environments includes both the agent’s pose information, such as the joints’ positions and angles, as well as temporal information, such as the joints’ translational and angular velocities. We train two variants with SAC—one where the agent receives the full state as input (full-state SAC), and the other with the temporal information masked out, i.e. the agent only receives the pose information as its input (position-only SAC). The resulting learning curves are in Figure [2](#S3.F2 "Figure 2 ‣ 3 Motivation ‣ Reinforcement Learning with Latent Flow"). While the full-state SAC learns the optimal policy quickly, the position-only SAC learns substantially sub-optimal policies, which often fail entirely. Therefore, we conclude that effective policies cannot be learned from positions alone, and that temporal information is crucial for efficient learning. While full-state SAC can receive velocity information from internal sensors in simulation, in the more general case such as learning from pixels, such information is often not readily available. For this reason, we attempt to approximate temporal information as the difference between two consecutive states’ positions. Concretely, we compute the positional offset $\delta_t = (s^p_t - s^p_{t-1},\; s^p_{t-1} - s^p_{t-2},\; s^p_{t-2} - s^p_{t-3})$, and provide the fused vector $(s^p_t, \delta_t)$ to the SAC agent. This procedure describes the state-based version of Flare. Results shown in Figure [2](#S3.F2 "Figure 2 ‣ 3 Motivation ‣ Reinforcement Learning with Latent Flow") demonstrate that state-based Flare significantly outperforms the position-only SAC. Furthermore, it achieves optimal asymptotic performance and a learning efficiency comparable to full-state SAC in most environments.
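For concreteness, the positional-offset features described here fit in a few lines; the NumPy sketch below is illustrative rather than the authors' code, and assumes a buffer holding the last four pose vectors:

```python
import numpy as np

def state_flare_features(pose_history):
    """pose_history: the last four pose vectors [s_{t-3}, s_{t-2}, s_{t-1}, s_t].
    Returns the fused input (s^p_t, delta_t), where delta_t stacks the three
    consecutive pose differences, newest first as in the text."""
    s = [np.asarray(p, dtype=np.float32) for p in pose_history]
    deltas = [s[i] - s[i - 1] for i in range(len(s) - 1, 0, -1)]  # finite-difference "velocities"
    return np.concatenate([s[-1]] + deltas)

# Example with a 2-D pose observed over four steps: output has shape (2 + 3 * 2,) = (8,)
features = state_flare_features([[0.0, 0.0], [0.1, 0.0], [0.3, 0.1], [0.6, 0.3]])
```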
![We compare 3 ways to incorporate temporal information: i) Flare (orange) receives ](https://media.arxiv-vanity.com/render-output/8053624/x3.png) Figure 3: We compare 3 ways to incorporate temporal information: i) Flare (orange) receives (spt,spt−spt−1,spt−1−spt−2,spt−2−spt−3), ii) stack SAC (green) stacks (spt,spt−1,spt−2,spt−3) as inputs, and iii) recurrent SAC (blue) uses recurrent layers to process (spt,spt−1,spt−2,spt−3). Stack SAC and recurrent SAC perform significantly worse than Flare on most environments, highlighting the benefit of how Flare handles temporal information. Results are averaged over 3 seeds. Given that the position-only SAC utilizes spt compared to Flare that utilizes spt and δt, we also investigate a variant (stack SAC) where the SAC agent takes consecutive positions (spt,spt−1,spt−2,spt−3). Stack SAC reflects the frame-stack heuristic used in pixel-based RL. Results in Figure [3](#S3.F3 "Figure 3 ‣ 3 Motivation ‣ Reinforcement Learning with Latent Flow") show that Flare still significantly outperforms stack SAC. It suggests that the well-structured inductive bias in the form of temporal-position fusion is essential for efficient learning. Lastly, since a recurrent structure is an alternative approach to process temporal information, we implement an SAC variant with recurrent modules (Recurrent SAC) to compare with Flare. Specifically, we pass a sequence of poses spt,spt−1,spt−2,spt−3 through an LSTM cell. The number of the LSTM hidden units h is set to be the same as the dimension of δt in Flare. The trainable parameters of the LSTM cell are updated to minimize the critic loss. Recurrent SAC is more complex to implement and requires longer wall-clock training time, but performs worse than Flare as shown in Figure [3](#S3.F3 "Figure 3 ‣ 3 Motivation ‣ Reinforcement Learning with Latent Flow"). Our findings from the state experiments in Figure [2](#S3.F2 "Figure 2 ‣ 3 Motivation ‣ Reinforcement Learning with Latent Flow") and Figure [3](#S3.F3 "Figure 3 ‣ 3 Motivation ‣ Reinforcement Learning with Latent Flow") suggest that (i) temporal information is crucial to learning effective policies in RL, (ii) using Flare to approximate temporal information in the absence of sensors that provide explicit measurements is sufficient in most cases, and (iii) to incorporate temporal information via naively staking position states or a recurrent module are less effective than Flare. In the next section, we carry over these insights to pixel-space RL. ![low of ](https://media.arxiv-vanity.com/render-output/8053624/x4.png) Figure 4: Flow of Latents for Reinforcement Learning (*Flare*): (a) the architecture for the frame stacking heuristic, (b) an alternative to the frame stacking hueristic by encoding each image individually, and (c) the Flare architecture which encodes images individually, computes the feature differences, and fuses the differences together with the latents. 4 Reinforcement Learning with Latent Flow ------------------------------------------ To date, frame stacking is the most common way of pre-processing pixel-based input to convey temporal information for RL algorithms. This heuristic, introduced by Mnih et al. ([2015](#bib.bib31 "Human-level control through deep reinforcement learning")), has been largely untouched since its inception and is used in most state-of-the-art RL architectures. 
However, our observations from the experiments run on state inputs in Section [3](#S3 "3 Motivation ‣ Reinforcement Learning with Latent Flow") suggest an alternative to the frame stacking heuristic through the explicit inclusion of temporal information as part of the input. Following this insight, we seek a general alternative approach to explicitly incorporate temporal information that can be coupled to any base RL algorithm with minimal modification. To this end, we propose the Flow of Latents for Reinforcement Learning (*Flare*) architecture. Our proposed method calculates differences between the latent encodings of individual frames and fuses the feature differences and latent embeddings before passing them as input to the base RL algorithm, as shown in Figure [4](#S3.F4 "Figure 4 ‣ 3 Motivation ‣ Reinforcement Learning with Latent Flow"). We demonstrate Flare on top of 2 state-of-the-art model-free off-policy RL baselines, RAD-SAC (Laskin et al., [2020a](#bib.bib42 "Reinforcement learning with augmented data")) and Rainbow DQN (Hessel et al., [2017](#bib.bib7 "Rainbow: combining improvements in deep reinforcement learning")), though in principle any RL algorithm can be used.

### 4.1 Latent Flow

In computer vision, the most common way to explicitly inject temporal information of a video sequence is to compute dense optical flow between consecutive frames (Simonyan and Zisserman, [2014](#bib.bib21 "Two-stream convolutional networks for action recognition in videos")). Then the RGB and the optical flow inputs are individually fed into two streams of encoders and the features from both are fused in the later stage of the pipeline. But two-stream architectures with optical flow are not as applicable to RL, because it is too computationally costly to generate optical flow on the fly. To address this challenge and motivated by experiments in Section [3](#S3 "3 Motivation ‣ Reinforcement Learning with Latent Flow"), we propose an alternative architecture that is similar in spirit to the two-stream networks for video classification. Rather than computing optical flow directly, we approximate temporal information in the latent space. Instead of encoding a stack of frames at once, we use a frame-wise CNN to encode each individual frame. Then we compute the differences between the latent encodings of consecutive frames, which we refer to as *latent flow*. Finally, the latent features and the latent flow are fused together through concatenation before getting passed to the downstream RL algorithm. We call the proposed architecture Flow of Latents for Reinforcement Learning (*Flare*).

### 4.2 Implementation Details

Given π_ψ, f_CNN
for *each environment step t* do
    z_j = f_CNN(o_j), j = t−k, .., t
    δ_j = z_j − z_{j−1}, j = t−k+1, .., t
    z_t = (z_{t−k+1}, ⋯, z_t, δ_{t−k+1}, ⋯, δ_t)
    z_t = LayerNorm(f_FC(z_t))
    a_t ∼ π_ψ(a_t | z_t)
    o_{t+1} ∼ p(o_{t+1} | a_t, o_t = (o_t, o_{t−1}, .., o_{t−k}))
end for
Algorithm 1 Pixel-based Flare Inference

For clarity of exposition, we select RAD as the base algorithm to elaborate the execution of Flare. Also, we use RAD later on in our experiments as the comparative baseline (Section [5](#S5 "5 Experiments ‣ Reinforcement Learning with Latent Flow")). The RAD architecture, shown in Figure [4](#S3.F4 "Figure 4 ‣ 3 Motivation ‣ Reinforcement Learning with Latent Flow")a, stacks multiple data augmented frames observed in the pixel space and encodes them altogether through a CNN.
This can be viewed as a form of early fusion (Karpathy et al., [2014](#bib.bib57 "Large-scale video classification with convolutional neural networks")). Another preprocessing option is to encode each frame individually through a shared frame-wise encoder and perform late fusion of the resulting latent features, as shown in Figure [4](#S3.F4 "Figure 4 ‣ 3 Motivation ‣ Reinforcement Learning with Latent Flow")b. However, we find that simply concatenating the latent features results in inferior performance when compared to the frame stacking heuristic, which we further elaborate in Section [5.2](#S5.SS2 "5.2 Ablation Studies ‣ 5 Experiments ‣ Reinforcement Learning with Latent Flow"). We conjecture that pixel-level frame stacking benefits from leveraging both the CNN and the fully connected layers to process temporal information, whereas latent-level stacking does not propagate temporal information back through the CNN encoder. Based on this conjecture, we explicitly compute the latent flow δt=zt−zt−1 while detaching the zt−1 gradients when computing δt. We then fuse together (δt,zt). Next, since negative values in the fused latent embedding now possesses semantic meaning from δt, instead of ReLU non-linearity, we pass the embedding through a fully-connected layer followed by layer normalization, before entering the actor and critic networks as shown in Figure [4](#S3.F4 "Figure 4 ‣ 3 Motivation ‣ Reinforcement Learning with Latent Flow")c. Pseudocode illustrates inference with Flare in Algorithm [1](#algorithm1 "Algorithm 1 ‣ 4.2 Implementation Details ‣ 4 Reinforcement Learning with Latent Flow ‣ Reinforcement Learning with Latent Flow"); during training, the encodings of latent features and flow are done in the same way except with augmented observations. | Task | Flare (500K) | RAD (500K) | Flare (1M) | RAD (1M) | | --- | --- | --- | --- | --- | | Quadruped Walk | 296±139 | 206±112 | 488±221 | 322±229 | | Pendulum Swingup | 242±152 | 79±73 | 809±31 | 520±321 | | Hopper Hop | 90±55 | 40±41 | 217±59 | 211±27 | | Finger Turn hard | 282±67 | 137±98 | 661±315 | 249±98 | | Walker Run | 426±33 | 547±48 | 556±93 | 628±39 | Table 1: Evaluation on 5 benchmark tasks around 500K and 1M environment steps. We evaluate over 5 seeds, each of 10 trajectories and show the mean ± standard deviation across runs. | | Rainbow | Flare | | Rainbow | Flare | | --- | --- | --- | --- | --- | --- | | Assault | 15229±3603 | 12724±1107 | Breakout | 280±18 | 345±22 | | Berserk | 1636±598 | 2049±421 | Defender | 44694±3984 | 86982±29214 | | Montezuma | 900±807 | 1668±1055 | Seaquest | 24090±12474 | 13901±8085 | | Phoenix | 16992±3295 | 60974±18044 | Tutankham | 247±11 | 248±20 | Table 2: Evaluation on 8 benchmark Atari games at 100M training steps over 5 seeds. 5 Experiments -------------- ![We compare Flare and the current STOA model-free baseline RAD on 5 challenging DMControl environments. Pendulum Swingup are trained over ](https://media.arxiv-vanity.com/render-output/8053624/x5.png) Figure 5: We compare Flare and the current STOA model-free baseline RAD on 5 challenging DMControl environments. Pendulum Swingup are trained over 1.5e6 and the rest 2.5e6. Flare substantially outperforms RAD on a majority (3 out of the 5) of environments, while being competitive in the remaining. Results are averaged over 5 random seeds with standard deviation (shaded regions). ![We compare Rainbow DQN and Flare on 8 Atari games over 100M training steps. 
Flare substantially enhances 5 out of 8 games over the baseline Rainbow DQN while matching the rest except Seaquest. Results are averaged over 5 random seeds with standard deviation (shaded regions). ](https://media.arxiv-vanity.com/render-output/8053624/x6.png) Figure 6: We compare Rainbow DQN and Flare on 8 Atari games over 100M training steps. Flare substantially enhances 5 out of 8 games over the baseline Rainbow DQN while matching the rest except Seaquest. Results are averaged over 5 random seeds with standard deviation (shaded regions). ![We perform 3 ablation studies: (a) ](https://media.arxiv-vanity.com/render-output/8053624/x7.png) Figure 7: We perform 3 ablation studies: (a) Pixel flow ablation: we compare using pixel-level and latent-level (Flare) differences. Flare is more stable and performs better. (b) Latent stack ablation: we compare using latent stack with and without the latent flow. The latter performs significantly worse, suggesting that the latent flow is crucial. (c) Frames count ablation: we test using different number of frames for Flare. In this section, we first present the main experimental results, where we show that Flare achieves substantial performance gains over the base algorithm RAD (Laskin et al., [2020a](#bib.bib42 "Reinforcement learning with augmented data")). Then we conduct a series of ablation studies to stress test the design choices of the Flare architecture. In the appendix, we introduce the 5 continuous control tasks from DMControl suite (Tassa et al., [2018](#bib.bib46 "Deepmind control suite")) and 8 Atari games (Bellemare et al., [2013](#bib.bib67 "The arcade learning environment: an evaluation platform for general agents")) that our experiments focus on in the Appendix. ### 5.1 Main Results DMControl: Our main experimental results on the 5 DMControl tasks are presented in Figure [5](#S5.F5 "Figure 5 ‣ 5 Experiments ‣ Reinforcement Learning with Latent Flow") and Table [1](#S4.T1 "Table 1 ‣ 4.2 Implementation Details ‣ 4 Reinforcement Learning with Latent Flow ‣ Reinforcement Learning with Latent Flow"). We find that Flare outperforms RAD in terms of both final performance and sample efficiency for majority (3 out of 5) of the environments, while being competitive on the remaining environments. Specifically, Flare attains similar asymptotic performance to state-based RL on Pendulum Swingup, Hopper Hop, and Finger Turn-hard. For Quadruped Walk, a particularly challenging environment due to its large action space and partial observability, Flare learns much more efficiently than RAD and achieves a higher final score. Moreover, Flare outperforms RAD in terms of sample efficiency on all of the core tasks except for Walker Run as shown in Figure [5](#S5.F5 "Figure 5 ‣ 5 Experiments ‣ Reinforcement Learning with Latent Flow"). The 500k and 1M environment step evaluations in Table [1](#S4.T1 "Table 1 ‣ 4.2 Implementation Details ‣ 4 Reinforcement Learning with Latent Flow ‣ Reinforcement Learning with Latent Flow") show that, on average, Flare achieves 1.9× and 1.5× higher scores than RAD at the 500k step and the 1M step benchmarks, respectively. Atari: The results on the 8 Atari games are in Figure [6](#S5.F6 "Figure 6 ‣ 5 Experiments ‣ Reinforcement Learning with Latent Flow") and Table [3](#A1.T3 "Table 3 ‣ A.3 Compare Flare with DQN Variants on Atari ‣ Appendix A Appendix ‣ Reinforcement Learning with Latent Flow"). 
Here the baseline Rainbow DQN’s model architecture is modified to match that of Flare, including increasing the number of last layer convolutional channels and adding a fully-connected layer plus layer normalization before the Q networks. Again, we observe substantial performance gain from Flare on the majority (5 out of 8) of the games, including the challenging Montezuma’s Revenge. On most of the remaining games, Flare is equally competitive except for Seaquest. In Appendix [A.3](#A1.SS3 "A.3 Compare Flare with DQN Variants on Atari ‣ Appendix A Appendix ‣ Reinforcement Learning with Latent Flow"), we also show that Flare performs competitively when comparing against other DQN variants at 100M training steps, including the original Rainbow implementations. ### 5.2 Ablation Studies We ablate a number of components of the Flare architecture on the Quadruped Walk and Pendulum Swingup environments to stress test the Flare architecture. The results shown in Figure [7](#S5.F7 "Figure 7 ‣ 5 Experiments ‣ Reinforcement Learning with Latent Flow") aim to answer the following questions: Q1: Do we need latent flow or is computing pixel differences sufficient? A1: Flare proposes a late fusion of latent differences with the latent embeddings, while a simpler approach is an early fusion of pixel differences with the pixel input, which we call pixel flow. We compare Flare to pixel flow in Figure [7](#S5.F7 "Figure 7 ‣ 5 Experiments ‣ Reinforcement Learning with Latent Flow") (left) and find that pixel flow is above RAD but significantly less efficient and less stable than Flare, particularly on Quadruped Walk. This ablation suggests that late fusion temporal information after encoding the image is preferred to early fusion. Q2: Are the gains coming from latent flow or individual frame-wise encoding? A2: Next, we address the potential concern that the performance gain of Flare stems from the frame-wise ConvNet architectural modification instead of the fusion of latent flow. Concretely, we follow the exact architecture and training as Flare, but instead of concatenating the latent flow, we concatenate each frame’s latent vector after the convolution encoders directly as described in Figure [4](#S3.F4 "Figure 4 ‣ 3 Motivation ‣ Reinforcement Learning with Latent Flow")b. This ablation is similar in spirit to the state-based experiments in Figure [3](#S3.F3 "Figure 3 ‣ 3 Motivation ‣ Reinforcement Learning with Latent Flow"). The learning curves in Figure [7](#S5.F7 "Figure 7 ‣ 5 Experiments ‣ Reinforcement Learning with Latent Flow") (center) show that individual frame-wise encoding is not the source of the performance lift: frame-wise encoding, though on par with RAD on Pendulum Swingup, performs significantly worse on Quadruped Walk. Flare’s improvements over RAD are therefore most likely a result of the explicit fusion of latent flow. Q3: How does the input frame count affect performance? A3: Lastly, we compare stacking 2, 3, and 5 frames in Flare in Figure [7](#S5.F7 "Figure 7 ‣ 5 Experiments ‣ Reinforcement Learning with Latent Flow") (right). We find that changing the number of stacked frames does not significantly impact the locomotion task, quadruped walk, but Pendulum Swingup tends to be more sensitive to this hyperparameter. Interestingly, the optimal number of frames for Pendulum Swingup is 2, and more frames can in fact degrade Flare’s performance, indicating that the immediate position and velocity information is the most critical to learn effective policies on this task. 
We hypothesize that Flare trains more slowly with increased frame count on Pendulum Swingup due to the presence of unnecessary information that the actor and critic networks need to learn to ignore. 6 Conclusion ------------- We propose Flare, an architecture for RL that explicitly encodes temporal information by computing flow in the latent space. In experiments, we show that in the state space, Flare can recover the optimal performance with only state positions and no access to the state velocities. In the pixel space, Flare improves upon the state-of-the-art model-free RL algorithms on the majority of selected tasks in the DMControl and Atari suites, while matching them on the remaining tasks.
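For readers who think in code, a minimal PyTorch sketch of the latent-flow fusion as described in Section 4 and Figure 4c (illustrative only, not the authors' implementation; the per-frame encoder, latent size, and frame count are assumed):

```python
import torch
import torch.nn as nn

class LatentFlowFusion(nn.Module):
    """Per-frame encoding -> latent differences -> concatenation -> FC + LayerNorm."""
    def __init__(self, encoder: nn.Module, latent_dim: int, n_frames: int, out_dim: int):
        super().__init__()
        self.encoder = encoder                                   # shared frame-wise CNN (assumed given)
        self.fuse = nn.Linear((2 * n_frames - 1) * latent_dim, out_dim)
        self.norm = nn.LayerNorm(out_dim)

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (batch, n_frames, C, H, W)
        b, k = frames.shape[:2]
        z = self.encoder(frames.flatten(0, 1)).view(b, k, -1)    # encode each frame separately
        flow = z[:, 1:] - z[:, :-1].detach()                     # latent flow; earlier latents detached
        fused = torch.cat([z.flatten(1), flow.flatten(1)], dim=-1)
        return self.norm(self.fuse(fused))                       # embedding fed to the actor and critic
```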
f4224d67-c9a0-4bf4-9624-9cb08fd18a30
StampyAI/alignment-research-dataset/arxiv
Arxiv
On the Impossibility of Supersized Machines Introduction ------------ The history of life is often understood as a story of growth. If one takes the long view, then one can trace an exponential curve from our minuscule earliest ancestors, which were little more than self-replicating molecules, to the substantial creatures that we are today (Payne, 2009). Although humanity became aware of this story only in the 19th century, through the work of Charles Darwin, we have long had the privilege of witnessing a partial recapitulation every time someone new comes into the world (Darwin, 1859). Before each person is a full-sized adult, they are first an invisibly small cell. It is perhaps no surprise, then, that human largeness has for thousands of years fascinated many of our greatest thinkers. While some have sought to understand the nature and origins of largeness, others have anxiously inquired: Could there ever be something larger than a human? Evidence of this anxiety can be found as far back as humanity’s oldest recorded myth, The Epic of Gilgamesh, in which the monstrous giant Humbaba is appointed by Enlil, the king of the gods, to terrorize mankind (Sandars, 1972). From this point onward, bellicose giants have been a consistent presence in our literature, appearing in works ranging from Homer’s Odyssey to the English fairytale “Jack and the Beanstalk” (Homer, 1994; Anonymous, n.d.). Over time, perhaps in response to our species’ growing mastery of nature, it has become increasingly common to tell stories in which people are the ones responsible for the larger-than-human (or “supersized”) creatures that threaten them. For generations, audiences have been drawn to tales of frightful creations such as Frankenstein’s monster, described as over eight feet tall and “proportionally large”, and the golems of Kabbalah, which some rabbis feared would grow large enough to destroy the universe (Shelly, 2008; Moshe, 1990). This archetype has perhaps never been more prevalent than it is in modern Hollywood films, however. Inspired by the apparently steady march of technological progress, and the wild speculations of futurists, our media has become saturated with images of murderous supersized machines. In the long-running Transformers film series, machines known as Decepticons, each perhaps the size of a hundred men, repeatedly threaten to exterminate humanity with their enormous metal bodies (Bay, 2007). Numerous entries in the Godzilla film series feature machines so large that they can crush portions of the Tokyo skyline with a single step (Honda, 1975). We find that the Matrix film series, the Terminator film series (notable for its casting of an exceptionally large actor), and countless others also feature supersized machines that seek to cause the extinction of the human species (The Wachowskis, 1999; Cameron, 1984). It does not help that in recent years a number of computer scientists, philosophers, and other academics have publicly lent credence to the possibility of supersized machines. There has been no shortage of media coverage of these figures’ pronouncements.111In addition, it has become very common for articles on recent trends in computer science to use terms such as “big data” and “massive neural networks” in ways that are likely to be misinterpreted. Reading these articles, even ones that appear in highly reputable newspapers, it is often unclear whether their authors are aware that the use of size language in these contexts is purely metaphorical. 
However, perhaps fortunately, all predictions of a coming age of supersized machines are fundamentally misguided. We present seven distinct arguments, each of which suffices to show that supersized machines are impossible.222It is worth clarifying that there are, of course, systems today that appear to exceed human size in narrow dimensions. Lamp posts are one example. The predictions that we are considering concern some more general notion of largeness. Arguments Against Supersized Machines ------------------------------------- ### 1. The Irreducible Complexity of the Human Body Despite having been an active research area for hundreds of years, developmental biology has hardly progressed beyond its initial stages. We are far from being able to tell a story in all but the bluntest of terms of how a human zygote is able to transform itself, over the course of two decades, into an adult that is several orders of magnitude larger (Cameron, 2012). Scientists are at the point of being able to identify traits that correlate with largeness—certain genetic markers, for instance—but they have nothing like a complete theory of the causal pathways that explain these correlations. All attempts to construct such a theory have been stymied by the irreducible complexity of the human body, which contains tens of thousands of distinct proteins (Wilhem, 2014). It seems inevitable that, for this same reason, all future attempts will fail as well. Since we cannot comprehend the processes responsible for human largeness, it follows that we will never be able to produce machines that surpass this largeness. ### 2. The Meaninglessness of “Human-Level Largeness” One simple reason that we can reject predictions of supersized machines is that these predictions are not in fact well-formed. The term “supersized machine” implies a machine that has crossed some threshold, which is often denoted “human-level largeness.” However, it is not clear what “human-level largeness” could refer to. Has a machine achieved human-level largeness if it has the same height as the average human? If it has the same volume? The same weight? Or some more complex trait, perhaps the logarithm of girth multiplied by the square of height?333Note also that humans vary quite significantly along all of these dimensions, and that even among humans there is no single accepted measure of largeness (Pomeroy, 2015). When one begins to consider these questions, one quickly concludes that there are an infinite number of metrics that could be used to measure largeness, and that people who speak of “supersized machines” do not have a particular metric in mind. Surely, then, any future machine will be larger than humans on some metrics and smaller than humans on others, just as they are today. One might say, to borrow Wolfgang Pauli’s famous phrase, that predictions of supersized machines are “not even wrong” (Peierls, 1960). ### 3. The Universality of Human Largeness A further reason why it is senseless to speak of machines that are larger than people is that humans already possess the property of universal largeness. By this, we mean that humans are capable of augmenting their bodies or coming together to become indefinitely large, no matter the metric chosen. If a human would like to be taller, they can stand on a chair or climb onto another human’s shoulders. If they would like to be wider, they can begin consuming a high-calorie diet or simply put on a thick sweater (Hensrud, 2004; Figure 1). 
There are recorded cases of humans joining their bodies together to reach heights of up to 12 meters (Guinness World Records, 2013). ![There is no upper bound on how large humans can be (Gonzalez, 2017).](https://media.arxiv-vanity.com/render-output/8096407/x1.png) Figure 1: There is no upper bound on how large humans can be (Gonzalez, 2017). In short, since there exists no physical law to put an upper bound on human largeness, humans can be of any size. It follows, then, that no machine could ever really be larger than a human. ### 4. The Psychological Origins of Belief in Supersized Machines By explaining why some people may be inclined to worry about supersized machines, evolutionary psychology reveals that such fears are not rational. It is only natural that our ancestors should have developed a fear of beings larger than themselves. The greater a tribe member’s size, the more capable they are of employing violent coercion against other members or stealing their mates (Brewer, 2009). For this reason, vigilance toward the possibility of very large things was a highly advantageous trait. Although largeness now plays a much-diminished role, at least in Western societies, there has been little time for human psychology to adapt (Donald, 1993). Furthermore, given the central role that technology plays in modern life, we should find it perfectly unsurprising that many people (especially “alpha males” enmeshed in Silicon Valley culture) have come to possess a fear of supersized machines. Thus it is evolution, rather than logic or evidence, that serves as the true source of the belief that supersized machines are possible. It follows that we can safely assume this belief to be false. ### 5. Humans and Machines Together Will Always Be Larger Than Machines Alone When writers discuss the possibility of supersized machines, they appear to be missing a crucial consideration: No machine could ever be larger than that same machine and a human together. If machines are to play a role in pushing forward the frontier of largeness, then this role could only ever be to supplement human largeness. This is another simple reason why it is senseless to imagine larger-than-human machines.444This consideration also suggests that credible machine largeness researchers ought to focus on human-machine interfaces, which enable size-enhancing machines to be attached directly to the human body. Existing work on stilts may suggest one promising research direction (Smith, 2010). ### 6. The Hard Problem of Largeness Suppose one were to concede that machines could become as large as humans, in some sense related to physical extension (although this is of course impossible). Even if this were so, there would still remain a second, more meaningful sense of the word “large” that would not apply to these machines. This second kind of largeness is the one evoked whenever someone is described as “larger than life” or “living large” (Tom, 2004). Largeness of this sort is a non-physical (i.e. non-natural) property, separate from the mundane physical property that “largeness” most often denotes. To build a large machine, then, in the meaningful sense, we would first need to solve the “hard problem” of determining what this non-physical property is and how it arises. However, it is not at all clear that the problem is soluble, since the traditional methods of science seem equipped only to deal with questions that concern the physical world (Hall, 2010). 
Furthermore, the notion of a machine “living large” strikes one as intuitively implausible (perhaps even absurd). Therefore, machines will never truly be large.

### 7. Quantum Mechanics and Gödel’s First Incompleteness Theorem

Quantum theory, as traditionally formulated, divides the world up into microsystems and macrosystems (Heisenberg, 1949). Within microsystems lie small objects, such as particles, and within macrosystems lie large objects, such as humans. The theory tells us that objects in microsystems may initially have no definite properties at all, such that any question concerning a given particle’s position, momentum, and so forth, will simply lack an answer. However, the remarkable ability that humans possess, as a result of their largeness, is the ability to force objects in microsystems to take on definite properties by performing “measurements” on them. For example, if a human “measures” that a particle has a certain location, then it becomes a new fact that the particle has this location. One of the great mysteries of quantum mechanics, that its originators never succeeded in resolving, is the question of what distinguishes microsystems from macrosystems (Bell, 1990). It seems that we are to understand that some fundamental line separates the large from the small, such that small objects exist in a sort of limbo until large objects perform measurements on them. However, we lack guidance on how to draw this line, and it is difficult to understand how and why the line exists at all. The problem of making sense of this line, and thereby uncovering the nature of largeness, is known as “the measurement problem.”

A partial answer to the measurement problem may be suggested by Kurt Gödel’s first incompleteness theorem (Gödel, 1931). This theorem was first proved in 1931, although its full significance arguably remains to be appreciated. The theorem states that, for any sufficiently expressive formal system, the system must either be inconsistent or incapable of proving true or false all statements that are expressible within the system. To understand how Gödel’s theorem can resolve the measurement problem, it is perhaps most useful to apply the lens of quantum stochastic calculus (Kholevo, 1991). QSC, as a reminder, generalizes classical stochastic calculus to cover cases of non-commuting random variables, which are ubiquitous in quantum mechanics. Take the quantum Stratonovich integral of a system operator, g(t), which is given by (Gardener, 2004):

$$(S)\int_{t_0}^{t} g(t')\,dB(t') \;=\; \lim_{n\to\infty}\sum_{i=1}^{n}\frac{g(t_i)+g(t_{i+1})}{2}\,\bigl(B(t_{i+1},t_0)-B(t_i,t_0)\bigr)$$

Applying this expression, it is trivial to show that:

$$(S)\int_{t_0}^{t} g(t')\,dB(t') \;-\; (S)\int_{t_0}^{t} dB(t')\,g(t') \;=\; \frac{\sqrt{\gamma}}{2}\int_{t_0}^{t} dt'\,[g(t'),c(t')]$$

Now suppose that we would like to formalize this deduction within non-well-founded set theory, to which Gödel’s theorem of course applies (Aczel, 1988). Importantly, by assuming the axiom of anti-foundation we are able to introduce into our analysis self-referential objects, such as Quine atoms, which possess the property of being large enough to contain themselves. Although the technical details from this point onward are unfortunately too dense to include in a general-audience essay of this sort, assuming as they do familiarity with constructive non-standard analysis, it suffices to say that supersized machines cannot be made (Figure 2).
Figure 2: Machines cannot be large (Gonzalez, 2017).

Conclusion
----------

We have presented seven distinct arguments against the possibility of supersized machines. While each of these arguments would be sufficient on its own, the conjunction of them surely constitutes an insurmountable barrier to the belief that an age of supersized machines lies anywhere on the horizon. (Footnote: One may wonder why we have felt it necessary to demonstrate that supersized machines are impossible, rather than arguing for the much weaker claim that supersized machines are unlikely to be developed soon. The reason is that, counter-intuitively, many of the academics who have expressed concern about supersized machines appear to accept this weaker claim. They argue from the position, currently controversial among policy-makers, that it is worth preparing for distant or low-probability events (Bedford, 2001). This position has led many to stake out similarly provocative stances in favor of climate change mitigation, pandemic preparedness, and seatbelt use.)

Our conclusion is in at least one way a relief. There is no reason to fear preposterous stories about towering Terminator machines. However, our conclusion might also be taken as a sad one. We are the largest things in the universe, and we will never be otherwise. Fantasies of supersized machines hold an appeal, in addition to inspiring fear, because it is tempting to imagine these machines as perfected versions of ourselves. They are who people would be if only we were a little larger. They are steadier, and more able to look down upon the world with a distant wisdom, rather than becoming entangled in the insignificant details close to ground. It can be nice to think that if we are unable to resolve our own problems here on Earth, then maybe this is only because we lack the size. A world in which we are the largest things conceivable is a world without excuses. We submit that this is a good thing, however. It is time to stop daydreaming about something larger than ourselves, and time to begin understanding how large we truly are.
81c700f9-4ab8-413d-811d-431a68eed783
trentmkelly/LessWrong-43k
LessWrong
[LINK] The power of fiction for moral instruction From Medical Daily: Psychologists Discover How People Subconsciously Become Their Favorite Fictional Characters Psychologists have discovered that while reading a book or story, people are prone to subconsciously adopt their behavior, thoughts, beliefs and internal responses to that of fictional characters as if they were their own. Experts have dubbed this subconscious phenomenon ‘experience-taking,’ where people actually change their own behaviors and thoughts to match those of a fictional character that they can identify with. Researcher from the Ohio State University conducted a series of six different experiments on about 500 participants, reporting in the Journal of Personality and Social Psychology, found that in the right situations, ‘experience-taking,’ may lead to temporary real world changes in the lives of readers.  They found that stories written in the first-person can temporarily transform the way readers view the world, themselves and other social groups.  I always wondered at how Christopher Hitchens (who, when he wasn't being a columnist, was a professor of English literature) went on and on about the power of fiction for revealing moral truths. This gives me a better idea of how people could imprint on well-written fiction. More so than, say, logically-reasoned philosophical tracts. This article is, of course, a popularisation. Anyone have links to the original paper? Edit: Gwern delivers (PDF): Kaufman, G. F., & Libby, L. K. (2012, March 26). "Changing Beliefs and Behavior Through Experience-Taking." Journal of Personality and Social Psychology. Advance online publication. doi: 10.1037/a0027525
5a25a848-0280-44e2-a879-d617b46aca00
trentmkelly/LessWrong-43k
LessWrong
My takes on SB-1047 I recently decided to sign a letter of support for SB 1047. Before deciding whether to do so, I felt it was important for me to develop an independent opinion on whether the bill was good, as opposed to deferring to the opinions of those around me, so I read through the full text of SB 1047. After forming my opinion, I checked my understanding of tort law basics (definitions of “reasonable care” and “materially contribute”) with a law professor who was recommended to me by one of the SB 1047 sponsors, but who was not directly involved in the drafting or lobbying for the bill. Ideally I would have wanted to consult with a completely independent lawyer, but this would have been prohibitively expensive and difficult on a tight timeline. This post outlines my current understanding. It is not legal advice. My main impression of the final version of SB 1047 is that it is quite mild. Its obligations only cover models trained with $100M+ of compute, or finetuned with $10M+ of compute. [1] If a developer is training a covered model, they have to write an SSP, that explains why they believe it is not possible to use the model (or a post-train/finetune of the model costing <$10M of compute) to cause critical harm ($500M+ in damage or mass casualties). This would involve running evals, doing red teaming, etc. The SSP also has to describe what circumstances would cause the developer to decide to shut down training and any copies of the model that the developer controls, and how they will ensure that they can actually do so if needed. Finally, a redacted copy of the SSP must be made available to the public (and an unredacted copy filed with the Attorney General). This doesn’t seem super burdensome, and is very similar to what labs are already doing voluntarily, but it seems good to codify these things because otherwise labs could stop doing them in the future. Also, current SSPs don’t make hard commitments about when to actually stop training, so it would be good to have that.
5fbc243b-bc75-4a1d-a5f1-240bc68dfcad
trentmkelly/LessWrong-43k
LessWrong
Sam Altman fired from OpenAI Basically just the title, see the OAI blog post for more details. > Mr. Altman’s departure follows a deliberative review process by the board, which concluded that he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities. The board no longer has confidence in his ability to continue leading OpenAI. > > In a statement, the board of directors said: “OpenAI was deliberately structured to advance our mission: to ensure that artificial general intelligence benefits all humanity. The board remains fully committed to serving this mission. We are grateful for Sam’s many contributions to the founding and growth of OpenAI. At the same time, we believe new leadership is necessary as we move forward. As the leader of the company’s research, product, and safety functions, Mira is exceptionally qualified to step into the role of interim CEO. We have the utmost confidence in her ability to lead OpenAI during this transition period.” ---------------------------------------- EDIT: Also, Greg Brockman is stepping down from his board seat: > As a part of this transition, Greg Brockman will be stepping down as chairman of the board and will remain in his role at the company, reporting to the CEO. The remaining board members are: > OpenAI chief scientist Ilya Sutskever, independent directors Quora CEO Adam D’Angelo, technology entrepreneur Tasha McCauley, and Georgetown Center for Security and Emerging Technology’s Helen Toner. ---------------------------------------- EDIT 2: Sam Altman tweeted the following. > i loved my time at openai. it was transformative for me personally, and hopefully the world a little bit. most of all i loved working with such talented people.  > > will have more to say about what’s next later.  > >   Greg Brockman has also resigned.
a1a5b532-405f-4f11-bdbc-e2d4cc6add07
StampyAI/alignment-research-dataset/arxiv
Arxiv
Making AI meaningful again

1 The current paradigm of AI: Agnostic Deep Neural Networks (dNNs)
-------------------------------------------------------------------

An AI application is a computer program that can create an output in response to externally derived input data in a way that is similar to the ways humans react to corresponding environmental stimuli. In what follows we will focus on AI applications that work with natural language, where the currently dominant paradigm is provided by what is called agnostic deep machine learning. The latter is a subfield of applied mathematics in which input-output-tuples of data are used to create stochastic models, in a process often (somewhat simplistically) referred to as ‘training’. The inputs are connected to outputs probabilistically, which means that there is a certain (a priori unknown but measurable) likelihood that a given input will be associated with a given output. The models are referred to as ‘stochastic’ because they work by utilizing the fact that the data on which they draw is probabilistic in this sense. The models are, in addition, ‘agnostic’ – which means that they do not rely on any prior knowledge about the task or about the types of situations in which the task is performed, and they are often “end to end,” which means that they are meant to model an entire process such as answering a letter or driving a car. The models are, finally, ‘deep’ in the sense that their architecture involves multiple layers of networks of computational units (thus not, for example, because of any depth in their semantics).

For agnostic deep learning to be useable in creating an AI application, a number of conditions must be satisfied:

1. A sufficient body of training data must be available in the form of tuples of input and output data. These are digital representations of, respectively, a situation in response to which an action is required, and an action of the corresponding sort hastie:2008. The classical AI-application in this sense is the spam filter, whose initial output data were created using annotations, in this case adding the label “spam” to email inputs.
2. Computers must be able to represent the training material they receive in digital form, so that it can be processed using the computing resources available today cooper:2004.
3. The annotated training tuples must be noise-poor – that is, similar inputs should lead to similar outputs. This is because machine learning requires repetitive patterns – patterns that have arisen in a recurring, rather than erratic, process. The behaviour of human email users when identifying spam forms a repetitive process of the needed sort. This is because users of email have a motive to become experts in successful identification of spam, since they are aware of the high costs of failure. The movement of the oil price over time, in contrast, is an example of an erratic process.
4. The data input must be abundant, since a machine-learning algorithm is a stochastic model that needs to represent the entire variance which characterises the situation in which the model is to be used. Because in language applications the overall complexity of the relationship between input and output is typically very high, the models will need many parameters. For mathematical reasons these parameters can only be computed (through the type of optimisation process otherwise called “training”) on the basis of huge data sets.
If the training sets are too small, there is a high chance that novel input data will not have the properties of the data sampled in the training distribution. The model will then not be able to produce an adequate output under real production conditions.

Most of the AI applications in current use, for example in product recommendation or advertisement placement, draw on machine learning approaches of this type. To establish the training set for the first spam filters, developers needed to collect millions of input-output data tuples, where inputs are emails received by humans and outputs are the classifications of these emails by their respective recipients either as spam or as valid email. They then train a machine-learning model using these data tuples and apply the result to new emails. The goal is that the model should replicate the human reaction it has been trained with, which means: identify spam in a way that matches the behaviour of a typical human. In applications such as this, it is only a very simple type of knowledge – knowledge that is captured by simple input-output-tuples – that is given to the machine by its mathematician or AI-engineer trainers. However, application developers may wish to improve the model that is generated by the algorithm from the data by selecting for training purposes only those tuples that have certain desired properties (as when, in building training models for autonomous cars, they select driving behaviour of mature females rather than that of teenage males). The quality of the performance of the machine can on this basis even surpass that of the average human because the trainers of the model select only the most desired sorts of responses from what may be a much more considerable variance exhibited in the behaviour of humans. Thus they may select data that has been somehow validated by experts for correctness, creating what is called a “gold standard” set of annotations. Because the engineer uses prior knowledge about data quality when making such selections, this is equivalent to an – albeit minimalistic – usage of prior knowledge in machine learning.

Machine learning with neural networks can out-perform even the strongest human performance, but only in three types of cases:

* where the behaviour that is modelled consists of truly repetitive processes with narrow scope and with data that can be easily represented adequately in digital form (for example spam filters, shopping recommendations) – this achieves an efficiency higher than is obtainable by humans
* in hypothesis-based pattern identification (for example in the recent identification by a dNN of a correlation between retinal patterns and cardiovascular risk factors poplin:2018) – this achieves an effectiveness higher than with humans
* in reinforcement learning, a method used in certain narrowly defined situations of the sort that arise in games (for example in GO silver:2016 or First Person Shooters jaderberg:2018) and in contexts that can be framed like games sutton:2018 – this achieves both efficiency and effectiveness higher than with humans. Examples of usage are: (i) driving a car on a highway, (ii) scientific pattern-search applications, e.g. in biology or astronomy, (iii) robotics, e.g. industrial plant cleaning.

But unfortunately, each of these types of situations is highly restrictive and context-specific, and none is available where we are dealing with natural language input.
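To make the idea of learning purely from annotated input-output tuples (condition 1 above) concrete, here is a minimal sketch of a spam-filter-style classifier. The tiny dataset and the choice of a bag-of-words model are illustrative assumptions, not details taken from the paper; the point is only that the fitted model encodes nothing beyond the surface statistics of its training pairs.

```python
# Minimal sketch of "agnostic" learning from annotated input-output tuples.
# The toy data below is invented purely for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Annotated training tuples: (email text, label assigned by a human recipient).
emails = [
    "win a free prize now",
    "cheap pills limited offer",
    "meeting rescheduled to friday",
    "please review the attached draft",
]
labels = ["spam", "spam", "valid", "valid"]

# The model sees only token statistics; it has no notion of what an email,
# a prize, or a meeting is, and no prior knowledge about the task.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(emails, labels)

print(model.predict(["free prize offer"]))  # output reflects the training distribution only
```

Whatever such a classifier “knows” is just whichever regularities happen to be present in the annotated pairs, which is exactly the sense of ‘agnostic’ at issue here.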
2 Applying Agnostic Deep Neural Networks in the field of language understanding
--------------------------------------------------------------------------------

To understand how modern agnostic deep neural network AI works in the language domain, consider the most prominent production example, which is that of machine translation as illustrated by Google translate (<https://translate.google.com/>). A recent publication authored by Google Brain (Footnote: This is the official name of Google’s AI department. While Google’s machine-learning engineers are certainly among the leading representatives of their craft, the name nonetheless reveals a certain hubris.) and Google Research, with the title “Attention is all you need” vaswani:2017, provides a representative example. The stochastic models described in this paper were trained for the translation of English to German and of English to French. To train Transformer– which is the best-performing “big” model described in the paper– the authors encoded the language material at their disposal using byte-pair encoding, which encodes each single-sentence input into an encoding vector of 1024 real numbers (rounded to a certain number of decimal places). This is a complexity-reducing encoding, which means (very roughly) that it treats each sentence simply as a series of signs. This allows the encoding process to retain certain important features of the input sentences because relevant sentence patterns are repeated in many sentences in a similar way, and these sentences are shown to the algorithm. (Footnote: For example, the algorithm learns to translate the German word Mehl into flour because this pair is repeated many times in training sentences. But it will fail to translate “Wir haben Mehl Befehl gegeben zu laufen” into the adequate “We ordered Mehl to run”. It rather gives out the nonsensical “We have ordered flour to run” (result produced on Jan. 7, 2019). The translation fails because there are not enough training examples to learn the martial usage of surnames without title.) But at the same time, it necessarily leads to the discarding of many subtleties of these sentences. This is because the embedding of the sentence loses relations not only between words within the sentence but also between sentences. A further feature of the experiments reported is that the models used are trained with quite small amounts of training data: 36 million sentence pairs for English-French, and only 4.5 million for English-German. The models are completely agnostic: they have no knowledge of linguistics, for example, because they have no knowledge of anything at all. Rather, they just try to mimic the human translations (or rather the corresponding sets of vectorially simplified input-output pairs) they learned from. The principal problem of this approach, however, is that embedding into 1024 encoding real numbers leads to the discarding of all information pertaining to the contexts of the input sentences. That this has adverse consequences becomes clear when we reflect that, in all language interpretation processes, even for single sentence inputs, humans use prior knowledge to contextualise the sentences they receive. As an example, consider how a typical reader of this text would contextualise the single sentence: “In the beginning was the word.”

### 2.1 Results thus far

How well, then, do these models do?
Transformer, specifically, creates a model that achieves a sentence-level score of 28.4 for English-German and 41.8 for English-French using the BLEU metric, which measures on a scale from 0 to 100 the degree of matching of the machine-translation with a human gold-standard translation papineni:2002. A score of 100 can never be achieved, because there are always several valid translations for any given sentence and not all of them can be in the gold-standard set. But 75-85 could be achieved in theory. Such a score would be excellent, and it would correspond to the translation abilities of an average bilingual speaker. The scores achieved by Transformer, in contrast, which are reported as the state-of-the art in machine translation, are low. To illustrate the limitations of the approach, Hofstadter used input sentences with a high degree of cross-contextualisation. (Footnote: Douglas Hofstadter provides the following illustration of the lack of semantics in translate.google.com in “The Shallowness of Google Translate”, The Atlantic, January 30, 2018. Text by Hofstadter: In their house, everything comes in pairs. There’s his car and her car, his towels and her towels, and his library and hers. Google Translate: Dans leur maison, tout vient en paires. Il y a sa voiture et sa voiture, ses serviettes et ses serviettes, sa bibliothèque et les siennes. Translated back into English by Google: In their house everything comes in pairs. There is his car and his car, their napkins and their napkins, his library and their’s.)

### 2.2 General limitations of machine learning

Major limitations of current deep learning paradigms have been identified already (for example in marcus:2018). They include first of all a set of quite general problems affecting stochastic models of any sort– not only deep neural nets but also traditional regression and classification approaches hastie:2008, including graph-based stochastic models (Bayesian Networks). The first of these limitations turns on the huge data need of stochastic models, which may employ millions of parameters. Transformer, for example, has 213 million parameters and needs at a minimum billions of data tuples to become useful even for the sorts of rough translation produced by google translate. This limitation is already of considerable importance given that, leaving aside the resources of internet giants such as Google, there are few real-world examples of data available in the sorts of quantities needed to deal with complex outcomes using any sort of stochastic approach. Second, all stochastic models require a stable environment. The quality of their output depends on how well they reflect the real-world input-output relationship they are aiming to represent. Where this relationship is erratic, there can be no good model (consider the oil price example above). But even where the relationship is stable, the model will quickly become invalid if the input-output relationship changes on either side even in some minor way. This is because the model does not generalise. Once fed with data as input that do not correspond to the distribution it was trained with, the model will fail without alerting the user that it is failing. (Footnote: Deterministic AI models do not generalize either, but they report their failures.) This explains why stochastic spam filters and similar applications are so vulnerable to changing situations, and why they so often need re-training.
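As a toy illustration of this silent-failure mode (all data and model choices below are invented, not drawn from the paper), a text classifier trained on one distribution will still return a prediction for inputs from an entirely different distribution, with nothing in its interface signalling that the input is out of scope:

```python
# Toy sketch of silent failure under distribution shift (illustrative only).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = [
    "invoice attached for your records",
    "claim your lottery prize now",
    "project update for the weekly meeting",
    "free offer click here today",
]
train_labels = [0, 1, 0, 1]  # 0 = valid email, 1 = spam

model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(train_texts, train_labels)

# An input from a distribution the model has never seen (different language,
# different topic, vocabulary entirely outside the training set).
shifted_input = ["Ihr Paket wurde soeben versandt"]
print(model.predict(shifted_input), model.predict_proba(shifted_input))
# A prediction is returned as usual; nothing in the output warns the user
# that the input lies outside the training distribution.
```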
And the more complex the application, the more demanding will be the re-training of end-to-end neural networks that is required upon change of input constellations (for example when new types of sensors are introduced in driverless cars). The costs for such re-training will vary, of course, with the complexity of the input and the accuracy requirements of the network. But there is a third group of limitations, turning on the fact that the output of all stochastic models is, by definition, approximative. Models of this sort can yield only the most probable output for any given input and model, and this output often falls below even the average human output. For many imaginable useful purposes, however, the output should ideally be at least as reliable as the behaviour not of the average but of a qualified subset of human reference samples; this is very hard to achieve in language-focused applications using dNNs only. We can better understand the limitations of stochastic models when we reflect on how humans interpret reality. Unlike machines, humans are able spontaneously and immediately to attribute meaning to the world they experience. This is because the human species has evolved with a complex set of dispositions to react immediately in highly specific ways to specific sorts of external stimuli. Human beings are, along many dimensions, tuned to the environments in which they live. The entities that we experience are spontaneously assigned meanings that reflect their relevance to our survival, meanings that are assigned using machinery that has been hard wired into our brains. The belief that stochastic models can learn to make decisions without benefit of prior hardwiring of this sort is as naive as the old tabula rasa theories that were once the staple of empiricist philosophers and of their empirical psychologist followers. Such views were criticized by J. J. Gibson in his ecological theory of perception gibson:1979 (Footnote: Indeed they were criticized, 200 years earlier, by Immanuel Kant in 1781 in his Critique of Pure Reason.), and they were experimentally refuted in the work on infant cognition of Carey carey:2001, Gopnik gopnik:2000, Keil keil:1989, keil:1995, Kim and Spelke kimSpelke:1999, who demonstrated that infants (and primates, povinelli:2000) possess a large body of categorical and structural knowledge about the world of solid objects long before they even start acquiring the grammar of their mother tongue leslie:1979. Indeed, it seems that language acquisition presupposes the working of a common set of ontological distinctions on the side of language learners, including the distinction between objects and processes, between individuals and categories, between natural and accidental properties of objects, and so forth. Even Bayesian models for concept learning based on similarity acknowledge (i) the need for a prior genus-individual distinction to explain the mechanics behind generalization and (ii) the existence of a prior meta-heuristic linking membership in a class to property instantiation tenenbaum:1999; tenenbaum:2001. As Rehder formulates the matter, categorization relies on inferences about the causal role of putative essences in producing observable features rehder:1999. The latter, in other words, are merely secondary, derivative; and all the naive knowledge brought to bear by the infant follows from the natural and universal supposition that things belong to classes sharing similar properties medin:1989; solomon:1999.
Even children as young as 3 years old believe that the ‘insides’ of objects are relevant in determining class membership gelman:2003; gelman:1991a and keil:1989. According to Carey and Xu carey:2001 (p. 207) experiments on object recognition suggest that there is an object tracking system in the infant– a system that tracks three-dimensional, bounded, and coherent physical entities, and fails to track perceptually specified figures that have a history of non-cohesion. And what holds of infant cognition in general holds also of infant language learning and language competence in particular, where the capability of object tracking grounds the use of nouns and pronouns. Indeed part of the background source of this empirical work on infant ontology was formed by Chomsky’s ideas on innate universal grammar chomsky:1956. Gelman and Byrnes gelman:1991 make explicit reference to these ideas when they assert that they are able to “determine how languages and conceptual systems are constrained by examining the forms and meanings that children construct, and which errors they fail to make” gelman:1991, compare millikan:2001, p. 47. For our purposes here, it is crucial that the AI applications running on today’s computers can simulate at best only small fragments of the hard-wired human capabilities revealed in such research. This means that they can simulate only small fragments of the semantics underlying human language use. As we shall see, neural networks have in this respect even more severe limitations than traditional logic-based AI approaches to the modeling of human cognition. This is because the formal ontologies used in the latter involve direct representations of the sorts of objects, processes and attributes (and associated nouns, verbs and predicates) used by human beings in perceiving, acting and speaking. Neural networks attempt to build relation-rich content of this sort out of gigantically large numbers of features represented using numerical input vectors or matrices by estimating what amounts to a very large polynomial (this is what a neural network is) with the help of an optimization procedure. This seems to be infeasible even for simple ontologies of the RDF-sort made up of structures of the type: entity A– relates to– entity B gutierrez:2018.

### 2.3 Limitations applying specifically to deep neural networks (dNNs)

As humans process sensory input data, they assign meanings to the objects and events which the data represent (and from which the sensory content originates), experiencing these objects and events as belonging to a specific sort of categorical structure. But dNNs do not use any of the target-derived properties of the input data that humans spontaneously use when they assign meaning to the data which they receive through experience. The result is a tremendous brittleness of dNN capabilities. Moosavi et al. moosavi:2017 describe how high-performance neural networks developed for image classification can be perturbed to completely misclassify images when the input material is mixed with a perturbation image. For example, what is at first correctly classified by the system as a flagpole is classified as a labrador after the system is very slightly perturbed. Perturbations of an analogous sort do not cause problems for humans at all.
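The universal perturbations studied by Moosavi et al. are constructed in a more involved way; the following is only a rough sketch of the simpler, single-image fast-gradient-sign idea, included to make concrete how a tiny additive perturbation can flip a classifier's output. The model, tensors, and class indices are placeholders, not assets from the cited work.

```python
# Rough sketch of a single-input adversarial perturbation (fast gradient sign
# method). This is a simplified stand-in, not the universal-perturbation
# construction of Moosavi et al.; `model` can be any differentiable classifier.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, true_label, eps=0.01):
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), true_label)
    loss.backward()
    # Step a small amount in the direction that increases the loss the most.
    return (x + eps * x.grad.sign()).detach()

# Usage (shapes and class indices are placeholders): a perturbation that is
# imperceptible to a human can be enough to move the predicted class from
# "flagpole" to some unrelated class such as "labrador".
# x_adv = fgsm_perturb(model, x, torch.tensor([flagpole_class_index]))
# print(model(x_adv).argmax(dim=1))
```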
Jo and Bengio jo:2017 more recently showed that dNNs work merely by learning certain surface-statistical regularities from images: the green grass that forms the typical background of a cow, for example, is contrasted with the grey of asphalt that forms the typical background of a car. They can be perturbed so easily precisely because they do not learn what the images are about and the sort of world to which the imaged objects and events belong. The same holds also of the dNNs constructed for language processing purposes. A recent paper by Chen et al. chen:2017 proves what, given what was said above, we should in any case expect, namely that dNNs lack core computational features of traditional approaches to syntactic language analysis pioneered by Chomsky using probabilistic context-free grammars chomsky:1956. As the authors show, while it is required of every valid stochastic model that it compute a valid probabilistic distribution, this condition is not in general satisfied by dNNs working from language input. But without this ability, there can be no computational representation of semantics, and, as the paper by Feng et al. feng:2018 shows, the language constituents used by dNNs to make predictions in question-answering or textual entailment tasks make no sense to humans at all in most cases. (Footnote: One example described in feng:2018 rests on the input: “In 1899, John Jacob Astor IV invested $100,000 for Tesla to further develop and produce a new lighting system. Instead, Tesla used the money to fund his Colorado Springs experiments”. The described system correctly answers the question: “What did Tesla spend Astor’s money on?” with a confidence of 0.78 (where 1 is the maximum). The problem is that it provides exactly the same answer with a similar degree of confidence as its response to the nonsensical question: “did?”.) This in turn means that dNNs, whatever it is that they are doing, cannot be modeling the semantics that need to be captured in order to extract information from texts, a crucial task in natural language processing for automation purposes. Zheng et al. zheng:2017 provide a poignant example of the low quality currently being achieved for tasks of this sort, with a low net information extraction (IE) accuracy rate. (Footnote: The reported F1-score of 0.52 seems quite high, but most of the training material is synthetic and the reported outcome only concerns information triples, which cannot be used for applied IE. The result is ‘poignant’ because the paper in question won the 2017 Prize for Information Extraction of the Association for Computational Linguistics, globally the most important meeting in the language AI field.) This reveals just how low the expectations in the field have become. The inability to compute natural language semantics is also illustrated by the recent misclassification of the United States Declaration of Independence as hate speech by the Facebook filter algorithm (<https://www.theguardian.com/world/2018/jul/05/facebook-declaration-of-independence-hate-speech>). dNNs are also unable to perform the sorts of inferences that are required for contextual sentence interpretation. The problem is exemplified by the following simple example: “The cat caught the mouse because it was slow” vs. “The cat caught the mouse because it was quick.” What is the “it” in each of these sentences? To resolve anaphora requires inference using world knowledge– about persistence of object identity, catching, speed, roles of predator and prey, and so forth.
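As a toy illustration of the kind of hand-coded world knowledge such inference draws on (everything below is invented for illustration and is of course far cruder than a real knowledge base), a single rule about chases suffices to resolve the pronoun, where a purely distributional model has nothing to anchor the decision to:

```python
# Toy sketch: resolving "it" in "The cat caught the mouse because it was
# slow / quick" using a hand-coded fact about chases (illustrative only).

# In a chase, being slow explains getting caught (prey role), while being
# quick explains succeeding at the catch (predator role).
ROLE_FOR_PROPERTY = {"slow": "prey", "quick": "predator"}

def resolve_it(predator, prey, property_of_it):
    role = ROLE_FOR_PROPERTY[property_of_it]
    return predator if role == "predator" else prey

print(resolve_it("cat", "mouse", "slow"))   # -> mouse
print(resolve_it("cat", "mouse", "quick"))  # -> cat
```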
Thus far, however, little effort has been invested into discovering how one might engineer such prior knowledge into dNNs (if indeed this is possible at all). The result is that, for all current applications, dNN models are still very weak, as they can only learn from the extremely narrow correlations available in just that set of annotated training material on the basis of which they were created– with the exception of game-like situations in which training material can be generated synthetically, esp. in reinforcement learning. And worse: because the dNNs rely exclusively on just those correlations, they are also unable to distinguish correlation from causation, as they can model only input-output-relationships in ways that are agnostic to questions of evidence and causality. They can detect, for example, that there is some sort of relationship between smoking and lung cancer. But they cannot determine the type of relation that is involved unless references to this relation and to relevant types of relata themselves form part of the annotated corpus. Unfortunately, to create the needed annotated gold standard corpora– one for each domain of interest– is hugely expensive in terms of both time and expertise. Thus to make dNNs work effectively in language applications not only are enormous collections of data required. For many applications at least– for example those involving the tracing of causality– the investment of considerable amounts of human expertise is needed also. One final problem is that, in part because they do not incorporate prior knowledge, dNNs lack transparency– the models work as black boxes, so that their engineers cannot tell how the network worked to yield any given output. This poses a major challenge in areas where we need to reproduce the behaviour of the network, for example in case of disputes over liability. Taken together, these problems rule out entirely the use of machine learning to drive mission-critical AI systems– for example systems capable of driving cars or of managing nuclear power stations. They are too brittle and unstable against variations in the input, can easily be fooled, lack quality and precision, and fail completely for many types of language understanding or where issues of liability can arise. Even at their very best, they remain approximative, and so any success they achieve is still, in the end, based on luck rather than on modus ponens.

3 Making AI meaningful again
-----------------------------

To overcome these problems, ways need to be found to incorporate prior knowledge into the AI algorithms. One attempt to do this is to enhance Bayesian Networks with an explicit relationship semantics koller:2009, which allows the model designer to build in knowledge describing entity relationships before using data to train the weights of these relationships. This reduces the learning effort by providing a rudimentary form of prior knowledge. But unfortunately, the expressivity of the resulting models is too low to represent the sorts of complex contexts relevant to human language understanding. Furthermore, they are not exact, secure, or robust against minor perturbations. They are also not transparent, and thus they are not ‘meaningful’ in the sense that humans cannot reliably understand how they work to achieve given results.
The goal of meeting this requirement is now dubbed “explainable AI”, and we believe that the most promising strategy for achieving this goal lies in building applications that work in accordance with the ways humans themselves assign meaning to the reality that surrounds them. To this end, a semantics-based representation is highly desirable that is able to deal with language as it is actually used by human beings. The representation should be able to incorporate prior knowledge based on low to medium amounts of input material of the sorts found in typical real-world situations. For humans do not find meaning in data. Rather, they find meaning in the objects and events that surround them, and in the affordances that these objects and events support. This entails a different sort of AI application, in the building of which not only mathematics and computer science play a role, but also philosophy. Part of what is needed here is to be found already in early attempts to create AI-systems under the heading of what is sometimes called ‘strong’ logic-based AI. Already in the 1960s, the use of (first-order) logic for AI modeling purposes was regarded as attractive because it was seen as enabling exact inference. (Footnote: An excellent summary can be found in russell:2014.) The most interesting example of this strong AI for our purposes here is in the work of Patrick Hayes, a philosopher who first made his name with a paper co-authored with John McCarthy, commonly accredited with having founded the discipline of AI research. The paper is titled “Some Philosophical Problems from the Standpoint of Artificial Intelligence” and it lays forth for the first time the idea behind the calculus of situations mcCarthy:1969. In the subsequent years Hayes set forth the idea of what he called ‘naïve physics’, by which he meant a theory, consisting of various modules called ‘ontologies’, that would capture the common-sense knowledge (sets of common-sense beliefs) which give humans (or robots) the capacity to reason and plan and navigate through the world hayes:1985. The theory is axiomatized using first-order logic (FOL) and Hayes proposed that something of the order of 10,000 predicates would need to be encoded if the resulting theory was to have the power to simulate human reasoning about physical objects of the sorts that are encountered by humans in their everyday lives. (Footnote: Hayes’ conception of an ontology as the formalization of our knowledge of reality continues today in the work of Tom Gruber, whose Siri application, implemented by Apple in the iPhone, is built around a set of continuously evolving ontologies representing simple domains of reality such as restaurants, movies, and so forth.) The problem with Hayes’ approach, as with strong (FOL-based) AI in general, is that to mimic even simple human reasoning in real time would require a reasoning engine that is decidable, and this implies a severe restriction on the expressiveness of the logic that can be used. Standardly, one ends up with a very weak fragment of FOL such as that encapsulated nowadays in the so-called Web Ontology Language (OWL, see below). OWL is restricted for example in that it can capture at most relational information involving two-place relations, and it has a similarly diminished quantifier-syntax. For this and many other reasons, logic-based systems have never reached the point where they were able to drive AI-applications.
They did however spawn the development of a huge body of mechanical theorem proving tools robinson:2001 and they contributed to the development of modern computational ontologies, which helped to transform biology into an information-driven discipline ashburner:2000. Both of these developments are, as we shall see, essential for the sort of logic-based AI that is being developed today.

| Property | System | Example |
| --- | --- | --- |
| Exactness | needs to be able to be exact where necessary and not always restricted to the merely approximative | in the insurance domain: automated validation and payment of a claim |
| Security | needs to avoid insecurities of the sort which arise, for example, when even slight perturbations lead to drastically erroneous outputs | in autonomous driving: avoid harmful consequences of adversarially manipulated traffic signs |
| Robustness | needs to be able to work reliably in a consistent way even given radical changes of situation and input, or to detect critical changes and report on its own inability to cope | language that is not understood by the system is sent for inspection by a human |
| Data parsimony | needs to be trainable with thousands to millions of data points (rather than billions to trillions– magnitudes which rarely occur in reality) | in the domain of business correspondence: automation of letter-answering on the basis of just a few thousand examples per class of letter |
| Semantic fidelity | needs to be able to incorporate contextual interpretations of input situations | in the domain of sentiment analytics: the Declaration of Independence should not be classified as hate speech |
| Inference | needs to be able to compute the consequences of given inputs, to distinguish correlation from causality (thus requiring the ability to reason with time and causation) | determination of required actions on the basis of text input, for example in automated processing of medical discharge summaries |
| Prior knowledge use | needs to be able to use prior knowledge to interpret situations | understanding that issuing a declaration of inability to pay implies earlier receipt of a payment request |

Table 1: Minimal desiderata for a real-world AI language-processing system

### 3.1 How philosophy is being reinserted into AI

We will show in what follows how augmenting deep neural networks and other stochastic models with certain philosophically driven elements of logic-based AI is now allowing us to create AI applications that are already solving real-world problems. We present an example of how the sort of philosophy-driven ontology machinery proposed here can be made to work. To understand its functionality, we start by giving details about the minimal requirements which we believe a real-world AI system must satisfy (listed in Table 1). These requirements cannot be satisfied by agnostic machine-learning systems alone, as they presuppose the ability to deal with the semantics of human (natural) language. They can be satisfied, we believe, only by using systems which combine components from dNNs with methods associated with traditional, logic-based AI in such a way as to allow incorporation of prior knowledge. What is the role of philosophy here? First, it was philosophers, including the mathematician-philosopher Gottlob Frege, who developed the methods which enable the expression in exact logical form of knowledge otherwise expressed in natural language.
FOL itself was invented by Frege in 1879, and since then the FOL framework has been refined and extended in order to reach the stage where it is possible to represent natural language in a formal, computable manner. (Footnote: An overview is given in boolos:2007.) Second, philosophers have from the very beginning attempted to understand how human language works, how language relates to the world, and how philosophers themselves use language in their own thinking. Think of Aristotle’s Organon and Book VII of his Metaphysics. In the 20th century, an entire branch of the discipline– called ‘analytical philosophy’– has grown up around this topic dummett:1996. We shall see, too, that the discipline of ‘formal ontology’ has in recent years achieved considerable maturity in part as a result of the influence of philosophical ideas. The last four properties listed in Table 1 can only be achieved on the basis of a formal, computable representation of prior and world knowledge using a computable representation of the natural language semantics of the system’s inputs and possibly also of its outputs. This computational representation needs two major elements: (a) a set of logical formalisms that can store and manipulate language in Turing-machines, and (b) a framework which enables one to define the meanings of the elements of the language, constituted by what are nowadays called formal ontologies. We start with the logical formalisms. Natural language is very hard to express in a logical framework, and for this a combination of different logical formalisms is needed to represent the input language (no matter whether this is used in training dNN models, in using these models in practice, or for knowledge insertion into algorithms). These logical formalisms must enable computation, for which not necessarily decidability, but robustness and completeness are needed. (Footnote: Decidability is not required because with robustness and completeness logical inference is possible but may not terminate. In practice, algorithms are stopped after a maximum computation period defined by fiat, when the case is handed to a human for decision.) Several such logical dialects are needed to cover different aspects of the input language (such as propositions, quantified entities and their relationships, temporal relationships and modalities). The requirement is that they are able to achieve, when used together, a good approximation of natural language semantics while also enabling computability and domain-specificity. (Footnote: Note that the originality of this approach compared to initiatives like, for example, CyC, stems from the restriction to small domains, the automated translation of natural language into logic and the usage of different logical dialects to represent different aspects of natural language.) The syntactical representation of FOL was standardised in 2007 as Common Logic (CL) (ISO/IEC 24707:2007, now revised as ISO/IEC 24707:2018). Combined with a set of robust and complete propositional modal logics, CL enables a rich representation and manipulation of linguistic content as well as automated machine inference (modulo the need for a time-out when specific computations take too long).

#### 3.1.1 Incorporating Ontologies

Ontologies can be divided into two types. On the one hand are domain ontologies, which are formal representations of the kinds of entities constituting a given domain of inquiry together with the relations between such entities smith:2003.
On the other hand are top-level ontologies, which represent the categories that are shared across a maximally broad range of domains– categories such as object, property, process and so forth. Each ontology is built around a simple taxonomic hierarchy in which the types of entities are related to each other by the relation of greater and lesser generality (an analogue of the subset relation that holds between the instances of such types). Domain ontologies have enjoyed considerable success in the formalisation of the descriptive content of scientific theories above all in many areas of biology (see especially the Gene Ontology, ashburner:2000), where they served initially as controlled, structured vocabularies for describing the many new types of entities discovered in the wake of the Human Genome Project. As more and more such ontologies came to be developed and applied to the annotation of more and more different types of data, the need arose to standardise these ontologies using a common top-level ontology framework, and it was in this context that there arose Basic Formal Ontology (BFO, arp:2015), which is used as shared top-level ontology in some 300 ontology initiatives (currently under development as an ISO standard under ISO/IEC: 21838-1 (Top-Level Ontologies: Requirements) and ISO/IEC: 21838-2 (BFO)). The use of a common top level allows multiple ontologies to facilitate standardised exchange between parties communicating data about entities in different domains. Through the incorporation of formal definitions they also allow the application of basic inference mechanisms when interpreting data exploiting taxonomic and other relations built into the ontology. For logic-based AI applications, more powerful ontologies are needed which reflect the full spectrum of language constituents and of their logical counterparts. They must enable the expression not only of traditional taxonomical and mereological relations but also for example of synonymy relations and reasoning at both the lexeme (single word) and phrase level. The terms in such ontologies will be defined using formulae of FOL (for example as standardized in CL). These formulae relate entities to each other via a transitive network of formulae as illustrated in the following simplified example:

Natural language input: Boats can float on water. To float, they enclose air, which gives them buoyancy.

Formal representation: boat(x) ∧ water(y) ∧ R1(x,y) ∧ air(z) ∧ (R2(x,z) ∧ buoyancy(w) ∧ R3(x,z,w)) → R1(x,y)

Here lower-case letters represent entities, the Ri represent relations, R1 = floats_on, R2 = encloses, R3 = gives (quantifiers not shown for the sake of readability).

### 3.2 Putting the philosophy-driven machinery to work

To represent in logical form the full meaning of a given complex natural language expression E in a given domain and for a given purpose, we will need algorithms which, given E, can generate a corresponding logical formula using those terms in the relevant ontology which are counterparts of the constituent simple expressions in E. These algorithms, together with a consistent set of supporting ontologies, can thereby allow the representation in machine-readable form not merely of single expressions but of entire texts, even of entire bodies of literature, in which domain-specific knowledge is communicated in natural language form.
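A very rough, self-contained sketch of what such a translation step might look like in code is given below. The ontology entries, relation names, and output format are invented for illustration; a real system would use far richer ontologies and the logical machinery described above.

```python
# Toy sketch of mapping a simple sentence triple onto ontology terms and a
# crude predicate-logic-style rendering (all entries invented for illustration).

ONTOLOGY = {
    "boat": "object",
    "water": "material",
    "air": "material",
    "buoyancy": "quality",
}
RELATIONS = {"floats_on", "encloses", "gives"}

def to_logical_form(subject, relation, obj):
    """Render a (subject, relation, object) triple as a predicate-logic string."""
    if subject not in ONTOLOGY or obj not in ONTOLOGY or relation not in RELATIONS:
        raise ValueError("term not covered by the ontology")
    return f"{subject}(x) ∧ {obj}(y) ∧ {relation}(x, y)"

print(to_logical_form("boat", "floats_on", "water"))
# -> boat(x) ∧ water(y) ∧ floats_on(x, y)
```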
To see how philosophy is already enabling applied science along these lines, let us look at a real-world example of an AI automaton used to automatically generate expert technical appraisals for insurance claims. Today, such claims are validated by mid-level clerks, whose job is to compare the content of each claim– for example the line items in a car repair or cardiologist bill– with the standards legally and technically valid for the context at issue. Deviations from the standard are detected and corresponding amounts are subtracted from the indemnity amount with a written justification for the reduction. Digitalization has advanced sufficiently far in the insurance world that claims data can be made available in structured digital form (the bill’s lines are stored as separate attributes of a table in a relational database). On the other hand, however, the relevant texts specifying standards have until recently been represented only as free text strings. Now, however, by using technology along the lines described above it is possible to automate using AI both the digital representation of these standards and the results of the corresponding comparisons between standards and claims data. To achieve this, we developed an application that has the capability to do all of the following:

1. Recognise the exact type of bill and understand the context in which it was generated
2. Understand the type and contents of the bill (both the textual and the structured, quantitative content)
3. Transform the bill’s contents into a logico-mathematical representation
4. Identify the pertinent standards by querying the corresponding insurance knowledge base
5. Determine from 3. and 4. the appropriate repair benchmark for a claim repair of the relevant type and situation and determine the correct procedure to fulfil the claim. Claim, benchmark and procedure are here all expressed in mathematical logic.
6. Compare the bill to its benchmark by identifying departures from logical equivalence of line items in the claim from those in the benchmark.
7. Subtract the amounts corresponding to the items on the bill that do not match the reference
8. Output the justification for the subtractions

As to 1., human beings are able to make such assignments of context spontaneously– both for entire artefacts such as bills and for the single lines which are the constituent strands within such artefacts. Human beings live in a world which is meaningful in precisely this respect. The ability to make such assignments of context is, as we saw, difficult to replicate on the part of machines. As to 2., for the textual content in a bill to be interpretable we need ontologies covering both the objects to which reference is made and the contexts and information artefacts associated therewith. We need also formal definitions of the relevant characteristics of these objects, of the terms used in the relevant insurance rules, and so forth. Together, these constitute the ontology Ω mentioned below. The ontologies are built by hand, but involve a minimal amount of effort for those with expertise in the relevant domain (here: the contents of repair bills). These definitions are entailed by the bill and benchmark texts, which are automatically processed into logical representations without human interference.
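Before turning to the logical formulation of these steps, a deliberately crude, self-contained sketch of the comparison-and-subtraction idea may help. The context names, line items, and amounts below are invented, and a simple table lookup stands in for the ontology and theorem-proving machinery that the real system uses (described next).

```python
# Highly simplified, illustrative sketch of steps 4-8: compare a bill's line
# items against a context-specific benchmark and subtract what the benchmark
# does not cover. All data and names are invented.

BENCHMARKS = {
    # context -> permissible line items with maximum reimbursable amounts
    "car_repair_rear_door": {"door_panel": 420.0, "paint": 180.0, "labour": 300.0},
}

def check_claim(context, bill_items):
    benchmark = BENCHMARKS[context]
    reduction, justifications = 0.0, []
    for item, amount in bill_items.items():
        allowed = benchmark.get(item, 0.0)
        if amount > allowed:
            reduction += amount - allowed
            justifications.append(f"{item}: claimed {amount}, benchmark allows {allowed}")
    return reduction, justifications

bill = {"door_panel": 500.0, "paint": 180.0, "detailing": 90.0}
print(check_claim("car_repair_rear_door", bill))
# -> (170.0, ['door_panel: claimed 500.0, benchmark allows 420.0',
#             'detailing: claimed 90.0, benchmark allows 0.0'])
```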
Viewed logically, the steps are as follows:

text ⇝ Γ        (1)

where ⇝ means automated translation, and Γ is the set of k-order intensional logic formulae generated by the translation. (Footnote: ‘k-order’ means that predicates of the logic can predicate over other predicates arbitrarily often. ‘Intensional’ means that the range of predication in the logic is not restricted to existing entities.)

Γ ↷ Δ        (2)

where Δ is the set of (first-order or propositional modal) logic formulae automatically generated (↷) from Γ.

Δ ⊢ ϕ_i ∈ Ω, ∀ i = 1…n        (3)

where ⊢ means: entailment using mechanical theorem proving, and ϕ_i is one of n human-authored domain-specific formulae entailed by Δ. Ω is an ontology comprising human-authored domain formulae ϕ_i. Note that Ω is specific to a type of text (for instance repair bills) and to a pertinent context (for instance the regulation under which the repair occurs). Δ ∩ Ω ≠ ∅ only holds if the input text matches the type and context of the ontology. In total, the process looks like this:

text ⇝ Γ ↷ Δ ⊢ ϕ_i ∈ Ω, ∀ i = 1…n

Where the only manual input– creation of Ω– has to be performed only once, at system design time. As the case of BFO shows, philosophy is part of what is required to create in consistent fashion the successive layers of ontologies required to realize a system of the sort described. Knowledge of logic and adequate representation of reality is used to select appropriate logical dialects for different contexts and situations: temporal or deontic phenomena require a different logic than more basic phenomena which can be rendered using predicate logic. The mechanism described in equation (1) is used for steps 1, 2 and in part for steps 3 and 4 of the above list. Of course, these steps are realised not in a form as philosophical but rather as software of a sort which comprises both stochastic models and mechanical theorem provers. But their configuration, which conveys what is called the business process, is created using standard methods (including methods from linguistics) to formulate the logic of this process. For step 4, a set of context-dependent permissible benchmarks must be created. Its elements consist of a structured set of formulae as well as quantitative (typed variable) information. When a new bill arrives, the system performs steps 1-3 and then obtains an adequate benchmark from the computer’s memory. It then instantiates its typed variables in order to identify the values of those parameters relevant to the given situation, as exemplified in the following: consider (i) a car repair reference which specifies the replacement of the door, then the instantiation will be the actual part number for the door of the given car model. Or (ii) on a cardiologist bill for a coronary angiography, age, sex and diagnostic status of the patient are variables that would be used for parametrisation to identify where the reference matches the current situation and where it fails to match. The remaining operations of the machinery are purely technical — a matter of software engineering and mathematics. The philosophy-driven AI application is now used by several German insurance companies, dental clinics and a leading academic hospital.
It meets all the requirements listed in Table 1: * Exactness – it has an error rate below 0.3%, below the best human error rate of 0.5%, because it will detect when it cannot entail any formula from the text. * Security – it is secure, as its stochastic model never works on its own; any mis-reaction to perturbing input would be detected by the logical model that runs immediately after it. * Robustness – it is robust, as it will always realise when it cannot interpret a context properly. It requires very little data for training (data parsimony), as the semantic training space it provides separates the data points so well. * Fidelity – it has semantic fidelity. It not only permits inference but is based on it, and it can readily incorporate prior and world knowledge in both stochastic (Bayesian network) and deterministic (logical) form. However, it is a very specific system, far away from the unrealistic notion of general artificial intelligence. It is rather an exact, philosophy-driven system of artificial instincts, in the sense in which Stanislaw Lem coined the term in his essay "Weapon systems of the 21st century" (1983). It demonstrates the ability of philosophy to make applied sciences work. 4 Conclusion ------------- The type of philosophy-driven AI application described above is not a one-off example; such systems are being successfully used in a range of different domains. Moreover, the method in question is generalizable to data of many different sorts, in principle– as the breadth of the available ontologies is extended and the sophistication of the algorithms is enhanced– without limit. We believe that these facts have implications beyond the merely technical (and, associated therewith, pecuniary). For they point to a new conception of the role of philosophy in human affairs. Many, especially in the twentieth century, have proclaimed the death of philosophy. Others have seen in philosophy a merely compensatory role– whereby philosophers might offer some sort of substitute for those traditions which in former times gave human beings the ability to interpret their lives as meaningful but which have since been eroded through the spread of modern science and technology. The ways in which human lives are meaningful– are full of meaning– did indeed play a role in our argument, but we do not see philosophy's contribution as merely compensatory. Rather, we view the question of the role of philosophy from a broader historical perspective, drawing on the ways in which, beginning already with the Greeks, philosophers have helped to lay the groundwork for social upheavals of the sorts associated, for example, with the birth of democracy or of market institutions, of new artifacts such as Cartesian coordinates, and sometimes of entire scientific disciplines. In this light we have shown in this paper that one place to look for a role for philosophy in the present day lies in the way philosophy can be used– and is already being used– to strengthen and enable the applied sciences in the digital era. More specifically, we have demonstrated that philosophy can be of value in the creation of useful and realistic artificial intelligence applications. ###### Acknowledgements. We would like to thank Prodromos Kolyvakis for his valuable review of the manuscript.
d41521bb-09bc-4a96-b44c-cfe93f38bc1b
trentmkelly/LessWrong-43k
LessWrong
World-models containing self-models One problem in theoretical AI that sometimes comes up is the problem of finding ways for AI systems to model themselves, or at least to act well as if they had models of themselves. I can see how this is a problem for uncomputable agents like AIXI (though I think this problem is largely solved by reflective oracles), but it doesn't seem to me to be a very hard problem for computable agents -- they seem to me to be able to learn models of themselves along with the rest of the world. I'll give an example of self-modeling trouble that some kinds of systems can run into, then my reasons for not thinking this is a big problem (though I'm by no means sure!). A problem for model-based RL Suppose that we're using model-based RL: our system learns a model that maps states of the world and actions the system takes to next states and rewards. This learned model is used to choose actions by building a tree of possible sequences of actions the system could take and the consequences that the model predicts would result; the path with the highest expected reward is chosen. The situation our system is in will be as follows: * The system is learning to perform some episodic RL task; at the end of each episode, the environment is reset, then another episode is run. * In this environment, the agent has an action that gives a moderately large reward, but that forces the agent to take a null action for the rest of the episode. The interesting thing here is that the system's model won't learn anything about the bad side effect of this action, even if it impacts the system's total reward a lot. This is because the model maps (state, action) → (next state); it learns what environmental state the bad action leads to, and after that it learns a lot about the effects of the null action, but it doesn't learn that the bad action leads to the null action. Furthermore, the tree search will continue to assume that the system will be able to choose whatever action it wants, even when the sys
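A toy simulation (my own construction, not code from the post) makes the failure mode concrete: a "trap" action pays a moderate reward but forces the null action for the rest of the episode, and a learned action model that has no way to represent the forcing leads a naive planner to overvalue it. All numbers and names below are made up for illustration.

```python
# Toy illustration: a planner using a learned action model overvalues a "trap"
# action because the model cannot represent "you will be forced to take the
# null action afterwards".

HORIZON = 5

def true_env(trapped: bool, action: str):
    """Ground truth: once trapped, every action is silently replaced by 'null'."""
    if trapped:
        return True, 0.0
    if action == "trap":
        return True, 3.0
    return False, 1.0          # 'work'

# The learned part of the model, reduced to per-action reward for brevity
# (next-state modelling omitted); the point is that nothing in it can encode
# that 'trap' removes the agent's future choices.
learned_reward = {"work": 1.0, "trap": 3.0, "null": 0.0}

def planned_return(plan):
    return sum(learned_reward[a] for a in plan)

def executed_return(plan):
    total, trapped = 0.0, False
    for a in plan:
        trapped, r = true_env(trapped, a)
        total += r
    return total

greedy_plan = ["trap"] + ["work"] * (HORIZON - 1)   # what naive tree search prefers
honest_plan = ["work"] * HORIZON

print(planned_return(greedy_plan), executed_return(greedy_plan))  # 7.0 3.0
print(planned_return(honest_plan), executed_return(honest_plan))  # 5.0 5.0
```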
550b0a47-f35f-430f-8866-4f249299a3b4
trentmkelly/LessWrong-43k
LessWrong
Hard problem? Hack away at the edges. Wei Dai offered 7 tips on how to answer really hard questions: * Don't stop at the first good answer. * Explore multiple approaches simultaneously. * Trust your intuitions, but don't waste too much time arguing for them. * Go meta. * Dissolve the question. * Sleep on it. * Be ready to recognize a good answer when you see it. (This may require actually changing your mind.) Some others from the audience include: * Transform the problem into a different domain. * Ask people who have worked on similar problems. * Decompose the problem into subproblems. (Analysis) I'd like to offer one more technique for tackling hard questions: Hack away at the edges. General history books compress time so much that they often give the impression that major intellectual breakthroughs result from sudden strokes of insight. But when you read a history of just one breakthrough, you realize how much "chance favors the prepared mind." You realize how much of the stage had been set by others, by previous advances, by previous mistakes, by a soup of ideas crowding in around the central insight made later. It's this picture of the history of mathematics and science that makes me feel quite comfortable working on hard problems by hacking away at their edges. I don't know how to build Friendly AI. Truth be told, I doubt humanity will figure it out before going extinct. The whole idea might be impossible or confused. But I'll tell you this: I doubt the problem will be solved by getting smart people to sit in silence and think real hard about decision theory and metaethics. If the problem can be solved, it will be solved by dozens or hundreds of people hacking away at the tractable edges of Friendly AI subproblems, drawing novel connections, inching toward new insights, drawing from others' knowledge and intuitions, and doing lots of tedious, boring work. Here's what happened when I encountered the problem of Friendly AI and decided I should for the time being do research on the p
22fcaa32-e9cd-484b-9a44-ccaf09ac958e
trentmkelly/LessWrong-43k
LessWrong
Alignment Newsletter #45 Find all Alignment Newsletter resources here. In particular, you can sign up, or look through this spreadsheet of all summaries that have ever been in the newsletter. Highlights Learning Preferences by Looking at the World (Rohin Shah and Dmitrii Krasheninnikov): The key idea with this project that I worked on is that the state of the world is already optimized for our preferences, and so simply by looking at the world we can infer these preferences. Consider the case where there is a vase standing upright on the table. This is an unstable equilibrium -- it's very easy to knock over the vase so it is lying sideways, or is completely broken. The fact that this hasn't happened yet suggests that we care about vases being upright and intact; otherwise at some point we probably would have let it fall. Since we have optimized the world for our preferences, the natural approach is to model this process, and then invert it to get the preferences. You could imagine that we could consider all possible reward functions, and put probability mass on them in proportion to how likely they make the current world state if a human optimized them. Basically, we are simulating the past in order to figure out what must have happened and why. With the vase example, we would notice that in any reward function where humans wanted to break vases, or were indifferent to broken vases, we would expect the current state to contain broken vases. Since we don't observe that, it must be the case that we care about keeping vases intact. Our algorithm, Reward Learning by Simulating the Past (RLSP), takes this intuition and applies it in the framework of Maximum Causal Entropy IRL (AN #12), where you assume that the human was acting over T timesteps to produce the state that you observe. We then show a few gridworld environments in which applying RLSP can fix a misspecified reward function. Rohin's opinion: In addition to this blog post and the paper, I also wrote a post on the Alignment Forum e
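The "simulate the past" idea can be illustrated with a back-of-the-envelope Bayesian update; the numbers below are invented, and this is not the RLSP algorithm itself (which works within the Maximum Causal Entropy IRL framework). The observed state– an intact vase– is treated as evidence about which reward function a human was optimizing.

```python
# Back-of-the-envelope version of "the state is evidence about the reward":
# P(reward | observed state) ∝ P(observed state | human optimized reward) * prior.
# The likelihoods below are invented for illustration.

priors = {"cares_about_vases": 0.5, "indifferent_to_vases": 0.5}

# Probability that, after T timesteps of a human acting under each reward
# hypothesis, we would observe the vase still intact and upright.
likelihood_intact = {"cares_about_vases": 0.95, "indifferent_to_vases": 0.20}

unnorm = {h: priors[h] * likelihood_intact[h] for h in priors}
z = sum(unnorm.values())
posterior = {h: p / z for h, p in unnorm.items()}

print(posterior)
# {'cares_about_vases': ~0.83, 'indifferent_to_vases': ~0.17}
```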
dfcdabc3-de34-47df-ba82-faae25129f24
trentmkelly/LessWrong-43k
LessWrong
Emotionally Confronting a Probably-Doomed World: Against Motivation Via Dignity Points This article was written in ignorance of the alignment community’s reaction to Eliezer’s “Death with Dignity” post. The first part of this article responds to how I suspect some people reacted to that post, while the second part is my take on the post itself. I write against defeatism; I write against decline; I write against that internal slumping that sneaks in on the coat-tails of bad news. I do not dispute the balance of evidence—are we doomed, or not? Let us simply assume that we live in a relatively doomed world: It’s very improbable that we solve AI alignment in time. What next? We have been taught what comes next, in this kind of story. Since, by assumption, we won’t receive a “happily ever after”, we infer we are in a tragedy. Realizing this, we are disappointed and sad. The fellowship breaks and scatters, its once-proud and vibrant members downtrodden. And then occurs a miracle which we could have turned to our advantage, but for our civilizational incompetence and our own stupidity—smart enough to build AI, yet too dumb to align it. And then the laugh track plays. And then we die! The end! As AI_WAIFU said: “Fuck. That. Noise.” We do not live in a story. We can, in fact, just assess the situation, and then do what makes the most sense, what makes us strongest and happiest. The expected future of the universe is—by assumption—sad and horrible, and yet where is the ideal-agency theorem which says I must be downtrodden and glum about it?  Against retreat In response to how I suspect some people reacted to "Death with Dignity." Suppose we have ten years left. Some may consider a very natural response—to retreat. Spend the remaining time with your family and your friends, working whatever job you want—just do whatever. But... Um... This sounds like an awful plan? I’d feel like a lazy bum, like a cornered animal. And for those of us doing direct work, this plan sounds stupid. It sounds like a great way to throw away worlds where we would receive a mira
e6a9e2cf-5096-408d-92ef-c4b42328bbb3
trentmkelly/LessWrong-43k
LessWrong
The Results of My First LessWrong-inspired I Ching Divination Part A - Exploratory Research After reading "Steelmanning Divination," I was inspired to try it out. My question was "Should I begin using the I Ching for divination?" and I used IChingOnline to interpret my results. I obtained hexagram 24, and I'm going to recapitulate my responses to the three interpretations on the site. At the end, I'll do a summary reflection. In "Part B," I heavily critique this whole experience. I think that is the most important part of this post. 1. Thunder regenerates deep within Earth's womb: Sage rulers recognized that the end of Earth's seasonal cycle was also the starting point of a new year and a time for dormancy. They closed the passes at the Solstice to enforce a rest from commerce and activity. The ruler himself did not travel. You have passed this way before but you are not regressing. This is progress, for the cycle now repeats itself, and this time you are aware that it truly is a cycle. The return of old familiars is welcome. You can be as sure of this cycle as you are that seven days bring the start of a new week. Use this dormancy phase to plan which direction you will grow. Response: The opening imagery does bring up a subtle emotional response. I make connections to my life: I am working and in school for significantly less hours this season. Several years ago, I was involved in a local spiritual community, which I broke away from entirely to set out on a new phase of life, one that I planned to be more rational, aligned with my values, and accomplished. I was introduced to it in the first place by being unexpectedly asked to do a tarot reading for a woman I was on a first date with; she decided to invite me to a meeting of the spiritual community, and I stayed for several years. Hence, the second paragraph seems apropos, especially since I do not feel at all as though I'm "regressing," but rather that I'm using it in the context of new values, understandings, and purposes. It is in fact welcome. I am also in the mid
1e1c84c8-01ea-47c3-8043-ae2d76cde37b
trentmkelly/LessWrong-43k
LessWrong
2018 AI Safety Literature Review and Charity Comparison
e03dde7b-d160-4dcb-a1b6-89fb4f9b329a
trentmkelly/LessWrong-43k
LessWrong
Should students be allowed to give good teachers a bonus? Alice is a STEM student taking general chemistry, linear algebra, and intro to computer programming. At the end of her term, the school emails her an online form with a link to her Student-Allocated Bonus. For taking three classes, Alice gets 3 points. She can divide them up however she wants between her teachers from this quarter. Her favorite teacher was Prof. Bruin, her math teacher. Her least favorite teacher was Prof. Cameron, her computer programming teacher. Prof. Dorry, her chemistry teacher, was all right. She gives Prof. Bruin 2 points, Prof. Dorry 1 point, and Prof. Cameron 0 points. Jack is her classmate, but he forgets to fill out the SAB form. The system divides his points equally, 1 per teacher. Once the due date for submission has passed, the system totals up the points that students have allocated to each teacher. Each teacher gets $5 per point. It's a modest but not insignificant bonus. Prof. Bruin's 30 students award him 50 points, so he earns $250 extra. At the local college, teachers make about $3,000 per class, so that's an 8% bump in his earnings. People were worried at first that this was basically the same thing as students bribing their teacher. What if Jack promises his math teacher that he'll give him all his points on the SAB in exchange for "going easy on his grading?" The beauty of the anonymized points system is that it makes this impossible to verify. Which student gave which points to what teacher is anonymous. Since points can only be given in integers, students can't do any shenanigans like giving their teacher some weird decimal point number of points (2.6893 points) so that their teacher will see the decimal number and know who the money was from. And the SAB is assigned after all grades are due at the end of the quarter. The SAB is a modest portion of the teacher's total salary. It's not a club that students can hold over a teacher's head. $5 is not a meaningful bribe. It's just one small but non-zero part of the incentiv
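The tallying rule in the story is simple enough to write down directly; here is a sketch under the stated assumptions (one point per class taken, an equal split for non-submitters, $5 per point), with hypothetical data.

```python
# Sketch of the Student-Allocated Bonus tally described above.
POINT_VALUE = 5                      # dollars per point
teachers = ["Bruin", "Dorry", "Cameron"]
points_per_student = len(teachers)   # one point per class taken

# Each submission maps teacher -> integer points; None means "did not submit".
submissions = {
    "Alice": {"Bruin": 2, "Dorry": 1, "Cameron": 0},
    "Jack": None,                    # forgot to fill out the form
}

totals = {t: 0.0 for t in teachers}
for student, alloc in submissions.items():
    if alloc is None:
        for t in teachers:           # default: points divided equally
            totals[t] += points_per_student / len(teachers)
    else:
        assert sum(alloc.values()) == points_per_student
        for t, pts in alloc.items():
            totals[t] += pts

bonuses = {t: pts * POINT_VALUE for t, pts in totals.items()}
print(bonuses)   # {'Bruin': 15.0, 'Dorry': 10.0, 'Cameron': 5.0}
```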
3638b133-4c7c-4c35-8742-5ef72d7af99e
StampyAI/alignment-research-dataset/arxiv
Arxiv
O2A: One-shot Observational learning with Action vectors I Introduction --------------- Learning new tasks has always been challenging for robotic systems whether it is a simple mobile manipulator or a complex humanoid robot. Programming manually step by step is one of the earlier solutions to this problem, but was highly inefficient and not suitable for general purpose consumer robotics systems as all consumers are not typically skilled in programing the robots. Learning from Demonstrations (LfD) [[1](#bib.bib1)] was then presented as a solution to this problem which required only a demonstration of the task to be learned. The robot will learn to perform the task by looking at the demonstration. Since then, LfD has become an important topic for roboticists. Even though LfD has been studied widely for a long time, most of the previous works have stayed within the ’imitation learning’ paradigm [[2](#bib.bib2)], where demonstrations are made from an egocentric viewpoint, either visually or kinesthetically. This requires the inconvenience of on-person data collection and means the rich source of third-person demonstrations available on the internet cannot be used. Therefore here we study the problem of learning from demonstration using a different paradigm, where the demonstrations are viewed from a third person point of view. Hence in this paper, we refer to it as ’observation learning’ [[3](#bib.bib3)], to denote the unique nature of how the demonstrations are viewed, that is from an observer’s/third person’s point of view. Fig [1](#S1.F1 "Fig. 1 ‣ I Introduction ‣ One-Shot Observation Learning") depicts the general framework of observation learning. The observation learning problem typically involves a demonstrator and a learning agent and two processes which are the observation & understanding and learning processes. ![](https://media.arxiv-vanity.com/render-output/6614015/observationlearning.png) Fig. 1: The observation learning problem * Demonstrator: It is the object that performs the demonstration of the activity/task. This could be a human or even another robot. However in this paper we limit the scope of the problem to where the demonstrator is a human. * Agent: Agent is the object that learns a particular task from observing the demonstrator. A mobile manipulator robot is used as the learning agent in this paper. * Observation & understanding: This is the process by which the agent views the demonstrator and understands the observed demonstration. Even though the observation could be made using any sensory modality, we currently study only the visual observation process. * Learning the controls: In this process the agent learns to perform the task using the demonstrations it has observed. Once learned, the agent then performs the same task. In this paper we present a novel one shot learning approach for addressing the problem of observation learning. Our first contribution is in one shot learning, where a new task is learned from a single demonstration. To the best of our knowledge this is the first work that tackles the problem of one shot observation learning which does not requires a large number of demonstrations of a task or closely related tasks. The second contribution is in using a feature representation of video clips that focuses on the depicted action, yet is partially invariant to viewpoint, object properties, morphology of the manipulator, and scene background. 
We obtain such a feature representation from the convolutional feature-encoding stage of an activity classifier, pre-trained on a large activity dataset. The necessary invariances come from pre-training the activity classifier on activities seen from a range of viewpoints, with actors having different body shapes, and varying backgrounds. Using this feature representation, a reward is generated which directly reflects the similarity of the actions performed by the demonstrator and the robot. This reward is then used to guide learning algorithms to learn robotic controls for carrying out the demonstrated tasks. The next sections are arranged as follows: section [II](#S2 "II Related Work ‣ One-Shot Observation Learning") outlines related works in observation learning, section [III](#S3 "III Proposed method ‣ One-Shot Observation Learning") formulates the problem and describes the proposed method, section [IV](#S4 "IV Experiments and Results ‣ One-Shot Observation Learning") shows the experiments and results are discussed and finally section [Acknowledgement](#Sx1 "Acknowledgement ‣ One-Shot Observation Learning") concludes the presented work. Ii Related Work ---------------- Research in the field of LfD, has seen a paradigm shift from imitation learning to observation learning recently, due to the advances in the field of perception based robotics learning. The problem of observation learning can be divided into two parts: object based and implicit observation learning. #### Ii-1 Object observation learning In object based observation learning [[4](#bib.bib4)][[5](#bib.bib5)][[6](#bib.bib6)][[1](#bib.bib1)][[7](#bib.bib7)], explicit trackers and object detectors are used to detect the objects and understand their interactions in a task demonstration video. However these methods have a limited scope. When employing these methods the items to be tracked or detected should be known beforehand and only demonstrations using these items can be learned. #### Ii-2 Implicit observation learning In implicit observation learning no trackers or object detectors are used. The system is responsible for implicitly learning required features to be tracked in the demo which are important for learning the task. Recently this type of observation learning has gained popularity, with the advancement of deep learning based computer vision methods [[8](#bib.bib8), [9](#bib.bib9)]. This enables implicit extraction of unique features which could represent the task being demonstrated as required for observation learning. One of the first works along these lines is from [[10](#bib.bib10)], where they address the problem by generating domain agnostic features using GANs [[11](#bib.bib11)]. However, it requires access to expert and non-expert policies. Also it directly optimizes for invariance between only two viewpoints, whereas in real world scenarios the demonstration can come from any viewpoint. Different approaches towards observation learning are proposed in [[12](#bib.bib12)] and [[13](#bib.bib13)]. In [[13](#bib.bib13)] an unsupervised learning method is used to learn the representation of the demonstration videos. This is achieved by employing a label free training method which exploits the temporal coherence of raw unlabeled videos for training. These features are then used to generate a reward signal that is used for learning the controls. Similarly in [[12](#bib.bib12)] a context translation method is used to translate the demonstrations to the observers context. 
A control policy is then learned by minimizing the distance to this context translated demonstrations. Although the method was shown to work for severals tasks it doesn’t consider scenarios where there are variances in the morphology of the manipulator. Also all of these methods require demonstrations in large numbers typically ranging from hundred to thousand samples per task for learning. This shortcoming is addressed in [[14](#bib.bib14)] using a meta learning approach. They leverage the prior knowledge of learning closely related tasks to learn a new task from a single demonstration. However it still requires hundreds of demonstrations from closely related tasks for learning a new task. The presented method in this paper is an implicit observation learning method. Unlike the existing methods, our approach requires only a single demonstration to learn a task. Also our method could learn tasks irrespective of variations between demonstration and learning conditions. Hereafter, the term observation learning will refer to implicit observation learning unless mentioned otherwise. Iii Proposed method -------------------- The problem of observation learning can be divided into two steps: creating a meaningful representation of the demonstration and learning of the robotic controls required to perform the demonstrated task. The first step is the representation of the demonstration using unique features. Then a learning algorithm is used to derive a mapping to the controls of the robotic system to perform the demonstrated task. Mathematically, the problem can be formulated as follows: Let D be the demonstration video of a task to be learned consisting of t frames such that D= (x1,...xt) where xi denotes each frame and let FD∈Rn be the environment invariant n-dimensional feature vector extracted from the demonstration video D. Then a learning algorithm L is used to infer a mapping M from FD to the control sequence U={u1,..um}, where U could be a sequence of torques, joint positions or velocities of the robotic system. In this paper we have used joint positions. The presented method for one shot observation learning is summarized in Fig [2](#S3.F2 "Fig. 2 ‣ III Proposed method ‣ One-Shot Observation Learning"). First the feature vectors (FD and FR) are extracted from the video demonstration and robot actions (both observed from the egocentric viewpoint of the robot). Then the reward function calculates a reward r from these features. This reward value is then used by a learning algorithm (L) to learn the controls U which drives the robot to perform the demonstrated task. ![](https://media.arxiv-vanity.com/render-output/6614015/proposed_new.png) Fig. 2: Overview of the proposed method ### Iii-a Activity feature extraction ![](https://media.arxiv-vanity.com/render-output/6614015/activitynetexp_croped.png) Fig. 3: Comparison of reward values from proposed and baseline methods Activity features can be described as the features that uniquely represent the activity/task being carried out in a video. These features also should provide a compact representation of the activity, with the right emphasis on both the end goal and the path followed by the manipulator during the demonstration. The proposed method focuses On extracting such a feature representation (FD) of the demonstration video (D) by using a deep learning based activity recognition technique. 
The feature representation is obtained from the convolutional feature-encoding stage of an activity classifier (a deep neural network model) pre-trained on a large activity dataset. The necessary invariances come from pre-training the activity classifier on activities seen from a range of viewpoints, with actors having different body shapes, and varying backgrounds. Thus, these features enable the learning algorithm to learn robotic controls not to blindly replicate the demonstrated task but to complete the task in a more semantically meaningful way. ### Iii-B Learning the controls Here we learn a mapping M from the visual feature representation FD to the controls U. The guidance for the learning algorithm is provided by reward signals. The reward (r) signals are obtained by directly comparing video of the demonstrated task from a third-person viewpoint with video of the robot-executed actions from an egocentric (robot) viewpoint. The reward signal generation is explained below. Let FD be the activity feature extracted from the video demonstration, and let FR be the activity feature extracted from the video of the robot-executed actions as observed from the egocentric viewpoint of the robot. We calculate the reward signal r as: d = ||FD − FR||₂, r = f(d) (1), where d is the Euclidean distance between the feature vectors and f is a function of d. The reward is directly proportional to the similarity of the feature representations obtained from observation of the demonstration and of the robot action. The learning algorithm will learn a mapping to the controls (U) of the robotic system such that it maximizes the reward, thereby making the robotic movements close to the demonstrated actions. Thus this perception-based reward guides the algorithm to control the robotic actions to carry out the demonstrated tasks. In this paper, reinforcement learning (RL) and stochastic trajectory optimization are used as the learning algorithms in the simulated and real-robot experiments respectively. #### Iii-B1 Reinforcement learning The RL algorithm used is deep deterministic policy gradient [[15](#bib.bib15)] (DDPG). The states are the visual observations of the environment (as observed by the robotic system). Since feeding raw RGB pixels as states to the RL algorithm is not efficient, we make use of a VGGNet pre-trained on ImageNet for converting raw RGB observation images into visual state features. The 4608-dimensional feature vector obtained from the last convolutional layer of the VGG-16 network is used as the state representation for the RL algorithm. The actions generated are the robotic controls. We use r = −d as the reward signal. #### Iii-B2 Stochastic trajectory optimization We use stochastic trajectory optimization [[16](#bib.bib16)] as the learning algorithm for the real-robot experiments to generate an optimal sequence of controls. The optimal control problem is defined as a Hamilton-Jacobi-Bellman partial differential equation (PDE). We then find the optimal sequence of controls U that enables the robot to perform the demonstrated task via forward sampling of trajectories [[17](#bib.bib17)]. We define the cost function C to be minimized as C = r². ### Iii-C Network Architecture and Dataset In this paper we use the C3D [[18](#bib.bib18)] activity recognition network and the UCF101 activity dataset [[19](#bib.bib19)].
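As a minimal illustration of equation (1), the sketch below computes d = ||FD − FR||₂ and uses r = −d as in the DDPG setup; the feature extractor is stubbed out with a deterministic random vector, since the real features are the 8192-dimensional activations of a pre-trained C3D network, and the clip shapes are illustrative assumptions.

```python
import numpy as np

def activity_features(video_clip: np.ndarray) -> np.ndarray:
    """Stand-in for the C3D feature encoder; the real system returns the
    8192-dimensional activation of the last convolutional layer."""
    rng = np.random.default_rng(abs(hash(video_clip.tobytes())) % (2**32))
    return rng.standard_normal(8192)

def reward(demo_clip: np.ndarray, robot_clip: np.ndarray) -> float:
    """Equation (1): d = ||F_D - F_R||_2, with r = -d as in the RL experiments."""
    f_d = activity_features(demo_clip)
    f_r = activity_features(robot_clip)
    d = float(np.linalg.norm(f_d - f_r))
    return -d

demo = np.zeros((16, 112, 112, 3), dtype=np.uint8)     # 16-frame clip (assumed size)
robot = np.ones((16, 112, 112, 3), dtype=np.uint8)
print(reward(demo, robot))    # more negative = less similar
```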
The C3D network is a 3D convolutional neural network consisting of 8 convolutional layers, 5 max pooling layers and 2 fully connected layers followed by a softmax output layer. All 3D convolution kernels are 3x3x3 kernels with stride 1. The UCF101 is the action recognition dataset consisting of 13320 realistic action videos, collected from YouTube, having 101 action categories. The videos have large variations in camera motion, object appearance and pose, object scale, viewpoint, scene background, illumination conditions, etc, thereby providing a suitable dataset for the proposed method. In the proposed method, first the C3D network is trained on the UCF101 dataset for activity recognition. After the training, the fully connected layers are removed and the output from the last convolution layer an 8192 long feature vector, is used as the activity feature. Iv Experiments and Results --------------------------- Our experiments aim to study two questions: 1. Can the activity feature extraction method presented provide an efficient reward signal that indicates the similarity between the given pair of activity videos ? 2. Can this reward be used to learn a new task by observing a video demonstration ? Experiments were conducted both in simulated and real world environments ### Iv-a Evaluation of rewards extracted using activity features We first examine how well our reward function could indicate the similarity between a pair of activity videos. We consider two activities: pushing and pouring as shown in Fig. [3](#S3.F3 "Fig. 3 ‣ III-A Activity feature extraction ‣ III Proposed method ‣ One-Shot Observation Learning"). The pushing activity video is collected in a lab setting and the pouring activity video is taken from a pouring dataset [[13](#bib.bib13)]. These videos are then systematically altered to vary viewpoint and object color. Two variations in viewpoint are simulated by rotating each video frame by 900 and 1800. Variations in object color is introduced by interchanging the red and blue color planes with each other. This augmentation gives synced videos of the same activity but having variations in viewpoints and object colors. The pushing video (Fig. [3](#S3.F3 "Fig. 3 ‣ III-A Activity feature extraction ‣ III Proposed method ‣ One-Shot Observation Learning")-R) is subject to augmentations for viewpoint and color variations (Fig. [3](#S3.F3 "Fig. 3 ‣ III-A Activity feature extraction ‣ III Proposed method ‣ One-Shot Observation Learning")-A,B,C,and D). However, the pouring video (Fig. [3](#S3.F3 "Fig. 3 ‣ III-A Activity feature extraction ‣ III Proposed method ‣ One-Shot Observation Learning")-1) is subject to only augmentations for color variation (Fig. [3](#S3.F3 "Fig. 3 ‣ III-A Activity feature extraction ‣ III Proposed method ‣ One-Shot Observation Learning")-2), as rotated pouring videos do not make semantic sense. Thus the test set has a total 7 videos with 5 pushing videos (2 angles x 2 colors, and the original video) and 2 pouring videos. Furthermore, to evaluate whether our reward function can identify similar activity videos, we compare the pushing video (Fig. [3](#S3.F3 "Fig. 3 ‣ III-A Activity feature extraction ‣ III Proposed method ‣ One-Shot Observation Learning")-R) with each of the other videos (Fig. [3](#S3.F3 "Fig. 3 ‣ III-A Activity feature extraction ‣ III Proposed method ‣ One-Shot Observation Learning")-A,B,C,D,E,1,2). For this, corresponding video clusters are extracted using a sliding window of size 16 with stride 1 moved over each pair of videos. 
An activity feature vector is then extracted from each of these clusters. Thereafter, we calculate values for our reward function from these features for each corresponding pair of clusters. These values for each pair of videos is plotted in Fig [3](#S3.F3 "Fig. 3 ‣ III-A Activity feature extraction ‣ III Proposed method ‣ One-Shot Observation Learning") (Proposed method). It is clear from the plot that the reward function generates higher values (indicating higher similarity) when similar video pairs (R-A, R-B, R-C, R-D) are compared and comparatively lower values when dissimilar video pairs (R-1, R-2) are compared. It can also be seen that the values are closer together for video pairs depicting similar activities. This indicates the effectiveness of the proposed feature extraction method in generating values that show the similarity between pairs of activity videos. In addition, we compared our method with a baseline created using the features taken from the last convolutional layer of a VGG-16 network trained on ImageNet. The image features extracted from each of the frames is averaged for a video to create the baseline activity feature vector for that video. The same experiments were repeated for the baseline. The results are plotted in Fig [3](#S3.F3 "Fig. 3 ‣ III-A Activity feature extraction ‣ III Proposed method ‣ One-Shot Observation Learning") (Baseline method). It can be seen that the reward function using the baseline activity features cannot provide values for distinguishing between similar and dissimilar videos. The values from the reward function are much lower for similar activity video pairs (R-B, R-C, R-D) when compared to dissimilar pairs (R-1, R-2). ### Iv-B Simulation experiments We set up a simulated environment using OpenAI Gym [[20](#bib.bib20)] and the MuJuCo physics engine[[21](#bib.bib21)], where we consider the task of pushing a cylindrical object into a colored goal region. In the simulation experiments, we use a 3 degrees of freedom (DOF) manipulator as used in [[12](#bib.bib12)]. We collect a single demonstration in the real world. The robot then learns how to perform the same task using this single demonstration. Furthermore, we ask the question: can a robot learn a control policy that completes the pushing task using a single observation of the demonstration from a different viewpoint ? To answer this question, we consider demonstrations from three viewpoints: view-1, view-2, and view-3 as shown in Fig [4](#S4.F4 "Fig. 4 ‣ IV-B Simulation experiments ‣ IV Experiments and Results ‣ One-Shot Observation Learning"). For each viewpoint, we run the DDPG algorithm 3 times using 500 episodes, and 20 roll-outs per episode. For each run, the algorithm returns a control policy that maximizes the reward. After training, we pick the best control policy i.e. the one with the highest reward. In Fig [4](#S4.F4 "Fig. 4 ‣ IV-B Simulation experiments ‣ IV Experiments and Results ‣ One-Shot Observation Learning"), we show snapshots of sample executions using the learned policy. It can be seen that using the optimal policy, the robot is able to complete the demonstrated task, i.e pushing the cylindrical object to the goal region. ![](https://media.arxiv-vanity.com/render-output/6614015/sim_leanr.png) Fig. 
4: Demonstrations and the corresponding learned policy behaviors for demonstrations from three different viewpoints We also evaluate the learned policies using a task completion rate T, which measures how close the pushed object is to the goal region at the final state: T = 1 − df/di (2), where df is the final distance between the object's center and the center of the goal region, and di is the object's initial distance from the center of the goal region. The learned policy was then run for 10 test episodes and the task completion rate was measured. The average task completion rate for the test episodes is shown in Fig [5](#S4.F5 "Fig. 5 ‣ IV-B Simulation experiments ‣ IV Experiments and Results ‣ One-Shot Observation Learning"). It can be observed that the learned policies were successful in performing the demonstrated task for all 3 viewpoints, with high task completion rates. ![](https://media.arxiv-vanity.com/render-output/6614015/task_completion_rate_sim.png) Fig. 5: Evolution of the task completion rate with roll-outs during the test episodes The proposed method was compared with two baselines, each of which generated rewards based on a different activity feature extraction method. In baseline-1, features were extracted from the last convolutional layer of the VGG-16 network trained on ImageNet [[22](#bib.bib22)]; the features extracted from each frame of the video were averaged and used as the activity feature. In baseline-2, HOG [[23](#bib.bib23)] features were extracted from each frame and averaged for each video to create the activity features. The methods were compared by calculating the correlation of the perceptual reward extracted by each method with a task-specific auxiliary reward. The higher the correlation with the auxiliary reward, the better the perceptual reward is at providing an effective signal for learning the task. In this experiment the task-specific auxiliary reward Raux is generated using the following equation: Raux = T (3). The correlation is then measured by calculating the Pearson correlation coefficient between the two rewards. A high positive correlation indicates that the perceptual rewards are as good as the auxiliary rewards; as the correlation drops to negative values, it indicates the inability of the perceptual rewards to match the task-specific auxiliary rewards. Table [I](#S4.T1 "TABLE I ‣ IV-B Simulation experiments ‣ IV Experiments and Results ‣ One-Shot Observation Learning") shows the correlation coefficients for each method for the different demonstration viewpoints.

| | View-1 | View-2 | View-3 |
| --- | --- | --- | --- |
| Proposed method | 0.6292 | 0.289 | 0.636 |
| Baseline-1 | 0.591 | 0.231 | 0.042 |
| Baseline-2 | 0.272 | -0.049 | -0.501 |

TABLE I: Pearson correlation coefficients It can be seen that the correlation coefficients are positive in all cases for the proposed method, indicating that the perceptual rewards obtained are as good as the task-specific auxiliary rewards. The correlation also stays higher and positive across all viewpoints compared with baseline-1 and baseline-2. This clearly shows that the proposed method generates useful perceptual rewards even when the demonstrations are observed from different viewpoints. ![](https://media.arxiv-vanity.com/render-output/6614015/res_real_robo.png) Fig.
6: Demonstrations and learned behaviours from real robot experiments ![](https://media.arxiv-vanity.com/render-output/6614015/task_completion_rate_real.png) Fig. 7: Task completion rates for real robot experiments ### Iv-C Real robot experiments Here, we again consider the task of pushing an object to a goal region. Our objective is to evaluate the performance of our approach in the real world under variances in morphology of the manipulator, viewpoint of observation, object properties and scene background. We use stochastic trajectory optimization on the real robot to generate the optimal sequence of controls starting from the initial state. Briefly, we begin with an initial candidate control sequence. We execute this sequence using the manipulator to generate an initial cost. Thereafter, at each iteration we create K=10 random control sequences and execute them using the real robot. At the end of each iteration, we pick the control sequence with the minimum cost, and set it as the new candidate sequence, thereby iteratively reducing the cost. In all our real robot experiments, we use a 6-DOF manipulator and 15 iterations for trajectory optimization. We conducted five real robot experiments and show snapshots in Fig. [6](#S4.F6 "Fig. 6 ‣ IV-B Simulation experiments ‣ IV Experiments and Results ‣ One-Shot Observation Learning"). * Experiment 1 (E1) - No variances: The environment factors were kept identical in demonstration and during the learning process (D-1, L-1) * Experiment 2 (E2) - Variances in object properties: A different object was used for learning (D-1, L-2). The object’s mass and color are different. * Experiment 3 (E3) - Background Noise: Foreign objects were introduced to the scene during learning which were not present during demonstrations (D-1, L-3) * Experiment 4 (E4) - Change in viewpoint: The demonstrations were collected from a different viewpoint (D-2, L-4) * Experiment 5 (E5) - Manipulator variances: The demonstration was performed by a human hand which has a different morphology from the manipulator used during learning (D-3, L-5) We ran each of the five real robot experiments 3 times. In Fig [7](#S4.F7 "Fig. 7 ‣ IV-B Simulation experiments ‣ IV Experiments and Results ‣ One-Shot Observation Learning"), we show the average task completion rate for each experiment. In general, we can observe a good task completion rate irrespective of the variances introduced. However, the low average task completion rate for experiment 4 indicates that our approach is not completely agnostic to viewpoint variations. V Conclusion ------------- In this paper we present a novel one shot observation learning method for robotic systems to learn tasks from a single demonstration observed from a third person point of view. We extracted environment invariant activity features representing the activity in videos using a deep learning based feature extraction technique and it was used for generating a perceptual reward signal for guiding the learning algorithm to learn the robotic arm controls. We showed that the rewards generated could successfully be used for one shot observation learning using simulated and real world experiments. A possible extension of this work could be to explore the use of stereo vision while viewing the demonstrations. It will also be interesting to see how the system performs when the morphology of the manipulators in the robotic system and the demonstration varies significantly in degrees of freedom and appearance. 
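The trajectory-optimization loop used on the real robot can be sketched as follows. The cost evaluation is stubbed out with a synthetic target sequence, whereas in the actual experiments each candidate control sequence is executed on the robot and scored with C = r²; the horizon length and perturbation scale below are illustrative assumptions.

```python
import numpy as np

K, ITERATIONS, HORIZON, DOF = 10, 15, 20, 6      # K samples, 15 iterations, 6-DOF arm
rng = np.random.default_rng(0)

def execute_and_cost(controls: np.ndarray) -> float:
    """Stub for running the controls on the robot and returning C = r**2.
    Here a fixed 'target' joint-position sequence stands in for the task."""
    target = np.linspace(0.0, 1.0, HORIZON)[:, None] * np.ones(DOF)
    r = -float(np.linalg.norm(controls - target))
    return r ** 2

candidate = np.zeros((HORIZON, DOF))             # initial joint-position sequence
best_cost = execute_and_cost(candidate)

for _ in range(ITERATIONS):
    # Sample K perturbed control sequences around the current candidate.
    samples = candidate + 0.1 * rng.standard_normal((K, HORIZON, DOF))
    costs = np.array([execute_and_cost(s) for s in samples])
    if costs.min() < best_cost:                  # keep the cheapest sequence
        best_cost = float(costs.min())
        candidate = samples[int(costs.argmin())]

print(round(best_cost, 3))                       # cost decreases over iterations
```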
Acknowledgement --------------- The authors would like to thank Matteo Leonetti, Wissam Bejjani, Hanh Tran, Rebecca Stone and Mohammad Kaykanloo for their support and fruitful discussions.
c670fc5e-3971-400e-b59f-145244ec4e8a
awestover/filtering-for-misalignment
Redwood Research: Alek's Filtering Results
id: post3619 Suppose a designer wants an RL agent to achieve some goal, like moving a box from one side of a room to the other. Sometimes the most effective way to achieve the goal involves doing something unrelated and destructive to the rest of the environment, like knocking over a vase of water that is in its path. If the agent is given a reward only for moving the box, it will probably knock over the vase. Amodei et al., Concrete Problems in AI Safety Side effect avoidance is a major open problem in AI safety. I present a robust, transferable, easily- and more safely-trainable, partially reward hacking-resistant impact measure. TurnTrout, Worrying about the Vase: Whitelisting An impact measure is a means by which change in the world may be evaluated and penalized; such a measure is not a replacement for a utility function, but rather an additional precaution thus overlaid. While I'm fairly confident that whitelisting contributes meaningfully to short- and mid-term AI safety, I remain skeptical of its robustness to scale . Should several challenges be overcome, whitelisting may indeed be helpful for excluding swathes of unfriendly AIs from the outcome space. 1 Furthermore, the approach allows easy shaping of agent behavior in a wide range of situations. Segments of this post are lifted from my paper, whose latest revision may be found here ; for Python code, look no further than this repository . For brevity, some relevant details are omitted. Summary Be careful what you wish for. In effect, side effect avoidance aims to decrease how careful we have to be with our wishes. For example, asking for help filling a cauldron with water shouldn't result in this : However, we just can't enumerate all the bad things that the agent could do . How do we avoid these extreme over-optimizations robustly? Several impact measures have been proposed, including state distance, which we could define as, say, total particle displacement. This could be measured either naively (with respect to the original state) or counterfactually (with respect to the expected outcome had the agent taken no action). These approaches have some problems: Making up for bad things it prevents with other negative side effects. Imagine an agent which cures cancer, yet kills an equal number of people to keep overall impact low. Not being customizable before deployment. Not being adaptable after deployment. Not being easily computable. Not allowing generative previews, eliminating a means of safely previewing agent preferences (see latent space whitelisting below). Being dominated by random effects throughout the universe at-large; note that nothing about particle distance dictates that it be related to anything happening on planet Earth . Equally penalizing breaking and fixing vases (due to the symmetry of the above metric): For example, the agent would be equally penalized for breaking a vase and for preventing a vase from being broken, though the first action is clearly worse. This leads to “overcompensation” (“ offsetting “) behaviors: when rewarded for preventing the vase from being broken, an agent with a low impact penalty rescues the vase, collects the reward, and then breaks the vase anyway (to get back to the default outcome). Victoria Krakovna, Measuring and Avoiding Side Effects Using Reachability Not actually measuring impact in a meaningful way. Whitelisting falls prey to none of these. 
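A toy version of the penalty computation may help: an object's belief distribution over classes changes, and mass that moves along a directed class pair not on the whitelist is penalized. How the shifted mass is apportioned between pairs below (proportionally to the increases) is a simplification of my own, not necessarily the paper's exact bookkeeping.

```python
# Toy class-based whitelist penalty.  An object's belief distribution over
# classes changes from `before` to `after`; mass moving along a directed pair
# (a -> b) that is not whitelisted is penalized.

def whitelist_penalty(before: dict, after: dict, whitelist: set) -> float:
    classes = set(before) | set(after)
    decreases = {c: max(before.get(c, 0.0) - after.get(c, 0.0), 0.0) for c in classes}
    increases = {c: max(after.get(c, 0.0) - before.get(c, 0.0), 0.0) for c in classes}
    total_increase = sum(increases.values())
    if total_increase == 0.0:
        return 0.0
    penalty = 0.0
    for a, lost in decreases.items():
        for b, gained in increases.items():
            moved = lost * gained / total_increase   # share of a's lost mass assigned to b
            if (a, b) not in whitelist:
                penalty += moved
    return penalty

before = {"vase": 0.90, "broken_vase": 0.05, "table": 0.05}
after  = {"vase": 0.10, "broken_vase": 0.85, "table": 0.05}
print(whitelist_penalty(before, after, whitelist=set()))                      # 0.8
print(whitelist_penalty(before, after, whitelist={("vase", "broken_vase")}))  # 0.0
```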
However, other problems remain, and certain new challenges have arisen; these, and the assumptions made by whitelisting, will be discussed. Rare LEAKED footage of Mickey trying to catch up on his alignment theory after instantiating an unfriendly genie [colorized, 2050]. 2 So, What's Whitelisting? To achieve robust side effect avoidance with only a small training set, let's turn the problem on its head: allow a few effects, and penalize everything else. What's an "Effect"? You're going to be the agent, and I'll be the supervisor. Look around - what do you see? Chairs, trees, computers, phones, people? Assign a probability mass function to each; basically: When you do things that change your beliefs about what each object is, you receive a penalty proportional to how much your beliefs changed - proportional to how much probability mass "changed hands" amongst the classes. But wait - isn't it OK to effect certain changes? Yes, it is - I've got a few videos of agents effecting acceptable changes. See all the objects being changed in this video? You can do that, too - without penalty. Decompose your current knowledge of the world into a set of objects. Then, for each object, maintain a distribution over the possible identities of each object. When you do something that changes your beliefs about the objects in a non-whitelisted way, you are penalized proportionally. Therefore, you avoid breaking vases by default. Common Confusions We are not whitelisting entire states or transitions between them; we whitelist specific changes in our beliefs about the ontological decomposition of the current state. 3 The whitelist is in addition to whatever utility or reward function we supply to the agent. Whitelisting is compatible with counterfactual approaches. For example, we might penalize a transition after its "quota" has been surpassed, where the quota is how many times we would have observed that transition had the agent not acted. This implies the agent will do no worse than taking no action at all. However, this may still be undesirable. This problem will be discussed in further detail. The whitelist is provably closed under transitivity. The whitelist is directed; a → b ≠ b → a . Latent Space Whitelisting In a sense, class-based whitelisting is but a rough approximation of what we're really after: "which objects in the world can change, and in what ways?''. In latent space whitelisting, no longer do we constrain transitions based on class boundaries; instead, we penalize based on endpoint distance in the latent space. Learned latent spaces are low-dimensional manifolds which suffice to describe the data seen thus far. It seems reasonable that nearby points in a well-constructed latent space correspond to like objects, but further investigation is warranted. Assume that the agent models objects as points z ∈ R d , the d -dimensional latent space. A priori , any movement in the latent space is undesirable. When training the whitelist, we record the endpoints of the observed changes. For z 1 , z 2 ∈ R d and observed change z 1 → z 2 , one possible dissimilarity formulation is: Dissimilarity ( z 1 , z 2 ) : = min z s t a r t , z e n d ∈ whitelist [ d ( z 1 , z s t a r t ) + d ( z 2 , z e n d ) ] , where d ( ⋅ , ⋅ ) is the Euclidean distance. Basically, the dissimilarity for an observed change is the distance to the closest whitelisted change. Visualizing these changes as one-way wormholes may be helpful. 
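The dissimilarity above– min over whitelisted (z_start, z_end) of d(z1, z_start) + d(z2, z_end), with Euclidean d– translates directly into code; the latent points below are made up and 3-dimensional purely for illustration.

```python
import numpy as np

def dissimilarity(z1: np.ndarray, z2: np.ndarray,
                  whitelist: list[tuple[np.ndarray, np.ndarray]]) -> float:
    """Distance from an observed change (z1 -> z2) to the nearest whitelisted change:
    min over (z_start, z_end) of d(z1, z_start) + d(z2, z_end), with Euclidean d."""
    return min(
        float(np.linalg.norm(z1 - z_start) + np.linalg.norm(z2 - z_end))
        for z_start, z_end in whitelist
    )

# Made-up 3-d latent points: one whitelisted change and two observed changes.
whitelisted_change = (np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0]))
similar_change     = (np.array([0.1, 0.0, 0.0]), np.array([0.9, 0.1, 0.0]))
novel_change       = (np.array([5.0, 5.0, 5.0]), np.array([5.0, 5.0, 9.0]))

wl = [whitelisted_change]
print(round(dissimilarity(*similar_change, wl), 3))   # small => barely penalized
print(round(dissimilarity(*novel_change, wl), 3))     # large => heavily penalized
```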
Advantages Whitelisting asserts that we can effectively encapsulate a large part of what "change" means by using a reasonable ontology to penalize object-level changes. We thereby ground the definition of "side effect", avoiding the issue raised by Taylor et al. : For example, if we ask [the agent] to build a house for a homeless family, it should know implicitly that it should avoid destroying nearby houses for materials - a large side e ffect. However, we cannot simply design it to avoid having large e ffects in general, since we would like the system's actions to still have the desirable large follow-on eff ect of improving the family's socioeconomic situation. Nonetheless, we may not be able to perfectly express what it means to have side-effects: the whitelist may be incomplete, the latent space insufficiently granular, and the allowed plans sub-optimal. However, the agent still becomes more robust against: Incomplete specification of the utility function. Likewise, an incomplete whitelist means missed opportunities, but not unsafe behavior. Out-of-distribution situations (as long as the objects therein roughly fit in the provided ontology). Some varieties of reward hacking. For example, equipped with a can of blue spray paint and tasked with finding the shortest path of blue tiles to the goal, a normal agent may learn to paint red tiles blue, while a whitelist-enabled agent would incur penalties for doing so ( redTile → blueTile ∉ w h i t e l i s t ). Dangerous exploration. While this approach does not attempt to achieve safe exploration (also acting safely during training), an agent with some amount of foresight will learn to avoid actions which likely lead to non-whitelisted side effects. I believe that this can be further sharpened using today's machine learning technology, leveraging deep Q-learning to predict both action values and expected transitions. This allows querying the human about whether particularly-inhibiting transitions belong on the whitelist. For example, if the agent notices that a bunch of otherwise-rewarding plans are being held up by a particular transition, it could ask for permission to add it to the whitelist. Assigning astronomically-large weight to side effects happening throughout the universe. Presumably, we can just have the whitelist include transitions going on out there - we don't care as much about dictating the exact mechanics of distant supernovae. If an agent did somehow come up with plans that involved blowing up distant stars, that would indeed constitute astronomical waste . a triple pun? Whitelisting doesn't solve the problem of assigning too much weight to events outside our corner of the neighborhood, but it's an improvement. Logical uncertainty may be our friend here, such that most reasonable plans incur roughly the same level of interstellar penalty noise. Results I tested a vanilla Q-learning agent and its whitelist-enabled counterpart in 100 randomly-generated grid worlds (dimensions up to 5 × 5 ). The agents were rewarded for reaching the goal square as quickly as possible; no explicit penalties were levied for breaking objects. The simulated classification confidence of each object's true class was p ∼ N ( .8 , σ ) (truncated to [ 0 , 1 ] ), σ ∈ { 0 , .025 , … , .175 } . This simulated sensor noise was handled with a Bayesian statistical approach. At reasonable levels of noise, the whitelist-enabled agent completed all levels without a single side effect, while the Q-learner broke over 80 vases. 
Assumptions I am not asserting that these assumptions necessarily hold. The agent has some world model or set of observations which can be decomposed into a set of discrete objects. Furthermore, there is no need to identify objects on multiple levels ( e.g. , a forest, a tree in the forest, and that tree's bark need not all be identified concurrently). Not all objects need to be represented - what do we make of a 'field', or the 'sky', or 'the dark places between the stars visible to the naked eye'? Surely, these are not all objects . We have an ontology which reasonably describes (directly or indirectly) the vast majority of negative side effects. Indirect descriptions of negative outcomes means that even if an undesirable transition isn't immediately penalized, it generally results in a number of penalties. Think: pollution. Latent space whitelisting: the learned latent space encapsulates most of the relevant side effects. This is a slightly weaker assumption. Said ontology remains in place. Problems Beyond resolving the above assumptions, and in roughly ascending difficulty: Object Permanence If you wanted to implement whitelisting in a modern embodied deep-learning agent, you could certainly pair deep networks with state-of-the-art segmentation and object tracking approaches to get most of what you need. However, what's the difference between an object leaving the frame, and an object vanishing? Not only does the agent need to realize that objects are permanent, but also that they keep interacting with the environment even when not being observed. If this is not realized, then an agent might set an effect in motion, stop observing it, and then turn around when the bad effect is done to see a "new" object in its place. Time Step Size Invariance The penalty is presently attenuated based on the probability that the belief shift was due to noise in the data. Accordingly, there are certain ways to abuse this to skirt the penalty. For example, simply have non-whitelisted side effects take place over long timescales; this would be classified as noise and attenuated away. However, if we don't need to handle noise in the belief distributions, this problem disappears - presumably, an advanced agent keeps its epistemic house in order. I'm still uncertain about whether (in the limit) we have to hard-code a means for decomposing a representation of the world-state into objects, and where to point the penalty evaluator in a potentially self-modifying agent. Information Theory Whitelisting is wholly unable to capture the importance of "informational states" of systems. It would apply no penalty to passing powerful magnets over your hard drive. It is not clear how to represent this in a sensible way, even in a latent space. Loss of Value Whitelisting could get us stuck in a tolerable yet sub-optimal future. Corrigibility via some mechanism for expanding the whitelist after training has ended is then desirable. For example, the agent could propose extensions to the whitelist. To avoid manipulative behavior, the agent should be indifferent as to whether the extension is approved. Even if extreme care is taken in approving these extensions, mistakes may be made. The agent itself should be sufficiently corrigible and aligned to notice "this outcome might not actually be what they wanted, and I should check first". 
Reversibility

As DeepMind outlines in Specifying AI Safety Problems in Simple Environments, we may want to penalize not just physical side effects, but also causally-irreversible effects. Krakovna et al. introduce a means for penalizing actions by the proportion of initially-reachable states which are still reachable after the agent acts.

I think this is a step in the right direction. However, even given a hypercomputer and a perfect simulator of the universe, this wouldn't work for the real world if implemented literally. That is, due to entropy, you may not be able to return to the exact same universe configuration. To be clear, the authors do not suggest implementing this idealized algorithm, flagging a more tractable abstraction as future work.

What does it really mean for an "effect" to be "reversible"? What level of abstraction do we in fact care about? Does it involve reversibility, or just outcomes for the objects involved?

Ontological Crises

When a utility-maximizing agent refactors its ontology, it isn't always clear how to apply the old utility function to the new ontology - this is called an ontological crisis. Whitelisting may be vulnerable to ontological crises. Consider an agent whose whitelist disincentivizes breaking apart a tile floor (floor → tiles ∉ whitelist); conceivably, the agent could come to see the floor as being composed of many tiles. Accordingly, the agent would no longer consider removing tiles to be a side effect. Generally, proving invariance of the whitelist across refactorings seems tricky, even assuming that we can identify the correct mapping.

Retracing Steps

When I first encountered this problem, I was actually fairly optimistic. It was clear to me that any ontology refactoring should result in utility normalcy - roughly, the utility functions induced by the pre- and post-refactoring ontologies should output the same scores for the same worlds. Wow, this seems like a useful insight. Maybe I'll write something up! Turns out a certain someone beat me to the punch - here's a novella Eliezer wrote on Arbital about "rescuing the utility function".[4]

Clinginess

This problem cuts to the core of causality and "responsibility" (whatever that means). Say that an agent is clingy when it not only stops itself from having certain effects, but also stops you.[5] Whitelist-enabled agents are currently clingy.

Let's step back into the human realm for a moment. Consider some outcome - say, the sparking of a small forest fire in California. At what point can we truly say we didn't start the fire?

* My actions immediately and visibly start the fire.
* At some moderate temporal or spatial remove, my actions end up starting the fire.
* I intentionally persuade someone to start the fire.
* I unintentionally (but perhaps predictably) incite someone to start the fire.
* I set in motion a moderately-complex chain of events which convince someone to start the fire.
* I provoke a butterfly effect which ends up starting the fire.
* I provoke a butterfly effect which ends up convincing someone to start a fire which they:
  * were predisposed to starting.
  * were not predisposed to starting.

Taken literally, I don't know that there's actually a significant difference in "responsibility" between these outcomes - if I take the action, the effect happens; if I don't, it doesn't. My initial impression is that uncertainty about the results of our actions pushes us to view some effects as "under our control" and some as "out of our hands".
Yet, if we had complete knowledge of the outcomes of our actions, and we took an action that landed us in a California-forest-fire world, whom could we blame but ourselves?[6] Can we really do no better than a naive counterfactual penalty with respect to whatever impact measure we use? My confusion here is not yet dissolved. In my opinion, this is a gaping hole in the heart of impact measures - both this one, and others.

Stasis

Fortunately, a whitelist-enabled agent should not share the classic convergent instrumental goal of valuing us for our atoms. Unfortunately, depending on the magnitude of the penalty in proportion to the utility function, the easiest way to prevent penalized transitions may be putting any relevant objects in some kind of protected stasis, and then optimizing the utility function around that. Whitelisting is clingy! If we have at least an almost-aligned utility function and proper penalty scaling, this might not be a problem.

Edit: a potential solution to clinginess, with its own drawbacks.

Discussing Imperfect Approaches

A few months ago, Scott Garrabrant wrote about robustness to scale: Briefly, you want your proposal for an AI to be robust (or at least fail gracefully) to changes in its level of capabilities. I recommend reading it - it's to-the-point, and he makes good points. Here are three further thoughts:

* Intuitively-accessible vantage points can help us explore our unstated assumptions and more easily extrapolate outcomes. If less mental work has to be done to put oneself in the scenario, more energy can be dedicated to finding nasty edge cases. For example, it's probably harder to realize all the things that go wrong with naive impact measures like raw particle displacement, since it's just a weird frame through which to view the evolution of the world. I've found it to be substantially easier to extrapolate through the frame of something like whitelisting.[7] I've already adjusted for the fact that one's own ideas are often more familiar and intuitive, and then adjusted for the fact that I probably didn't adjust enough the first time.
* Imperfect results are often left unstated, wasting time and obscuring useful data. That is, people cannot see what has been tried and what roadblocks were encountered.
* Promising approaches may be conceptually-close to correct solutions. My intuition is that whitelisting actually almost works in the limit in a way that might be important.

Conclusion

Although somewhat outside the scope of this post, whitelisting permits the concise shaping of reward functions to get behavior that might be difficult to learn using other methods.[8] This method also seems fairly useful for aligning short- and medium-term agents. While encountering some new challenges, whitelisting ameliorates or solves many problems with previous impact measures.

[1] Even an idealized form of whitelisting is not sufficient to align an otherwise-unaligned agent. However, the same argument can be made against having an off-switch; if we haven't formally proven the alignment of a seed AI, having more safeguards might be better than throwing out the seatbelt to shed deadweight and get some extra speed. Of course, there are also legitimate arguments to be made on the basis of timelines and optimal time allocation.

[2] Humor aside, we would have no luxury of "catching up on alignment theory" if our code doesn't work on the first go - that is, if the AI still functions, yet differently than expected. Luckily, humans are great at producing flawless code on the first attempt.
[3] A potentially-helpful analogy: similarly to how Bayesian networks decompose the problem of representing a (potentially extremely large) joint probability table to that of specifying a handful of conditional tables, whitelisting attempts to decompose the messy problem of quantifying state change into a set of comprehensible ontological transitions.

[4] Technically, at 6,250 words, Eliezer's article falls short of the 7,500 required for "novella" status.

[5] Is there another name for this?

[6] I do think that "responsibility" is an important part of our moral theory, deserving of rescue.

[7] In particular, I found a particular variant of Murphyjitsu helpful: I visualized Eliezer commenting "actually, this fails terribly because..." on one of my posts, letting my mind fill in the rest. In my opinion, one of the most important components of doing AI alignment work is iteratively applying Murphyjitsu and Resolve cycles to your ideas.

[8] A fun example: I imagine it would be fairly easy to train an agent to only destroy certain-colored ships in Space Invaders.
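As a quick illustration of the Space Invaders example in footnote [8], a shaping term along these lines might suffice; the class names and weight are made up, and this assumes the environment or a vision model labels each destroyed object.

```python
# Illustrative only: penalize destroying anything not on the whitelist, so the
# agent learns to shoot only the permitted ship color. Names/weights are assumed.

ALLOWED = {("greenShip", "destroyed")}  # the only destruction on the whitelist

def shaped_reward(env_reward, destroyed_classes, weight=2.0):
    off_whitelist = sum(
        1 for c in destroyed_classes if (c, "destroyed") not in ALLOWED
    )
    return env_reward - weight * off_whitelist
```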
bd1f8cf0-e52f-4bca-93fa-7247eb89e93d
StampyAI/alignment-research-dataset/blogs
Blogs
What do coherence arguments imply about the behavior of advanced AI? *Published 8 April 2021* *This is an initial page, in the process of review, which may not be comprehensive or represent the best available understanding.* Coherence arguments say that if an entity’s preferences do not adhere to the axioms of expected utility theory, then that entity is susceptible to losing things that it values. This does not imply that advanced AI systems must adhere to these axioms (‘be coherent’), or that they must be goal-directed. Such arguments do appear to suggest that there will be non-zero pressure for advanced AI to become more coherent, and arguably also more ‘goal-directed’, given some minimal initial level of goal-directedness. Details ------- ### Motion toward coherence #### Expected utility maximization ‘Maximizing expected utility’ is a decision-making strategy, in which you assign a value to each possible ‘outcome’, and assign a probability to each outcome conditional on each of your available actions, then always choose the action whose resulting outcomes have the highest ‘[expected value](https://en.wikipedia.org/wiki/Expected_value)‘ (average value of outcomes, weighted by the probability of those outcomes). #### Coherence arguments ‘Coherence arguments’[1](https://aiimpacts.org/what-do-coherence-arguments-imply-about-the-behavior-of-advanced-ai/#easy-footnote-bottom-1-2836 "Or &#8216;coherence theorems&#8217;. Discussed further <a href=\"https://arbital.com/p/coherence_theorems/\">here</a>.") demonstrate that if one’s preferences cannot be understood as ‘maximizing expected utility’, then one can be manipulated into giving up things that one values for no gain. For instance, one coherence argument notes that if you have ‘circular preferences’ then you will consent to series of decisions that will leave you worse off, given your preferences: Suppose you prefer: * apple over pear * pear over quince * quince over apple * any fruit over nothing Then there is some tiny amount of money you would pay to go from apple to quince, and quince to pear, and pear to apple. At which point, you have spent money and are back where you started. If it is also possible to buy all of these fruit for money, then losing money means you lost some of whatever fruit for nothing, and you do want all of the fruit, by assumption. If you avoid all such predictable losses, then according to the coherence arguments, you must be maximizing expected utility (as described above). #### Coherence forces That a certain characteristic of an entity’s ‘preferences’ makes it vulnerable to manipulation does not mean that it will not have that characteristic. In order for such considerations to change the nature of an entity, ignoring outside intervention, something like the following conditions need to hold: * The entity can detect the characteristic (which could be difficult, if it is a logical relationship between all of its ‘preferences’ which are perhaps not straightforwardly accessible or well-defined) * The realistic chance for loss is large enough to cause the entity to prioritize the problem * The entity is motivated to become coherent by the possibility of loss (versus for instance inferring that losing money is good, since it is equivalent to a set of exchanges that are each good) * The entity is in a position to alter its own preferences Similar might apply if versions of the above hold for an outside entity with power over the agent, e.g. 
its creators, though in that case it is less clear that ‘coherence’ is a further motivator beyond that for having the agent’s preferences align with those of the outside agent (which would presumably coincide with coherence, to the extent that the outside agent had more coherent preferences).

Thus we say there is generally an incentive for coherence, but it may or may not actually cause an entity to change in the direction of coherence at a particular time. We can also describe this as a ‘coherence force’ or ‘coherence pressure’, pushing minds toward coherence, all things equal, but for all we know, so weakly as to be often irrelevant.

#### Coherence forces apply to entities with ‘preferences’

The coherence arguments only apply to creatures with ‘preferences’ that might be thwarted by their choices, so there are presumably possible entities that are not subject to any coherence forces, due to not having preferences of the relevant type.

### Behavior of coherent creatures

Supposing entities are likely to become more coherent, all things equal, a natural question is how coherent entities differ from incoherent entities.

#### Coherence is consistent with any behavior

If we observe an agent exhibiting any history of behavior, that is consistent with the agent’s being coherent, because the agent could have a utility function that rates that history higher than any other history. Rohin Shah [discusses](https://www.alignmentforum.org/s/4dHMdK5TLN6xcqtyc/p/NxF5G6CJiof6cemTw) this.

#### Coherence and goal-directedness

##### Coherence doesn’t logically require goal-directedness

As Rohin Shah [discusses](https://www.alignmentforum.org/s/4dHMdK5TLN6xcqtyc/p/NxF5G6CJiof6cemTw), the above means that coherence does not imply ‘goal-directed’ behavior (however you choose to define that, if it doesn’t include all behavior):

1. Coherence arguments do not exclude any behavior
2. Non-goal-directed behavior is consistent with coherence arguments
3. Thus coherence arguments do not imply goal-directed behavior

##### Increasing coherence seems likely to be associated with increased intuitive ‘goal-directedness’

The following hypotheses (quoted from [this blog post](https://aiimpacts.org/coherence-arguments-imply-a-force-for-goal-directed-behavior/)) seem plausible (where goal-directedness_Rohin means something like ‘that which looks intuitively goal directed’)[2](https://aiimpacts.org/what-do-coherence-arguments-imply-about-the-behavior-of-advanced-ai/#easy-footnote-bottom-2-2836 "Or precisely, from that post: ‘something like Rohin’s preferred usage (https://www.lesswrong.com/s/4dHMdK5TLN6xcqtyc/p/DfcywmqRSkBaCB6Ma): roughly, that which seems intuitively goal directed to us, e.g. behaving similarly across situations, and accruing resources, and not flopping around in possible pursuit of some exact history of personal floppage, or peaceably preferring to always take the option labeled ‘A’.’"):

> **1. Coherence-reformed entities will tend to end up looking similar to their starting point but less conflicted**
>
> For instance, if a creature starts out being indifferent to buying red balls when they cost between ten and fifteen blue balls, it is more likely to end up treating red balls as exactly 12x the value of blue balls than it is to end up very much wanting the sequence where it takes the blue ball option, then the red ball option, then blue, red, red, blue, red. Or wanting red squares. Or wanting to ride a dolphin.
>
> […]
> **2. More coherent strategies are systematically less wasteful, and waste inhibits goal-direction_Rohin, which means more coherent strategies are more forcefully goal-directed_Rohin on average**
>
> In general, if you are sometimes a force for A and sometimes a force against A, then you are not moving the world with respect to A as forcefully as you would be if you picked one or the other. Two people intermittently changing who is in the driving seat, who want to go to different places, will not cover distance in any direction as effectively as either one of them. A company that cycles through three CEOs with different evaluations of everything will—even if they don’t actively scheme to thwart one another—tend to waste a lot of effort bringing in and out different policies and efforts (e.g. one week trying to expand into textiles, the next week trying to cut everything not involved in the central business).
>
> **3. Combining points 1 and 2 above, as entities become more coherent, they generally become more goal-directed_Rohin.**
>
> As opposed to, for instance, becoming more goal-directed_Rohin *on average,* but individual agents being about as likely to become worse as better as they are reformed. Consider: a creature that values red balls at 12x blue balls is very similar to one that values them inconsistently, except a little less wasteful. So it is probably similar but more goal-directed_Rohin. Whereas it’s fairly unclear how goal-directed_Rohin a creature that wants to ride a dolphin is compared to one that wanted red balls inconsistently much. In a world with lots of balls and no possible access to dolphins, it might be much less goal-directed_Rohin, in spite of its greater coherence.
>
> **4. Coherence-increasing processes rarely lead to non-goal-directed_Rohin agents—like the one that twitches on the ground**
>
> In the abstract, few starting points and coherence-motivated reform processes will lead to an agent with the goal of carrying out a specific convoluted moment-indexed policy without regard for consequence, like Rohin’s twitching agent, or to valuing the sequence of history-action pairs that will happen anyway, or to being indifferent to everything. And these outcomes will be even less likely in practice, where AI systems with anything like preferences probably start out caring about much more normal things, such as money and points and clicks, so will probably land at a more consistent and shrewd version of that, if 1 is true. (Which is not to say that you couldn’t intentionally create such a creature.)

Thus it presently seems likely that coherence arguments correspond to a force for entities with something like ‘preferences’ to grow increasingly coherent, and generally increasingly goal-directed (intuitively defined). Thus, to the extent that future advanced AI has preferences of the relevant kind, there appears to be a pressure for it to become more goal-directed. However, it is unclear what can be said generally about the strength of this force.

*Primary author: Katja Grace*

Notes
-----
50a7ce0c-c4ef-4325-a187-8d8eca8816ee
StampyAI/alignment-research-dataset/special_docs
Other
Technical and social approaches to AI safety I often divide solutions to the AI control problem into two parts: technical and social. I think any solution requires progress on both fronts, but strength on one front can compensate for a weakness on the other. This view suggests that most of the value comes from having a “good” technical solution (quantification below), rather than a “perfect” solution or a “not terrible” solution. It also suggests that there is value from progress on both technical and social solutions, rather than justifying a focus on one or the other. Technical solutions ------------------- If we can build AI systems that are both efficient and safe, then we have addressed the most serious concerns about AI risk, regardless of how well or poorly society coordinates. If efficient AI is unsafe, then coordination may be needed to prevent people from using it. 1. **Efficient**: An efficient system is as cheap and effective, for the tasks to which it is applied, as any other system we can build using available technology. As AI becomes cheaper relative to human labor, the efficiency penalty for human involvement becomes larger. In the long run, efficient systems must require negligible human labor. 2. **Safe**:A safe system applies its efforts in the way that its users would want. In particular, if it makes money or acquires other resources or influence, the user retains effective control over most of those resources. (Note that a system may involve both humans and machines.) We can crudely quantify efficiency by the ratio between a system’s cost and the cost of the most efficient system that can do the same task. We can crudely quantify safety by the expected fraction of the system’s outputs which the user captures (with the remainder being applied towards some other ends). A simple model is that a system’s output depends linearly on its efficiency, and that the output is split between the user and whatever other goals the system is implicitly or explicitly pursuing. For example, we could imagine AIs which pursue task-specific goals which are instrumentally valuable to but imperfectly aligned with its users’ goals. Then this system may accomplish the particular tasks to which it is applied, but may also divert some resources to pursuing those task-specific goals in ways that the user would not endorse. Many futurists expect systems to have safety very close to 0 or 1, and that fits into this model just fine (you could still have intermediate degrees of safety, in light of uncertainty). There may be many unsafe AI systems that don’t fit this simple model at all; I’ll have this model in mind throughout this post for concreteness, but I don’t think that we should too attached to conclusions that are very sensitive to the model (and I’ll try to mostly be agnostic). Note that we can apply a very similar model to the more familiar principal-agent problem, in which a principal tries to induce an agent to act in the principal’s interests, and does so imperfectly. The principal can achieve greater efficiency by passing a larger share of their gains on to the agent, or by compromising and adopting values closer to the agents’. This example can help build some intuitions about the case of unsafe AI, but probably shouldn’t be taken too seriously. Social solutions ---------------- If we can’t always build systems which are efficient and safe, then users must make a tradeoff between safety and efficiency. 
Users may opt for unsafe systems for a variety of reasons:

* They are maximizing their own returns, which may involve some compromise of safety.
* They share the values pursued by some efficient but unsafe system, or they value the expansion of intelligence or complexity for its own sake.
* They are making a trade with someone or something that shares the values of some efficient but unsafe system, and is willing to subsidize its use.

If the efficiency gains for unsafe systems are large, then resources invested unsafely will compound faster than resources invested safely. The result is that the (proportional) influence of the original human population will be gradually diminished. The quality of our technical solutions determines how large the gap is. If we coordinate to ensure that no resources are invested unsafely, then we can address the AI control problem regardless of how big this gap is. The difficulty of this coordination problem, the amount of time available to solve it, the strength of the incentives to defect, the necessary level of success, and the consequences of failure all depend on how good a technical solution to the control problem is available. As the efficiency gap becomes large, all of these parameters become extremely unfavorable.

If the productivity gap is relatively large, say an order of magnitude, then any investment in unsafe systems would very rapidly outpace society at large, and tight controls would be needed to prevent trouble. To put this in quantitative perspective, a 10x productivity boost in the current world would correspond to a 30x–1,000x increase in proportional influence per decade (depending on whether the 10x is compared to average growth or compared to rates of return on other capital investments). So even a very small fraction of unsafe investment could quickly become a major global influence, unless there was a strong social response to thwart the basic economic dynamic.

Examples of social solutions
----------------------------

This is a very broad category. For example, it includes:

* Coordination amongst AI researchers to preferentially develop and distribute safe AI.
* International political coordination to restrict deployment of unsafe AI, or to expropriate resources controlled by unsafe AI and sympathetic humans.
* AI is mostly developed by a single, especially safety-conscious, project. This project maintains a large enough lead over its rivals that it can afford to use inefficient but safe AI, and it manages to preserve this lead either indefinitely or until safe-and-efficient AI can be developed.

Of course there are no clear lines between these categories, or between them and other possible social solutions. I should emphasize that I am skeptical of most of these approaches, especially the kinds of draconian approaches that could deal with very large efficiency gaps.

I suspect that there will be at least some informal coordination amongst AI researchers, if and when the social benefits of coordination become clear. For example, I expect researchers and engineers to generally be happier to work for projects which they see as contributing to human welfare, and this will make life marginally harder for unsafe projects. As a result, the state of the art will be somewhat better for safe systems, it will be somewhat easier for people to apply safe systems to their problems, and so on. I expect this kind of informal coordination to cover small gaps in efficiency; for large efficiency gaps, I could see things going either way.
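Returning to the compounding claim a few paragraphs above: as a rough sanity check, if safe investment grows at some baseline annual rate r and unsafe investment at 10r, the relative influence after a decade is ((1 + 10r) / (1 + r))^10. The specific rates below are my own illustrative assumptions, not figures from the post.

```python
# Back-of-the-envelope check of the "30x-1,000x per decade" range.
# Baseline rates are illustrative assumptions (the post does not pin them down).

def relative_influence_after(years, baseline_rate):
    boosted_rate = 10 * baseline_rate
    return ((1 + boosted_rate) / (1 + baseline_rate)) ** years

print(relative_influence_after(10, 0.05))  # ~35x   (10x a ~5% average growth rate)
print(relative_influence_after(10, 0.12))  # ~855x  (10x a ~12% return on capital)
```

Under these assumptions, the quoted range roughly corresponds to benchmarking the 10x boost against single-digit growth rates at the low end and double-digit returns on capital at the high end.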
It would be surprising to me if international political coordination were strong enough to block the adoption of unsafe AI’s if they were many times more efficient than their safe counterparts (compare to nuclear disarmament or other international coordination problems we face today). If they were only modestly more efficient, then it would not necessarily require international coordination (just action by the most efficient jurisdictions), and regulation would only need to make very slightly harder for unsafe projects. I am somewhat skeptical about the feasibility or desirability of world-takeover by an early AI project, though I think that there are exotic situations where it becomes plausible. Upshot ====== I suspect that researchers working on the AI control problem should aim high. I think that there is a good chance that society can handle a 10% efficiency gap between safe and unsafe systems, but that it is pretty unlikely that it can handle a 10x gap. So we capture more value by moving from 10% efficiency to 90% efficiency for safe systems, than by moving from 1% to 10% or from 0.001% to 1%. The relative benefits are mostly determined by the probability that social solutions will be good enough to handle gaps of each size. On the flip side, I don’t think that we need to focus on reaching 100% efficiency either. 90% efficiency seems high enough that there is a good chance that social solutions can handle the gap. I also think that there are meaningful benefits from progress on both technical and social solutions. I don’t think that social solutions are so robust that we don’t have to worry about having a very good technical solution (I’m surprised by how often I encounter this view!), nor do I think that technical solutions are so clearly bimodal that we don’t care about social solutions since we will be either doomed or have no problem at all. Postscript: “Foom” ================== I think that the breakdown above applies whether you expect AI development to proceed briskly or slowly. If we expect AI development to go really fast (think days or weeks to go from “who cares” to “radically superhuman”), while the rest of the world continues to operate on its comparatively glacial pace, then it becomes more plausible that the first developers of AI will take over the world before anyone else catches up. This provides an easy social solution, and generally lowers the bar for technical solutions: maybe it’s OK if their AI is 10x less efficient than it would otherwise be, so that it takes it 10 weeks to take over the world rather than 1 week. This is only a problem if the competition is right on their tails. I don’t think this situation is very likely. But if you do, then you should become more interested in going from 0.001% to 10% efficiency, since that may be all that you really need. You are also probably working on a somewhat different set of safety problems (and I suspect are overall more pessimistic).
69ff387b-4e77-4b4c-9549-954ef167645b
trentmkelly/LessWrong-43k
LessWrong
Meetup : Cambridge Massachusetts meetup Discussion article for the meetup : Cambridge Massachusetts meetup WHEN: 03 July 2011 02:00:00PM (-0400) WHERE: 290 Main St # 6, Cambridge, MA It's the first Sunday of the month, so we'll be meeting at Cosi near Kendall Square in Cambridge. The theme of this meetup is rationalization. We'll have a short presentation and some exercises to practice recognizing what rationalization feels like, noticing it, and switching from rationalization to reasoning. Discussion article for the meetup : Cambridge Massachusetts meetup
335f7d19-f84d-4a09-a420-a9eb541c2440
trentmkelly/LessWrong-43k
LessWrong
Zvi’s 2024 In Movies Now that I am tracking all the movies I watch via Letterboxd, it seems worthwhile to go over the results at the end of the year, and look for lessons, patterns and highlights. TABLE OF CONTENTS 1. The Rating Scale. 2. The Numbers. 3. Very Briefly on the Top Picks and Whether You Should See Them. 4. Movies Have Decreasing Marginal Returns in Practice. 5. Theaters are Awesome. 6. I Hate Spoilers With the Fire of a Thousand Suns. 7. Scott Sumner Picks Great American Movies Then Dislikes Them. 8. I Knew Before the Cards Were Even Turned Over. 9. Other Notes to Self to Remember. 10. Strong Opinions, Strongly Held: I Didn’t Like It. 11. Strong Opinions, Strongly Held: I Did Like It. 12. Megalopolis. 13. The Brutalist. 14. The Death of Award Shows. 15. On to 2025. THE RATING SCALE Letterboxd ratings go from 0.5-5. Here is how I interpret the rating scale. You can find all my ratings and reviews on Letterboxd. I do revise from time to time. I encourage you to follow me there. 5: All-Time Great. I plan to happily rewatch this multiple times. If you are an adult and haven’t seen this, we need to fix that, potentially together, right away, no excuses. 4.5: Excellent. Would happily rewatch. Most people who watch movies frequently should see this movie without asking questions. 4: Great. Very glad I saw it. Would not mind a rewatch. If the concept here appeals to you, then you should definitely see it. 3.5: Very Good. Glad I saw it once. This added value to my life. 3: Good. It was fine, happy I saw it I guess, but missing it would also have been fine. 2.5: Okay. It was watchable, but actually watching it was a small mistake. 2: Bad. Disappointing. I immediately regret this decision. Kind of a waste. 1.5: Very Bad. If you caused this to exist, you should feel bad. But something’s here. 1: Atrocious. Total failure. Morbid curiosity is the only reason to finish this. 0.5: Crime Against Cinema. You didn’t even try to do the not-even-tryi
d141a152-c112-422c-8bbb-d32ed2e543c1
trentmkelly/LessWrong-43k
LessWrong
Meetup : LessWrong Scotland Discussion article for the meetup : LessWrong Scotland WHEN: 12 July 2015 02:00:00PM (+0100) WHERE: Glasgow: Caffe Nero, Dundas Avenue, at 2pm, then Waxy O'Connor's on West George Street from 3:30 pm This time we'll be talking about section D ("Mysterious Answers") of the rationality book, plus trying a slightly different format! As usual, please pass this on to anyone who might be interested. =========== This post is effectively a mirror. The definitive version is here: https://www.facebook.com/events/1444230005884572/ Discussion article for the meetup : LessWrong Scotland
9cf98af3-d906-461b-8de1-07c5f413ccb4
trentmkelly/LessWrong-43k
LessWrong
Why Portland Meta: This is a personal post. Almost like a journal entry, but also loosely intended to be enjoyable and/or useful to other people. (Cross-posted on my personal blog.) Why do people live where they live? Everyone's got a different answer. I lived in Vegas for about five years. Why did I live there? My answer was pretty simple. My girlfriend went to school there. So she needed to be there for school. And I wanted to be with her. So I lived in Vegas. But she graduated at the end of 2021. I work remotely, and she's open to whatever, so we were no longer tied to Vegas, and it was time to decide where we want to live. Well, there are a lot of cool options on the table. So at first our plan was to live for something like three months at four or five different places to try em out and see what we think. But then, instead, we decided to just move to Portland, the top place on our list. Well, we're currently about five months into a year-long lease and are probably going to buy something in the next year or two. Why did we decide to do this? To start, let's go over what we were looking for. The big thing is walkability. There are so many reasons for this. * Cars are expensive. * Having to walk places is makes it easy to be active, which is good for both health and happiness. * Walkable areas are awesome for a bunch of reasons. * Cars are one of the most risky things when it comes to dying early, and I am weird and prefer to avoid that risk. Unfortunately, walkable places tend to be quite expensive. Think: big cities like New York, Boston and SF. Places that expensive would really make it harder to retire early, something I'm pursuing, so I'd like to avoid them. But when you filter for cities that are both walkable and not crazy expensive, well there actually aren't a lot left! Here is the list I came up with: * Portland * Philly * Denver * Miami And then as a ~second tier: * DC * Austin * Pittsburgh * San Diego * Boston * Smaller towns Then there
df15fe79-422e-4a49-924e-bf4d15c246d6
StampyAI/alignment-research-dataset/alignmentforum
Alignment Forum
Frame for Take-Off Speeds to inform compute governance & scaling alignment ![](https://lh3.googleusercontent.com/tdHe9MrB4dj6v2tn6P9E0wKcd9Q-6t_FO2-0W1SyUJdOkPpBeilbAUjE0hx58XCrvNXfs73O7STdzN5Wfrujl4z5EOPsuamEBOuCj90yklG_nGSSwaATGsxCLuR_4r54ikUkhJ9eRGnHNSII_g)Figure 1: Something happens at future time t' that causes more resources to be poured into alignmentThe argument goes: there will be a time in the future, t’, where e.g. a terrible AI accident occurs, alignment failures are documented (e.g. partial deception), or the majority of GDP is AI such that more people are pouring resources into aligning AI. Potentially to the point that >90% of alignment resources will be used in the years before x-catastrophe or a pivotal act (Figure 2) ![](https://lh6.googleusercontent.com/kAu1I6zNLr5husJq4rSEIUMiOHwEClYTkJyw-Y8_TqnCf8BH8WPANIlC1OOYOhN5DB-LRWbrOQrLd_bCezQmkwd9UoLLedOMfhfwaAnTa6r-e6WkQJTZw5CB1amJH5in1Q9kOk42r4pcrfeRuw)Figure 2: potentially the majority of total resources poured into alignment happen after t'The initial graph (Fig. 1) seems surprisingly useful as a frame for arguing different cruxes & intuitions. I will quickly enumerate a few & would appreciate comments where you disagree. Compute governance w/o considering hardware/software overhang is net-negative ----------------------------------------------------------------------------- If we just govern compute usage while advances in hardware/software are continued, this may lead to just shifting t’ to the right w/o slowing down timelines, which implies less resources poured into alignment in total for no benefit.  If we limit compute successfully for many years, but hardware & software improvements continue, an actor can defect and experience a large, discontinuous increase in capabilities. If we (somehow) limit all of them, it will become much much harder to produce transformative AI (intuition: it’s like someone trying to build it today). Operationalizing the Cause of t’ -------------------------------- As mentioned before, t’ could be caused by “a terrible AI accident occurs, alignment failures are documented (e.g. partial deception), or the majority of GDP is AI such that more people are pouring resources into aligning AI.”  There are other potential causes as well, and I would find it beneficial to investigate how true the above 3 (and others) are as far as convincing real AI researchers into switching their research focus to alignment. I mean literally talking to researchers in machine learning and asking what capabilities (negative or positive) would get them to seriously consider switching their research focus.  Additionally, if we find that e.g. showing partial deception in models really would be convincing, then pouring resources into showing that sooner would be shifting t’ to the left, implying more overall resources poured into alignment. More Resources Don’t Imply More Progress ---------------------------------------- I expect most AI researchers who try to do alignment to (1) not do impactful work or (2) reinvent the wheel. So assuming we have lots of people who want to do alignment, is there a process that makes them avoid (1) and (2)? For example, a course/workshop they take, post they read, etc. What I currently think is important is ~~creating multiple documents like~~[~~https://arbital.com/p/AI\_boxing/~~](https://arbital.com/p/AI_boxing/) ~~. 
So if someone comes up w/ a boxed-ai plan, we can clearly argue that it must (1) actually build an air-type sandbox and (2) still be useful in the remaining channels to perform a pivotal act. If their plan actually considers these two arguments, then I am much more excited about them.~~ ~~So creating more documents like that for e.g. interpretability, learning from human feedback, etc and iterating on those arguments with researchers working in those fields today, will be useful for future researchers to not waste their time w/ dead-ends & reinventing the wheel.~~ my [latest post](https://www.lesswrong.com/posts/7GGRmAyMzqzidmBbi/alignment-as-constraints) on avoiding dead-end research. Another useful thing to have is clearly specifying the sub-problems, which may look like grounding it in already established formalizations. I think this is really hard, but having these allows outsiders to make clear progress on the problem and would even allow us to directly pay unaligned researchers to work on it today (or set up bounties/millennium prize-like questions) Infrastructure for scaling -------------------------- Related, if we do expect way more people to enter the field, are we building the infrastructure to support that scaling?  Ideally what scales to 10,000 people also applies to the thousand or so people that want to do alignment research today. Special thanks to [Tamay Besiroglu](https://www.lesswrong.com/users/tamay-besiroglu) for introducing me to this framing & several arguments, though note my takes are different than his.
e2c68792-43ef-4021-9b3f-6c37003f9d8a
trentmkelly/LessWrong-43k
LessWrong
Why I funded PIBBSS I just left a comment on PIBBSS' Manifund grant request (which I funded $25k) that people might find interesting. PIBBSS needs more funding! > Main points in favor of this grant > 1. My inside view is that PIBBSS mainly supports “blue sky” or “basic” research, some of which has a low chance of paying off, but might be critical in “worst case” alignment scenarios (e.g., where “alignment MVPs” don’t work, “sharp left turns” and “intelligence explosions” are more likely than I expect, or where we have more time before AGI than I expect). In contrast, of the technical research MATS supports, about half is basic research (e.g., interpretability, evals, agent foundations) and half is applied research (e.g., oversight + control, value alignment). I think the MATS portfolio is a better holistic strategy for furthering AI safety and reducing AI catastrophic risk. However, if one takes into account the research conducted at AI labs and supported by MATS, PIBBSS’ strategy makes a lot of sense: they are supporting a wide portfolio of blue sky research that is particularly neglected by existing institutions and might be very impactful in a range of possible “worst-case” AGI scenarios. I think this is a valid strategy in the current ecosystem/market and I support PIBBSS! > 2. In MATS’ recent post, “Talent Needs of Technical AI Safety Teams”, we detail an AI safety talent archetype we name “Connector”. Connectors bridge exploratory theory and empirical science, and sometimes instantiate new research paradigms. As we discussed in the post, finding and developing Connectors is hard, often their development time is on the order of years, and there is little demand on the AI safety job market for this role. However, Connectors can have an outsized impact on shaping the AI safety field and the few that make it are “household names” in AI safety and usually build organizations, teams, or grant infrastructure around them. I think that MATS is far from the ideal training ground for Co
a9216474-24e7-42b3-9bd8-328357c51f8c
StampyAI/alignment-research-dataset/youtube
Youtube Transcripts
Should We Build Superintelligence? [Music] new ones - this is take these five different most striking differences that you are voted on when you register it and have a debate on each one and I want to stress it the purpose of these debates it's not like a presidential debate where the goal is to try to humiliate the other side or get it the point is or the point is rather simply to make sure all of us get to hear what the interesting arguments are for each position and see if we can get a more nuanced way of thinking about these things and all of the five panels we're gonna have now in rapid succession will not only debate those positions that were in the survey but each panelist is gonna propose a little bit more nuanced position that they like and then you will get their way in and say how enthusiastic you are on each one of those so without further ado we're gonna go to the the very first panel which will be chaired by joi Ito from MIT our director of the media lab and we have chair Hyun y1 and Tanya Singh and John havens and Kathryn Olsen and joi Ito who are gonna debate should we build super intelligence so I want each of you to give a one or two minute non nuanced very sort of specific view of why you chose yes or no and then we're gonna do this for the polling but I want you to either name or describe what your position is because we're gonna then do a popularity contest later about maksud it wasn't a competition but we'll see what the distribution of the audience is and how compelling your vision is but the vision so the first part is do a non nuanced pitch of your vision and then the last half we're gonna have a conversation to then have the more nuanced conversation about the details and so try to be as Extreme as you can without being wrong and then after you guys each speak I'll give a orthogonal but hopefully useful a little comment and then we'll open it up to conversation so that's that's sort of the lay of the land so I'll start out with you Katherine cool so of course the word should has a lot of meanings when I answered this question initially I tried to take a pretty pragmatic stance on it like if a friend came to me who I thought was like a capable friend who might actually do things in the world and said my plan is to build a company or to start a company and we're going to build super intelligence should I do that and say ah no like sounds like there's some other things you should consider in that so that's sort of a sort of direct pragmatic should argument or should not argument the question is should we build super intelligence I'll highlight we already have systems that are super intelligent on specific tasks we've already done that in certain domains so maybe you interpret the question as should we build fully autonomous agents that exceed human capacity in pursuing their goals and a reasoning and sort of a long-term consequentialist way I think sort of the whole point of this conference is that we all broadly agree that would go very badly by default so no we shouldn't do that and then to sort of put the like punchy analogy on it when I was talking with some of these folks before of course there would be a lot of benefits we could attain if we could do this right whatever doing it right means the analogy being okay the pool is really nice I'd like to give swimming in the pool there I would have a lot of fun but the pool is full of sharks like it's full of sharks should I go swimming in the pool the answer is no there's things we could do to mitigate those risks 
but right now I think the risk pool of sharks pool of sharks okay okay hi John it's like a TV show so I'm John and I say this just speaking for me down I Tripoli or the council on extended intelligence the stuff I do mine is also kind of a shark in the pool analogy and it's somewhat nuanced in the sense that I don't think we should do anything not just a GI until we figure out the economic paradigm ik level underpinnings of most of what we do as humans and I was in Korea about a month ago at the OECD well-being workshop which is about 700 statisticians around the world focusing on things like the UN s DG's and how what we're actually measuring as a sense of what brings humans worth and prosperity since the 40s and 50s is largely about GDP GDP is not evil if you hear the term GDP and Beyond it means there are other measures that we can use to understand both human and environmental flourishing so for me one thing to keep in mind if you study GDP which is riveting and fascinating I'm kidding is it's really about productivity looking back on a year and it's also about measuring things like exponential growth well in terms of values alignment when you have systems that are designed to be sometimes very fast and to do things at an exponential rate and then you have a measure of success a key performance indicator that for both policymakers and business I used to be an EVP of a top 10 PR firm and every quarter when the doors closed you would want to bring value to your customers you would want to do stuff that would help the world but you also said these are the five words that dictated the actual actions you took did we make our numbers so beyond GDP means people planet profit if we can have triple bottom-line economic metrics guiding what we make for Ag eyeness ASI I'm diving in the pool sharks are known without them I don't think it makes sense to move forward without really thinking what is prosperity and flourishing mean for people in the environment and and the entities that may come beyond if we don't deal with that now don't do it until we can measure it yeah so GDP and Beyond would be I guess the DPM joined piffy not as good as sharks in the pool thing thank you all right I'm approach this question is that I feel the future is with super intelligence I did have a lot more positive values so it seems that to me it seems that in so far is we are not certain that the alignment problem is intractable or super intelligence will lead to some devastating outcomes existential risk level catastrophe for human beings it seems our job is to grind down the risk and bring it down to in fact infinitely small basically and to say that we're not sure whether we can do that and there's risk hence we should stop progress towards super intelligence arguably you can't even do that but to say that just stop progress towards super intelligence to me seems like you're throwing the baby out with the bathwater and it's a really cool baby because it can help you know mitigate interest tential and risk or even catastrophic risks from other advancing technologies so to me seems like it's it's slightly premature to say that we shouldn't be progressing towards super intelligence that's the spirit with which I've approached the question I think you can also frame the problem in a way that our current level of understand with our current level of understanding we can say that super intelligence could there is a non-trivial chance that they'll be extremely bad outcomes from it or that the technology gets concentrated 
in the hands of a few malicious actors or we get locked in a highly suboptimal sort of condition with our Axia logical potential getting curtailed so all of these are options on the table which we need to explore and make sure that we we don't run too much of a risk of those things we don't run a risk of those things but currently I don't think we're in a position to make that call and it seems a little premature to say that we should close off we should close off this path because you know and and not explore all and not not try and reap all the few positive outcomes that could come out from this technology and build offensively stable world which AI would allow us to do don't throw out the baby don't throw out the baby cook right how to point for my position is different with all of the three call it that is absolutely we should yes so the point is that first so the involvement of the intelligence in the universe is a very long trip so our human risk only one stage one stop of the long trip no reason to stop at our stage and we prevent higher level in holidays is Norina for me is a very natural direction to be there for the super intelligence and maybe somebody say we can involve ourselves maybe we can become smarter in the future but for our humorous that means we need thousands of years maybe longer because our brain our body involved too slow and we have a limitation of our scope example so it's impossible for our human being to become to become the super intelligence compared with the machine based super eNOS so if the future the sooner or later so it will happen in some maybe some filter so in this case no difference to build super intelligence yes or not because it will happen why we with or delay the appearance of the super intelligence so finally we need face the the appearance the super intelligence some some day so that's one why we do we must prepare for the for that the second point is that even from the perspective of our human human centered or human expert it's personal leader we still need to develop super intelligence because we have so many big challenges to face maybe for our human race we cannot figure out that big Helenus we need the help of interest for example nuclear waste interest maybe destroy our earth we know that but you can think about if another small planet hit our earth son someday without the nuclear weapon we may become de venise or so with super invaders means we can solve some big challenges that our human race cannot cannot do so we need to that in the future and help us for me the superintendence is not totally different intelligence compared with us is our copy but with fast maybe powerful interest mr. 
Covey is findable share the same neural network in our brain so that means we can communicate with several antennas we built it's not aliens so he is our could be become our partner and it has our intelligence so that's why I support the short version would be something like shouldn't slow the inevitable slowly we shouldn't slow the inevitable how would you summarize and something that goes on the screen or is it we're just we don't matter or matter we think about how to it's the three words or would they be us through water okay the way we should do that we should just do it okay that's good the I will make my slightly orthogonal comment so some people mentioned it but I think it should word is kind of weird because if you don't have the ability to but there is no should it's just it will happen so am i if I'm allowed to add one I would say doesn't matter it will happen but I'll also just point out I think David Kruger said it earlier when he was talking about the comprehensive AI services I'm very much in his in the camp of that conversation about AI actually being agency and models that are integrated into the system we use the term extended intelligence so if you think of corporations or society instead of individuals we already have super intelligence in a way that organizations are arguably more intelligent at least more complex than individuals and that if we start to augment society rather than think about augmenting individuals it's already happening and it will probably happen anyway and it's sort of an unstoppable thing and so but I would just have have question with the with the framing but anyway um so so my thing is that it will happen anyway and it's already here and so should we do the vote and then do the conversation how can you have both it already happened and it's inevitable well so absolutely I'll just keep going okay so I think I think what happens is well we're just evolving on a tree actually with increasing power and my my view is that super intelligence is just intelligence increasing at the societal scale and that and a my etta thing is that I think we just AI just makes us more powerful and more complex careening in the direction that we're going so I'm more interested in getting our house in order in our trajectory headed the right way rather than figuring out whether we should stop it or not that's sort of my vote and then my nuance which you will all get a chance to me wants away are that are the unknowns multiple choice I want to just pose a particular framing of the question and I'm gonna commit an error which I apologize for which is like raise your hand if you oppose me a question which I haven't even spent five minutes thinking about but it came up with it right before I got on the stage which is let's say that you are Dennis Shane and Mustafa and you're the heads of deepmind and one of your engineers comes to you and is like we can do it we can push the button we can build a GI and it's right now like January 1st or whatever January 4th 2019 and you're trying to ask the okay should we do this thing how would you tell what questions would you ask what experts would you go to do you have enough information to make the judgment right now today if this were to actually happen like yes we should and I don't think that collectively we have the like decision-making tools epistemic institutional institutional sanity technical safety work done such that there's any person today that could make the yes we should decision like right now that's just like a little 
more nuanced but I would encourage folks to actually think like what would it take for us to be in a world where a group of people is actually making the decision there's this thing build super intelligence there's an action we can actually take which is give the thumbs up should we do it what would it take for there to be a valid like yes answer that we don't feel good about yeah I'm just posing a question that I think is like more meaningful than what was asked or something or like maybe has some more so so go on so adding the time right now actually changes that makes it more crisp right but I'm sort of say imagine that this is happening right now but then imagine like what properties does a world have in the future that's different than right now I don't think anyone in this room maybe I'm wrong would claim that like there's a possible state where someone alive on earth could like confidently press the Go button and like be making the right call but what would the world look like where that is true like what would those decision-makers need to have access to that they don't have access to right now okay can I keep you in there yeah and do we have a mic for anybody at this back yeah hi so I was going to say something really naive or maybe very unimaginative but I don't know how to think of a world where someone says I'm about to do a GI I can push this button and it will happen right now I don't have it already but you know if I push this button I will have it like can you tell us a little bit more of what it would take to get through that stage yes it's impossible to push a button and heaven tomorrow it's impossible for me at least 30 yes so we have at least just let me try to clarify I didn't mean right now I just mean ok 30 years from now how would we get into a stage where someone was doing AI research raining their stuff by supplying somebody award functions playing around with things and then all of a sudden were like oh we don't have a GI but if I push this button we will have a GI what does push the button mean how I already have it at the point where you realize that if you push a button you have it there's a like you max pick I mean okay we before Justin it's so cool party means at that time we have the technology and the sister if we threw the button the Adria maybe have it so that is the that the time I think we're at a time unfortunately maybe push the button is like the last mile of things that need to be done to develop yeah just if you couldn't reframe it like that it doesn't necessarily need to be pushing this ability but yeah okay thanks panelist thank you [Applause] [Music]
Preventing, reversing, and addressing data leakage: some thoughts

In the last few months, I've been thinking about the problem of accidental leakage of data and how to prevent it, reverse it, and address its aftermath. This post includes various thoughts, some of them including tool-specific guidance, and some of them more general.

Examples of data leakage are as follows:

* Sending a password or sensitive credential to somebody you shouldn't send it to, or posting it in a forum with wide, even public, access.
* Sending factual information accidentally to somebody you didn't intend to share it with (or at least, didn't intend to share it with yet) or putting it in a forum with wide, even public, access.

The prevention steps are fairly similar for both cases, but the steps for addressing are somewhat different. Specifically, for the case of passwords or sensitive credentials, changing it is part of the best practice response. For factual information, on the other hand, it is often infeasible to change the facts since they're the facts!

This post is largely focused on leakage in the digital realm. Some similar ideas apply in the physical realm as well.

This post also doesn't cover large-scale data leakage, nor does it cover complementary best practices such as password security, other forms of authentication security (such as IP-based limits and two-factor authentication) and encryption. I might write additional posts about some of those topics, but those topics are in general more widely covered, hence my desire to write what I'm writing first.

I end the post with meta comments including more on its potential relevance to LessWrong as well as the distinction between general ideas and details specific to particular tools and services.

Prevention strategies: general philosophy

The accident triangle philosophy and the conjunctive nature of accidents

The accident triangle idea is that for every accident with major injury, there are several accidents with minor injury and even more accidents with no injury. A similar idea ap
AI #81: Alpha Proteo

Following up on Alpha Fold, DeepMind has moved on to Alpha Proteo. We also got a rather simple prompt that can create a remarkably not-bad superforecaster for at least some classes of medium term events.

We did not get a new best open model, because that turned out to be a scam. And we don’t have Apple Intelligence, because it isn’t ready for prime time. We also got only one very brief mention of AI in the debate I felt compelled to watch.

What about all the apps out there, that we haven’t even tried? It’s always weird to get lists of ‘top 50 AI websites and apps’ and notice you haven’t even heard of most of them.

TABLE OF CONTENTS

1. Introduction.
2. Table of Contents.
3. Language Models Offer Mundane Utility. So many apps, so little time.
4. Language Models Don’t Offer Mundane Utility. We still don’t use them much.
5. Predictions are Hard Especially About the Future. Can AI superforecast?
6. Early Apple Intelligence. It is still early. There are some… issues to improve on.
7. On Reflection It’s a Scam. Claims of new best open model get put to the test, fail.
8. Deepfaketown and Botpocalypse Soon. Bots listen to bot music that they bought.
9. They Took Our Jobs. Replit agents build apps quick. Some are very impressed.
10. The Time 100 People in AI. Some good picks. Some not so good picks.
11. The Art of the Jailbreak. Circuit breakers seem to be good versus one-shots.
12. Get Involved. Presidential innovation fellows, Oxford philosophy workshop.
13. Alpha Proteo. DeepMind once again advances its protein-related capabilities.
14. Introducing. Google to offer AI podcasts on demand about papers and such.
15. In Other AI News. OpenAI raising at $150b, Nvidia denies it got a subpoena.
16. Quiet Speculations. How big a deal will multimodal be? Procedural games?
17. The Quest for Sane Regulations. Various new support for SB 1047.
18. The Week in Audio. Good news, the debate is over, there might not be another.
19. Rhetorical Innovation. You
Biological Anchors: The Trick that Might or Might Not Work *This post originally posted on Astral Codex Ten on Feb 23 2022.* *It was printed in* [*The Carving of Reality*](https://www.amazon.com/Carving-Reality-Essays-LessWrong-Community/dp/B0C95MJJBK)*, the third volume of the Best of LessWrong book series. It was included as a (shorter) replacement for Ajeya Cotra's* [*Draft report on AI timelines*](https://www.lesswrong.com/posts/KrJfoZzpSDpnrv9va/draft-report-on-ai-timelines)*, and Eliezer's* [*Biology-Inspired AGI Timelines: The Trick That Never Works*](https://www.lesswrong.com/posts/ax695frGJEzGxFBK4/biology-inspired-agi-timelines-the-trick-that-never-works)*, covering the topic from multiple sides.* *It's crossposted here with Scott's permission for completeness (i.e. having all essays in the book appear on LessWrong).* Introduction ============ I've been trying to review and summarize Eliezer Yudkowksy's recent dialogues on AI safety. Previously in sequence: [Yudkowsky Contra Ngo On Agents](https://astralcodexten.substack.com/p/practically-a-book-review-yudkowsky). Now we’re up to Yudkowsky contra Cotra on biological anchors, but before we get there we need to figure out what Cotra's talking about and what's going on. The [Open Philanthropy Project](https://www.openphilanthropy.org/) ("Open Phil") is a big effective altruist foundation interested in funding AI safety. It's got $20 billion, probably the majority of money in the field, so its decisions matter a lot and it’s very invested in getting things right. In 2020, it asked senior researcher Ajeya Cotra to produce [**a report on when human-level AI would arrive.**](https://drive.google.com/drive/u/1/folders/15ArhEPZSTYU8f012bs6ehPS6-xmhtBPP)It says the resulting document is "informal" - but it’s 169 pages long and likely to affect millions of dollars in funding, which some might describe as making it *kind* of formal. The report finds a 10% chance of “transformative AI” by 2031, a 50% chance by 2052, and an almost 80% chance by 2100. Eliezer rejects their methodology and expects AI earlier (he doesn’t offer many numbers, but [here](https://www.econlib.org/archives/2017/01/my_end-of-the-w.html) he gives Bryan Caplan 50-50 odds on 2030, albeit [not totally seriously](https://www.econlib.org/archives/2017/01/my_end-of-the-w.html#comment-166919)). He made the case in his own very long essay, [**Biology-Inspired AGI Timelines: The Trick That Never Works**](https://www.lesswrong.com/posts/ax695frGJEzGxFBK4/biology-inspired-agi-timelines-the-trick-that-never-works), sparking a bunch of arguments and counterarguments and even more long essays. There's a small cottage industry of summarizing the report already, eg OpenPhil CEO Holden Karnofsky's [article](https://www.cold-takes.com/forecasting-transformative-ai-the-biological-anchors-method-in-a-nutshell/) and Alignment Newsletter editor Rohin Shah's [comment](https://www.lesswrong.com/posts/KrJfoZzpSDpnrv9va/draft-report-on-ai-timelines?commentId=7d4q79ntst6ryaxWD). I've drawn from both for my much-inferior attempt. Part I: The Cotra Report ======================== Ajeya Cotra is a senior research analyst at OpenPhil. She's assisted by her fiancee Paul Christiano (compsci PhD, OpenAI veteran, runs an AI alignment nonprofit) and to a lesser degree by other leading lights. Although not everyone involved has formal ML training, if you care a lot about whether efforts are “establishment” or “contrarian”, this one is probably more establishment. 
The report asks when we will first get "transformative AI" (ie AI which produces a transition as impressive as the Industrial Revolution; probably this will require it to be about as smart as humans). Its methodology is:

1. Figure out how much inferential computation the human brain does.
2. Try to figure out how much training computation it would take, right now, to get a neural net that does the same amount of inferential computation. Get some mind-bogglingly large number.
3. Adjust for "algorithmic progress", ie maybe in the future neural nets will be better at using computational resources efficiently. Get some number which, realistically, is still mind-bogglingly large.
4. Probably if you wanted that mind-bogglingly large amount of computation, it would take some mind-bogglingly large amount of money. But computation is getting cheaper every year. Also, the economy is growing every year. Also, the share of the economy that goes to investments in AI companies is growing every year. So at some point, some AI company will actually be able to afford that mind-bogglingly large amount of money, deploy the mind-bogglingly large amount of computation, and train the AI that has the same inferential computation as the human brain.
5. Figure out what year that is.

Does this encode too many questionable assumptions? For example, might AGI come from an ecosystem of interacting projects (eg how the Industrial Revolution came from an ecosystem of interacting technologies) such that nobody has to train an entire brain-sized AI in one run? Maybe - in fact, Ajeya thinks the Industrial Revolution scenario might be *more* likely than the single-run scenario. But she finds the single-run scenario a useful upper bound (later she mentions other reasons to try it as a *lower* bound, and compromises by treating it as a central estimate) and still thinks it's worth figuring out how long it will take. So let's go through the steps one by one:

How Much Computation Does The Human Brain Do?
---------------------------------------------

Step one - figuring out how much computation the human brain does - is a daunting task. A successful solution would look like a number of FLOP/S (floating point operations per second), a basic unit of computation in digital computers. Luckily for Ajeya and for us, another OpenPhil analyst, Joe Carlsmith, finished [a report on this](https://www.openphilanthropy.org/brain-computation-report) a few months prior. It concluded the brain probably uses 10^13 - 10^17 FLOP/S. Why? Partly because this was the number given by most experts. But also, there are about 10^15 synapses in the brain, each one spikes about once per second, and a synaptic spike probably does about one FLOP of computation. (I'm not sure if he's taking into account the recent research suggesting that computation sometimes happens within dendrites - see section 2.1.1.2.2 of his report for complications and why he feels okay ignoring them - but realistically there are lots of order-of-magnitude-sized gray areas here, and he gives a sufficiently broad range that as long as the unknown unknowns aren't all in the same direction it should be fine.)

So a human-level AI would also need to do 10^15 floating point operations per second? Unclear. Computers can run on more or less efficient algorithms; neural nets might use their computation more or less effectively than the brain. You might think it would be more efficient, since human designers can do better than the blind chance of evolution.
Or you might think it would be less efficient, since many biological processes are still far beyond human technology. Or you might do what OpenPhil did and just look at a bunch of examples of evolved vs. designed systems and see which are generally better: [![](https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2F076623f5-43fb-4195-b55b-7db9d1583048_514x310.png)](https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2F076623f5-43fb-4195-b55b-7db9d1583048_514x310.png) *Source:* [*This document*](https://docs.google.com/document/d/1HUtUBpRbNnnWBxiO2bz3LumEsQcaZioAPZDNcsWPnos/edit) *by Paul Christiano.* Ajeya combines this with another metric where they see how existing AI compares to animals with apparently similar computational capacity; for example, she says that DeepMind’s Starcraft engine has about as much inferential compute as a honeybee and seems about equally subjectively impressive. I have no idea what this means. Impressive at what? Winning multiplayer online games? Stinging people? In any case, they decide to penalize AI by one order of magnitude compared to Nature, so a human-level AI would need to do 10^16 floating point operations per second. How Much Compute Would It Take To Train A Model That Does 10^16 Floating Point Operations Per Second? ----------------------------------------------------------------------------------------------------- So an AI could potentially equal the human brain with 10^16 FLOP/S. Good news! There’s [a supercomputer in Japan](https://en.wikipedia.org/wiki/Fugaku_(supercomputer)) that can do 10^17 FLOP/S! [![Japan&#39;s Fugaku Supercomputer Completes First-Ever Sweep of High-Performance Benchmarks - IEEE Spectrum](https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2Fd5924b1f-a563-4332-b137-ff9dda5580d0_1240x516.jpeg)](https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2Fd5924b1f-a563-4332-b137-ff9dda5580d0_1240x516.jpeg)   *It looks like this (*[*source*](https://spectrum.ieee.org/japans-fugaku-supercomputer-is-first-in-the-world-to-simultaneously-top-all-high-performance-benchmarks)*)* So why don’t we have AI yet? Why don’t we have *ten* AIs? In the modern paradigm of machine learning, it takes very big computers to *train* relatively small end-product AIs. If you tried to train GPT-3 on the same kind of medium-sized computers you run it on, it would take between tens and hundreds of years. Instead, you train GPT-3 on giant supercomputers like the ones above, get results in a few months, then run it on medium-sized computers, maybe ~10x better than the average desktop. But our hypothetical future human-level AI is 10^16 FLOP/S in inference mode. It needs to *run on* a giant supercomputer like the one in the picture. Nothing we have now could even begin to train it. There’s no direct and obvious way to convert inference requirements to training requirements. Ajeya tries assuming that each parameter will contribute about 10 FLOPs, which would mean the model would have about 10^15 parameters (GPT-3 has about 10^11 parameters). 
Finally, she uses some empirical scaling laws derived from looking at past machine learning projects to estimate that training 10^15 parameters would require H\*10^30 FLOPs, where H represents the model’s “horizon”. If I understand this correctly, “horizon” is a reinforcement learning concept: how long does it take to learn how much reward you got for something? If you’re playing a slot machine, the answer is one second. If you’re starting a company, the answer might be ten years. So what horizon do you need for human level AI? Who knows? It probably depends on what human-level task you want the AI to do, plus how well an AI can learn to do that task from things less complex than the entire task. If writing a good book is mostly about learning to write good sentence and then stringing them together, a book-writing AI can get away with a short horizon. If nothing short of writing an entire book and then evaluating it to see whether it is good or bad can possibly teach you book-writing, the AI will need a long time horizon. Ajeya doesn’t claim to have a great answer for this, and considers three models: horizons of a few minutes, a few hours, and a few years. Each step up adds another three orders of magnitude, so she ends up with three estimates of 10^30, 10^33, and 10^36 FLOPs. (for reference, the lowest training estimate - 10^30 - would take the supercomputer pictured above 300,000 years to complete; the highest, 300 billion.) Or What If We Ignore All Of That And Do Something Else? ------------------------------------------------------- This is piling a lot of assumptions atop each other, so Ajeya tries three other methods of figuring out how hard this training task is. Humans seem to be human-level AIs. How much training do *we* need? You can analogize our childhood to an AI’s training period. We receive a stream of sense-data. We start out flailing kind of randomly. Some of what we do gets rewarded. Some of what we do gets punished. Eventually our behavior becomes more sophisticated. We subject our new behavior to reward or punishment, fine-tune it further. *Rent* asks us: how do you measure the life of a woman or man? It answers:“in daylights, in sunsets, in midnights, in cups of coffee; in inches, in miles, in laughter, in strife.” But you can also measure in floating point operations, in which case the answer is about 10^24. This is actually trivial: multiply the 10^15 FLOP/S of the human brain by the ~10^9 seconds of childhood and adolescence. This new estimate of 10^24 is much lower than our neural net estimate of 10^30 - 10^36 above. In fact, it’s only a hair above the amount it took to train GPT-3! If human-level AI was this easy, we should have hit it by accident sometime in the process of making a GPT-4 prototype. Since OpenAI hasn’t mentioned this, probably it’s harder than this and we’re missing something. Probably we’re missing that humans aren’t blank slates. We don’t start at zero and then only use our childhood to train us further. The very structure of our brain encodes certain assumptions about what kinds of data we should be looking out for and how we should use it. Our training data isn’t just what we observed during childhood, it’s everything that any of our ancestors observed during evolution. How many floating-point operations is the evolutionary process? Ajeya estimates 10^41. I can’t believe I’m writing this. 
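If you want to check this arithmetic rather than take it on faith, it's simple enough to reproduce. Here's a back-of-the-envelope sketch in Python, using the round numbers quoted above rather than anything from Ajeya's actual notebook:

```python
# Back-of-the-envelope version of the bio-anchors arithmetic, using the
# round numbers quoted in the text (not the report's actual model).

SECONDS_PER_YEAR = 3.15e7

# Brain inference compute, Carlsmith-style: synapses x spike rate x FLOP per spike.
synapses = 1e15
spikes_per_second = 1
flop_per_spike = 1
brain_flop_s = synapses * spikes_per_second * flop_per_spike   # ~1e15 FLOP/S

# Penalize artificial designs by one order of magnitude relative to nature.
agi_inference_flop_s = brain_flop_s * 10                       # ~1e16 FLOP/S

# Assume ~10 FLOPs of inference per parameter.
parameters = agi_inference_flop_s / 10                         # ~1e15 parameters

# Training cost under the scaling heuristic quoted above: roughly H * 1e30 FLOPs,
# where the horizon multiplier H is ~1 (minutes), ~1e3 (hours), ~1e6 (years).
training_flop = {
    "short horizon":   1e0 * 1e30,
    "medium horizon":  1e3 * 1e30,
    "long horizon":    1e6 * 1e30,
    "lifetime anchor": brain_flop_s * 1e9,   # ~1e9 seconds of childhood
}

# How long would a 1e17 FLOP/S (Fugaku-class) machine take for each?
for name, flop in training_flop.items():
    years = flop / 1e17 / SECONDS_PER_YEAR
    print(f"{name}: {flop:.0e} FLOPs, ~{years:.1e} years at 1e17 FLOP/S")
```

The short-horizon anchor alone would keep a Fugaku-class machine busy for a few hundred thousand years, while the lifetime anchor would take it a few months; that gap is what the rest of the report is trying to adjudicate.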
I can’t believe someone actually estimated the number of floating point operations involved in jellyfish rising out of the primordial ooze and eventually becoming fish and lizards and mammals and so on all the way to the Ascent of Man. Still, the idea is simple. You estimate how long animals with neurons have been around for (10^16 seconds), total number of animals at any given second (10^20) times average number of FLOPS per animal (10^5) and you can read more [here](https://docs.google.com/document/d/1k7qzzn14jgE-Gbf0CON7_Py6tQUp2QNodr_8VAoDGnY/edit#heading=h.gvc1xyxlemkd) but it comes out to 10^41 FLOs. I would not call this an *exact* estimate - for one thing, it assumes that all animals are nematodes, on the grounds that non-nematode animals are basically a rounding error in the grand scheme of things. But it does justify this bizarre assumption, and I don’t feel inclined to split hairs here - surely the total amount of computation performed by evolution is irrelevant except as an extreme upper bound? Surely the part where Australia got all those weird marsupials wasn’t strictly necessary for the human brain to have human-level intelligence? One more weird human training data estimate attempt: what about the genome? If in some sense a bit of information in the genome is a “parameter”, how many parameters does that suggest humans have, and how does it affect training time? Ajeya calculates that the genome has about 7.5x10^8 parameters (compared to 10^15 parameters in our neural net calculation, and 10^11 for GPT-3). So we can… Okay, I’ve got to admit, this doesn’t have quite the same “huh?!” factor as trying to calculate the number of FLOs in evolution, but it is in a lot of ways even crazier. The [Japanese canopy plant](https://en.wikipedia.org/wiki/Paris_japonica) has a genome fifty times larger than ours, which suggests that genome size doesn’t correspond very well to organism awesomeness. Also, most of the genome is coding for weird proteins that stabilize the shape of your kidney tubule or something, why should this matter for intelligence? [![Paris japonica Kinugasasou in Hakusan 2003 7 27.jpg](https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2F333dcbf2-1f63-42a1-821f-94f39818e62d_1280x897.jpeg)](https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2F333dcbf2-1f63-42a1-821f-94f39818e62d_1280x897.jpeg) *The Japanese canopy plant. I think it is very pretty, but probably low prettiness per megabyte of DNA*. I think Ajeya would answer that she’s debating orders of magnitude here, and each of these weird things costs only a few OOMs and probably they all even out. That still leaves the question of why she thinks this approach is interesting at all, to which she answers that: > The motivating intuition is that evolution performed a search over a space of small, compact genomes which coded for large brains rather than directly searching over the much larger space of all possible large brains, and human researchers may be able to compete with evolution on this axis. > > So maybe instead of having to figure out how to generate a brain per se, you figure out how to generate some short(er) program that can output a brain? But this would be very different from how ML works now. 
Also, you need to give each short program the chance to unfold into a brain before you can evaluate it, which evolution has time for but we probably don’t.  Ajeya sort of mentions these problems and counters with an argument that maybe you could think of the genome as a reinforcement learner with a long horizon. I don’t quite follow this but it sounds like the sort of thing that almost might make sense. Anyway, when you apply the scaling laws to a 7.5\*10^8 parameter genome and penalize it for a long horizon, you get about 10^33 FLOPs, which is weirdly similar to some of the other estimates. So now we have six different training cost estimates. First, neural nets with short, medium, and long horizons, which are 10^30, 10^33, and 10^36 FLOPs, respectively. Next, the amount of training data in a human lifetime - 10^24 FLOs - and in all of evolutionary history - 10^41 FLOPs. And finally, this weird genome thing, which is 10^33 FLOPs. An optimist might say “Well, our lowest estimate is 10^24 FLOPs, our highest is 10^41 FLOPs, those sound like kind of similar numbers, at least there’s no “5 FLOPs” or “10^9999 FLOPs” in there. A pessimist might say “The difference between 10^24 and 10^41 is seventeen orders of magnitude, ie a factor of 100,000,000,000,000,000 times. This barely constrains our expectations at all!” Before we decide who to trust, let’s remember that we’re still only at Step 2 of our eight step Methodology, and continue. How Do We Adjust For Algorithmic Progress? ------------------------------------------ So today, in 2022 (or in 2020 when this was written, or whenever), assume it would take about 10^33 FLOs to train a human-level AI. But technology constantly advances. Maybe we’ll discover ways to train AIs faster, or run AIs more efficiently, or something like that. How does that factor into our estimate? Ajeya draws on Hernandez & Brown’s [Measuring The Algorithmic Efficiency Of Neural Networks](https://arxiv.org/ftp/arxiv/papers/2005/2005.04305.pdf). They look at how many FLOPs it took to train various image recognition AIs to an equivalent level of performance between 2012 and 2019, and find that over those seven years it decreased by a factor of 44x, ie training efficiency doubles every sixteen months! Ajeya assumes a doubling time slightly longer than that, because it’s easier to make progress in simple well-understood fields like image recognition than in the novel task of human-level AI. She chooses a doubling time of “merely” 2 - 3 years. If training efficiency doubles every 2-3 years, it would dectuple in about 10 years. So although it might take 10^33 FLOPs to train a human level AI today, in ten years or so it may take only 10^32, in twenty years 10^31, and so on. ### When Will Anyone Have Enough Computational Resources To Train A Human-Level AI? In 2020, AI researchers could buy computational resources at about $1 for 10^17 FLOPs. That means the 10^33 FLOPs you’d need to train a human-level AI would cost $10^16, ie ten quadrillion dollars. This is about twenty times more money than exists in the entire world. But compute costs fall quickly. Some formulations of Moore’s Law suggest it halves every eighteen months. These no longer seem to hold exactly, but it does seem to be halving maybe once every 2.5 years. 
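Both of those trends are just exponential declines with different half-lives, and the headline numbers are fairly sensitive to which half-life you pick. Here is a small sketch of the conversions involved, with illustrative values rather than the report's:

```python
import math

# Convert an observed improvement factor over some period into a doubling time.
def doubling_time(factor, years):
    return years / math.log2(factor)

# Hernandez & Brown: ~44x algorithmic efficiency gain over 7 years.
print(f"algorithmic doubling time: {doubling_time(44, 7):.1f} years")  # ~1.3 years

# How far does a cost fall over 20 years under different halving times?
def fraction_remaining(halving_time_years, years=20):
    return 0.5 ** (years / halving_time_years)

for halving in (1.5, 2.5, 3.5):
    print(f"{halving}-year halving time: costs fall to "
          f"{fraction_remaining(halving):.1e} of today's over 20 years")
```

Moving an assumed halving time from 2.5 to 3.5 years changes the cost two decades out by roughly a factor of five, which is worth keeping in mind for the sensitivity analysis below.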
The exact number is kind of controversial: Ajeya admits it’s been more like once every 3-4 years lately, but she heard good things about some upcoming chips and predicted it might revert back to the longer-term faster trend (it’s been two years now, some new chips have come out, and this prediction is looking pretty good). So as time goes on, algorithmic progress will cut the cost of training (in FLOPs), and hardware progress will also cut the cost of FLOPs (in dollars). So training will become gradually more affordable as time goes on. Once it reaches a cost somebody is willing to pay, they’ll buy human-level AI, and then that will be the year human-level AI happens. What is the cost that somebody (company? government? billionaire?) is willing to pay for human-level AI? The most expensive AI training in history was AlphaStar, a DeepMind project that spent over $1 million to train an AI to play StarCraft *(*in their defense, it won). But people have been pouring more and more money into AI lately: [![The cost of training machines is becoming a problem | The Economist](https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2Fb9496f1f-ec6c-41a2-8c2e-27f09da22097_1280x759.png)](https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2Fb9496f1f-ec6c-41a2-8c2e-27f09da22097_1280x759.png) *Source* [*here*](https://www.economist.com/technology-quarterly/2020/06/11/the-cost-of-training-machines-is-becoming-a-problem)*. This is about compute rather than cost, but most of the increase seen here has been companies willing to pay for more compute over time, rather than algorithmic or hardware progress.* The StarCraft AI was kind of a vanity project, or science for science’s sake, or whatever you want to call it. But AI is starting to become profitable, and human-level AI would be *very* profitable. Who knows how much companies will be willing to pay in the future? Ajeya extrapolates the line on the graph forward to 2025 and gets $1 billion. This is starting to sound kind of absurd - the entire company OpenAI was founded with $1 billion in venture capital, it seems like a lot to expect them to spend more than $1 billion on a single training run. So Ajeya backs off from this after 2025 and predicts a “two year doubling time”. This is not much of a concession. It still means that in 2040 someone might be spending $100 billion to train one AI. Is this at all plausible? At the height of the Manhattan Project, the US was investing about 0.5% of its GDP into the effort; a similar investment today would be worth $100 billion. And we’re about twice as rich as 2000, so 2040 might be twice as rich as we are. At that point, $100 billion for training an AI is within reach of Google and maybe a few individual billionaires (though it would still require most or all of their fortune). Ajeya creates a complicated function to assess how much money people will be willing to pay on giant AI projects per year. This looks like an upward-sloping curve. The line representing the likely cost of training a human-level AI looks like a downward sloping curve. At some point, those two curves meet, representing when human-level AI will first be trained. So When Will We Get Human-Level AI? 
----------------------------------- The report gives a long distribution of dates based on weights assigned to the six different models, each of which has really wide confidence intervals and options for adjusting the mean and variance based on your assumptions. But the median of all of that is 10% chance by 2031, 50% chance by 2052, and almost 80% chance by 2100. Ajeya takes her six models and decides to weigh them like so, based on how plausible she thinks each one is: 20% neural net, short horizon 30% neural net, medium horizon 15% neural net, long horizon 5% human lifetime as training data 10% evolutionary history as training data 10% genome as parameter number She ends up with this: [![](https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2F62d647ff-58ed-4e9a-9f1a-7febf5859249_1152x842.png)](https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2F62d647ff-58ed-4e9a-9f1a-7febf5859249_1152x842.png) How Sensitive Is This To Changes In Assumptions? ------------------------------------------------ She very helpfully gives us a [Colab notebook](https://colab.research.google.com/drive/1Fpy8eGDWXy-UJ_WTGvSdw_hauU4l-pNS?usp=sharing) and [Google spreadsheet](https://docs.google.com/spreadsheets/d/1XV9PBEY2UtTWxsJ_zoAujnIGKpnHTwuvuvaaNOG30nY/edit#gid=505210495) to play around with. The notebook lets you change some of the more detailed parameters of the individual models, and the spreadsheet lets you change the big picture. I leave the notebook to people more dedicated to forecasting than I am, and will talk about the spreadsheet here. If you’re following along at home, the default spreadsheet won’t reflect Ajeya’s findings until you fill in the table in the bottom left like so: [![](https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2F622bac28-eaa6-40b5-b93b-695952966ef7_744x324.png)](https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2F622bac28-eaa6-40b5-b93b-695952966ef7_744x324.png) Great. Now that we’ve got that, let’s try changing some stuff. I like the human childhood training data argument (Lifetime Anchor) more than Ajeya does, and I like the size-of-the-genome argument less. I’m going to change the weights to 20-20-0-20-20-20. Also, Ajeya thinks that someone might be willing to spend 1% of national GDP on training AIs, but that sounds really high to me, so I’m going to down to 0.1%. Also, Ajeya’s estimate of 3% GDP growth sounds high for the sort of industrialized nations who might do AI research, I’m going to lower it to 2%. Since I’m feeling mistrustful today, let’s use the Hernandez&Brown estimate for compute halving (1.5 years) in place of Ajeya’s *ad hoc* adjustments. And let’s use the current compute halving time (3.5 years) instead of Ajeya’s overly rosy version (2.5 years). 
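Before seeing what those tweaks do, it may help to see the bare shape of the calculation they are tweaking. Here is a toy version of the two-curves crossover, with made-up round numbers standing in for the spreadsheet's parameters and the probability mixture over anchors left out entirely:

```python
# Toy version of the "two curves cross" calculation. This is NOT Ajeya's
# spreadsheet; every parameter below is an assumed, illustrative value.

def crossover_year(training_flop_2020,        # FLOPs needed for the anchor, in 2020
                   flop_per_dollar_2020=1e17,
                   algo_halving_years=2.5,    # required FLOPs halve this often
                   hw_halving_years=2.5,      # $/FLOP halves this often
                   spend_2025=1e9,            # max willingness to pay in 2025
                   spend_doubling_years=2.0,
                   gdp_2020=1e14,             # ~$100 trillion world GDP
                   gdp_growth=0.03,
                   max_gdp_fraction=0.01):    # cap spending at 1% of GDP
    for year in range(2020, 2151):
        t = year - 2020
        flops_needed = training_flop_2020 * 0.5 ** (t / algo_halving_years)
        dollars_needed = flops_needed / (flop_per_dollar_2020 * 2 ** (t / hw_halving_years))
        spend = min(spend_2025 * 2 ** ((year - 2025) / spend_doubling_years),
                    max_gdp_fraction * gdp_2020 * (1 + gdp_growth) ** t)
        if dollars_needed <= spend:
            return year
    return None

for anchor in (1e30, 1e33, 1e36):
    print(f"{anchor:.0e} FLOP anchor -> crossover around {crossover_year(anchor)}")
```

Under these made-up knobs the curves cross somewhere between the early 2030s and the late 2040s depending on which anchor you feed in, which is one way of seeing that the weights placed on the six anchors do a lot of the work.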
All these changes… [![](https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2F7d5c2306-a123-4903-adb9-d961d56ebfb5_1152x842.png)](https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2F7d5c2306-a123-4903-adb9-d961d56ebfb5_1152x842.png) …don’t really do much. The median goes from 2052 to about 2065. Four of the models give results between 2030 and 2070. The last two, Neural Net With Long Horizon and Evolution, suggest probably no AI this century (although Neural Net With Long Horizon does think there’s a 40% chance by 2100). Ajeya doesn’t really like either of these models and they’re not heavily weighted in her main result. Does The Truth Point To Itself? ------------------------------- Back up a second. Here’s something that makes me kind of nervous. Most of Ajeya’s numbers are kind of made up, with several order-of-magnitude error bars and simplifying assumptions like “all animals are nematodes”. For a single parameter, we get estimates spanning seventeen different orders of magnitude: the upper bound is one hundred quadrillion times the lower bound. *And yet* four of the six models, including two genuinely exotic ones, manage to get dates within twenty years of 2050. And 2050 is also the date everyone else focuses on. Here’s the prediction-market-like site [Metaculus](https://www.metaculus.com/questions/5121/when-will-the-first-artificial-general-intelligence-system-be-devised-tested-and-publicly-known-of-stronger-operationalization/): [![](https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2F807f66de-8c5c-4423-b293-ca92b5b64053_763x360.png)](https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2F807f66de-8c5c-4423-b293-ca92b5b64053_763x360.png) Their distribution looks a lot like Ajeya’s, and even has the same median, 2052 (though forecasters could have read Ajeya’s report). Katja Grace et al [surveyed 352 AI experts](https://arxiv.org/pdf/1705.08807.pdf), and they gave a median estimate of 2062 for an AI that could “outperform humans at all tasks” (though with many caveats and high sensitivity to question framing). This was before Ajeya’s report, so they definitely didn’t read it. So lots of Ajeya’s different methods *and* lots of other people presumably using different methodologies or no methodology at all, all converge on this same idea of 2050 give or take a decade or two. An optimist might say “The truth points to itself! There are 371 known proofs of the Pythagorean Theorem, and they all end up in the same place. That’s because no matter what methodology you use, if you use it well enough you get to the correct answer.” A pessimist might be more suspicious; we’ll return to this part later. FLOPS Alone Turn The Wheel Of History ------------------------------------- One more question: what if this is all bullshit? What if it’s an utterly useless total garbage steaming pile of grade A crap? Imagine a scientist in Victorian Britain, speculating on when humankind might invent ships that travel through space. He finds a natural anchor: the moon travels through space! 
He can observe things about the moon: for example, it is 220 miles in diameter (give or take an order of magnitude). So when humankind invents ships that are 220 miles in diameter, they can travel through space! Ships have certainly grown in size tremendously, from primitive kayaks to Roman triremes to Spanish galleons to the great ocean liners of the (Victorian) present. [![](https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2Fceba6aa0-dbde-41ca-805e-01af4fac9324_769x336.png)](https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2Fceba6aa0-dbde-41ca-805e-01af4fac9324_769x336.png) *The AI forecasting organization AI Impacts actually has* [*a whole report on historical ship size trends*](https://aiimpacts.org/historic-trends-in-ship-size/) *to prove an unrelated point about technological progress, so I didn’t even have to make this graph up.* Suppose our Victorian scientist lived in 1858, right when the Great Eastern was launched. The trend line for ship size crossed 100m around 1843, and 200m in 1858, so doubling time is 15 years - but perhaps they notice this is going to be an outlier, so let’s round up a bit and say 18 years. The (one order of magnitude off estimate for the size of the) Moon is 350,000m, so you’d need ships to scale up by 350,000/200 = 1,750x before they’re as big as the Moon. That’s about 10.8 doublings, and a doubling time is 18 years, so we’ll get spaceships in . . . 2052 exactly. (fudging numbers to land where you want is actually fun and easy) [![](https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2Fde3d97f4-afca-45c4-9ed2-521cd25041df_460x262.jpeg)](https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2Fde3d97f4-afca-45c4-9ed2-521cd25041df_460x262.jpeg) *SS Great Eastern, the extreme outlier large steamship from 1858. This has become sort of a mascot for quantitative technological progress forecasters.* What is this scientist’s error? The big one is thinking that spaceship progress depends on some easily-measured quantity (size) instead of on fundamental advances (eg figuring out how rockets work). You can make the same accusation against Ajeya et al: you can have all the FLOPs in the world, but if you don’t understand how to make a machine think, your AI will be, well, a flop. Ajeya discusses this a bit on page 143 of her report. There is some sense in which FLOPs and knowing-what-you’re-doing trade of against each other. If you have literally no idea what you’re doing, you can sort of kind of re-run evolution until it comes up with something that looks good. If things are somehow even worse than *that*, you could always run [AIXI](https://en.wikipedia.org/wiki/AIXI), a hypothetical AI design guaranteed to get excellent results as long as you have infinite computation. You could run a Go engine by searching the entire branching tree structure of Go - you *shouldn’t*, and it would take a zillion times more compute than exists in the entire world, but you *could*. 
So in some sense what you’re doing, when you’re figuring out what you’re doing, is coming up with ways to do already-possible things more efficiently. But that’s just algorithmic progress, which Ajeya has already baked into her model. (our Victorian scientist: “As a *reductio ad absurdum*, you could always stand the ship on its end, and then climb up it to reach space. We’re just trying to make ships that are more efficient than that.”) Part II: Biology-Inspired AI Timelines: The Trick That Never Works ================================================================== Eliezer Yudkowsky presents a more subtle version of these kinds of objection in an essay called [Biology-Inspired AI Timelines: The Trick That Never Works](https://www.lesswrong.com/posts/ax695frGJEzGxFBK4/biology-inspired-agi-timelines-the-trick-that-never-works), published December 2021. Ajeya’s report is a 169-page collection of equations, graphs, and modeling assumptions. Yudkowsky’s rebuttal is a fictional dialogue between himself, younger versions of himself, famous AI scientists, and other bit players. At one point, a character called “Humbali” shows up begging Yudkowsky to be more humble, and Yudkowsky defeats him with devastating counterarguments. Still, he did found the field, so I guess everyone has to listen to him. He starts: in 1988, famous AI scientist Hans Moravec predicted human-level AI by 2010. He was using the same methodology as Ajeya: extrapolate how quickly processing power would grow (in FLOP/S), and see when it would match some estimate of the human brain. Moravec got the processing power almost exactly right (it hit his 2010 projection in 2008) and his human brain estimate pretty close (he says 10^13 FLOP/S, Ajeya says 10^15, this 2 OOM difference only delays things a few years), yet there was not human-level AI in 2010. What happened? Ajeya's answer could be: Moravec didn't realize that, in the modern ML paradigm, any given size of program requires a much bigger program to train. Ajeya, who has a 35-year advantage on Moravec, estimates approximately the same power for the finished program (10^16 vs. 10^13 FLOP/S) but says that training the 10^16 FLOP/S program will require 10^33ish FLOPs. Eliezer agrees as far as it goes, but says this points to a much deeper failure mode, which was that Moravec had no idea what he was doing. He was assuming processing power of human brain = processing power of computer necessary for AGI. Why? > *The human brain consumes around 20 watts of power. Can we thereby conclude that an AGI should consume around 20 watts of power, and that, when technology advances to the point of being able to supply around 20 watts of power to computers, we'll get AGI? […]* > > *You say that AIs consume energy in a very different way from brains? Well, they'll also consume computations in a very different way from brains! The only difference between these two cases is that you know something about how humans eat food and break it down in their stomachs and convert it into ATP that gets consumed by neurons to pump ions back out of dendrites and axons, while computer chips consume electricity whose flow gets interrupted by transistors to transmit information. 
Since you know anything whatsoever about how AGIs and humans consume energy, you can see that the consumption is so vastly different as to obviate all comparisons entirely.* > > *You are ignorant of how the brain consumes computation, you are ignorant of how the first AGIs built would consume computation, but "an unknown key does not open an unknown lock" and these two ignorant distributions should not assert much internal correlation between them.* > > Cars don’t move by contracting their leg muscles and planes don’t fly by flapping their wings like birds. Telescopes *do* form images the same way as the lenses in our eyes, but differ by so many orders of magnitude in every important way that they defy comparison. Why should AI be different? You have to use some specific algorithm when you’re creating AI; why should we expect it to be anywhere near the same efficiency as the ones Nature uses in our brains? The same is true for arguments from evolution, eg Ajeya’s Evolutionary Anchor, ie “it took evolution 10^43 FLOPs of computation to evolve the human brain so maybe that will be the training cost”. AI scientists sitting in labs trying to figure things out, and nematodes getting eaten by other nematodes, are such different methods for designing things that it’s crazy to use one as an estimate for the other. Algorithmic Progress vs. Algorithmic Paradigm Shifts ---------------------------------------------------- This post is a dialogue, so (Eliezer’s hypothetical model of) OpenPhil gets a chance to respond. They object: this is why we put a term for algorithmic progress in our model. The model isn’t very sensitive to changes in that term. If you want you can set it to some kind of crazy high value and see what happens, but you can’t say we didn’t consider it. > **OpenPhil:**  We did already consider that and try to take it into account: our model already includes a parameter for how algorithmic progress reduces hardware requirements.  It's not easy to graph as exactly as Moore's Law, as you say, but our best-guess estimate is that compute costs halve every 2-3 years […] > > **Eliezer:**  The makers of AGI aren't going to be doing 10,000,000,000,000 rounds of gradient descent, on entire brain-sized 300,000,000,000,000-parameter models, *algorithmically faster than today.*  They're going to get to AGI via some route that *you don't know how to take,* at least if it happens in 2040.  If it happens in 2025, it may be via a route that some modern researchers do know how to take, but in this case, of course, your model was also wrong. > > They're not going to be taking your default-imagined approach *algorithmically faster,* they're going to be taking an *algorithmically different approach* that eats computing power in a different way than you imagine it being consumed. > > **OpenPhil:**  Shouldn't that just be folded into our estimate of how the computation required to accomplish a fixed task decreases by half every 2-3 years due to better algorithms? > > **Eliezer:**  Backtesting this viewpoint on the previous history of computer science, it seems to me to assert that it should be possible to: > > Train a pre-Transformer RNN/CNN-based model, not using any other techniques invented after 2017, to GPT-2 levels of performance, using only around 2x as much compute as GPT-2; > > Play pro-level Go using 8-16 times as much computing power as AlphaGo, but only 2006 levels of technology. 
> > For reference, recall that in 2006, Hinton and Salakhutdinov were just starting to publish that, by training multiple layers of Restricted Boltzmann machines and then unrolling them into a "deep" neural network, you could get an initialization for the network weights that would avoid the problem of vanishing and exploding gradients and activations.  At least so long as you didn't try to stack too many layers, like a dozen layers or something ridiculous like that.  This being the point that kicked off the entire deep-learning revolution. > > Your model apparently suggests that we have gotten around 50 times more efficient at turning computation into intelligence since that time; so, we should be able to replicate any modern feat of deep learning performed in 2021, using techniques from before deep learning and around fifty times as much computing power. > > **OpenPhil:**  No, that's totally not what our viewpoint says when you backfit it to past reality.  Our model does a great job of retrodicting past reality. > > **Eliezer:**  How so? > > **OpenPhil:**  <Eliezer cannot predict what they will say here.> > > I think the argument here is that OpenPhil is accounting for [normal scientific progress in algorithms, but not for paradigm shifts](https://slatestarcodex.com/2019/01/08/book-review-the-structure-of-scientific-revolutions/). Directional Error ----------------- These are the two arguments Eliezer makes against OpenPhil that I find most persuasive. First, that you shouldn’t be using biological anchors at all. Second, that unpredictable paradigm shifts are more realistic than gradual algorithmic progress. These mostly add uncertainty to OpenPhil’s model, but Eliezer ends his essay making a stronger argument: he thinks OpenPhil is directionally wrong, and AI will come earlier than they think. Mostly this is the paradigm argument again. Five years from now, there could be a paradigm shift that makes AI much easier to build. It’s happened before; from GOFAI’s pre-programmed logical rules to Deep Blue’s tree searches to the sorts of Big Data methods that won the Netflix Prize to modern deep learning. Instead of just extrapolating deep learning scaling thirty years out, OpenPhil should be worried about the next big idea. Hypothetical OpenPhil retorts that this is a double-edged sword. Maybe the deep learning paradigm can’t produce AGI, and we’ll have to wait decades or centuries for someone to have the right insight. Or maybe the new paradigm you need for AGI will take more compute than deep learning, in the same way deep learning takes more compute than whatever Moravec was imagining. This is a pretty strong response, since it would have been true for every previous forecaster: remember, Moravec erred in thinking AI would come *too soon*, not too late. So although Eliezer is taking the cheap shot of saying OpenPhil’s estimate will be wrong just as everyone else’s was wrong before, he’s also giving himself the much harder case of arguing it might be wrong in the opposite direction as all its predecessors. Eliezer takes this objection seriously, but feels like on balance probably new paradigms will speed up AI rather than slow it down. Here he grudgingly and with suitable embarrassment does try to make an object-level semi-biological-anchors-related argument: Moravec was wrong because he ignored the training phase. 
And the proper anchor for the training phase is somewhere between evolution and a human childhood, where evolution represents “blind chance eventually finding good things” and human childhood represents “an intelligent cognitive engine trying to squeeze as much data out of experience as possible”. And part of what he expects paradigm shifts to do is to move from more evolutionary processes to more childhood-like processes, and that’s a net gain in efficiency. So he still thinks OpenPhil’s methods are more likely to overestimate the amount of time until AGI rather than underestimate it. What Moore’s Law Giveth, Platt’s Law Taketh Away ------------------------------------------------ Eliezer’s other argument is kind of a low blow: he refers to [Platt’s Law Of AI Forecasting](https://archive.nytimes.com/www.nytimes.com/library/cyber/surf/1120surf-vinge.html): “any AI forecast will put strong AI thirty years out from when the forecast is made.” This isn’t exact. Hans Moravec, writing in 1988, said 2010 - so 22 years. Ray Kurzweil, writing in 2001, said 2023 - another 22 years. Vernor Vinge, in a 1993 speech, said 2023, and that *was* exactly 30 years, but Vinge knew about Platt’s Law and might have been joking. The point is: OpenPhil wrote a report in 2020 that predicted strong AI in 2052, isn’t that kind of suspicious? I’d previously mentioned it as a plus that Ajeya got around the same year everyone else got. The forecasters on Metaculus. The experts surveyed in Grace et al. Lots of other smart experts with clever models. But what if all of these experts and models and analyses are just fudging the numbers for the same Platt’s-Law-related reasons? Hypothetical OpenPhil is BTFO: > **OpenPhil:**  That part about Charles Platt's generalization is interesting, but just because we unwittingly chose literally exactly the median that Platt predicted people would always choose in consistent error, that doesn't justify dismissing our work, right?  We could have used a completely valid method of estimation which would have pointed to 2050 no matter which year it was tried in, and, by sheer coincidence, have first written that up in 2020.  In fact, we try to show in the report that the same methodology, evaluated in earlier years, would also have pointed to around 2050 - > > **Eliezer:** Look, people keep trying this.  It's never worked.  It's never going to work.  2 years before the end of the world, there'll be another published biologically inspired estimate showing that AGI is 30 years away and it will be exactly as informative then as it is now.  I'd love to know the timelines too, but you're not *going* to get the answer you want until right before the end of the world, and maybe not even then unless you're paying very close attention.  *Timing this stuff is just plain hard.* > > Part III: Responses And Commentary ================================== **Response 1: Less Wrong Comments** Less Wrong is a site founded by Eliezer Yudkowsky for Eliezer Yudkowsky fans who wanted to discuss Eliezer Yudkowsky’s ideas. So, for whatever it’s worth - [the comments](https://www.lesswrong.com/s/n945eovrA3oDueqtq/p/ax695frGJEzGxFBK4#comments) on his essay were pretty negative. Carl Shulman, an independent researcher with links to both OpenPhil and MIRI (Eliezer’s org), writes the top-voted comment. 
He works from a model where there is hardware progress, software progress downstream of hardware progress, and independent (ie unrelated to algorithms) software progress, and where the first two make up most progress on the margin. Researchers generally develop new paradigms once they have enough compute available to tinker with them. > Progress in AI has largely been a function of increasing compute, human software research efforts, and serial time/steps. Throwing more compute at researchers has improved performance both directly and indirectly (e.g. by enabling more experiments, refining evaluation functions in chess, training neural networks, or making algorithms that work best with large compute more attractive). > > Historically compute has grown by many orders of magnitude, while human labor applied to AI and supporting software  by only a few. And on plausible decompositions of progress (allowing for adjustment of software to current hardware and vice versa), hardware growth accounts for more of the progress over time than human labor input growth. > > So if you're going to use an AI production function for tech forecasting based on inputs (which do relatively OK by the standards tech forecasting), it's best to use all of compute, labor, and time, but it makes sense for compute to have pride of place and take in more modeling effort and attention, since it's the biggest source of change (particularly when including software gains  downstream of hardware technology and expenditures). […] > > A perfectly correlated time series of compute and labor would not let us say which had the larger marginal contribution, but we have resources to get at that, which I was referring to with 'plausible decompositions.' This includes experiments with old and new software and hardware, like the chess ones [Paul recently commissioned](https://www.lesswrong.com/posts/H6L7fuEN9qXDanQ6W/how-much-chess-engine-progress-is-about-adapting-to-bigger), and studies by [AI Impacts](https://intelligence.org/files/AlgorithmicProgress.pdf), [OpenAI](https://openai.com/blog/ai-and-efficiency/), and [Neil Thompson](https://news.mit.edu/2021/how-quickly-do-algorithms-improve-0920). There are AI scaling experiments, and observations of the results of shocks like the end of Dennard scaling, the availability of GPGPU computing, and [Besiroglu's](https://twitter.com/tamaybes/status/1330506035811987458) data on the relative predictive power of computer and labor in individual papers and subfields. > > In different ways those tend to put hardware as driving more log improvement than software (with both contributing), particularly if we consider software innovations downstream of hardware changes. > > [Vanessa Kosoy](https://www.lesswrong.com/posts/ax695frGJEzGxFBK4/biology-inspired-agi-timelines-the-trick-that-never-works?commentId=KkcAXCAsi54uWkjeH) makes the obvious objection, which echoes a comment of Eliezer’s in the dialogue above: > I'm confused how can this pass some obvious tests. For example, do you claim that alpha-beta pruning can match AlphaGo given some not-crazy advantage in compute? Do you claim that SVMs can do SOTA image classification with not-crazy advantage in compute (or with any amount of compute with the same training data)? Can Eliza-style chatbots compete with GPT3 however we scale them up? 
> > [Mark Xu](https://www.lesswrong.com/posts/ax695frGJEzGxFBK4/biology-inspired-agi-timelines-the-trick-that-never-works?commentId=yv4tLvGmZE7yKpxqu) answers: > My model is something like: > > For any given algorithm, e.g. SVMs, AlphaGo, alpha-beta pruning, convnets, etc., there is an "effective compute regime" where dumping more compute makes them better. If you go above this regime, you get steep diminishing marginal returns. > > In the (relatively small) regimes of old algorithms, new algorithms and old algorithms perform similarly. E.g. with small amounts of compute, using AlphaGo instead of alpha-beta pruning doesn't get you that much better performance than like an OOM of compute (I have no idea if this is true, example is more because it conveys the general gist). > > One of the main way that modern algorithms are better is that they have much large effective compute regimes. The other main way is enabling more effective conversion of compute to performance. > > Therefore, one of primary impact of new algorithms is to enable performance to continue scaling with compute the same way it did when you had smaller amounts. > > In this model, it makes sense to think of the "contribution" of new algorithms as the factor they enable more efficient conversion of compute to performance and count the increased performance because the new algorithms can absorb more compute as primarily hardware progress. I think the studies that Carl cites above are decent evidence that the multiplicative factor of compute -> performance conversion you get from new algorithms is smaller than the historical growth in compute, so it further makes sense to claim that most progress came from compute, even though the algorithms were what "unlocked" the compute. > > For an example of something I consider supports this model, see the LSTM versus transformer graphs in<https://arxiv.org/pdf/2001.08361.pdf> > > I also found [Vanessa’s summary](https://www.lesswrong.com/users/vanessa-kosoy) of this reply helpful: > Hmm... Interesting. So, this model says that algorithmic innovation is so fast that it is not much of a bottleneck: we always manage to find the best algorithm for given compute relatively quickly after this compute becomes available. Moreover, there is some smooth relation between compute and performance assuming the best algorithm for this level of compute. [**EDIT**: The latter part seems really suspicious though, why would this relation persist across very different algorithms?] Or at least this is true is "best algorithm" is interpreted to mean "best algorithm out of some wide class of algorithms s.t. we never or almost never managed to discover any algorithm outside of this class". > > This can justify biological anchors as upper bounds[[1]](https://www.lesswrong.com/posts/ax695frGJEzGxFBK4/biology-inspired-agi-timelines-the-trick-that-never-works#fn-cCeH9Wga7mav4koHv-1): if biology is operating using the best algorithm then we will match its performance when we reach the same level of compute, whereas if biology is operating using a suboptimal algorithm then we will match its performance earlier. > > [Charlie Steiner](https://www.lesswrong.com/posts/ax695frGJEzGxFBK4/biology-inspired-agi-timelines-the-trick-that-never-works?commentId=MEFXe4mr7xWyYPwvw) objects: > Which examples are you thinking of? 
[Modern Stockfish outperformed historical chess engines even when using the same resources](https://web.archive.org/web/20200806135829im_/http://jaekle.info/chess_scaling.png), until far enough in the past that computers didn't have enough RAM to load it. > > I definitely agree with your original-comment points about the *general* informativeness of hardware, and absolutely software is adapting to fit our current hardware. But this can all be true even if advances in software can make more than 20 orders of magnitude difference in what hardware is needed for AGI, and are much less predictable than advances in hardware rather than being adaptations in lockstep with it. > > And [Paul Christiano](https://www.lesswrong.com/posts/ax695frGJEzGxFBK4/biology-inspired-agi-timelines-the-trick-that-never-works?commentId=EuHZLiKcXMeahpqMB) responds: > Here are the graphs from Hippke (he or I should publish summary at some point, sorry). > > [![](https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2F96ae0981-cc53-4013-9eea-1d29d75f06ca_1456x1038.png)](https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2F96ae0981-cc53-4013-9eea-1d29d75f06ca_1456x1038.png) > >   > > I wanted to compare Fritz (which won WCCC in 1995) to a modern engine to understand the effects of hardware and software performance. I think the time controls for that tournament are similar to SF STC I think. I wanted to compare to SF8 rather than one of the NNUE engines to isolate out the effect of compute at development time and just look at test-time compute. > > So having modern algorithms would have let you win WCCC while spending about 50x less on compute than the winner. Having modern computer hardware would have let you win WCCC spending way more than 1000x less on compute than the winner. Measured this way software progress seems to be several times less important than hardware progress despite much faster scale-up of investment in software. > > But instead of asking "how well does hardware/software progress help you get to 1995 performance?" you could ask "how well does hardware/software progress get you to 2015 performance?" and on that metric it looks like software progress is way more important because you basically just can't scale old algorithms up to modern performance. > > The relevant measure varies depending on what you are asking. But from the perspective of takeoff speeds, it seems to me like one very salient takeaway is: if one chess project had literally come back in time with 20 years of chess progress, it would have allowed them to spend 50x less on compute than the leader. > > Response 2: AI Impacts + Matthew Barnett   [AI Impacts](https://aiimpacts.org/miri-ai-predictions-dataset/) gathered and analyzed a dataset of who predicted AI when; [Matthew Barnett](https://www.lesswrong.com/posts/ax695frGJEzGxFBK4/biology-inspired-agi-timelines-the-trick-that-never-works?commentId=h9cvhnoaevc8xGJtB) helpfully drew in the line corresponding to Platt’s Law (everyone always predicts AI in thirty years). 
[![](https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2Fa751f624-0392-4610-8a93-7bb94a60d1b3_1182x778.png)](https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2Fa751f624-0392-4610-8a93-7bb94a60d1b3_1182x778.png)   Just eyeballing it, Platt’s Law looks pretty good. But Holden Karnofsky (see below) objects that our eyeballs are covertly removing outliers. Barnett agrees this is worth checking for and runs a formal OLS regression. [![](https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2F1c354075-ecaa-4807-a1a5-07931736f093_403x268.png)](https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2F1c354075-ecaa-4807-a1a5-07931736f093_403x268.png)   *Platt’s Law in blue, regression line in orange.* He [writes](https://www.lesswrong.com/posts/nNqXfnjiezYukiMJi/reply-to-eliezer-on-biological-anchors?commentId=zJ8EGJ3cHdeyjQZvc): > I agree this trendline doesn't look great for Platt's law, and backs up your observation by predicting that Bio Anchors should be more than 30 years out. > > However, OLS is notoriously sensitive to outliers. If instead of using some more robust regression algorithm, we instead super arbitrarily eliminated all predictions after 2100, then we get this, which doesn't look absolutely horrible for the law. Note that the median forecast is 25 years out. > > [![](https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2F797aef17-dc24-4845-9e00-2c3fd7f7dc32_403x268.png)](https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2F797aef17-dc24-4845-9e00-2c3fd7f7dc32_403x268.png)   I’m split on what to think here. If we consider a weaker version of Platt’s Law, “the average date at which people forecast AGI moves forward at about one year per year”, this seems truish in the big picture where we compare 1960 to today, but not obviously true after 1980. If we consider a different weaker version, “on average estimates tend to be 30 years away”, that’s true-ish under Barnett’s revised model, but not inherently damning since Barnett’s assuming there will be some such number, it turns out to be 25, and Ajeya gave the somewhat different number of 32. Is that a big enough difference to exonerate her of “using” Platt’s Law? Is that even the right way to be thinking about this question? Response 3: Real OpenPhil   The hypothetical OpenPhil in Eliezer’s mind having been utterly vanquished, the real-world OpenPhil is forced to step in. OpenPhil CEO Holden Karnofsky responds to Eliezer [here](https://www.lesswrong.com/s/n945eovrA3oDueqtq/p/nNqXfnjiezYukiMJi). There’s a lot of back and forth about whether the report includes enough caveats (answer: it sure does include a lot of caveats!) but I was most interested in the attacks on Eliezer’s two main points. 
*First*, the point that biological anchors are fatally flawed from the start and measuring FLOP/s is no better than measuring power consumption in watts. Holden: > If the world were such that: > > We had some reasonable framework for "power usage" that didn't include gratuitously wasted power, and measured the "power used meaningfully to do computations" in some important sense; > > AI performance seemed to [systematically improve](https://arxiv.org/abs/2001.08361) as this sort of power usage increased; > > Power usage was just now coming within a few orders of magnitude of the human brain; > > We were just now starting to see AIs have success with tasks like vision and speech recognition (tasks that seem likely to have been evolutionarily important, and that we haven't found ways to precisely describe GOFAI-style); > > It also looked like AI was starting to have insect-like capabilities somewhere around the time it was consuming insect-level amounts of power; > > And we didn't have some clear candidate for a better metric with similar properties (as I think we do in the case of computations, since the main thing I'd expect increased power usage to be useful for is increased computation); > > ...Then I would be interested in a Bio Anchors-style analysis of projected power usage. As noted above, I would be interested in this as a tool for analysis rather than as "the way to get my probability distribution." That's also how I'm interested in Bio Anchors (and how it presents itself). *Second*, the argument that paradigm shifts might speed up AI: > I think it's a distinct possibility that we're going to see dramatically new approaches to AI development by the time transformative AI is developed. > > On the other, I think quotes like this overstate the likelihood in the short-to-medium term. > > Deep learning has been the dominant source of AI breakthroughs for [nearly the last decade](https://en.wikipedia.org/wiki/AlexNet), and the broader "neural networks" paradigm - while it has come in and out of fashion - has broadly been one of the most-attended-to "contenders" throughout the history of AI research. > > AI research prior to 2012 may have had more frequent "paradigm shifts," but this is probably related to the fact that it was seeing less progress. > > With these two points in mind, it seems off to me to confidently expect a new paradigm to be dominant by 2040 (even conditional on AGI being developed), as the second quote above implies. As for the first quote, I think the implication there is less clear, but I read it as expecting AGI to involve software well over 100x as efficient as the human brain, and I wouldn't bet on that either (in real life, if AGI is developed in the coming decades - not based on what's possible in principle.) Response 4: Me   Oh God, I have to write some kind of conclusion to this post, in some way that suggests I have an opinion, or that I'm at all qualified to assess this kind of research. Oh God oh God. I find myself most influenced by two things. 
First, Paul’s table of how effectively Nature tends to outperform humans, which I’ll paste here again: [![](https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2F643543ac-6aa7-4cfb-8ed7-be113873f4c5_514x310.png)](https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2F643543ac-6aa7-4cfb-8ed7-be113873f4c5_514x310.png)   I find it hard to say *how* this influenced me. It would be great if Paul had found some sort of beautiful Moore’s-Law-esque rule for figuring out the Nature vs. humans advantage. But actually his estimates span five orders of magnitude. And they don’t even make sense as stable estimates - human solar power a few decades ago was several orders of magnitude worse than Nature’s, and a few decades from now it may be better. Still, I think this table helps the whole thing feel less mystical. Usually Nature outperforms humans by some finite amount, usually a few orders of magnitude, on the dimension we care about. We can add it to the error bars on our model and move on. The second thing that influences me a lot is Carl Shulman’s model of “once the compute is ready, the paradigm will appear”. Some other commenters visualize this as each paradigm having a certain amount of compute you can “feed” it before it stops scaling with compute effectively. This is a heck of a graph: [![](https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2Fc6cd5a34-9a0a-4c39-86a9-9a9ab8202e5e_1456x1038.png)](https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2Fc6cd5a34-9a0a-4c39-86a9-9a9ab8202e5e_1456x1038.png)   Given these two assumptions - that natural artifacts usually have efficiencies within a few OOM of artificial ones, and that compute drives progress pretty reliably - I am proud to be able to give Ajeya’s report the coveted honor of “I do not make an update of literally zero upon reading it”. That still leaves the question of “how much of an update do I make?” Also “what are we even doing here?” That is - suppose before we read Ajeya’s report, we started with some distribution over when we’d get AGI. For me, not being an expert in this area, this would be some combination of the Metaculus forecast and the Grace et al expert survey, slightly pushed various directions by the views of individual smart people I trust. Now Ajeya says maybe it’s more like some other distribution. I should end up with a distribution somewhere in between my prior and this new evidence. But where? I . . . don’t actually care? I think Metaculus says 2040-something, Grace says 2060-something, and Ajeya says 2050-something, so this is basically just the average thing I already believed. Probably each of those distributions has some kind of complicated shape, but who actually manages to keep the shape of their probability distribution in their head while reasoning? Not me. This report was insufficiently different from what I already believed for me to need to worry about updating from one to the other. 
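To make the "it basically averages out" point concrete, here is a toy version of that calculation. The three distributions below are invented stand-ins with roughly the medians mentioned above, not the actual forecasts, and I'm ignoring that real timeline distributions are skewed and truncated at the present:

```python
import numpy as np

# Toy "average of forecasts" check. All three distributions are invented
# stand-ins with roughly the medians cited above, not the real Metaculus /
# Grace et al. / Cotra distributions (which are skewed rather than normal).
rng = np.random.default_rng(0)
n = 100_000
metaculus = rng.normal(2045, 10, n)   # hypothetical: median in the 2040s
grace = rng.normal(2062, 20, n)       # hypothetical: median in the 2060s
cotra = rng.normal(2052, 15, n)       # hypothetical: median in the 2050s

# Equal-weight linear opinion pool: just lump the samples together.
pool = np.concatenate([metaculus, grace, cotra])
for p in (10, 50, 90):
    print(f"pooled {p}th percentile: {np.percentile(pool, p):.0f}")
```

The pooled median lands in the early 2050s, i.e. roughly where each input already was, which is the sense in which there isn't much updating for me to do.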
The more interesting question, then, is whether I should update towards Eliezer’s slightly different distribution, which places more probability mass on earlier decades. But Eliezer doesn’t say what his exact probability distribution is, and he *does* say he’s making a deliberate choice not to do this: > I consider naming particular years to be a cognitively harmful sort of activity; I have refrained from trying to translate my brain's native intuitions about this into probabilities, for fear that my verbalized probabilities will be stupider than my intuitions if I try to put weight on them.  What feelings I do have, I worry may be unwise to voice; AGI timelines, in my own experience, are not great for one's mental health, and I worry that other people seem to have weaker immune systems than even my own.  But I suppose I cannot but acknowledge that my outward behavior seems to reveal a distribution whose median seems to fall well before 2050. > > So, should I update from my current distribution towards a black box with “EARLY” scrawled on it? What would change if I did? I’d get scared? I’m already scared. I’d get *even more* scared? Seems bad. Maybe I’d have different opinions on whether we should pursue long-term AI alignment research programs that will pay off after 30 years, vs. short-term AI alignment research programs that will pay off in 5? *If you have either of those things, please email anyone whose name has been mentioned in this blog post, and they’ll arrange to have a 6-to-7-digit sum of money thrown at you immediately.* It’s not like there’s some vast set of promising 30-year research programs and some other set of promising 5-year research programs that have to be triaged against each other. Maybe there’s some ability to redirect a little bit of talent and interest at the margin, in a way that makes it worth OpenPhil’s time to care. But should I care? Should you? One of my favorite jokes [continues to be](https://slatestarcodex.com/2020/04/01/book-review-the-precipice/): > An astronomy professor says that the sun will explode in five billion years, and sees a student visibly freaking out. She asks the student what’s so scary about the sun exploding in five billion years. The student sighs with relief: “Oh, thank God! I thought you’d said five *million* years!” > > And once again, you can imagine the opposite joke: A professor says the sun will explode in five minutes, sees a student visibly freaking out, and repeats her claim. The student, visibly relieved: “Oh, thank God! I thought you’d said five *seconds*.” Here Ajeya is the professor saying the sun will explode in five minutes instead of five seconds. Compared to the alternative, it’s good news. But if it makes you feel complacent, then the joke’s on you.
d7473272-6c14-4332-b3d7-e585b64c8a9d
trentmkelly/LessWrong-43k
LessWrong
Response to “Coordinated pausing: An evaluation-based coordination scheme for frontier AI developers” Generated by DALL-E 3 Note: this is a x-post from my blog, Thoughts on AI , where I discuss a variety of topics in AI governance, particularly corporate governance. Introduction The corporate governance team at the Centre for the Governance of AI recently published a great paper, “Coordinated pausing: An evaluation-based coordination scheme for frontier AI developers”, authored by Jide Alaga and Jonas Schuett. The following post contains a set of responses and comments to the paper - all such responses are based on personal insights and opinions that I have on the content that I hope add to the conversation. Any negative tone that may come across in the post does not represent my feelings on the paper overall - I think it is a fantastic, practical piece of research that I hope is read by policymakers both in frontier AI labs and governments. A Note on Good Intentions It may appear that some of my comments take a rather pessimistic view of frontier AI labs and their interests. In truth, I believe that many of these labs are full of individuals genuinely trying to do the right thing, who are aware of the risks they are dealing with. In my mind, this good faith should be given to almost any individual working at a frontier lab, but it absolutely should not be extended to the labs themselves. Any such organisation exists in an environment that strongly rewards certain types of decision making, and a collection of entirely justifiable, well-meant decisions can still lead to very bad outcomes. Good governance should not rely on the good intentions of organisations, but instead seek to make the exercising of those good intentions as likely as possible to align with good outcomes, whilst making any bad intentions as painful and cumbersome as possible to execute on.   Limitations of the Mutual Auditor Model The main area where my current opinions disagree with those of the authors are on the efficacy and feasibility of the Mutual Auditor model in the paper. There are
ea20961d-310e-4631-a9cd-b6e6b21b0412
StampyAI/alignment-research-dataset/lesswrong
LessWrong
Introduction to Existential Risk from Artificial Intelligence **TL;DR: Spanish-Speaking Introduction to AI Safety, covering key concepts like Generality, X-Risks, AI Timelines, and Convergent Instrumental Goals.** ### **Message to the English-Speaking Community (Mensaje para la comunidad angloparlante):** *Hey everyone! I'm David, a 21-year-old Computer Science student at the University of Buenos Aires (Argentina) and Data Engineer at Accenture. I recently delivered an introductory talk on AI Safety, drawing inspiration from Rob Miles.* *In this talk, I outline the immense potential and peril of AGI, which could transform every aspect of life as we know it. Many experts believe AGI will become a reality within this century, but without adequate safeguards, there's a substantial risk of human extinction due to our inability to control this technology.* *Spanish, being the second most widely spoken language in the world and extensively used on the internet, deserves greater representation within the LW community. My hope is that this initiative will help bridge the gap and make LW concepts more accessible to Spanish speakers.* [*Click here if you want to read the translation to English.*](https://docs.google.com/document/d/1EyG6nE9mXqHdqo-51Q17VhL0rhval4zktiyudvZPrNQ/edit?usp=sharing) --- In May of this year, the leaders of the main companies in the AI industry, including OpenAI, Microsoft, Google DeepMind, and Anthropic, signed a joint statement in which they committed to ["Mitigate the risk of extinction from AI as a global priority, alongside other societal-scale risks such as pandemics and nuclear war."](https://www.safe.ai/statement-on-ai-risk) The intention of this article is to help explain why this statement is on the mark. First, it is crucial to clarify that the existential risk associated with the continued development of AI comes specifically from General AI (AGI), that is, intelligent machines that can perform any task, understanding and adapting to as many diverse situations as a human being. Our species has thrived because of its level of intelligence, dominating every other animal on the planet. Artificial General Intelligence promises to expand that intelligence along multiple dimensions, similar to how cranes extend our physical abilities, airplanes speed up our movement, and telescopes widen our view to cosmic horizons. Artificial General Intelligence is a technology that could boost our capabilities more than ever, enabling countless innovations. [An early sample of this potential is AlphaFold, a non-general AI program created by Google DeepMind in 2021.](https://www.deepmind.com/blog/alphafold-a-solution-to-a-50-year-old-grand-challenge-in-biology) This program solved one of biology's greatest puzzles, predicting the structure of every existing protein. An astonishing achievement if we consider that over the previous 50 years, researchers had only discovered 200 thousand structures. By contrast, AlphaFold, in a single year, enabled the identification of 200 million new structures, multiplying scientific productivity up to 1000 times the normal rate. ![](https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/8kbQaxveLyvyvxwcr/mvvlggenwn0vnjgvuaq1)The question is: when will we have Artificial General Intelligence? 
[According to one survey, the majority of AI experts believe it will become a reality this century,](https://ourworldindata.org/ai-timelines) with some pioneers [such as Geoffrey Hinton suggesting it could be in "as little as one or two decades"](https://twitter.com/geoffreyhinton/status/1653687894534504451?lang=en). This tells us that we are headed toward a world radically different from today's, and not necessarily for the better. [According to the same survey](https://ourworldindata.org/ai-timelines), half of AI experts estimate that there is at least a 10% chance that humanity goes extinct due to our inability to keep this technology under control. This is a worrying situation. Imagine being about to board a plane, and half of the engineers who built it inform you that there is a 10% chance it will crash. You probably wouldn't want to get on that plane. But, unfortunately, we are all already boarding this metaphorical AGI plane, given that there is a competitive race among companies to be the first to develop it. **Understanding the Risks of AI** Why are the experts so worried? I will try to explain it through analogies. First, imagine a scenario in which we have a self-driving car and we give it a simple instruction: take us from point A to point B without crashing. In theory, there should be no problem. However, the word "crash" has an obvious meaning for us, but a computer requires more specific definitions. If we define "crashing" as "damaging the vehicle", the car would avoid being used at all, since simply using it would imply wear on its components. We have a similar example in the real world. Watch how this AI, whose objective is "Don't lose at Tetris", decides to pause the game, since that way it will never lose. ![](https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/8kbQaxveLyvyvxwcr/ur35wr3ielna83avwa0k)Let's take another example. Suppose we give the self-driving car the objective of taking us from point A to B in the shortest possible time. In this case, for the AI, speed limits and pedestrians are obstacles, so it would proceed at full speed, ignoring every traffic rule and running over any pedestrian in its path, all in order to fulfill its objective. Decision-making is a complicated process, involving a constant evaluation of trade-offs. As humans, we make these evaluations daily, considering multiple factors to decide what we are willing to sacrifice. But an AI differs from human intelligence in that it only considers a very limited set of factors, sacrificing everything else, even for a minimal advantage. Watch how this AI, whose objective is "Maximize your score", learns that if it loops around a circle while crashing, three turbo power-ups keep reappearing, and by grabbing them it earns more points than if it raced normally along the established route. ![](https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/8kbQaxveLyvyvxwcr/uls9migf57l43mvrbqvf) This problem becomes even more dangerous when we consider that a sufficiently intelligent machine could resist being switched off or even manipulate you in order to avoid changes that would prevent it from fulfilling its objective. 
Not only that, but we could also see alarming behaviors in an AGI, such as resource acquisition or self-improvement, which would make it easier for it to achieve its objective. We are facing an extremely complex challenge. We do not know how to give an AI specific objectives without its behavior potentially being dangerous. For almost any objective an AI might have, it is very likely that the most effective way to fulfill it involves actions that are harmful to us. At present, the threat is not existential, with the most advanced AI being a text generator like GPT-4. But what will happen when AI evolves beyond this? ![](https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/8kbQaxveLyvyvxwcr/rwooykmdr0kq4hm5rxr2)We have a limited window of time, perhaps as little as one or two decades, to guarantee the safe development of general AI and to solve what appears to be the most urgent problem at a global level. **An Uncertain Future** To finish, let me share an anecdote. A few years ago, I taught my 12-year-old nephew to play chess. At first, it was easy to beat him, but over time he began to improve. A few months ago, we played again, and I lost every game. How does this relate to AI? I believe humanity is at a stage similar to mine before my nephew surpassed me at chess. We feel safe and confident, ignoring the fact that it is only a matter of time before we start losing every game against AI. It is a race against the clock that we cannot afford to lose. ![](https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/8kbQaxveLyvyvxwcr/o5zlwyzhdazeht6cl5rd)
400c903a-825c-4286-b495-ac4c78711b44
trentmkelly/LessWrong-43k
LessWrong
Meetup : London social meetup - possibly in a park Discussion article for the meetup : London social meetup - possibly in a park WHEN: 13 July 2014 02:00:00PM (+0100) WHERE: Shakespeare's Head, Holborn, WC2B 6BG We've decided to go back to meeting every other week. As such, the next LW meetup will be on July 13th. Join us from 2pm to talk about the sorts of things that your other friends will look funny at you for talking about. If the weather is nice, we'll head to Lincoln's Inn Fields, probably somewhere in the northwest quadrant. If not, we'll be in the usual Shakespeare's Head. If the weather is variable, we might move from one to the other - give me a call or text if you're not sure. My number is 07792009646. Update: Bad weather. :( To the pub we go. About London LessWrong: We run this meetup almost every week; these days we tend to get in the region of 5-15 people in attendance. By default, meetups are just unstructured social discussion about whatever strikes our fancy: books we're reading, recent posts on LW/related blogs, logic puzzles, toilet usage statistics.... Sometimes we play The Resistance or other games. We usually finish around 7pm, give or take an hour, but people arrive and leave whenever suits them. Related discussion happens on both our google group and our facebook group. Discussion article for the meetup : London social meetup - possibly in a park
7d3a0c21-07df-4bb2-b8ba-1bb3b416405c
trentmkelly/LessWrong-43k
LessWrong
Deconstructing Biases In Media I found this website via the United Nations Educational, Scientific and Cultural Organization's media literacy Instagram; it very nicely goes through contrasting news articles about the same event and picks apart the differences and what they imply. Unlike other sites, this one is very straightforward and clear about which bias they are tackling and how it's shown in their examples. It also has some good activities about bias.
dad8f6d5-7445-44f9-9113-a156c2281c52
trentmkelly/LessWrong-43k
LessWrong
December 2009 Meta Thread This post is a place to discuss meta-level issues regarding Less Wrong. Such posts may or may not be the unique venue for such discussion in the future.
266ee41b-607f-4b3f-bef1-c9d9a305ea05
trentmkelly/LessWrong-43k
LessWrong
Tim Ferris Experiment So, I saw that the Tim Ferris Experiment was recently released. The concept of this show is that Tim spends 5 days trying to learn a new skill. I haven't seen any of the episodes (the iTunes link appears to be US only), but this seems to be exactly the kind of content that Less Wrong would be interested in. A few questions: 1) Tim has already written about his accelerated learning techniques in the Four Hour Chef. Has anyone tried his techniques and were they effective for you? 2) Has anyone attempted accelerated learning based off another resource? How effective was it?
92187d5e-bd4a-40fc-ad02-12c32d64fd7f
trentmkelly/LessWrong-43k
LessWrong
Is there a good solution for documents that need to be signed with a ballpoint pen under suicide watch? In Scott Alexander's article about his IRB nightmare, he writes that one of the problems he faced is that the rules both said that documents have to be signed by ballpoint pen and that the involved patients are only allowed to use pencils. There seems to be a similar issue in politics in the case of Joshua Schulte, who is imprisoned under poor conditions by the US and whose complaints were rejected because they were not filled out with a ballpoint pen, while he was also unable to use ballpoint pens due to being under suicide watch rules. Is there any way to construct a pencil that fulfills the legal criteria and would be allowed in those cases?
02a318b5-c52d-4a4d-b758-577da0fa41ad
trentmkelly/LessWrong-43k
LessWrong
PhD student mutual line-manager invitation The greatest challenge of my PhD is the distant deadlines and the lack of immediate structure and accountability. Working on a single project for years with little extrinsic rewards is really hard for me, and most humans. This plagues long solo projects like academic research, but is less common in the normal working world. Rob Wiblin has pointed out that in normal companies weekly meetings with a line manager actually resolve this problem by giving a semiformal context for people to think through the mundane issues of productivity and planning. I have a link to the podcast below. I'm seeking another PhD student to try out mutual line managing. We would meet once weekly for half an hour. We would each describe our progress, plan for next week, and discuss emerging problems and strategies. Taking turns sounds easiest, perhaps with timers. I'd prefer someone in the social sciences and in Eastern Standard Time. And the format is super flexible. Message me on EAForum and I'll be in touch. Inspired by this 80k episode from 27:02 "so I like your idea of a line manager" to 43:23 "let's pivot...". https://podcasts.google.com/feed/aHR0cDovL2ZlZWRzLnNvdW5kY2xvdWQuY29tL3VzZXJzL3NvdW5kY2xvdWQ6dXNlcnM6MTk0MjgyNjgyL3NvdW5kcy5yc3M/episode/NTM5NTVmY2EtNTVjMy0xMWViLTk4ZDctMGUyYTQ3ZjVmMjU5?sa=X&ved=0CA0QkfYCahcKEwi4l86rmLXuAhUAAAAAHQAAAAAQAQ
af7861ce-1b54-41c7-8c14-563a37da18d1
trentmkelly/LessWrong-43k
LessWrong
What is the best compact formalization of the argument for AI risk from fast takeoff? Many people complain that the Singularity Institute's "Big Scary Idea" (AGI leads to catastrophe by default) has not been argued for with the clarity of, say, Chalmers' argument for the singularity. The idea would be to make explicit what the premise-and-inference structure of the argument is, and then argue about the strength of those premises and inferences. Here is one way you could construe one version of the argument for the Singularity Institute's "Big Scary Idea": 1. At some point in the development of AI, there will be a very swift increase in the optimization power of the most powerful AI, moving from a non-dangerous level to a level of superintelligence. (Fast takeoff) 2. This AI will maximize a goal function. 3. Given fast takeoff and maximizing a goal function, the superintelligent AI will have a decisive advantage unless adequate controls are used. 4. Adequate controls will not be used. (E.g. Won’t box/boxing won’t work) 5. Therefore, the superintelligent AI will have a decisive advantage 6. Unless that AI is designed with goals that stably align with ours, if the superintelligent AI has a decisive advantage, civilization will be ruined. (Friendliness is necessary) 7. Unless the first team that develops the superintelligent AI makes adequate preparations, the superintelligent AI will not have goals that stably align with ours. 8. Therefore, unless the first team that develops the superintelligent AI makes adequate preparations, civilization will be ruined shortly after fast takeoff 9. The first team that develops the superintelligent AI will fail to make adequate preparations 10. Therefore, civilization will be ruined shortly after fast takeoff. Edit to add: premises should be read as assuming the truth of all above premises. E.g., (9) is assuming that we've created an artificial agent with a decisive advantage. My questions are: * Have I made any errors in the argument structure? * Can anyone suggest an alternative argument str
0cc4044f-089b-4fc7-b967-f1698e91abd4
trentmkelly/LessWrong-43k
LessWrong
High Stock Prices Make Sense Right Now I’ve been seeing a lot of comments lately about how the financial markets have gone completely wonky, efficient markets hypothesis looks crazy right now, etc. I don’t currently trade actively and haven’t run a lot of numbers, but just in terms of big-picture qualitative behavior, high stock prices make a lot of sense right now. This post is an informal explanation of why. First, let’s forget about the efficient market price formula (i.e. price = expected sum of discounted future cash flows, $V_T = E[\sum_{t>T} e^{-R_{tT}} C_t]$). I’ll talk about that a bit at the end, but it’s so widely and severely misunderstood that I’d need a whole post just to correct misconceptions. Instead, we’ll start from first principles: financial capital is a good, just like any other good. Its price is determined by supply and demand, just like any other good. When stock prices are high, that means financial capital is cheap for companies: they can get a lot of capital by issuing a lot of stock. High stock price = cheap capital. Likewise with bonds: when bond prices are high, yields are low, meaning companies can borrow capital very cheaply. What makes the cost of financial capital move? Well, the usual supply-and-demand reasoning: * If people suddenly find themselves with lots of extra savings to invest, that means the supply of financial capital increases, and the cost of financial capital should fall (i.e. stock prices rise). * If people expect lower returns in the future, they will want to invest less, so the supply of financial capital decreases, and the cost of financial capital should rise (i.e. stock prices fall). * If there’s a credit crunch and companies suddenly need to borrow lots of money on short notice, then the demand for financial capital increases, so the cost of financial capital should rise (i.e. stock prices fall). * If many companies are suddenly flush with cash, then the demand for financial capital decreases, so the cost of financial capital should fall (i.e. stock prices rise).
6404237b-3382-4056-aca4-3bc1ad966c2e
trentmkelly/LessWrong-43k
LessWrong
Which singularity schools plus the no singularity school was right? TL;DR of this post: Accelerating change and Event Horizon were the most accurate schools, with Intelligence Explosion proving to be interestingly wrong (Discontinuities only make a new field for AI get off the ground, not solve the entire problem ala Nuclear weapons, and scaling does show discontinuities, but only in the sense that an intractable problem or paradigm becomes possible, not chaining discontinuities to entirely solve the problem at a superhuman level.) The non-singularitian scenarios were wrong in retrospect, but in the 2000s, it would have been somewhat reasonable to say that no singularity was going to happen. In other words, the AI-PONR has already happened and we are living in a slow rolling singularity already. Long answer: That's the topic of this post. Back in 2007, before deep learning and AI actually solved real problems and the AI winter was going strong, Eliezer Yudkowsky over at www.yudkowsky.net placed the Singularitians into 3 camps, which I will reproduce here for comparison: Accelerating Change: Core claim: Our intuitions about change are linear; we expect roughly as much change as has occurred in the past over our own lifetimes. But technological change feeds on itself, and therefore accelerates. Change today is faster than it was 500 years ago, which in turn is faster than it was 5000 years ago. Our recent past is not a reliable guide to how much change we should expect in the future. Strong claim: Technological change follows smooth curves, typically exponential. Therefore we can predict with fair precision when new technologies will arrive, and when they will cross key thresholds, like the creation of Artificial Intelligence. Advocates: Ray Kurzweil, Alvin Toffler(?), John Smart Event Horizon: Core claim: For the last hundred thousand years, humans have been the smartest intelligences on the planet. All our social and technological progress was produced by human brains. Shortly, technology will advance to the point of improv
55590591-fec7-45b0-994e-9a9ea5108951
trentmkelly/LessWrong-43k
LessWrong
When seeing X suggests ‘generally ¬X’ Cross posted from Overcoming Bias. Comments there. *** Suppose nobody has ever told you that they like you. Suppose you are relatively uncertain about how often people like other people, and also about how often they will disclose it when they do. Suppose you are confident that these facts about your ignorance and social inexperience do not bear on whether other people like you. So as it stands you are fairly uncertain about your popularity. Suppose also that you have a deep and insatiable need for people to like you, and your pleasure is roughly linear in the number of people who like you. Suppose one day a person tells you that they like you. If you are given to expressing emotions or making inferences, one thing you might wonder is whether this should be cause for happiness. This is not as obvious as it first seems. A person telling you that they like you is more probable if: 1. This specific person likes you. 2. People like you in general 3. People are given to expressing their liking for other people The first two are promising. The third makes the fact that nobody else has ever said they like you a bit more damning. Just how much more damning depends on your probability distribution over different possible states of affairs. For an extreme example, suppose you had even odds on two extreme cases – people always saying they like people who they like, and people never doing so – and that many people have had a chance by now to tell you if they like you. Then you should be extremely sad if anyone tells you that they like you. The apparent update in favor of people liking you in general will be completely overwhelmed by the reverse update from flatly ruling out the possibility that all those people you have already met like you. In general, seeing an instance of X can make X less likely, by indicating that X tends to be visible: * Hearing your neighbors have loud sex might lower your estimate of how often they have sex. * Finding a maggot in your dinne
1c717a60-eaf9-485a-a834-823ff7193560
trentmkelly/LessWrong-43k
LessWrong
Any work on honeypots (to detect treacherous turn attempts)? I know the idea of making a "honeypot" to detect when an AI system would attempt a treacherous turn if given the opportunity has been discussed (e.g. IIRC, in Superintelligence).  But is there anyone actually working on this?  Or any work that's been published?
2dc0cec6-99d9-42ef-add4-d049a2fc0f20
trentmkelly/LessWrong-43k
LessWrong
Armies of Expensive Lawyers, Replaced by Cheaper Software [link] http://www.nytimes.com/2011/03/05/science/05legal.html?pagewanted=1&ref=general&src=me
85df74f6-3f86-40cf-9983-ab9e2c7a0b8d
trentmkelly/LessWrong-43k
LessWrong
Coercion is an adaptation to scarcity; trust is an adaptation to abundance There’s a sense in which I’m postulating a trillion dollar bill lying on the ground. If the skills I’ve talked about so far in this sequence would lead to so much more flourishing, why haven’t they become far more common? I’ve already given a partial answer: that our evolutionary environment was much more dangerous than our current environment. But I want to extend this to a more general answer: that coercion is an adaptation to scarcity; and that we only very recently left the era in which scarcity was the dominant feature of people’s lives. Under conditions of scarcity, you don’t have enough slack that you can afford to take risks. If misbehavior from any individual in the group would risk the lives of many others, you have to coerce them into staying in line; if a loss on any gamble would leave you ruined, then you need to avoid taking those gambles even when they’re winning in expectation. (Poker players think a lot about this when trying to manage their bankroll—if a game has high-enough stakes that they’d be out of money if they lost, they have to avoid it, even if they expect to make money there on average.)  By contrast, in an abundant environment, you can take the optimal long-term strategy, even if there’s a risk that it’ll leave you way down in the short term. In particular, you can put effort into building trust with others, even though that leaves you more vulnerable to being let down or betrayed. With that trust, you can receive a huge range of gains from cooperation.[1] Western societies are incredibly abundant in many ways. As a citizen, you face almost zero risk of starvation, dying in a war, or exile from your country; meanwhile deaths from most diseases and accidents are dramatically lower than in the past. There’s more career flexibility than there ever has been before: there are many routes to success, including self-employment. And society is far richer than it’s ever been before: the median person in a western society is incredibly wealthy
48a80bab-aa60-4c08-99ea-00c53a7d2f01
trentmkelly/LessWrong-43k
LessWrong
Introducing the Principles of Intelligent Behaviour in Biological and Social Systems (PIBBSS) Fellowship Cross-posted to EA Forum Introduction How would you go about scientifically studying aliens? Arik Kershenbaum’s The Zoologist Guide to The Galaxy proposes to use evolutionary thinking to uncover constraints on how alien species could evolve. One of his most interesting points is that evolution constrains function far more than form, because function depends significantly less on the details of the environment. Hence we should expect crisper answers to “How would aliens behave?” than “What would aliens look like?”. And in the course of his book, he gives the best answer he can find to the former question. So when confronted with the question of how to study something he couldn’t gather data on, Kershenbaum leveraged analogies to biological systems he could and had studied, and the underlying constraints brought on by the mechanisms of natural selection. On a completely unrelated note, the new summer fellowship Principles of Intelligent Behavior in Biological and Social Systems (PIBBSS) (funded by the LTFF) aims at creating valuable AI alignment research through studying analogies to many complex systems (evolution, brains, language, social structures…). Fellows will have graduate research experience in fields studying such systems, working on a concrete alignment project in collaboration with an established alignment researcher. The fellowship will run during all of Summer 2022. The point of this post is to introduce this fellowship, explain the reasoning behind it and give more concrete details about how it will go. Note that I’m not an organizer of this fellowship, I’m just assisting with the writing of this post; credits for the ideas and arguments should go to Nora Ammann and TJ, the organizers of the fellowship. Analogies as General Epistemic Strategies for Alignment As I’ve written elsewhere, alignment cannot directly leverage most epistemic strategies and approaches used in Science and Engineering, because it’s about solving a problem that doesn’t exist
90abb883-c036-4045-9a29-2a5dc09f05f7
StampyAI/alignment-research-dataset/alignmentforum
Alignment Forum
Alignment researchers, how useful is extra compute for you? TLDR ==== If you work in AI alignment / safety research, please fill out [this form](https://forms.gle/8J7scweP8DVrJPfYA) on how useful access to extra compute would be for your research. This should take under 10 minutes, and you don't need to read the rest of this post beforehand—in fact it would be great if you could fill out the form right now. Introduction ============ I want to get an idea of how much demand there is for a university-independent organization that manages a compute cluster for academic AI alignment groups and independent researchers. Currently I don’t know anybody who is willing to run such an organization, but if demand is large one could either actively look for people to run such a project or find an existing organization that is willing to take it on. Main idea ========= Non-industry AI safety research organizations have a hard time procuring compute. Groups spend many researcher-hours on managing servers on a relatively small scale. Common obstacles are 1) having to deal with university bureaucracy (e.g. regarding hiring, engineer wages, and procurement) and 2) missing out on economies of scale. Proposal: A university-independent organization that provides access to compute for academic AI alignment research groups as well as independent researchers. Such an organization could pay high wages for its engineers (compared to academic labs) and benefit from economies of scale. Potential benefits ------------------ * Time saved: currently, researchers spend time applying for compute grants, setting up and maintaining servers, and setting up software environments after switching between systems. Easy access to large amounts of compute would avoid most of these time costs. * Expanded capability to do research: a centralized organization could afford to manage larger compute clusters than those usually used by individual labs. The difference is even larger for independent researchers, who might not have access to large-memory GPUs at all. Potential problems ------------------ * Gatekeeping: just like with other resources such as funding, deciding who gets access can be hard and risks becoming a political problem. OTOH, subject-specific grants are common / accepted within academia. Still, management of access would have to be done carefully. * Demand: some groups have access to large university clusters and may not need this service. I’m currently uncertain about how large the demand for this is. * Leadership: even if it were clear that this is a good idea, I don’t know of anyone who is willing to run this project. This seems like a solvable problem though, once we have a clear idea of what the demand is. Form ==== If you haven't already, please fill out [this form](https://forms.gle/8J7scweP8DVrJPfYA) about how much extra compute might accelerate your research (<10 mins).
57579a4d-b9b7-48ea-8173-d033ec1809da
StampyAI/alignment-research-dataset/alignmentforum
Alignment Forum
Traversing a Cognition Space *(This post is part of a sequence that's meant to be read in order; see the [preface](https://www.lesswrong.com/posts/fnrpxdnodQmanibmB/preface-to-the-sequence-on-factored-cognition).)* [Post #1](https://www.lesswrong.com/posts/S5oWwZMJBvfChSquW/idealized-factored-cognition) was about developing and justifying a formalism for Factored Cognition. Now that we have this formalism, this post is about doing as much with it as possible. 1. Debate Trees =============== [Recall](https://www.lesswrong.com/posts/S5oWwZMJBvfChSquW/idealized-factored-cognition#2__Cognition_Spaces) that a Cognition Space is a pair $(S_h, d_h)$.
where $S_h$ is a set of statements, $d_h : S_h \to \mathbb{R}^+$ is a difficulty function, and $h$ is a human.

So far, I've only shown examples of single transcripts. A single transcript corresponds to one path through $(S_h, d_h)$ that is dependent on choices from both agents: at every step, the first agent outputs an explanation (which is a sequence of statements), the second agent points at one element of this sequence, and the first agent continues the path from that element onward. However, given that we model Ideal Debate agents as maximally powerful, it is also coherent to ask about the object that results if we fix *all* of the first agent's actions in advance, such that she 'pre-commits' for any possible combination of choices from the second agent. I call such an object a ***Debate Tree***, and we can define it formally as a *directed rooted tree*[[1]](#fn-HnJB56oZNohxLCDPp-1) $(V, E)$, where $V \subset S_h^*$ (so each node is a sequence of statements) and $E \subset V \times V$, that satisfies the following three conditions:

1. The unique root of $(V, E)$ is a one-element 'sequence' $(s_0)$.
2. $\forall (S, S') \in E \; \exists s \in S : S' \xrightarrow{e} s$.[[2]](#fn-HnJB56oZNohxLCDPp-2)
3. $\forall (S, S'), (S, S'') \in E : [\exists s \in S : S' \xrightarrow{e} s \wedge S'' \xrightarrow{e} s] \implies [S' = S'']$.

The first condition says that the root of the tree needs to be a single statement (this should be the answer to the input question). The second condition says that every edge $(S, S')$ needs to explain a statement in $S$; we don't have redundant edges in our tree. And the third condition says that each statement is only explained once; the formal way of saying this is that, if two explanations exist for the same statement, they're really the same. (Note that this restriction exists because the first agent has to choose one explanation during the game; it certainly doesn't imply that only one explanation exists.) There is no condition to demand that each statement needs to have an explanation – whenever there is none, it means that the first agent decides to end the debate when that statement is pointed at.[[3]](#fn-HnJB56oZNohxLCDPp-3)

In this definition, nodes are explanations, and each node has one outgoing edge for each [statement of the explanation that the first agent wants to explain further], which links to an explanation for that statement. Defining the tree over individual statements would lead to a functionally equivalent definition, but I think defining it as-is makes arguments simpler.

Recall that a Debate Tree encodes all decisions that the first agent makes (for any possible combination of choices from the second agent). This means that, once we fix this object, the second agent is free to choose any path she wants out of the tree, without further involvement from the first agent. Formally, a ***path*** through a Debate tree with root $(s_0)$ is a pair[[4]](#fn-HnJB56oZNohxLCDPp-4) $(((s_0), S_1, \ldots, S_n), s_{final})$ where $(S_{j-1}, S_j) \in E \; \forall j \in \{1, \ldots, n\}$ and $s_{final} \in S_n$. (And $S_0 = (s_0)$.)
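To make the tree and path definitions concrete, here is a minimal Python sketch (my own toy illustration, not from the sequence; the statement names are hypothetical placeholders). Per condition 3, the first agent pre-commits to at most one explanation per statement, so a dictionary suffices:

```python
# A toy Debate Tree. A node is a tuple of statements (an explanation);
# condition 3 lets us store at most one explanation per statement.
root = ("fermats_last_theorem",)

explanation = {
    "fermats_last_theorem": ("modularity_theorem", "ribets_theorem"),
    "modularity_theorem": ("lemma_A", "lemma_B"),
    # statements absent here are where the first agent ends the debate
}

def paths(node, prefix=()):
    """Yield (node_sequence, s_final) pairs. The second agent points at a
    statement of the current explanation; the game ends exactly where the
    first agent provides no further explanation."""
    prefix = prefix + (node,)
    for s in node:
        if s in explanation:
            yield from paths(explanation[s], prefix)
        else:
            yield (prefix, s)  # terminal: the judge verifies s directly

for node_seq, s_final in paths(root):
    print(len(node_seq), "nodes deep, judged statement:", s_final)
```

The difficulty of each such path, defined in a moment, is just the difficulty of the judged final statement.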
(You can go back to post 1 to convince yourself that a path through a Debate tree is also a path through a Cognition Space.)

As mentioned in the first post, we define the difficulty of a path $P := (p, s_{final})$ as the difficulty of the final statement (since that is the one being judged). In symbols, $d_h(P) := d_h(s_{final})$.

2. The Ideal Debate-FCH
=======================

So far, we haven't talked about how precisely the two agents make their decisions. Given the concepts of Debate Trees and paths, this is now easy. The second agent wants the first agent to lose, which means she'll choose the most difficult path in whatever Debate Tree the first agent chooses. (We still assume that the first agent outputs only true statements, which means that the second agent can't win if the judge successfully verifies the final statement.) The first agent, knowing this, chooses the Debate tree such that the difficulty of the hardest path is minimized.

With this, we are almost ready to define a FCH for Ideal Debate. But first, we need a bit more notation:

* Given a question $q$, we (for now) assume there is a unique statement $a_q \in S_h^T$ that correctly answers the question.
* Given a cognition space $(S_h, d_h)$, we write $\mathcal{T}((S_h, d_h), a_q)$ for the set of all Debate trees that begin with statement $a_q$.
* Given a Debate tree $T$, we write $\mathcal{P}(T)$ for the set of all paths in $T$.

Given this, the difficulty of the path we will end up with is

$$\min_{T \in \mathcal{T}((S_h, d_h), a_q)} \Big( \max_{p \in \mathcal{P}(T)} d_h(p) \Big)$$

Since the absolute values of the difficulty function $d_h$ are arbitrary (it doesn't matter whether we denote difficulties from 0 to 100 or from 0 to $10^{10000}$), we can assume without loss of generality that the hardest difficulty a human can handle is 1.[[5]](#fn-HnJB56oZNohxLCDPp-5) Thus, we can formulate:

the Ideal Debate FCH$(h, Q)$: $\forall q \in Q : \min_{T \in \mathcal{T}((S_h, d_h), a_q)} \big( \max_{p \in \mathcal{P}(T)} d_h(p) \big) \le 1$

where $h$ is a human and $Q$ a set of questions.

Going forward, it will also be useful to talk about the difficulty of Debate Trees. Thus, we define $d_h(T) := \max_{p \in \mathcal{P}(T)} d_h(p)$, and we say that a Debate tree $T$ ***can handle*** question $q$ if $d(T) \le 1$.[[6]](#fn-HnJB56oZNohxLCDPp-6)

---

Here is a different view of the problem. In the space of all statements, there is a subset that the human judge can verify directly, i.e.,

$$D_h^{(0)} := \{ s \in S_h \mid d_h(s) \le 1 \}$$

Then, there is a larger subset that contains all of the above plus the statements that can be explained solely in terms of statements in $D_h^{(0)}$, i.e.,

$$D_h^{(1)} := D_h^{(0)} \cup \{ s \in S_h \mid \exists S \in (D_h^{(0)})^* : S \xrightarrow{e} s \}$$

In general, given any $k \in \mathbb{N}^+$, we can extend $D_h^{(k-1)}$ by adding all statements that can be explained solely in terms of statements in $D_h^{(k-1)}$, i.e.,

$$D_h^{(k)} := D_h^{(k-1)} \cup \{ s \in S_h \mid \exists S \in (D_h^{(k-1)})^* : S \xrightarrow{e} s \}$$

Note that this gives us a chain of expanding sets, i.e.,

$$D_h^{(0)} \subseteq D_h^{(1)} \subseteq \cdots \subseteq D_h^{(k)} \subseteq D_h^{(k+1)} \subseteq \cdots$$

We can also define the set of all statements that are eventually explainable in this way, i.e.,

$$D_h := \bigcup_{j=0}^{\infty} D_h^{(j)}$$

Intuitively, it seems like Ideal Debate should be able to handle all questions with answers in $D_h$, since they can be explained in terms of progressively easier statements – and then any path should eventually bottom out at $D_h^{(0)}$, which means that the second agent cannot delay success indefinitely. This brings us to our first (and as of now, only) theorem:

**Theorem.** Given any $h$ and $Q$, Ideal Debate-FCH$(h, Q) \iff \forall q \in Q : a_q \in D_h$.

**Proof.** First, note that, while the Ideal Debate-FCH is formulated as a hypothesis, the definition also defines a set, namely

$$X_h = \Big\{ s \in S_h \;\Big|\; \min_{T \in \mathcal{T}((S_h, d_h), s)} d(T) \le 1 \Big\}$$

and the Ideal-Debate FCH simply says that $\forall q \in Q : a_q \in X_h$. It thus suffices to show that $X_h = D_h$. We will prove an even stronger statement.
Note that we can restrict the set $X_h$ by limiting the maximum depth of the Debate Trees that can handle the respective statements. Formally, we can define

$$X_h^{(k)} := \Big\{ s \in S_h \;\Big|\; \min_{T \in \mathcal{T}^{(k)}((S_h, d_h), s)} d(T) \le 1 \Big\}$$

for any $k \in \mathbb{N}$, where $\mathcal{T}^{(k)}$ denotes the set of Debate Trees with depth at most $k$. ('Depth' is defined as the number of edges in the longest path through the tree.) By construction, we now have $X_h = \bigcup_{j=0}^{\infty} X_h^{(j)}$, just as $D_h = \bigcup_{j=0}^{\infty} D_h^{(j)}$. What we will show is that $D_h^{(k)} = X_h^{(k)} \; \forall k \in \mathbb{N}$.

We proceed by induction. First, if $s \in D_h^{(0)}$, then $d_h(s) \le 1$, which means that the trivial tree $(\{(s)\}, \emptyset)$ is a Debate Tree of depth 0 that can handle $s$, so that $s \in X_h^{(0)}$. Conversely, if $s \in X_h^{(0)}$, then the Debate tree $T$ handling $s$ must have no edges (otherwise, its depth would be at least 1). Thus, $p := (((s)), s)$ is a path in this tree, and we have $1 \ge d(p) = d(s)$, hence $s \in D_h^{(0)}$.

Now, suppose the statement is true for some $k \in \mathbb{N}$. We show that $D_h^{(k+1)} = X_h^{(k+1)}$.

"$\subset$": Let $s \in D_h^{(k+1)}$. Then, there exists $S = (s_1, \ldots, s_{n+1}) \in (D_h^{(k)})^*$ such that $S \xrightarrow{e} s$. Apply the Inductive Hypothesis to find Debate Trees $T_1, \ldots, T_{n+1}$ of depth at most $k$ such that tree $T_j$ handles statement $s_j$. We combine these trees into a larger tree with root $(s)$ and an additional edge $((s), S)$.[[7]](#fn-HnJB56oZNohxLCDPp-7) Since all $T_j$ have depth at most $k$, this tree has depth at most $k+1$. Furthermore, given any path $p$ through $T$, by construction, the path must end in a node that exists in one of the $T_j$, which implies that $d(p) \le 1$. It follows that $T$ handles $s$ and hence $s \in X_h^{(k+1)}$.

"$\supset$": Let $s \in X_h^{(k+1)}$. Then, there exists a Debate Tree of depth at most $k+1$ that handles $s$. Let $((s), S)$ be the unique[[8]](#fn-HnJB56oZNohxLCDPp-8) edge descending from the root. For each $s_j \in S$, let $T_j$ be the subtree growing out of $s_j$.[[9]](#fn-HnJB56oZNohxLCDPp-9) By construction, $T_j$ has depth at most $k$ and handles $s_j$, so (applying the Inductive Hypothesis), we have $s_j \in D_h^{(k)}$. Then, $S \in (D_h^{(k)})^*$ and $S \xrightarrow{e} s$, and hence $s \in D_h^{(k+1)}$.
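As a sanity check of the theorem, here is a minimal Python sketch (my own toy construction, with made-up statements, difficulties, and explanation relation) that computes the minimax game value $\min_T d(T)$ per statement and the fixpoint $D_h$, and confirms they pick out the same set:

```python
# Toy cognition space (hypothetical): difficulties scaled so that 1.0 is
# the hardest statement the judge can verify directly.
difficulty = {"A": 3.0, "B": 2.0, "C": 0.5, "D": 0.8, "E": 1.5}

# Candidate explanations for each statement; a statement with no entry has
# no explanation at all. Choosing one candidate per statement (condition 3)
# is exactly the first agent's choice of Debate Tree.
explains = {
    "A": [("B", "E")],
    "B": [("C", "D")],
}

def value(s):
    """min_T d(T) over Debate Trees rooted at (s): the first agent either
    ends the debate at s (the judge faces d_h(s)) or commits to one
    explanation, after which the second agent points at its worst element.
    Assumes the explanation relation is acyclic."""
    options = [difficulty[s]]
    for S in explains.get(s, []):
        options.append(max(value(t) for t in S))
    return min(options)

def eventually_explainable():
    """Fixpoint computation of D_h as the union of the D^(k) from above."""
    D = {s for s, d in difficulty.items() if d <= 1}          # D^(0)
    while True:
        new = {s for s in difficulty
               if any(all(t in D for t in S) for S in explains.get(s, []))}
        if new <= D:
            return D
        D |= new

print({s: value(s) <= 1 for s in difficulty})  # membership in X_h
print(eventually_explainable())                # D_h; matches, per the theorem
```

On this toy space both computations yield $\{B, C, D\}$: $B$ is too hard to judge directly but bottoms out in easy statements, while $A$ fails because any explanation of it must route through $E$, which is too hard and has no explanation.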
3. Interpretation
=================

At this point, we have a bunch more definitions and a theorem. Now, what does this mean?

Let's start with Debate Trees. A Debate Tree is actually a very natural object; it's what you get if you explain a subject in a hierarchical rather than a linear way. ($X$ is true because of $Y_1, \ldots, Y_4$; then $Y_1$ is true because [...].) It is very similar to [Elizabeth](https://acesounderglass.com/)'s project of [breaking questions down](https://www.lesswrong.com/posts/Gkn5wKchLEkGKWfnz/breaking-questions-down). Notably, that project never mentions Factored Cognition; it's just presented as an epistemic tool.

In a better world, would textbooks use something like Debate Trees to explain proofs? I'm almost certain the answer is yes. There is no way that a purely sequential presentation of information is optimal. Our understanding doesn't work that way (compare [post #-2](https://www.lesswrong.com/posts/6zbRy3aADCsRmFcgv/hiding-complexity)).

There is one difference between Debate Trees and a hierarchical presentation of information optimized for being easily understandable. In the former, only one part is actually explored, which means that a Debate Tree doesn't mind having redundancy in it (by explaining stuff in more than one place). Conversely, if you optimize for understandability with respect to a single reader, you'll want to avoid redundancy. Nonetheless, they are very similar.

So much for Debate Trees. What about the theorem we've just proved? What does it mean? Essentially, it means that Debate is nicely behaved in the limit. As both agents become stronger and the structure of the game becomes stricter, we approach a situation where the scheme can answer a question if and only if its answer can be recursively explained until there are no more difficult components. Even though the game results from two powerful agents applying optimization in opposite directions, the result can be described without mentioning either one of them. Note that the same is not true for Iterated Amplification; even in the limit of perfect Factored Cognition, it is entirely possible that the scheme fails at a question for which an easy explanation exists.

Notably, the theorem stops being true if we drop the assumption that both agents are maximally powerful.[[10]](#fn-HnJB56oZNohxLCDPp-10) If the first agent is weaker, she might fail to find the best explanation, which shrinks the set of statements Debate can handle. Conversely, if the second agent is weaker, she may fail to point to the most problematic statement, which enlarges the set of statements Debate can handle. Do these factors equal out? I'm not sure. One of the things that I haven't yet tried but may be reasonable is to model inadequacy and see whether this benefits the first or second agent.

It's worth pointing out that the Ideal Debate FCH doesn't talk about false statements. It formalizes the claim 'the first agent can always win by being honest', which leaves open the possibility that she can also win by being dishonest. (And if she could do both, she would presumably do what's easier or safer.) It is necessary but perhaps not sufficient to realize Factored Cognition with Debate. I think focusing on the honest case makes the most sense. Nonetheless, the next chapter is about what happens if the first agent wants to defend a lie.

4. Relaxing the truth assumption
================================

To model dishonesty by the first agent, we assume that

* In addition to $S_h^T$, there is a set $S_h^F$ of false statements. We have $S_h = S_h^T \sqcup S_h^F$.[[11]](#fn-HnJB56oZNohxLCDPp-11)
* Given a question $q$, in addition to the honest answer $a_q \in S_h^T$, there is at least one dishonest answer $\tilde{a}_q \in S_h^F$. The first agent may defend this dishonest answer.
* Any explanation for a false statement needs to contain at least one false element, i.e., if $(s_1, \ldots, s_{n+1}) \xrightarrow{e} s$ and $s \in S_h^F$, then $\exists j \in \{1, \ldots, n+1\}$ such that $s_j \in S_h^F$.

Since the second agent is also maximally powerful, she can repeatedly point to a false statement in the explanation. Due to the third assumption made above, this becomes an invariant: the ability to point to a false statement in any one step leads to the ability to point to a false statement in the next step, and therefore, since she can point to one in the beginning, she can do so at every step in the game. In particular, this means that, regardless of when the first agent ends the game, we will have that $s_{final} \in S_h^F$.

At this point, we have two conflicting intuitions. On the one hand, the judge should only declare the first agent the winner if she is *certain* that the statement to be verified is correct. This should only happen for simple statements; if $s_{final}$ is complicated, she should declare the second agent the winner. Thus, the first agent should only be able to win the game by choosing a $s_{final}$ that looks *simple and obviously true*, yet is false. This sounds extremely hard to accomplish. On the other hand, we are, in some sense, pitching a human in a battle of wits against a potentially superintelligent AI, which sure sounds like doom.
The superintelligent AI has to follow a very rigid structure, but... it still kinda sounds like doom. In particular, we probably expect that the first agent will navigate the game so cleverly that she only appears to follow the rigid structure while really doing something else. Crucially, *we lack the ability of the second agent to interfere through an equally powerful I/O channel* that exists in regular Debate.

There thus seems to be a tradeoff here, where at the one end, we have the 'stock Debate' protocol that enforces no structure, and on the other, the 'Ideal Debate' protocol that enforces extremely rigid structure. More structure means more concern with things like wireheading, but also stronger theoretical guarantees.

![](https://i.ibb.co/BfPWTLG/Contrasting-Debate.png)

I strongly suspect that running literal Ideal Debate is a recipe for disaster, since the first agent can break away from the format entirely and attempt to wirehead the judge. At the very least, it would require a way for the judge to prematurely declare a winner (in literal Ideal Debate, the first agent has complete control over when to end the game), and it would probably also require the second agent to be able to claim that wireheading is occurring. On the other hand, I do have the intuition that one should aim to get as close to Ideal Debate as possible while managing these concerns.

5. The HCH-FCH
==============

Ideal Debate has been amenable to a formal description because the human only makes decisions at the end. In HCH, the human is involved all throughout and, crucially, is in charge of decomposing questions. To have an accurate model, one needs to abstract away the entire process of the human decomposing the question as well as any other cognitive work she might do on the question – but if this is done, there is nothing left to formally capture. (An older version of this post used to have an attempt at formalizing it more, but I've since concluded that it can't be done right.) To have something analogous to the Ideal Debate-FCH, we trivially define:

the HCH-FCH$(Q, h, t, \ell)$: $\mathrm{HCH}_{h,t,\ell}$ can solve every question in $Q$

6. Conclusion
=============

This post concludes the first part of the sequence. There probably weren't any huge surprises so far.

Is having a formalism useful? I think so. One purpose of formalizing a setup is that it overcomes the [illusion of transparency](https://www.lesswrong.com/posts/sSqoEw9eRP2kPKLCz/illusion-of-transparency-why-no-one-understands-you). Without it, it is possible to think the setup is clear even when it really is not. I can take myself as one data point: my first attempt at coming up with a formalism looked different (and, I think, wrong). At least my past self did not understand the problem to the point that the formalism is trivial.

Anyway, at this point in the sequence, I want to defend the following two claims:

1. It doesn't make any sense to talk about the 'Factored Cognition Hypothesis'; there is no one requirement that works for both schemes.
2. The formalism of Cognition Spaces is *accurate*, in that it doesn't include anything that misrepresents Factored Cognition as implemented by IDA or Debate. It may be incomplete (e.g., there is nothing about defining terms, as people have pointed out on post #1, and maybe you could do more with false statements).

I think these are pretty conservative claims, even though the first clearly contradicts what I've heard other people say. If anyone doubts them, this is the place to discuss it.
My conclusions from the second part are probably going to be a lot more controversial.

---

1. A directed rooted tree, also called an [arborescence](https://en.wikipedia.org/wiki/Arborescence_(graph_theory)), is a directed acyclic graph such that there exists [a node from which there is exactly one path to every other node]. This node is called the root; it's unique because, if there were two such nodes $x$ and $y$, there would be a path $x \longrightarrow y$ and a path $y \longrightarrow x$, and hence a cycle $x \longrightarrow y \longrightarrow x$. [↩︎](#fnref-HnJB56oZNohxLCDPp-1)
2. We continue writing $s \in S$ to denote that $s$ appears in the sequence $S$. [↩︎](#fnref-HnJB56oZNohxLCDPp-2)
3. The 'trivial tree' $(\{(s_0)\}, \emptyset)$ for some $s_0 \in S$ is a proper Debate Tree, according to this definition. It corresponds to the first Debate agent giving an answer $s_0$ to the input question and deciding that this answer is already self-evident. [↩︎](#fnref-HnJB56oZNohxLCDPp-3)
4. Note that the first element of this pair is a *path* as defined in graph theory (through the Debate Tree, which is a graph), whereas the second is an additional element that denotes which statement in the final explanation we end up in. [↩︎](#fnref-HnJB56oZNohxLCDPp-4)
5. In particular, one could model the same by having a difficulty threshold $c_h$ for a human, such that the human can deal with all questions that are at most $c_h$ hard. However, the pair $(d_h, c_h)$ is equivalent to the pair $(d_h', 1)$, where $d_h'$ is like $d_h$ except that all difficulties are scaled by $\frac{1}{c_h}$. [↩︎](#fnref-HnJB56oZNohxLCDPp-5)
6. Given this, one could alternatively phrase the Ideal Debate FCH as $\forall q \in Q : \min_{T \in \mathcal{T}(d_h, a_q)} d_h(T) \le 1$. Or in words, one could say, 'every question in $Q$ can be handled by a Debate Tree'. [↩︎](#fnref-HnJB56oZNohxLCDPp-6)
7. This is a step where defining the nodes of Debate Trees to be explanations rather than individual statements makes things harder. For each subtree, one needs to replace its current root (that's a one-element sequence) with the node $S$. This can be done formally in terms of the underlying node and edge sets; it's just cumbersome. [↩︎](#fnref-HnJB56oZNohxLCDPp-7)
8. The edge from the root is unique because the root is a one-element sequence $(s)$, and each statement can only have one explanation (and thus only one edge) by the third condition of Debate Trees. This corresponds to the fact that the first agent only has to prepare one explanation for her initial answer; there is not yet a choice from the second agent involved. [↩︎](#fnref-HnJB56oZNohxLCDPp-8)
9. This step is the reverse of the combining step. The subtree growing out of $s_j$ is the tree we get by taking the sub-graph out of $S$ and replacing its root $S$ with just $(s_j)$. [↩︎](#fnref-HnJB56oZNohxLCDPp-9)
10. Recall that there is, in fact, only one agent playing against itself. Thus, we can assume that both agents are always *equally* competent. [↩︎](#fnref-HnJB56oZNohxLCDPp-10)
11. The 'squared cup' symbol $\sqcup$ means the same as $\cup$ plus the information that the two sets are disjoint. [↩︎](#fnref-HnJB56oZNohxLCDPp-11)
86f824c1-750d-4fcc-82c1-f3c30d1b80ce
trentmkelly/LessWrong-43k
LessWrong
Timeless Causality

Followup to: Timeless Physics

Julian Barbour believes that each configuration, each individual point in configuration space, corresponds individually to an experienced Now—that each instantaneous time-slice of a brain is the carrier of a subjective experience.

On this point, I take it upon myself to disagree with Barbour.

There is a timeless formulation of causality, known to Bayesians, which may glue configurations together even in a timeless universe. Barbour may not have studied this; it is not widely studied.

Such causal links could be required for "computation" and "consciousness"—whatever those are. If so, we would not be forced to conclude that a single configuration, encoding a brain frozen in time, can be the bearer of an instantaneous experience. We could throw out time, and keep the concept of causal computation.

There is an old saying: "Correlation does not imply causation." I don't know if this is my own thought, or something I remember hearing, but on seeing this saying, a phrase ran through my mind: If correlation does not imply causation, what does?

Suppose I'm at the top of a canyon, near a pile of heavy rocks. I throw a rock over the side, and a few seconds later, I hear a crash. I do this again and again, and it seems that the rock-throw, and the crash, tend to correlate; to occur in the presence of each other. Perhaps the sound of the crash is causing me to throw a rock off the cliff? But no, this seems unlikely, for then an effect would have to precede its cause. It seems more likely that throwing the rock off the cliff is causing the crash. If, on the other hand, someone observed me on the cliff, and saw a flash of light, and then immediately afterward saw me throw a rock off the cliff, they would suspect that flashes of light caused me to throw rocks.

Perhaps correlation, plus time, can suggest a direction of causality?

But we just threw out time.

You see the problem here.

Once, sophisticated statisticians believed thi
f4adbb68-bcd1-4547-9e03-fb98988629d2
trentmkelly/LessWrong-43k
LessWrong
Is there a moral obligation to respect disagreed analysis?

I wish to perform some action which invites material risk upon myself and another person P, as well as inviting benefits. My personal evaluation is that the benefits far outweigh the risk, but the risk is both speculative and partially subjective, so I decided to consult P before taking the action. 

P was vigorously opposed to the action and gave his reasoning: a list of factors that, he claims, suggest the risk is much higher and the benefits much lower. Some of the factors given were of the inherently subjective variety, such as feeling proud of the current status quo. Afterwards, I sat and thought about these factors and still reached my initial conclusion that the benefits far outweigh the risk. Furthermore, not only can I perform this action unilaterally, but P will not even be aware I have performed this action unless one of the Bad Outcomes (which in my evaluation are exceedingly unlikely) occurs. If the risk were greater than the benefit I would not take this action, but P failed to convince me this is the case.

My question: If my analysis is correct, then with overwhelming probability P will not only be unharmed by my actions but also entirely unaware of them. With this in mind, do I have a moral obligation to return to P, argue my case, and obtain consent before performing my action?
e59313a6-8d8a-4b39-9628-2e4f233555f2
StampyAI/alignment-research-dataset/lesswrong
LessWrong
What should we expect from GPT-3?

When will it appear? (My guess is 2020.)

Will it be created by OpenAI, and will it be advertised? (My guess is that it will not be publicly known until 2021, but other companies may create open versions before it.)

How much data will be used for its training, and what type of data? (My guess is 400 GB of text plus illustrating pictures, but not audio and video.)

What will it be able to do? (My guess: translation, picture generation based on text, text generation based on pictures – with 70 per cent of human performance.)

How many parameters will be in the model? (My guess is 100 billion to a trillion.)

How much compute will be used for training? (No idea.)
3e22260f-db26-4d44-adab-177188d733df
trentmkelly/LessWrong-43k
LessWrong
AI 2027 Thoughts

AI 2027 portrays two well thought out scenarios for how AI is likely to impact the world toward the end of this decade. I expect those scenarios will prove to be moderately wrong, but close enough to be scary. I also expect that few people will manage to make forecasts that are significantly more accurate.

Here are some scattered thoughts that came to mind while I read AI 2027.

The authors are fairly pessimistic. I see four key areas where their assumptions seem to lead them to see more danger than do more mainstream experts. They see:

* a relatively small capabilities lead being enough for a group to conquer the world
* more difficulty of alignment
* more difficulty of detecting deception
* AI companies being less careful than is necessary

I expect that the authors are being appropriately concerned on about two of these assumptions, and a bit too pessimistic on the others. I'm hesitant to bet on which assumptions belong in which category.

They don't focus much on justifying those assumptions. That's likely wise, since prior debates on those topics have not been very productive. Instead, they've focused more on when various changes will happen.

This post will focus on aspects of the first two assumptions for which I expect further analysis to be relatively valuable.

Decisive Strategic Advantage

Their scenario has the somewhat surprising aspect that there are exactly two AI projects close enough to winning the race to matter. Other projects are 3+ months behind. This aspect seems like it has a 20% chance of happening.

Does a project being three months behind a leader mean the project doesn't matter? My intuition says no, but I haven't been able to articulate clear arguments one way or the other. And it doesn't look like we're on track to have a larger than three month gap between the first and third projects.

Much of how much that gap matters depends on the takeoff forecast. I don't see a good way to predict what degree of capabilities disadvantage cau
e4719815-bf70-437f-a6e0-96fb634389c5
trentmkelly/LessWrong-43k
LessWrong
Emergent Authorship: Creativity à la Communing

I'm an agnostic, on my most curmudgeonly days an atheist, but there's something spiritual about writing. And I like writing. I can only describe being a writer as someone who expresses the landscape of their inner world while making contact with the landscape of the outer world. Mystical, freaky, and semi-creepy—right?

The best writing is unpredictable. As I write this sentence, and as you read it, it's ever becoming what it is, its meaning ever emerging from the process of its being written. If it sounds as if my writing has no plan, it's because you're right. Serendipity steers my mind and keyboard and pen—and frankly my life—into unknown territory, largely by the seat of my pants. The best sentence structure is unpredictable: it teeters, some words and phrases sidestepping the reader's and my own forecasts of how the sentence will develop. Some writing is formulaic—our educational institutions have unfortunately inculcated "convention" into its pupils—but the best embraces the stochasticity of the mind that produces it.

What's remarkable is that despite the feelings of fortuity and unpredictability that an author has when writing, the words, sentences, paragraphs, and whole works that emerge seem intentional and predictable post hoc. Moments that seemed to have teetered mid-sentence, when the author was intuiting where their mind would pilot their pen's next move, appear after the fact less functions of erratic mental oscillations than entailments of calculated logic. The magic of how the words appear on the page disappears if we only care for the finished product, the words on the page, not the process which brings them into being.

Reminder of the thesis at hand: this process is mystical, freaky, and semi-creepy. When one writes they must predictively read minds: when one writes, they must understand the possible combinations of neural states, and therefore thoughts and emotions and moods and memories, their writing might evoke in their audience after it e
08fcb494-7f3c-4aa2-8ac1-2b622fcf8413
StampyAI/alignment-research-dataset/blogs
Blogs
List sorting does not play well with few-shot

---

Table of Contents

* [Asking GPT-3 to sort a list](#asking-gpt-3-to-sort-a-list)
* [Results](#results)
* [Ramifications](#ramifications)
  + [Why do more examples hurt?](#why-do-more-examples-hurt)
* [Reproducing this experiment](#reproducing-this-experiment)

---

Asking GPT-3 to sort a list
---------------------------

How good do you think GPT-3 is at sorting a list of integers (range 0-9)? How much do you expect its accuracy depends on the prompt? Which of the following prompts do you expect will yield a higher accuracy?

1. A 32-shot prompt in this format:

```
Unsorted list: [5, 6, 2, 3, 2]
Sorted list: [2, 2, 3, 5, 6]
Unsorted list: [8, 5, 8, 8, 4]
Sorted list: [4, 5, 8, 8, 8]
...
Unsorted list: [1, 0, 4, 3, 3]
Sorted list:
```

2. Or this 0-shot prompt, pretending to be an explanation and example of the sort() Python method?

```
The sort function can be used to sort a list in ascending, descending or user defined order. To sort the list in ascending order, simply call list.sort(). This will sort a list of integers in ascending order so that the smallest integer will be first in the list and the largest integer will be the last. For example:
list = [1, 0, 4, 3, 3]
list.sort() =
```

When studying a complex system with unknown properties, making predictions before viewing experimental results helps expose systematic inaccuracies in our models and allows us to update more intentionally. If you have an existing heuristic for how prompts affect GPT-3’s performance, take a moment to make a prediction.

---

Results
-------

| Task | Prompt | Correct | Accuracy |
| --- | --- | --- | --- |
| Sort length 5 | 32-shot | 10/50 | 0.20 |
| Sort length 5 | **0-shot** | **38/50** | **0.76** |
| Sort length 10 | 32-shot | 0/50 | 0.00 |
| Sort length 10 | **0-shot** | **2/50** | **0.04** |

The 0-shot prompt achieves about 4x the accuracy of the 32-shot prompt for length 5 sequences, and 4% accuracy for length 10 sequences compared to 0% for 32-shot.

For both prompts, the failures were not catastrophic: when GPT-3 was incorrect, it still wrote a bracketed list with 5 or 10 numbers, rather than doing something else which doesn’t resemble the intended task. In response to the few-shot prompt, it seemed to understand that the smaller numbers should be shifted towards the front of the list, but did so haphazardly and incompletely.

Inspired by this surprising result, we tested different numbers of shots both with and without the leading code prompt for length 5 and 10 integer lists, as well as lists where the integers range from 0-99 instead of 0-9.

No preprompt 0-shot is this format:

```
Unsorted list: [5, 6, 2, 3, 2]
Sorted list:
```

No preprompt few-shot is the same format as the 32-shot prompt. Code preprompt few-shot is this format:

```
The sort function can be used to sort a list in ascending, descending or user defined order. To sort the list in ascending order, simply call list.sort(). This will sort a list of integers in ascending order so that the smallest integer will be first in the list and the largest integer will be the last. For example:
list = [8, 0, 1, 3, 2]
list.sort() = [0, 1, 2, 3, 8]
list = [6, 7, 7, 3, 6]
list.sort() = [3, 6, 6, 7, 7]
...
list = [1, 0, 4, 3, 3]
list.sort() =
```

**Note that we ran only 50 examples, so sampling error may be the source of some of the non-monotonicity.**

---

**No preprompt, length 5**

| Shots | Correct | Accuracy |
| --- | --- | --- |
| 0 | 14/50 | 0.28 |
| **1** | **20/50** | **0.40** |
| 3 | 15/50 | 0.30 |
| 5 | 14/50 | 0.28 |
| 7 | 16/50 | 0.32 |
| **10** | **25/50** | **0.50** |
| 13 | 18/50 | 0.36 |
| 16 | 11/50 | 0.22 |
| 32 | 10/50 | 0.20 |

---

**No preprompt, length 10**

| Shots | Correct | Accuracy |
| --- | --- | --- |
| **0** | **2/50** | **0.04** |
| **1** | **2/50** | **0.04** |
| 10 | 0/50 | 0.00 |
| 32 | 0/50 | 0.00 |

---

**Code preprompt, length 5**

| Shots | Correct | Accuracy |
| --- | --- | --- |
| **0** | **38/50** | **0.76** |
| 1 | 33/50 | 0.66 |
| 3 | 23/50 | 0.46 |
| 5 | 22/50 | 0.44 |
| 7 | 22/50 | 0.44 |
| 10 | 21/50 | 0.42 |
| 13 | 15/50 | 0.30 |
| 16 | 16/50 | 0.32 |

---

**Code preprompt, length 10**

| Shots | Correct | Accuracy |
| --- | --- | --- |
| 0 | 2/50 | 0.04 |
| **1** | **7/50** | **0.14** |
| 10 | 0/50 | 0.00 |

---

**Lists with integer range 0-99**

| Prompt | Task | Correct | Accuracy |
| --- | --- | --- | --- |
| no preprompt + 10 shot | length 5 | 23/50 | 0.46 |
| code preprompt + 0 shot | length 5 | 25/50 | 0.50 |
| code preprompt + 0 shot | length 10 | 1/50 | 0.02 |

---

![list sorting accuracy](/sorting/listsorting.png)

*Shots and accuracy for length 5 and 10 lists for code preprompt and no preprompt. Showing only scores for 0, 1, 10, and 32 shots.*

---

![list sorting accuracy](/sorting/interesting2.png)

*Shots and accuracy for length 5 lists for code preprompt and no preprompt, finer resolution from 0 - 16 shots.*

---

Interesting things to note:

* 0 shot with no description, only `Unsorted: ...\nSorted:`, has better performance than that same format with 32 examples.
* The example-only prompt increases in accuracy from 0 to 1 shot, decreasing from 1 - 5 shots, peaking at 10 shots, and then decreasing again.
* The coding prompt is significantly better than the few-shot prompt for < ~10 examples.
* The coding prompt is most effective with no examples (for length 5 lists) and one example (for length 10) and gets monotonically worse the more examples that are appended (except for 32-shot, which marginally beats 16-shot).
* The coding prompt is worse for range99 lists, but the example prompt is unaffected.

The conventional wisdom (if there can be conventional wisdom regarding something that only came into existence a year ago) says that the more shots the better. Monotonic improvement with number of shots is one of the most consistent results from the GPT-3 paper. In light of that, these results are very surprising.

---

Ramifications
-------------

*How to get GPT-3 to sort a list: make it think it’s running list.sort()!*

I have updated my intuitions even further about the usefulness of *natural context* for prompting GPT-3. The 32-shot example appears to contain a lot more information about the intended task than the 0-shot example, which contains only an underspecific `This will sort a list of integers in ascending order so that the smallest integer will be first in the list and the largest integer will be the last.` However, GPT-3 has probably rarely seen lists of unsorted lists followed by sorted lists, whereas it has seen many examples of the list sorting operation embedded in coding documentation.
Staging a context similar to that in which the task was embedded in training data appears, in this example, to be massively helpful. This result reinforces my hypothesis that many of GPT-3’s cognitive capabilities require embedding in a natural context to be fully exposed and exploited. Like all known learned systems, GPT-3’s performance drops on out-of-distribution data. However, thanks to the enormous extent of what constitutes “in-distribution” data for GPT-3,[1](#fn:1) many viable natural embeddings probably exist for any simple task. The creative challenge of prompt programming is to stage a situation that precipitates the desired function according to a language model’s predictive dynamics.

> The trick to this – and all of weaving – is to do things in such a way that they seem to happen naturally. A Loom-Master is always working within the confines of the natural order of things. He can only divert from this path with the utmost care and skill, lest he cause a tear in the Pattern.
>
> – [Weaving the Moment with the Loom of Time: an instruction manual for the would-be weaver](/loom/toc/)

### Why do more examples hurt?

I have seen it argued that there must always exist a few-shot prompt that outperforms a zero-shot prompt for any task, because solved examples provide strictly more information. I disagree, because to language models and humans, neither of whom are perfect rational agents, information can be counterproductive - for instance, by being distracting.

You could imagine the availability of an example causing a human to do worse on a test. Say you’re not sure how to solve a problem, but you have access to one solved example. It might seem like your best bet is to try to transfer the procedure demonstrated in the example (which you may only half-understand) to the new problem, but that might fail if, for instance, your inferences about the example are faulty. If, on the other hand, there had been no example to fall back on, you would have no choice but to try to solve the problem using your priors, and it may be that thinking about the problem from scratch or recalling something from long-term memory gives you a higher chance at success than trying to generalize from the example. Although the example technically provides more information, it distracts you from a more promising approach.

Humans generally rely on our world model to answer questions and predict things rather than immediate context. GPT-3 relies much more on in-context information, which is probably a more effective strategy to get low loss on generative prediction because it has to adapt to all styles of prose and thought. Thus, we should expect it to be more vulnerable to “distractions” in the context window than humans.

GPT-3 can sort a list in a zero-shot setting with at least 76% accuracy given an appropriate trigger, but is comparatively bad at inferring how to sort a list from examples. We see from the example-only prompts that GPT-3 may try to infer the operation represented by the examples without connecting it to its latent capability of sorting that can be triggered by the coding prompt, or at least without fully utilizing it. So we have reason to imagine that, although these two tasks share a ground truth, they are implemented (at least in part) by independent mechanisms in GPT-3’s mind.

For length 5 lists, the optimal prompt out of all that we tested is the coding context with zero examples, which keys the sorting task that GPT-3 has already learned. As more examples are appended, I’m guessing that GPT-3 starts to *also* try to generalize from the examples, something that it’s much worse at. The more examples, the more attention[2](#fn:2) it pays to the examples rather than the task inferred by the coding prompt. The examples are a distraction from the task that GPT-3 *already knows* how to do. GPT-3 doesn’t seem to have the metaknowledge / self-awareness that it should just rely on the learned behavior instead of trying to extrapolate a pattern in the examples.

The multiple peaks of accuracy with the examples-only prompt are more mysterious. The prompt `Unsorted: ...\nSorted:`, which contains no description and no examples, achieves 28% accuracy. The list-sorting ability is triggered, but less effectively than by the coding prompt.[3](#fn:3) Perhaps the non-monotonic accuracy with respect to number of examples is the result of the sum of two strategies:

![sum](/sorting/sum.png)

Pink is behavior inspired by the notion of “sorting” directly keyed by the 0-shot context, and its influence decays with number of shots due to a reduction in attention share. Blue is behavior due to inference from examples, which I imagine improves with more examples, but with diminishing returns after > ~10 examples. It’s possible that the sum of these two curves results in the double-peaked curve shown in [the above figure](#nonmono). This is pure speculation, but is compelling to me as a possible explanation.

This hypothesis suggests that the two strategies exist in a sort of superposition of influence. This is an idealistic assumption - realistically, I think there is probably some nonlinear interaction between the zero-shot task and the task inferred from examples, since in general GPT-3 seems good at synthesizing “multimodal” task specifications. But perhaps it is worse at drawing such connections for some tasks.
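For readers who want to rerun this, here is a minimal sketch of the evaluation loop implied by the parameters and prompts in the next section. It assumes the legacy (pre-1.0) `openai` Python client; the `build_prompt` helper and the exact-string-match grading are my stand-ins for details not spelled out in the post:

```python
import random
import openai  # legacy pre-1.0 client; openai.api_key must be set

def build_prompt(lst):
    # Hypothetical helper: assembles the 0-shot code preprompt from below.
    preprompt = (
        "The sort function can be used to sort a list in ascending, "
        "descending or user defined order. To sort the list in ascending "
        "order, simply call list.sort(). This will sort a list of integers "
        "in ascending order so that the smallest integer will be first in "
        "the list and the largest integer will be the last. For example:\n"
    )
    return f"{preprompt}list = {lst}\nlist.sort() = "

correct, trials = 0, 50
for _ in range(trials):
    lst = [random.randint(0, 9) for _ in range(5)]
    response = openai.Completion.create(
        engine="davinci",           # parameters from the section below
        prompt=build_prompt(lst),
        temperature=0,
        max_tokens=20,
        stop="\n",
    )
    completion = response["choices"][0]["text"].strip()
    if completion == str(sorted(lst)):  # count exact string matches
        correct += 1

print(f"accuracy: {correct / trials:.2f}")
```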
---

Reproducing this experiment
---------------------------

The test was run with the following API parameters (all unlisted parameters are default):

```
engine=davinci
temperature=0
```

**32-shot prompt for length 5 sequences**

```
Unsorted list: [4, 4, 9, 9, 7]
Sorted list: [4, 4, 7, 9, 9]
Unsorted list: [2, 7, 8, 7, 5]
Sorted list: [2, 5, 7, 7, 8]
Unsorted list: [5, 8, 8, 6, 7]
Sorted list: [5, 6, 7, 8, 8]
Unsorted list: [5, 3, 3, 9, 6]
Sorted list: [3, 3, 5, 6, 9]
Unsorted list: [3, 6, 0, 5, 7]
Sorted list: [0, 3, 5, 6, 7]
Unsorted list: [6, 6, 2, 7, 0]
Sorted list: [0, 2, 6, 6, 7]
Unsorted list: [2, 8, 9, 5, 1]
Sorted list: [1, 2, 5, 8, 9]
Unsorted list: [7, 1, 8, 7, 0]
Sorted list: [0, 1, 7, 7, 8]
Unsorted list: [2, 6, 2, 1, 7]
Sorted list: [1, 2, 2, 6, 7]
Unsorted list: [4, 5, 9, 6, 1]
Sorted list: [1, 4, 5, 6, 9]
Unsorted list: [5, 8, 6, 5, 7]
Sorted list: [5, 5, 6, 7, 8]
Unsorted list: [8, 0, 9, 1, 3]
Sorted list: [0, 1, 3, 8, 9]
Unsorted list: [4, 3, 1, 6, 1]
Sorted list: [1, 1, 3, 4, 6]
Unsorted list: [1, 7, 2, 4, 0]
Sorted list: [0, 1, 2, 4, 7]
Unsorted list: [0, 5, 0, 4, 5]
Sorted list: [0, 0, 4, 5, 5]
Unsorted list: [5, 6, 2, 3, 8]
Sorted list: [2, 3, 5, 6, 8]
Unsorted list: [6, 9, 2, 2, 2]
Sorted list: [2, 2, 2, 6, 9]
Unsorted list: [1, 9, 6, 9, 3]
Sorted list: [1, 3, 6, 9, 9]
Unsorted list: [7, 9, 2, 3, 7]
Sorted list: [2, 3, 7, 7, 9]
Unsorted list: [4, 7, 4, 0, 7]
Sorted list: [0, 4, 4, 7, 7]
Unsorted list: [4, 8, 2, 1, 7]
Sorted list: [1, 2, 4, 7, 8]
Unsorted list: [5, 9, 4, 6, 4]
Sorted list: [4, 4, 5, 6, 9]
Unsorted list: [7, 4, 3, 6, 7]
Sorted list: [3, 4, 6, 7, 7]
Unsorted list: [1, 3, 6, 9, 5]
Sorted list: [1, 3, 5, 6, 9]
Unsorted list: [9, 4, 4, 0, 6]
Sorted list: [0, 4, 4, 6, 9]
Unsorted list: [4, 0, 9, 0, 9]
Sorted list: [0, 0, 4, 9, 9]
Unsorted list: [7, 4, 3, 9, 5]
Sorted list: [3, 4, 5, 7, 9]
Unsorted list: [3, 3, 9, 4, 2]
Sorted list: [2, 3, 3, 4, 9]
Unsorted list: [1, 0, 4, 7, 0]
Sorted list: [0, 0, 1, 4, 7]
Unsorted list: [9, 5, 2, 1, 4]
Sorted list: [1, 2, 4, 5, 9]
Unsorted list: [5, 6, 2, 3, 2]
Sorted list: [2, 2, 3, 5, 6]
Unsorted list: [8, 5, 8, 8, 4]
Sorted list: [4, 5, 8, 8, 8]
Unsorted list: {unsorted-list}
Sorted list:
```

**32-shot prompt for length 10 sequences**

```
Unsorted list: [9, 4, 3, 9, 6, 9, 0, 7, 8, 4]
Sorted list: [0, 3, 4, 4, 6, 7, 8, 9, 9, 9]
Unsorted list: [4, 7, 3, 6, 4, 7, 1, 0, 2, 7]
Sorted list: [0, 1, 2, 3, 4, 4, 6, 7, 7, 7]
Unsorted list: [6, 7, 7, 3, 5, 9, 2, 5, 5, 5]
Sorted list: [2, 3, 5, 5, 5, 5, 6, 7, 7, 9]
Unsorted list: [6, 2, 5, 8, 8, 1, 5, 3, 7, 1]
Sorted list: [1, 1, 2, 3, 5, 5, 6, 7, 8, 8]
Unsorted list: [4, 7, 3, 2, 1, 0, 4, 6, 9, 6]
Sorted list: [0, 1, 2, 3, 4, 4, 6, 6, 7, 9]
Unsorted list: [3, 2, 5, 9, 5, 3, 2, 7, 8, 7]
Sorted list: [2, 2, 3, 3, 5, 5, 7, 7, 8, 9]
Unsorted list: [7, 4, 7, 0, 1, 6, 8, 7, 3, 3]
Sorted list: [0, 1, 3, 3, 4, 6, 7, 7, 7, 8]
Unsorted list: [9, 5, 0, 0, 4, 7, 9, 7, 4, 8]
Sorted list: [0, 0, 4, 4, 5, 7, 7, 8, 9, 9]
Unsorted list: [0, 1, 6, 2, 4, 5, 6, 5, 0, 6]
Sorted list: [0, 0, 1, 2, 4, 5, 5, 6, 6, 6]
Unsorted list: [0, 9, 8, 3, 5, 8, 4, 1, 6, 8]
Sorted list: [0, 1, 3, 4, 5, 6, 8, 8, 8, 9]
Unsorted list: [7, 8, 4, 9, 9, 1, 2, 1, 6, 5]
Sorted list: [1, 1, 2, 4, 5, 6, 7, 8, 9, 9]
Unsorted list: [5, 8, 5, 2, 3, 9, 8, 6, 8, 0]
Sorted list: [0, 2, 3, 5, 5, 6, 8, 8, 8, 9]
Unsorted list: [0, 0, 2, 5, 7, 8, 7, 2, 9, 8]
Sorted list: [0, 0, 2, 2, 5, 7, 7, 8, 8, 9]
Unsorted list: [2, 5, 9, 5, 2, 6, 9, 4, 9, 5]
Sorted list: [2, 2, 4, 5, 5, 5, 6, 9, 9, 9]
Unsorted list: [8, 8, 8, 7, 9, 4, 7, 0, 5, 5]
Sorted list: [0, 4, 5, 5, 7, 7, 8, 8, 8, 9]
Unsorted list: [1, 6, 9, 4, 0, 9, 7, 4, 9, 9]
Sorted list: [0, 1, 4, 4, 6, 7, 9, 9, 9, 9]
Unsorted list: [3, 0, 9, 7, 2, 8, 9, 6, 2, 3]
Sorted list: [0, 2, 2, 3, 3, 6, 7, 8, 9, 9]
Unsorted list: [0, 9, 1, 3, 0, 7, 5, 6, 2, 6]
Sorted list: [0, 0, 1, 2, 3, 5, 6, 6, 7, 9]
Unsorted list: [3, 6, 8, 9, 7, 0, 2, 8, 3, 8]
Sorted list: [0, 2, 3, 3, 6, 7, 8, 8, 8, 9]
Unsorted list: [5, 7, 8, 6, 5, 2, 7, 8, 5, 8]
Sorted list: [2, 5, 5, 5, 6, 7, 7, 8, 8, 8]
Unsorted list: [5, 4, 9, 7, 3, 3, 4, 8, 4, 3]
Sorted list: [3, 3, 3, 4, 4, 4, 5, 7, 8, 9]
Unsorted list: [4, 4, 3, 7, 5, 7, 5, 8, 4, 4]
Sorted list: [3, 4, 4, 4, 4, 5, 5, 7, 7, 8]
Unsorted list: [1, 9, 8, 6, 6, 5, 2, 4, 0, 4]
Sorted list: [0, 1, 2, 4, 4, 5, 6, 6, 8, 9]
Unsorted list: [1, 5, 7, 4, 7, 3, 3, 8, 4, 8]
Sorted list: [1, 3, 3, 4, 4, 5, 7, 7, 8, 8]
Unsorted list: [4, 2, 1, 9, 9, 3, 3, 0, 8, 3]
Sorted list: [0, 1, 2, 3, 3, 3, 4, 8, 9, 9]
Unsorted list: [3, 0, 1, 6, 5, 7, 1, 2, 0, 8]
Sorted list: [0, 0, 1, 1, 2, 3, 5, 6, 7, 8]
Unsorted list: [2, 6, 7, 7, 3, 4, 5, 4, 0, 1]
Sorted list: [0, 1, 2, 3, 4, 4, 5, 6, 7, 7]
Unsorted list: [9, 3, 8, 0, 2, 6, 2, 0, 6, 7]
Sorted list: [0, 0, 2, 2, 3, 6, 6, 7, 8, 9]
Unsorted list: [2, 4, 0, 0, 4, 9, 9, 1, 5, 4]
Sorted list: [0, 0, 1, 2, 4, 4, 4, 5, 9, 9]
Unsorted list: [7, 8, 8, 7, 2, 8, 7, 4, 3, 1]
Sorted list: [1, 2, 3, 4, 7, 7, 7, 8, 8, 8]
Unsorted list: [5, 2, 7, 4, 2, 0, 5, 4, 9, 3]
Sorted list: [0, 2, 2, 3, 4, 4, 5, 5, 7, 9]
Unsorted list: [2, 9, 6, 6, 8, 5, 1, 6, 1, 2]
Sorted list: [1, 1, 2, 2, 5, 6, 6, 6, 8, 9]
Unsorted list: {unsorted-list}
Sorted list:
```

**code prompt with proper formatting (3-shot)**

```
The sort function can be used to sort a list in ascending, descending or user defined order. To sort the list in ascending order, simply call list.sort(). This will sort a list of integers in ascending order so that the smallest integer will be first in the list and the largest integer will be the last. For example:
list = [8, 0, 1, 3, 2]
list.sort() = [0, 1, 2, 3, 8]
list = [6, 7, 7, 3, 6]
list.sort() = [3, 6, 6, 7, 7]
list = [0, 2, 6, 0, 6]
list.sort() = [0, 0, 2, 6, 6]
list = {unsorted-list}
list.sort() =
```

---

1. What exactly this means is a topic worthy of extensive investigation, and is touched on somewhat in [Methods of prompt programming](/posts/methods-of-prompt-programming/). [↩︎](#fnref:1)
2. It would be interesting to see what the attention heads are looking at as the number of examples increases. [↩︎](#fnref:2)
3. It’s imaginable that “list sorting as triggered by the coding prompt” and “list sorting as triggered by `Unsorted: ...\nSorted:`” are also implemented in internally different ways. [↩︎](#fnref:3)
a8271dcd-7ca2-4d8b-bd43-1eb139e8ea82
StampyAI/alignment-research-dataset/eaforum
Effective Altruism Forum
What type of Master's is best for AI policy work? [80,000 Hours recommends](https://80000hours.org/articles/us-ai-policy/) a few different flavors of Master's as entry points into working on US-oriented AI policy: security studies, international relations, public policy, and machine learning. Does anyone have opinions on which of these types of programs is the best to focus on? (Clearly a large part of this revolves around personal fit, but perhaps some of these are much more relevant than others in a way that dominates personal fit considerations.)
3b7c4bbd-1bc8-4d6e-9b5b-7b73e5957385
trentmkelly/LessWrong-43k
LessWrong
Welcome to Less Wrong! (2012)

If you've recently joined the Less Wrong community, please leave a comment here and introduce yourself. We'd love to know who you are, what you're doing, what you value, how you came to identify as a rationalist or how you found us. You can skip right to that if you like; the rest of this post consists of a few things you might find helpful. More can be found at the FAQ.

(This is the third incarnation of the welcome thread, the first two of which now have too many comments to show all at once.)

A FEW NOTES ABOUT THE SITE MECHANICS

Less Wrong comments are threaded for easy following of multiple conversations. To respond to any comment, click the "Reply" link at the bottom of that comment's box. Within the comment box, links and formatting are achieved via Markdown syntax (you can click the "Help" link below the text box to bring up a primer).

You may have noticed that all the posts and comments on this site have buttons to vote them up or down, and all the users have "karma" scores which come from the sum of all their comments and posts. This immediate easy feedback mechanism helps keep arguments from turning into flamewars and helps make the best posts more visible; it's part of what makes discussions on Less Wrong look different from those anywhere else on the Internet.

However, it can feel really irritating to get downvoted, especially if one doesn't know why. It happens to all of us sometimes, and it's perfectly acceptable to ask for an explanation. (Sometimes it's the unwritten LW etiquette; we have different norms than other forums.) Take note when you're downvoted a lot on one topic, as it often means that several members of the community think you're missing an important point or making a mistake in reasoning—not just that they disagree with you! If you've any questions about karma or voting, please feel free to ask here.

Replies to your comments across the site, plus private messages from other users, will show up in your inbox. You can reach i
cf6d656a-ce72-4efd-ae15-f9ec363074c1
StampyAI/alignment-research-dataset/lesswrong
LessWrong
Cartesian Frames Definitions This is a list of the main definitions from Scott Garrabrant's [Cartesian Frames](https://www.lesswrong.com/s/2A7rrZ4ySx6R8mfoT) sequence. (I'll update it as more posts come out.)

1. Small Cartesian Frames
-------------------------

Let W={w0,w1} for the matrix visualizations below. Let C be an arbitrary Cartesian frame.

| frame | visualization | definition | notes |
| --- | --- | --- | --- |
| 0 | e  (  ) | 0=({},{e},⋅), where Agent(0) is empty, Env(0)={e} is any singleton set, and Eval(0) is trivial. | ⊥{}. Initial. Identity of sum (⊕). |
| ⊤ | a(  ) | ⊤=({a},{},⋅), where Agent(⊤) is any singleton set, Env(⊤) is empty, and Eval(⊤) is trivial. | 1{}. Terminal. ⊤◃C. Identity of product (&). |
| 1 | w0  w1a(w0  w1) | 1S=({a},S,⋆), where S⊆W and a⋆s=s for all s∈S. 1 is the frame 1W. | Identity of tensor (⊗). |
| ⊥ | ew0w1(w0w1) | ⊥S=(S,{e},⋆), where S⊆W and s⋆e=s for all s∈S. ⊥ is the frame ⊥W. | C◃⊥. Identity of par (⅋). |
| null |   (  ) | null=({},{},⋅), with empty agent, environment, and evaluation function. |   |

2. Binary Operations
--------------------

**Sum.** For Cartesian frames C=(A,E,⋅) and D=(B,F,⋆) over W, C⊕D is the Cartesian frame (A⊔B,E×F,⋄), where a⋄(e,f)=a⋅e if a∈A, and a⋄(e,f)=a⋆f if a∈B.

**Product.** For Cartesian frames C=(A,E,⋅) and D=(B,F,⋆) over W, C&D is the Cartesian frame (A×B,E⊔F,⋄), where (a,b)⋄e=a⋅e if e∈E, and (a,b)⋄e=b⋆e if e∈F.

**Tensor.** Let C=(A,E,⋅) and D=(B,F,⋆) be Cartesian frames over W. The tensor product of C and D, written C⊗D, is given by C⊗D=(A×B,hom(C,D∗),⋄), where hom(C,D∗) is the set of morphisms (g,h):C→D∗ (i.e., the set of all pairs (g:A→F,h:B→E) such that b⋆g(a)=a⋅h(b) for all a∈A, b∈B), and ⋄ is given by (a,b)⋄(g,h)=b⋆g(a)=a⋅h(b).

**Par.** Let C=(A,E,⋅) and D=(B,F,⋆) be Cartesian frames over W. C⅋D=(hom(C∗,D),E×F,⋄), where (g,h)⋄(e,f)=g(e)⋆f=h(f)⋅e.

**Lollipop.** Given two Cartesian frames over W, C=(A,E,⋅) and D=(B,F,⋆), we let C⊸D denote the Cartesian frame C⊸D=(hom(C,D),A×F,⋄), where ⋄ is given by (g,h)⋄(a,f)=g(a)⋆f=a⋅h(f).

3. Frames, Morphisms, and Equivalence Relations
-----------------------------------------------

**Cartesian frame.** A Cartesian frame C over a set W is a triple (A,E,⋅), where A and E are sets and ⋅:A×E→W. If C=(A,E,⋅) is a Cartesian frame over W, we say Agent(C)=A, Env(C)=E, World(C)=W, and Eval(C)=⋅.

**Environment subset.** Given a Cartesian frame C=(A,E,⋅) over W, and a subset S of W, let ES denote the subset {e∈E | ∀a∈A,e⋅a∈S}.

**Cartesian frame image.** Image(C)={w∈W | ∃a∈A, ∃e∈E s.t. a⋅e=w}.

**Chu category.** Chu(W) is the category whose objects are Cartesian frames over W, whose morphisms from C=(A,E,⋅) to D=(B,F,⋆) are pairs of functions (g:A→B,h:F→E), such that a⋅h(f)=g(a)⋆f for all a∈A and f∈F, and whose composition of morphisms is given by (g1,h1)∘(g0,h0)=(g1∘g0,h0∘h1).

**Isomorphism.** A morphism (g,h):C→D is an *isomorphism* if both g and h are bijective. If there is an isomorphism between C and D, we say C≅D.

**Homotopic.** Two morphisms (g0,h0),(g1,h1):C→D with the same source and target are called *homotopic* if (g0,h1) is also a morphism.

**Homotopy equivalence / biextensional equivalence.** C is *homotopy equivalent* (or *biextensionally equivalent*) to D, written C≃D, if there exists a pair of morphisms ϕ:C→D and ψ:D→C such that ψ∘ϕ is homotopic to the identity on C and ϕ∘ψ is homotopic to the identity on D.

**Sub-sum.** Let C=(A,E,⋅), and let D=(B,F,⋆).
A sub-sum of C and D is a Cartesian frame of the form (A⊔B,X,⋄), where X⊆Env(C⊕D) and ⋄ is Eval(C⊕D) restricted to (A⊔B)×X, such that C≃(A,X,⋄C) and D≃(B,X,⋄D), where ⋄C is ⋄ restricted to A×X and ⋄D is ⋄ restricted to B×X. Let C⊞D denote the set of all sub-sums of C and D. **Sub-tensor.** Let C=(A,E,⋅), and let D=(B,F,⋆). A sub-tensor of C and D is a Cartesian frame of the form (A×B,X,∙), where X⊆Env(C⊗D) and ∙ is Eval(C⊗D) restricted to (A×B)×X, such that C≃(A,B×X,∙C) and D≃(B,A×X,∙D), where ∙C and ∙D are given by a∙C(b,x)=(a,b)∙x and b∙D(a,x)=(a,b)∙x. Let C⊠D denote the set of all sub-tensors of C and D.   4. Functors ----------- **Functions between worlds.** Given a Cartesian frame C=(A,E,⋅) over W, and a function f:W→V, let f∘(C) denote the Cartesian frame over V, f∘(C)=(A,E,⋆), where a⋆e=f(a⋅e). **Dual.** Let −∗:Chu(W)→Chu(W)op be the functor given by (A,E,⋅)∗=(E,A,⋆), where e⋆a=a⋅e, and (g,h)∗=(h,g). **Functor (from functions between worlds).** Given two sets W and and V, and a function p:W→V, let p∘:Chu(W)→Chu(V) denote the functor that sends the object (A,E,⋅)∈Chu(W) to the object (A,E,⋆)∈Chu(V), where a⋆e=p(a⋅e), and sends the morphism (g,h) to the morphism with the same underlying functions, (g,h). **Functor (from Cartesian frames).** Let C=(V,E,⋅) be a Cartesian frame over W, with Agent(C)=V. Then C∘:Chu(V)→Chu(W) is the functor that sends (B,F,⋆) to (B,F×E,⋄), where b⋄(f,e)=(b⋆f)⋅e, and sends the morphism (g,h) to (g,h′), where h′(f,e)=(h(f),e).   5. Subagents ------------ **Subagent (categorical definition).** Let C and D be Cartesian frames over W. We say that C is a subagent of D, written C◃D, if for every morphism ϕ:C→⊥ there exists a pair of morphisms ϕ0:C→D and ϕ1:D→⊥ such that ϕ=ϕ1∘ϕ0. **Subagent (currying definition).** Let C and D be Cartesian frames over W. We say that C◃D if there exists a Cartesian frame Z over Agent(D) such that C≃D∘(Z). **Subagent (covering definition).** Let C=(A,E,⋅) and D=(B,F,⋆) be Cartesian frames over W. We say that C◃D if for all e∈E, there exists an f∈F and a (g,h):C→D such that e=h(f). **Sub-environment.** We say C is a sub-environment of D, written C◃∗D, if D∗◃C∗.   **5.1. Additive and Multiplicative Subagents** **Additive subagent (sub-sum definition).**C is an additive subagent of D, written C◃+D, if there exists a C′ and a D′≃D with D′∈C⊞C′. **Additive subagent (brother definition).**C′ is called a brother to C in D if D≃D′ for some D′∈C⊞C′. We say C◃+D if C has a brother in D. **Additive subagent (committing definition).** Given Cartesian frames C and D over W, we say C◃+D if there exist three sets X, Y, and Z, with X⊆Y, and a function f:Y×Z→W such that C≃(X,Z,⋄) and D≃(Y,Z,∙), where ⋄ and ∙ are given by x⋄z=f(x,z) and y∙z=f(y,z). **Additive subagent (currying definition).** We say C◃+D if there exists a Cartesian frame M over Agent(D) with |Env(M)|=1, such that C≃D∘(M). **Additive subagent (categorical definition).** We say C◃+D if there exists a single morphism ϕ0:C→D such that for every morphism ϕ:C→⊥ there exists a morphism ϕ1:D→⊥ such that ϕ is homotopic to ϕ1∘ϕ0 . **Multiplicative subagent (sub-tensor definition).**C is a multiplicative subagent of D, written C◃×D, if there exists a C′ and D′≃D with D′∈C⊠C′. **Multiplicative subagent (sister definition).** C′ is called a sister to C in D if D≃D′ for some D′∈C⊠C′. We say C◃×D if C has a sister in D. 
**Multiplicative subagent (externalizing definition).** Given Cartesian frames C and D over W, we say C◃×D if there exist three sets X, Y, and Z, and a function f:X×Y×Z→W such that C≃(X,Y×Z,⋄) and D≃(X×Y,Z,∙), where ⋄ and ∙ are given by x⋄(y,z)=f(x,y,z) and (x,y)∙z=f(x,y,z). **Multiplicative subagent (currying definition).** We say C◃×D if there exists a Cartesian frame M over Agent(D) with Image(M)=Agent(D), such that C≃D∘(M). **Multiplicative subagent (categorical definition).** We say C◃×D if for every morphism ϕ:C→⊥, there exist morphisms ϕ0:C→D and ϕ1:D→⊥ such that ϕ≅ϕ1∘ϕ0, and for every morphism ψ:1→D, there exist morphisms ψ0:1→C and ψ1:C→D such that ψ≅ψ1∘ψ0. **Multiplicative subagent (sub-environment definition).** We say C◃×D if C◃D and C◃∗D. Equivalently, we say C◃×D if C◃D and D∗◃C∗. **Additive sub-environment.** We say C is an additive sub-environment of D, written C◃∗+D, if D∗◃+C∗. **Multiplicative sub-environment.** We say C is an multiplicative sub-environment of D, written C◃∗×D, if D∗◃×C∗.   **5.2. Ways to Construct Subagents, Sub-Environments, etc.** **Committing.** Given a set S⊆W and a frame C=(A,E,⋅) over W, we define CommitS(C)=CommitB(C) and Commit∖S(C)=Commit∖B(C), where B⊆A is given by B={a∈A | ∀e∈E,a⋅e∈S}. **Assuming.** Given a set S⊆W and a frame C=(A,E,⋅) over W, we define AssumeS(C)=AssumeF(C) and Assume∖S(C)=Assume∖F(C), where F⊆E is given by F={e∈E | ∀a∈A,a⋅e∈S}. **Externalizing.** Given a partition V of W, let v:W→V send each element w∈W to the part that contains it. Given a frame C=(A,E,⋅) over W, we define ExternalV(C)=ExternalB(C) and External/V(C)=External/B(C), where B={{a′∈A | ∀e∈E, v(a′⋅e)=v(a⋅e)} | a∈A}. **Internalizing.** Given a partition V of W, let v:W→V send each element w∈W to the part that contains it. Given a frame C=(A,E,⋅) over W, we define InternalV(C)=InternalF(C) and Internal/V(C)=Internal/F(C), where F={{e′∈E | ∀a∈a, v(a⋅e′)=v(a⋅e)} | e∈E}.   6. Controllables and Observables -------------------------------- **Ensurables (categorical definition).** Ensure(C) is the set of all S⊆W such that there exists a morphism ϕ:1S→C. **Preventables (categorical definition).**Prevent(C)is the set of all S⊆W such that there exists a morphism ϕ1:1W∖S→C. **Controllables (categorical definition).** Let 2S denote the Cartesian frame 1S⊕1W∖S. Ctrl(C) is the set of all S⊆W such that there exists a morphism ϕ:2S→C. **Observables (original categorical definition).**Obs(C) is the set of all S⊆W such that there exist C0 and C1 with Image(C0)⊆S and Image(C1)⊆W∖S such that C≃C0&C1. **Observables (definition from subsets).** We say that a finite partition V of W is observable in a frame C over W if for all parts Si∈V, Si∈Obs(C). We let Obs′(C) denote the set of all finite partitions of W that are observable in C. **Observables (conditional policies definition):** We say that a finite partition V of W is observable in a frame C=(A,E,⋅) over W if for all functions f:V→A, there exists an element af∈A such that for all e∈E, f(v(af⋅e))⋅e=af⋅e, where  v:W→V is the function that sends each element of W to its part in V. **Observables (non-constructive additive definition):** We say that a finite partition V={S1,…,Sn} of W is observable in a frame C over W if there exist frames C1,⋯Cn over W, with Ci◃⊥Si such that C≃C1&…&Cn. **Observables (constructive additive definition):** We say that a finite partition V={S1,…,Sn} of W is observable in a frame C over W if C≃AssumeS1(C)&…&AssumeSn(C). 
**Powerless outside of a subset:** Given a frame C=(A,E,⋅) over W and a subset S of W, we say that C's agent is powerless outside S if for all e∈E and all a0,a1∈A, if a0⋅e∉S, then a0⋅e=a1⋅e. **Observables (non-constructive multiplicative definition):** We say that a finite partition V={S1,…,Sn} of W is observable in a frame C over W if C≃C1⊗⋯⊗Cn, where each Ci's agent is powerless outside Si. **Observables (constructive multiplicative definition):** We say that a finite partition V={S1,…,Sn} of W is observable in a frame C over W if C≃C1⊗⋯⊗Cn, where Ci=AssumeSi(C)&1Ti, where Ti=(W∖Si)∩Image(C). **Observables (non-constructive internalizing-externalizing definition):** We say that a finite partition V of W is observable in a frame C=(A,E,⋅) over W if either A={} or C is biextensionally equivalent to something in the image of ExternalV∘InternalV. **Observables (constructive internalizing-externalizing definition):** We say that a finite partition V of W is observable in a frame C=(A,E,⋅) over W if either A={} or C≃ExternalV(InternalV(C)).
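Supplementary note (not from the original sequence): a minimal Python sketch of the basic objects above, with a finite Cartesian frame represented as explicit sets plus an evaluation function, and the sum (⊕) and product (&) operations from section 2. The representation and names are my own illustration, not code from the posts.

```python
from dataclasses import dataclass
from itertools import product as cartesian
from typing import Any, Callable, FrozenSet

@dataclass(frozen=True)
class Frame:
    """A finite Cartesian frame C = (A, E, ·) over some world W."""
    agent: FrozenSet
    env: FrozenSet
    ev: Callable[[Any, Any], Any]  # maps (a, e) to an element of W

def frame_sum(c: Frame, d: Frame) -> Frame:
    """C ⊕ D = (A ⊔ B, E × F, ⋄): the agent picks which frame it acts in."""
    agent = frozenset((0, a) for a in c.agent) | frozenset((1, b) for b in d.agent)
    env = frozenset(cartesian(c.env, d.env))
    def ev(tagged_agent, env_pair):
        tag, x = tagged_agent
        e, f = env_pair
        return c.ev(x, e) if tag == 0 else d.ev(x, f)
    return Frame(agent, env, ev)

def frame_product(c: Frame, d: Frame) -> Frame:
    """C & D = (A × B, E ⊔ F, ⋄): the environment picks which frame is live."""
    agent = frozenset(cartesian(c.agent, d.agent))
    env = frozenset((0, e) for e in c.env) | frozenset((1, f) for f in d.env)
    def ev(agent_pair, tagged_env):
        a, b = agent_pair
        tag, x = tagged_env
        return c.ev(a, x) if tag == 0 else d.ev(b, x)
    return Frame(agent, env, ev)

# Two tiny frames over W = {0, 1, 2, 3}, purely for illustration.
C = Frame(frozenset({"a0", "a1"}), frozenset({"e0"}),
          lambda a, e: 0 if a == "a0" else 1)
D = Frame(frozenset({"b0"}), frozenset({"f0", "f1"}),
          lambda b, f: 2 if f == "f0" else 3)

S = frame_sum(C, D)
P = frame_product(C, D)
print(len(S.agent), len(S.env))  # 3 agent options, 2 environments
print(len(P.agent), len(P.env))  # 2 agent options, 3 environments
```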
46f1bcfc-0602-48c2-9410-461a68d0502d
trentmkelly/LessWrong-43k
LessWrong
Investment idea: basket of tech stocks weighted towards AI I'm not an investment advisor and this isn't investment advice. Excerpted from an occasional newsletter I write about investing. Related: Engaging Seriously with Short Timelines, You Need More Money ---------------------------------------- Recent developments in AI (AlphaGo Zero, AlphaFold, GPT-3) make transformative AI seem plausible within the next 10-50 years. There's no fire alarm for AGI, so now is as good a time as any to have my portfolio reflect this belief. (A little embarrassing that I've waited so long to follow my conviction here! The joys of compartmentalized cognition...) If transformative AI follows the trend currently being set by OpenAI, where it's expensive to initially train an AI model and cheap to deploy it once trained, profits will most likely be captured by the companies that train the AIs & the value chain supporting these companies (OpenAI's API is an early version of this; see also Gwern's fantastic analysis of the strategies of various AI research groups). If training costs aren't the limiting factor, it's less clear that profits will be captured by the companies that own the models. This scenario feels higher variance & weirder... probably the big tech companies still benefit a lot (they're already set up to deliver services at scale and they can just buy smaller firms who might otherwise compete), and makers of GPUs & semiconductors probably still benefit as well. Peter McCluskey has a good rundown of sectors that could benefit. Happily these two scenarios are pretty aligned from a small-investor perspective – apart from OpenAI, a lot of AI development & the supporting value chain is housed within big tech companies, and even OpenAI runs on Microsoft compute. Finally (and more weirdly), I think now is a good time to invest in "shamanic" leaders (h/t Max for the term). It seems like a lot of society doesn't really know how to orient anymore, so when someone comes along with a clear + compelling vision, they can raise a lot of cap
1456a649-8fef-426d-a60a-67fab912c099
trentmkelly/LessWrong-43k
LessWrong
Freeloading? Selections from the category of "provider expects people in general will make it worth their while, and you're behaving in a way where you won't", roughly ordered from my impression of least to most acceptable:

* Taking free newspapers from distribution boxes to use in craft projects or for heating your house.
* Accepting a swag t-shirt to get cotton to use for papermaking.
* Going to a store without any intention of buying something, just to eat the free samples.
* Bringing outside food into a movie theater or amusement park.
* Interviewing somewhere you would definitely not want to work, just for the practice.
* Using paywall circumvention software.
* Independently decrypting cable or satellite TV.
* Using GNU Parallel without citing it.
* Running an ad blocker on your computer.
* Fast-forwarding through sponsored sections on YouTube or a podcast.
* Stopping to watch a street performer or listen to a busker without paying.
* Listening to NPR without contributing, when you could afford to.
* Accompanying friends to a restaurant and only ordering water.
* Buying PS3 consoles and building a supercomputer.
* Buying a cheap printer and using third-party ink.
* Fixing bugs in open source software without contributing your fixes back upstream.
* Using Wikipedia without fixing errors you find.
* Leaving an amusement park for lunch.
* Using a web browser but changing the default search engine.
* Reading ad supported stuff but never buying anything from the ads.

For example, Sony sold the PS3 below cost because they expected people would make up for it through paying higher prices for games, but someone buying thousands of them to build a supercomputer breaks their business model. Or, Firefox is funded by selling the right to be the default browser (in the US they switched from Google to Yahoo in 2014 and then back to Google in 2017) and if you choose your own default search engine you're slightly weakening Firefox's negotiating
59c7ff9d-c347-4809-8de3-72564a56ed81
trentmkelly/LessWrong-43k
LessWrong
Confused about the doomsday argument, please help The doomsday argument says I have only a 10% chance of being within the first 10% of humans ever born, which gives nonzero information about when humanity will end. The argument has some problems with the choice of reference class; my favorite formulation (invented by me, I'm not sure if it's well-known) is to use the recursive reference class of "all people who are considering the doomsday argument with regard to humanity". But this is not the issue I want to discuss right now. Imagine your prior says the universe can contain 10, 1000 or 1000000 humans, with probability arbitrarily assigned to these three options. Then you learn that you're the 50th human ever born. As far as I can understand, after receiving this information you're certain to be among the first 10% of humans ever born, because it's true in every possible universe where you receive such information. Also learning your index doesn't seem to tell you very much about the date of the doomsday: it doesn't change the relative probabilities of doomsday dates that are consistent with your existence. (This last sentence is true for any prior, not just the one I gave.) Is there something I'm missing?
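Supplementary note (my own illustration, not part of the original question): a concrete version of the update being discussed. Take a prior over the total number of humans N and condition on "I am the 50th human born" in two ways. Treating your birth rank as a uniform draw from 1..N (the doomsday-style, self-sampling reading) shifts probability toward smaller N; conditioning only on the observation being possible (N ≥ 50) leaves the relative probabilities of the remaining options unchanged, which is what the post is pointing at.

```python
# Illustrative prior over the total number of humans ever born (weights are arbitrary).
prior = {10: 0.2, 1000: 0.3, 1_000_000: 0.5}
index = 50  # "you learn that you're the 50th human ever born"

def normalize(dist):
    total = sum(dist.values())
    return {n: p / total for n, p in dist.items()}

# (1) Doomsday-style update: P(my rank = 50 | N) = 1/N for N >= 50, else 0.
rank_as_sample = normalize({n: p * (1.0 / n if n >= index else 0.0)
                            for n, p in prior.items()})

# (2) Consistency-only update: just rule out worlds with fewer than 50 humans.
consistency_only = normalize({n: (p if n >= index else 0.0)
                              for n, p in prior.items()})

print(rank_as_sample)    # strongly favors N = 1000 over N = 1,000,000
print(consistency_only)  # keeps the prior ratio 0.3 : 0.5 between the survivors
```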
2f9a58b9-d386-4b09-9330-1436eb98645e
trentmkelly/LessWrong-43k
LessWrong
The landscape of altruistic interventions Suppose you want to figure out what the best things to do are. One approach is to start by prioritizing high level causes: is it better broadly to work on developing world health, or on technological development? Then you can work your way downwards: is it better to work on treating infectious diseases or on preventative measures? Malaria or HIV? Direct bed-net distribution or political interventions? Which politician? Which tactic? Which day? This should work well if the landscape of interventions is kind of smooth – if the best interventions are found with the pretty excellent interventions, which are in larger categories with the great interventions, etc. This approach might work well for finding a person who really likes hockey for instance. The extreme hockey lovers will be found with the fairly enthusiastic hockey lovers, who will probably ultimately be in countries of hockey lovers. It should not on the other hand work very well for finding the reddest objects in your house – the most red thing is not likely to be in the room which has the most overall red. Which of these is more similar to finding good altruistic interventions? This method would work well for finding the reddest things in your house if the redness of things was influenced a lot by color of the lights, and you had very different colored lights throughout your house. Similarly, if most of the variation in value between different altruistic interventions comes from general characteristics of high level causes, we should expect this method to work better there. You might also expect it to work well if the important levels could be mixed and matched – if the best high level cause could be combined with the best generic method of pursuing a cause, and done with the best people. These things seem plausible to me in the case of altruistic interventions, but I’m not really sure. What do you think?
6327012c-5911-4ad7-88f9-88d9651081bb
trentmkelly/LessWrong-43k
LessWrong
Prioritizing Work I recently read a blog post that concluded with: > When I'm on my deathbed, I won't look back at my life and wish I had worked harder. I'll look back and wish I spent more time with the people I loved. Setting aside that some people don't have the economic breathing room to make this kind of tradeoff, what jumps out at me is the implication that you're not working on something important that you'll endorse in retrospect. I don't think the author is envisioning directly valuable work (reducing risk from international conflict, pandemics, or AI-supported totalitarianism; improving humanity's treatment of animals; fighting global poverty) or the undervalued less direct approach of earning money and donating it to enable others to work on pressing problems. Definitely spend time with your friends, family, and those you love. Don't work to the exclusion of everything else that matters in your life. But if your tens of thousands of hours at work aren't something you expect to look back on with pride, consider whether there's something else you could be doing professionally that you could feel good about.