53d2ba5d-85ff-49f0-91f1-36949c61ba6d
trentmkelly/LessWrong-43k
LessWrong
How do open AI models affect incentive to race? I see it said sometimes that open models contribute to AI race dynamics. My guess is that they don't, and if anything, reduce AI race dynamics. I will consider a simplified model that only takes into account the cost of training a model, not the cost to deploy it (which tends to be small relative to revenue anyway). Let f(x) map a training expense x to a "value per day per customer" of the trained model, under the assumption that the training makes efficient use of the cost. That is, a customer values using an AI model trained with x compute at $f(x) per day. I assume there are n identical customers here; of course, there are complexities where some customers value AI more than others, incentivizing price discrimination, but I'm abstracting this consideration out. (In general, variation in how much customers value a product will tend to increase consumer surplus while reducing revenue, as it makes it harder to charge customers just under the maximum amount they're willing to pay.) I'm also assuming there is only one company that trains closed models for profit. This assumption is flawed because there is competition between different companies that train closed models. However, perfect competition assumptions would tend to reduce the incentive to train models. Suppose two companies have closed models of equivalent expense x. They each want to charge slightly less than the minimum of f(x) and the competitor's price, per customer per day. If each competitor undercuts the other slightly, the price will approach 0. See the Traveler's Dilemma for a comparison. The reasons why this doesn't happen have to do with considerations like differences in models' performance on different tasks, e.g. some models are better for programming than others. If models are sufficiently specialized (allowing this sort of niche-monopolization), each specialized type of model can be modeled independently as a monopoly. So I'll analyze the case of a closed model monopoly, noting that translat
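A minimal numerical sketch of the toy model above may help. This is my own illustration in Python, not from the post: the concave form of f(x), the customer count, and the revenue horizon are all assumptions. It shows why monopoly pricing rewards training spend while mutual undercutting erases that reward:

```python
import math

def f(x: float) -> float:
    """Assumed value-per-customer-per-day of a model trained at expense x.
    The concave log form is an illustrative assumption, not from the post."""
    return 10.0 * math.log1p(x / 1e6)

n = 1_000_000  # identical customers (assumed)
days = 365     # revenue horizon in days (assumed)

def monopoly_profit(x: float) -> float:
    # A monopolist can charge each identical customer just under f(x) per day.
    return n * days * f(x) - x

def undercut_profit(x: float) -> float:
    # With an identical competitor and mutual undercutting, the price is
    # driven toward 0 (the Traveler's-Dilemma-style limit the post notes),
    # so the training expense is never recouped.
    return 0.0 - x

for x in (1e6, 1e8, 1e9):
    print(f"x=${x:,.0f}: monopoly ${monopoly_profit(x):,.0f}, "
          f"undercut duopoly ${undercut_profit(x):,.0f}")
```

Under the monopoly assumption, profit keeps growing with training expense roughly as long as n · days · f'(x) > 1, whereas under full undercutting every training dollar is a loss; this is the sense in which perfect competition reduces the incentive to train.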
71069049-5424-4aac-88fb-d9e3b2927ac0
trentmkelly/LessWrong-43k
LessWrong
Link: Gizmodo discusses SIAI, matches donations Gizmodo, a popular technology blog, posted this article about SIAI. It's partly tongue-in-cheek, but also apparently thinks well of the Singularity Institute, claiming they are "a research organization that's as forward-thinking as most Gizmodo readers (read: sci-fi nerds)." More importantly, they link to Philanthroper, where you can donate and see your donation be matched. File this under "cultural penetration of Singularity memes" and also as a chance to make your donation more effective. http://gizmodo.com/#!5787599/give-1-to-stop-terminators-seriously Edit: Better link to the above URL
dd8dc258-fd16-43ac-a4ea-bda5dc0d5fde
trentmkelly/LessWrong-43k
LessWrong
Meetup : Less Wrong Ottawa Discussion article for the meetup : Less Wrong Ottawa WHEN: 07 July 2011 07:00:07PM (-0400) WHERE: 283 Elgin Street, Ottawa, ON Come and meet fellow truth-seekers on the second floor of the Fox and Feather. We'll be in back room #1 if it's free. Recently we've decided to try discussing one LW post each week. This week's post is reasons for being rational. Discussion article for the meetup : Less Wrong Ottawa
29936238-b6de-404c-878a-fbb4a71a1d8d
trentmkelly/LessWrong-43k
LessWrong
The case for lifelogging as life extension Those in the cryonics community want to be frozen upon legal death, in order to preserve the information content in their brain. The hope is that, given a good protocol, damage incurred during the freezing process will not destroy enough information about you to prevent people in the future from reconstructing your identity. As most who want cryonics will understand, death is not an event. Instead, it is a process with intermediate steps. We consider a long-decayed corpse to be dead because it no longer performs the functions associated with a normal living human being, not because any sort of spirit or soul has left the body. But philosophers have also identified important dilemmas for the view that death is a process rather than an event. If what we call death is simply my body performing different functions, then what do we make of the fact that we also change so much simply due to the passage of time? I find it easy to believe that I am the 'same person' as I was last night. Enough of the neural pathways are still the same. Memories from my childhood are essentially still identical. My personality has not changed to any significant extent. My values and beliefs remain more-or-less intact. But every day brings small changes to our identity. To what extent would you say that you are still the 'same person' as you were when you were a child? And to what extent are you still going to be the 'same person' when you get old? In addition to the gradual changes that happen due to everyday metabolic processes, and interactions with the outside world, there is also a more sudden change that may happen to your identity as you get old. By the age of 85, something like 25 to 50 percent of the population will get a form of dementia. Alzheimer's is a very harsh transformation to our connectome. Ironically, those who are healthiest in their youth will have the highest chance of getting Alzheimer's, as it is typically a disease of the very old, rather than the somewhat old. Furt
188f2b16-87cf-4f9e-a46a-c1d8f8e9c6e0
trentmkelly/LessWrong-43k
LessWrong
Announcing Trajectory Labs - A Toronto AI Safety Office TL;DR * We're announcing Trajectory Labs, an AI safety coworking and events space in downtown Toronto. * We support local AI safety efforts through three core programs: * Workspace - a shared office space for ~30 AI safety practitioners; * Events - more-than-weekly meetups, presentations, and hackathons; * Peer Network - active mentorship and networking support; * If you're based in or visiting Toronto and are engaged in AI safety or related work, join us at an event, apply for office access here, or reach out to us at contact@trajectorylabs.org Overview We're excited to introduce Trajectory Labs, a nonprofit based in downtown Toronto. We are funded by Open Philanthropy, and have been operating since January 1st, 2025. Trajectory Labs' mission is to catalyze Toronto's role in steering AI progress toward a future of human flourishing. We believe Toronto has outsized untapped potential to contribute to humanity's successful navigation of advanced AI, and the various factors keeping this potential unrealized are all tractable issues that a dedicated community can tackle. Core Programs We support this work through three core programs: 1. Workspace: We operate a coworking space in downtown Toronto that serves as a physical hub for AI safety and related researchers, professionals, and volunteers. Over 30 practitioners enjoy 24/7 access to our fully furnished office, complete with standard office amenities, twice-weekly community lunches, and more. Membership is currently free for those admitted. 2. Events: We host multiple private and public events each week, from office lunches to casual meetups to formal presentations and weekend hackathons. These events serve as both a pathway for talented individuals into AI safety work and an educational resource for existing members, while creating regular gathering points for our community to connect. 3. Peer Network: We cultivate a community of talented, motivated peers who actively support each other's gro
6205e081-758f-4a5d-bca7-0b69f09da9bc
trentmkelly/LessWrong-43k
LessWrong
Intercellular competition and the inevitability of multicellular aging
ca1e4c49-3fbe-435c-88f0-e286b02b56d5
trentmkelly/LessWrong-43k
LessWrong
Harper’s Fishing Nets: a review of Plato’s Camera by Paul Churchland Harper’s Fishing Nets: a review of Plato’s Camera by Paul Churchland Eli Sennesh July 1, 2015 Abstract Paul Churchland published Plato’s Camera to defend the thesis that abstract objects and properties are both real and natural, consisting in learned mental representations of the timeless, abstract features of the mind’s environment. He holds that the brain learns, without supervision, high-dimensional maps of objective feature domains – which he calls Domain-Portrayal Semantics. He further elaborates that homomorphisms between these high-dimensional maps allow the brain to occasionally repurpose a higher-quality map to understand a completely different domain, reducing the latter to the former. He finally adds a Map-Portrayal Semantics of language to his Domain-Portrayal Semantics of thought by considering the linguistic, cultural, and educational dimensions of human learning. Part I Introduction Surely the title of this review already sounds like some terrible joke is about to be perpetrated, but in fact it merely indicates a philosophical difference between myself and Paul Churchland. Churchland wrote Plato’s Camera[3] not merely to explain a view on philosophy of mind to laypeople and other philosophers, but with the specific goal of defending Platonism about abstract, universal properties and objects (such as those used in mathematics) by naturalizing it. The contrast between such naturalist philosophers as Churchland, Dennett, Flanagan, and Railton and non-naturalist or weakly naturalist philosophy lies precisely in this fact: the latter consider many abstract or intuitive concepts to necessarily form their own part of reality, amenable strictly to philosophical investigation, while the former seek and demand a consilience of causal explanation for what’s going on in our lives. The results are a breath of fresh air to read. A great benefit of reading strongly naturalistic philosophy and philosophers is that, over the effort of researching a philosophical po
243b8f11-b2bd-464a-b0d2-1e550d7055c7
trentmkelly/LessWrong-43k
LessWrong
Law-Following AI 2: Intent Alignment + Superintelligence → Lawless AI (By Default) This post is written in my personal capacity, and does not necessarily represent the views of OpenAI or any other organization. Cross-posted to the Effective Altruism Forum. In the first post of this sequence, I defined "law-following AI" ("LFAI") and "intent alignment." In this post, I will begin to motivate the case for working on LFAI by showing that an AI agent A that is only intent-aligned (i.e., without additional law-following constraints directly on the agents) would in many cases break the law to advance its human principal H's interests. If you agree that it is generally good for agents to be bound by law, then this should be a pro tanto reason to support work on LFAI in addition to intent alignment. If you need more convincing that LFAI would be generally good, the next few posts will explore some more specific motivations for LFAI. Image by OpenAI's DALL·E Sufficiently Intelligent Agents Can Evade Detection and Attribution I suspect this point will need little argumentation for many of my readers. Evasion and obfuscation seem likely to be a generally promising means of ensuring self-preservation (because detection would increase the probability of punishment and constraint), which in turn is plausibly a convergent instrumental goal of optimizing AI systems.[1] For example, to evade detection and attribution, A could: * Behave innocently only when being observed; * Orchestrate complex malevolent actions through a coordinated series of small, inscrutable or innocent-seeming steps; * Surreptitiously create and run subagents or copies of itself on new hardware, potentially very far removed from its original hardware; * Convince humans or other systems to take actions on its behalf, without revealing its true identity; and/or * Delete logs of its actions. A sufficiently intelligent AI agent could likely consistently fool humans using these and other (unforeseen) techniques. Other Ways to Circumvent Law Even in the best case scenario, where th
16ddb6ec-c13e-4618-b58e-88179393a88e
trentmkelly/LessWrong-43k
LessWrong
Sam Altman's sister claims Sam sexually abused her -- Part 6: Timeline, continued Previous posts (which you should read first) This post is the 6th post in a series of 11 posts about the claims of Sam Altman's sister, Annie Altman. Annie has claimed that Sam sexually abused her for about 9 years as a child, and that she experienced further (non-sexual) abuse from Sam, her brothers, and her mother after that. The 11 posts are meant to be read in order.  So, if you haven't read the first 5 posts, please read them, in order, before you read this post: * Sam Altman's sister claims Sam sexually abused her -- Part 1: Introduction, outline, author's notes * Sam Altman's sister claims Sam sexually abused her -- Part 2: Annie's lawsuit; the response from Sam, his brothers, and his mother; Timeline * Sam Altman's sister claims Sam sexually abused her -- Part 3: Timeline, continued * Sam Altman's sister claims Sam sexually abused her -- Part 4: Timeline, continued * Sam Altman's sister claims Sam sexually abused her -- Part 5: Timeline, continued ---------------------------------------- Timeline, continued November 10, 2019 -- Annie publishes Why I believe that everything happens for a reason on her YouTube channel.  ⬇️ See dropdown section ⬇️   * "The ten-second version of this video is that I believe that everything happens for a reason, because life is way more fun when I believe that. What I know is that I don't get to know whether or not things happen for a reason, in life. So, what I'm left with is what I get to choose to believe. And, I've been the person to extremely choose to say 'everything is all science', to be really spiritual about science, and to say that 'nothing can happen for some grander purpose or reason, 'cause that's hippy bullshit.' And I've been the super hippy person to say 'everything's happening for a reason all the time, and every set of numbers, and every single thing that I see has so much meaning that I need to unpack and understand -- to really connect with my human-ing.' And those are both extremes,
1c41f5f8-4bc0-4fc2-bade-c37ef743ccf5
trentmkelly/LessWrong-43k
LessWrong
The wisdom of the Crowd I'm looking for people to be part of my Facebook group named The Wisdom of the Crowd, to help with the initial structure building of the group. Developers would be awesome, but if you would like to just be part of the experiment you're welcome too. If there are any of you who are familiar with the theory of the wisdom of the crowd, and may have insights on how to set up the parameters to conduct some social experiments, well, some of those people would be awesome too. If interested, let me know. I'm not sure if I should put a link on here.
744bf292-7cd8-41e0-9a74-fa325ef38526
trentmkelly/LessWrong-43k
LessWrong
Moral AI: Options Epistemic status: One part quotes (informative, accurate), one part speculation (not so accurate). One avenue towards AI safety is the construction of "moral AI" that is good at solving the problem of human preferences and values. Five FLI grants have recently been funded that pursue different lines of research on this problem. The projects, in alphabetical order: * Vincent Conitzer: > Most contemporary AI systems base their decisions solely on consequences, whereas humans also consider other morally relevant factors, including rights (such as privacy), roles (such as in families), past actions (such as promises), motives and intentions, and so on. Our goal is to build these additional morally relevant features into an AI system. We will identify morally relevant features by reviewing theories in moral philosophy, conducting surveys in moral psychology, and using machine learning to locate factors that affect human moral judgments. We will use and extend game theory and social choice theory to determine how to make these features more precise, how to weigh conflicting features against each other, and how to build these features into an AI system. We hope that eventually this work will lead to highly advanced AI systems that are capable of making moral judgments and acting on them. Techniques: Top-down design, game theory, moral philosophy * Owain Evans: > Previous work in economics and AI has developed mathematical models of preferences, along with algorithms for inferring preferences from observed actions. [Citation of inverse reinforcement learning] We would like to use such algorithms to enable AI systems to learn human preferences from observed actions. However, these algorithms typically assume that agents take actions that maximize expected utility given their preferences. This assumption of optimality is false for humans in real-world domains. Optimal sequential planning is intractable in complex environments and humans perform very rough approximati
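To make the Evans project's point above concrete, here is a toy Python sketch (my own illustration, not the grant's actual algorithm) of inferring a preference parameter from observed actions under a Boltzmann noisy-rationality model rather than strict expected-utility maximization. The three options, their features, and the rationality constant beta are all invented for the example:

```python
import math
import random

# Toy setup: an agent repeatedly chooses among three options whose utility
# depends on an unknown preference weight w in [0, 1].
options = [(1.0, 0.0), (0.0, 1.0), (0.5, 0.5)]  # (feature_a, feature_b)

def utility(opt, w):
    a, b = opt
    return w * a + (1 - w) * b

def choice_loglik(choice, w, beta=3.0):
    # Boltzmann (noisy-rational) choice model: better options are chosen
    # more often, but not always -- relaxing the strict optimality assumption.
    scores = [beta * utility(o, w) for o in options]
    log_z = math.log(sum(math.exp(s) for s in scores))
    return scores[choice] - log_z

random.seed(0)
true_w = 0.8
observed = []
for _ in range(50):  # simulate 50 noisy-rational choices (same beta = 3.0)
    weights = [math.exp(3.0 * utility(o, true_w)) for o in options]
    observed.append(random.choices(range(len(options)), weights=weights)[0])

# Maximum-likelihood estimate of w over a coarse grid.
grid = [i / 100 for i in range(101)]
best_w = max(grid, key=lambda w: sum(choice_loglik(c, w) for c in observed))
print(f"true w = {true_w}, inferred w = {best_w:.2f}")
```

Replacing the softmax with a hard argmax recovers the standard optimality assumption the quoted passage criticizes; the point of the noisy model is that inference still works, approximately, when the agent only tends toward better options.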
ca5aff5b-e521-4015-9132-fbefaee8afc2
trentmkelly/LessWrong-43k
LessWrong
Measuring Coherence of Policies in Toy Environments This post was produced as part of the Astra Fellowship under the Winter 2024 Cohort, mentored by Richard Ngo. Thanks to Martín Soto, Jeremy Gillen, Daniel Kokotajlo, and Lukas Berglund for feedback. Summary Discussions around the likelihood and threat models of AI existential risk (x-risk) often hinge on some informal concept of a “coherent”, goal-directed AGI in the future maximizing some utility function unaligned with human values. Whether and how coherence may develop in future AI systems, especially in the era of LLMs, has been a subject of considerable debate. In this post, we provide a preliminary mathematical definition of the coherence of a policy as how likely it is to have been produced by uniform reward sampling (URS), that is, uniformly sampling a reward function and then sampling from the set of policies optimal for that reward function, versus uniform policy sampling (UPS). We provide extensions of the model for sub-optimality and for “simple” reward functions via uniform sparsity sampling (USS). We then build a classifier for the coherence of policies in small deterministic MDPs, and find that properties of the MDP and policy, like the number of self-loops that the policy takes, are predictive of coherence when used as features for the classifier. Moreover, coherent policies tend to preserve optionality, navigate toward high-reward areas of the MDP, and have other “agentic” properties. We hope that our metric can be iterated upon to achieve better definitions of coherence and a better understanding of what properties dangerous AIs will have. Introduction Much of the current discussion about AI x-risk centers around “agentic”, goal-directed AIs having misaligned goals. For instance, one of the most dangerous possibilities being discussed is of mesa-optimizers developing within superhuman models, leading to scheming behavior and deceptive alignment. A significant proportion of current alignment work focuses on detecting, analyzing (e.g. via analogous cas
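A minimal sketch of the URS-versus-UPS comparison described in the summary above, in a tiny deterministic MDP. This is my own illustration: the transition table, discount factor, and the odds-style coherence score are assumptions for the sketch, not the post's exact construction:

```python
import itertools
import random

# Tiny deterministic MDP: 4 states, 2 actions; T[s][a] is the next state.
T = [[1, 2], [3, 0], [0, 3], [3, 1]]  # assumed toy dynamics
S, A, GAMMA = 4, 2, 0.9

def optimal_policy(R):
    # Value iteration, then greedy policy extraction.
    V = [0.0] * S
    for _ in range(100):
        V = [max(R[s][a] + GAMMA * V[T[s][a]] for a in range(A)) for s in range(S)]
    return tuple(max(range(A), key=lambda a: R[s][a] + GAMMA * V[T[s][a]])
                 for s in range(S))

random.seed(0)
N = 10_000
counts = {}
for _ in range(N):  # uniform reward sampling (URS)
    R = [[random.random() for _ in range(A)] for _ in range(S)]
    pi = optimal_policy(R)
    counts[pi] = counts.get(pi, 0) + 1

p_ups = 1 / (A ** S)  # uniform policy sampling (UPS): every policy equally likely
for pi in itertools.product(range(A), repeat=S):
    p_urs = counts.get(pi, 0) / N
    # One possible coherence score: how much more likely URS is than UPS
    # to have produced this policy.
    print(pi, f"coherence ~ {p_urs / (p_urs + p_ups):.2f}")
```

In this toy version, policies that are optimal for many sampled reward functions score near 1, while policies that no reward function makes optimal score 0, which matches the intuition that coherent policies look like they are pursuing some reward.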
454ae8ef-554e-4098-be00-f1b543149064
StampyAI/alignment-research-dataset/special_docs
Other
Diminishing Returns and Recursive Self-Improving Artificial Intelligence

Chapter 7
Andrew Majot and Roman Yampolskiy
Department of Computer Engineering and Computer Science, University of Louisville, Louisville, USA

7.1 Introduction

The concept of a recursively self-improving intelligence has been around for quite a while (Good 1965). At its core is the belief that the creation of a human or superhuman intelligence level is far over the horizon of current science, but a more modest and simple intelligence could be possible within a reasonable amount of time. This reasonable timeframe is unfortunately one of the best examples of a moving target that computer science has ever produced. Initial estimates in the 1960s were that this kind of intelligence, or for that matter any kind of intelligence, might be possible by 1980. When this proved not to be the case the estimate was moved again and again. Some of the current estimates have taken lessons from previous attempts at forecasting AI developments and put the estimate at any time between 20 and 100 years (Sotala and Yampolskiy 2013). The main reason for this constantly shifting estimate is our continued floundering in the creation of any remotely workable models for cognition. We can describe overall cognitive functioning in some detail, but have no real concept behind things as basic as what constitutes an artificial thought or how to code a machine with common sense. These basic roadblocks to intelligence are open problems whose solutions we don't even have an inkling of yet.

The idea of a "Seed AI" capable of recursive self-improvement was created to get around these bottlenecks. It seems inherently more difficult to create a super-intelligent AI from the start than it is to let a less intelligent one figure out how to improve itself. The human intellect is currently barely able to begin comprehending itself, and the thought of it being able to generate something far more intelligent than itself is a bit far-fetched. So instead of solving every single problem related to creating super-intelligence, why don't we have the AI itself do it? That is the main proposal behind Seed AI. Humans would only need to produce one AI capable of a modicum of cognition, programmed with the drive to improve itself, and capable of some degree of original thought and imagination. Over time this AI would examine its own source code, picking up various algorithms it has and analyzing them to look for weaknesses. Once identified, these weaknesses would be improved upon and the new algorithm tested, with the AI modeling itself and its new potential. If the algorithm works and produces improvement, the AI would edit its source code to include the upgraded component. This would be done ad infinitum until a limit was reached, or the AI chooses to stop improving itself and instead embark on a new quest. We will go over the details of how this recursive improvement would work later in the chapter, and show that some of the methods used for this process have some possible limitations. Admittedly, this whole concept is consigned to the semi-distant future, so why do we need to worry ourselves over it now?
There are many people who would argue that our time and resources are much better spent on other problems (McDermott 2012). Others think that once this type of intelligence is created, it would have the ability to solve all of the most pressing and critical-to-survival problems that we currently face (Chalmers 2010; Yudkowsky 2007). Such an intelligence would have a wholly unique view on any problems we face. It would probably not think like a human, have the biases inherent in any human culture, or have the same motivations as a human. This alone makes an artificial intelligence an amazing asset capable of independent analysis and thought. Even if it doesn't solve all of our problems overnight, such a being should be valued as an ultimate way to gain inhuman perspectives on human problems. Who knows what kind of insights a circuit-based life form can provide to us carbon-based life forms who struggle daily just to play nicely with one another.

The ideal and most helpful example of an artificial intelligence would be as general as possible in order to mimic the capacities of the human brain. Therefore this AI falls under the Artificial General Intelligence (AGI) category (Yampolskiy and Fox 2012; Goertzel and Pitt 2012). One could make any type of intelligence you wished: chess playing, weather forecasting, disease curing, bomb designing, genocide planning, or election rigging. But the most useful kind of intelligence would likely be of the AGI category, since it would be able to draw upon any variant of knowledge or deduction to solve problems. This general-purpose thinking AI probably wouldn't have many of the shortcomings that human thinkers have. It could remain focused on its goal, have no need of sleep, and, depending on how it is set up, would have access to the bulk of human knowledge if allowed access to the internet. An internet hookup would certainly be a risky move, however. It may be best to ensure a somewhat guided approach to the AI's knowledge acquisition, as there have been recent examples where unintended consequences result from bulk information ingestion. One of the more recent, and humorous, examples can be found from IBM's Watson supercomputer. In order to get Watson more comfortable with the nuances in human language, its programmers crawled the site urbandictionary.com. This had the unintended side effect of making Watson curse like a sailor because it could not tell the difference between polite conversation and crude jokes (Lewis 2013).

The main fear with a recursively improving AGI is that its strengths could exploit our own weaknesses. That is to say that it would quickly outpace us in brainpower while also not having a moral code to inhibit its actions and motivations. These actions and motivations might not necessarily have humanity's best interests in mind. We fear that the AGI's recursive growth could be exponential and within months of being turned on it could have control of the planet (Yampolskiy 2015b; Sotala and Yampolskiy 2013; Omohundro 2008; Turney 1991; Goertzel and Pitt 2012; Armstrong et al. 2012). Omohundro specifically details what could drive an AI towards self-improvement and why it would want to do so. But what if we are looking for solutions to a problem that wouldn't actually exist? Is it really possible for an AI to improve upon itself so drastically that our intelligence would be to it like ants are to us?
To summarize: we show that this scenario is not likely, at least under current mathematical assumptions coupled with a dash of common sense. Yes, the AI could improve upon itself, creating novel algorithms and raising its Machine Intelligence Quotient (MIQ) quite a bit, but it would still be bound by inherent limits of algorithm complexity. The law of diminishing marginal returns would probably take effect, causing greater and greater computational resource usage for smaller and smaller performance gains. Any AI with a goal will need to work towards this goal, limiting the time available for self-improvement, since after a certain point spending time on improvement would become prohibitive to completing its goal. The logistical chain for creating computers, from initial mineral mining and refining to design and construction, naturally allows for opportunities to limit an AI bent on runaway hardware improvement.

7.2 Self-improvement

The concept of recursive self-improvement was first mentioned in 1965 by Irving John Good. He postulated that as soon as a reasonably intelligent machine was created, it would go about improving upon itself. It would then inhabit this new improvement, and use the increase in efficiency and ability to create an even better version of itself. This would continually take place until the machine's intelligence would be completely inconceivable to us lowly humans. The machine would be able to solve nearly any current problem bothering mankind, and so would be the "last invention that man need ever make" (Good 1965). What might be the path for this AI to improve upon itself? In order to understand how an AI like this may be developed, it might be a good idea to take a look at some potential methods for its initial creation, and how it could go about improving itself.

7.2.1 Evolutionary Algorithms

One example of self-improving algorithms is found in evolutionary algorithms. In these algorithms the root source code stays the same, but any parameters the code is given can be changed, thus improving performance. These algorithms mimic the natural process of evolution within code by generating a population of algorithms or problem solutions, mutating and "breeding" them, and only allowing the fittest solutions to pass on to the next generation. This idea has been around for a while, and has successfully solved many difficult problems that may have taken much longer to solve otherwise (Yampolskiy 2015a; Eiben and Smith 2003). The components of most evolutionary algorithms are as follows:

* Creation of an initial population of algorithms or solutions
* Combinatorial/breeding function
* Mutation function
* Selection (fitness test)

While certainly not the only means of producing new, better iterations of intelligence, an evolutionary implementation would seem to make the most sense for an AI to better itself. The AI would initially create several candidate improvement algorithms and combine/mutate them according to need. Piecing together what we have learned so far, the determination of the MIQ score for a given self-improving AI would probably be the fitness test in this evolutionary setup.
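As a rough illustration of the four components just listed, here is a minimal evolutionary loop in Python; the numeric fitness function is a stand-in for an MIQ test, and the target vector, population size, and mutation parameters are illustrative assumptions, not anything specified in the chapter:

```python
import random

random.seed(0)
TARGET = [0.25, 0.5, 0.75]  # assumed optimum the population should discover

def fitness(candidate):
    # Stand-in "MIQ test": higher is better (closer to the target vector).
    return -sum((c - t) ** 2 for c, t in zip(candidate, TARGET))

def breed(a, b):
    # Combinatorial/breeding function: uniform crossover of two parents.
    return [random.choice(pair) for pair in zip(a, b)]

def mutate(c, rate=0.2):
    # Mutation function: occasional small Gaussian perturbations.
    return [x + random.gauss(0, 0.05) if random.random() < rate else x for x in c]

# Creation of the initial population of candidate solutions.
population = [[random.random() for _ in range(3)] for _ in range(30)]

for generation in range(50):
    # Selection (fitness test): keep the top half of the population.
    population.sort(key=fitness, reverse=True)
    survivors = population[:15]
    # Refill by breeding random pairs of survivors and mutating the offspring.
    children = [mutate(breed(random.choice(survivors), random.choice(survivors)))
                for _ in range(15)]
    population = survivors + children

best = max(population, key=fitness)
print("best candidate:", [round(x, 3) for x in best])
```

Each generation applies exactly the selection/breeding/mutation split described above; in the chapter's scenario, the fitness test would instead be a far more expensive simulated MIQ evaluation.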
But in order to fully test the MIQ score for an AI, it would need to be able to simulate the various algorithms in its candidate population. This would require extensive resources to be able to run a full or even a limited simulation and, unless the AI has a plethora of resources, would need to be carefully scheduled with existing goals and duties so that they are not adversely affected. Once the simulation is loaded an MIQ test can take place, and the rendered score could be used for the evolutionary selection and fitness test. Only the top results would be allowed to move to the next generation, where they would be cross-bred and further mutated for evolutionary simulation. Perhaps if the simulations provided guaranteed improvement the new algorithm(s) would be loaded as the default cognitive algorithms for the AI. This would allow the next generation of candidates to be created faster, as the AI would run smarter and/or more efficiently. Another key part of this process would be the mutation algorithm. If the AI could guide mutation away from obviously dead ends and prune the possible evolutionary tree it would speed up generational improvement. Theoretically, over time this whole process would create intelligences much more capable than the original.

7.2.2 Learning Algorithms

Other self-improving algorithms could be powered by a learning-based algorithm. There has already been some research for self-improving and self-optimizing algorithms using this approach (Ailon et al. 2011; Lin 1992). In basic machine learning algorithms the agent is given a base learning algorithm and a number of training data sets. Along with the data sets, expected processed results are also provided so that the agent can learn to associate certain types of input with certain outputs. These training data sets allow the agent to fine-tune itself to a point that it can handle new data sets and reliably provide accurate results. The underlying algorithms behind this learning process can be artificial neural nets, association rule learning, inductive logic programming, support vector machines, Bayesian networks, representation learning, and several other methods (Mitchell 1997). Each of these methods has its pros and cons, and would need to be weighed accordingly. Ideally the best methods would be unsupervised learning, since handholding an AI may be slow going for initial improvement.

It would seem reasonable for an AI to have the ability to improve by learning for each of its algorithms responsible for cognition. If each function has the ability to learn and improve its own performance, then all of their combined improvements would theoretically increase the MIQ of the AI. But that may just increase performance; it might not provide the imaginative leaps needed for sustained improvement over time. To compensate for this, a function separate from the base AI intellect may be called for. This function could analyze cognitive algorithms, their performance, and their effectiveness and attempt to generate new versions. These new versions could then be simulated and the results benchmarked, with the function learning from its mistakes and successes to create better and better versions of cognitive algorithms. Eventually this approach may lead to the leaps and breakthroughs needed to avoid premature, stagnant dead ends.

7.3 Limits of Recursively Improving Intelligent Algorithms

Most of the fears behind constantly improving AIs revolve around the assumption that they will get super intelligent rather quickly.
This also assumes that the algorithms employed will be so complex and daunting that humans would not be able to understand their structure or purpose, creating a sort of black-box AI. This takes for granted a couple of important things in order for this scenario to be plausible. First, humanity is not the ultimate form of intelligence, and there exist possible intelligences that are better than ours. Here the authors do not disagree with this notion; every time you turn on the television and watch the news there seems ample evidence that human intelligence can be improved upon. The second assumption is that intelligence can be infinitely improved upon, or improved upon in such a way as to have a runaway improvement curve that humanity would be hard pressed to follow. Here we do have an issue with the assumption, which we will detail along with our reasoning. First, though, we need to think about what kinds of improvements an AI can look for to boost its MIQ. Broadly speaking these improvements can fall under two categories: software and hardware.

7.3.1 Software Improvements

According to a number of previously referenced definitions, intelligence can be described and measured as several discrete components acting together to form an intelligent being. Examples include components that handle sensory input like visual or auditory cues, ones that can perform counting or mathematical functions, and others that can make decisions based on input, previous experiences, and available data. In nature these separate functions can be described in the various regions of the human and animal brain. Taken individually, each of these regions can perform specific functions that contribute to overall intellect. Taken together, they create something that is much larger and more functional than just the simple sum of its parts. The regions and discrete functions within the brain interact in complex and currently not well understood ways. This is evidenced by many of the most successful intelligence tests that query the individual test subject in a variety of separate categories. Spatial and verbal reasoning, logical deduction, and mathematical prowess are all measured in standard IQ tests. Each of these has a region of the brain associated with it (Jung and Haier 2007). It would make sense for an AI to behave in the same manner, with discrete algorithms for individual functions that, when combined, form intelligence much greater than the simple sum of its parts.

Since the only current examples of a working intelligence to be found are ourselves, and to a lesser extent our animal relatives, it makes sense to start the quest for AI by mimicking the intellectual structures found in intelligent creatures. Some have proposed doing just that (Yudkowsky 2007). Our seed AI would be loaded with a full complement of individual algorithms whose full source code is available to itself, and it would then be tasked with improving itself. Our initial attempts at creating this base-level AI would tie these basic algorithms together into a rough semblance of cooperation. This would be just enough intelligence to provide a functional cohesion capable of limited intellect. Then the AI would churn through computational cycles modeling improvements for each of its algorithms, along with improvements on how each of these algorithms would interact.
After an indeterminate amount of time it would come up with a better version of itself and then install that new software and run using that upgrade, thus improving a sector of its intelligence by a fixed amount. This would be repeated over and over, until the AI reaches and surpasses our level of intelligence. But here we find ourselves with a problem. Using this collection-of-algorithms approach we would eventually run out of improvements to find for each individual algorithm. Algorithms would be improved to such a point that they would become provably the best at what they do (Legg 2008; McDermott 2012). From a software perspective there would only be a couple of options: hunt for better combinations/interactions of algorithms, or create novel, additional algorithms that would expand functionality. The combination of both does have some interesting possibilities, with potentially near-infinite permutations. But what would this actually look like? The AI would gain talents and abilities, but would it really be any more intelligent? For example, let us say that it were to formulate and then improve to maximum efficiency an algorithm capable of modeling net-energy-gain nuclear fusion in a tokamak reactor. This would allow the AI to probe new insights into nuclear fusion, potentially solving a very large energy crisis in the bargain. This could be an incredible accomplishment, and one which has been occupying the lives of many a graduate student for years (Braams and Stott 2002). Now we have to ask ourselves, does this additional algorithm (one that may be executed individually on another system to produce the same sort of result) constitute an increase in "smartness" or intelligence of the AI? Or would it just constitute an increase in ability, adding new tricks up its virtual sleeves from a topological perspective? Sure, the AI can now simulate complex nuclear fusion reactions and tokamak housing stresses, but so could a large, dumb supercomputer. It doesn't necessarily make it "smarter" than a human. Talented humans who are able to use abstracted, complex tools are not always smart humans. This is a perfect example of how difficult it will be to measure MIQ within an AI.

Utilizing a large number of discrete algorithms and various structures might not be the only route to intelligence. Eventually we may actually find an algorithm that composes many aspects of intelligence by wrapping all of the functionality into one neat package. Perhaps a mutating, learning, general-purpose algorithm can be found that unfolds into intelligence over time. We are a little skeptical that this is the likely path to intelligence, but if it does happen then perhaps it would be in a form friendly to recursive self-improvement. Intuitively this would seem to be a bit more difficult than the collection-of-algorithms approach, since it would be a more all-or-nothing sort of improvement. For instance, with many different algorithms to choose from for improvement in the previously mentioned architecture, some results would be more immediate. If the AI were to improve upon speech synthesis alone it would see benefits in at least that area. In this single-algorithm scenario it might be a trifle more difficult, as many aspects would need to be concurrently modeled, tested, and improved simultaneously. But we are talking about an AI here, one with a lot of time on its virtual hands and a built-in drive to succeed, so it could still be possible.
Perhaps the quantum leaps needed to improve upon a single intellectual algorithm could come from the experiences the AI has while it interacts with humans. While the AI tries to figure out its own path of improvement it can interact with humans to gain more experiences in the real world. These experiences could translate into lessons applicable to its own improvement, kindling the AI's imagination to produce novel leaps in its intellect. This learning process would give us the chance to mold our creation and better guide it to its full potential. Conversely, this process would allow us to handicap our AI and tightly control its growth. If we want to be careful about what we show the AI then we simply limit what external stimuli it has access to. This would be similar to limiting the curriculum of a child growing up in school. If you are worried about a particular child growing up to design the next masterpiece of military weaponry capable of leveling cities, you can prevent this by never teaching that child the concepts of math or the written word. The moral legitimacy of that action may lie in the grayest of areas, but it certainly would prevent any super-weapons from being built. It would also prevent the child from developing many other, possibly more beneficial technologies.

In any case, AIs powered by multiple algorithms or a single algorithm would come up against an efficiency limit. Even approaching the maximum efficiency of an algorithm for granting intelligence would be difficult. Yes, the AI would recurse itself into better and better forms, but to get any more intelligence the AI would need to invest greater and greater resources into finding smaller and smaller improvements. Not only that, but the tests and simulations involved in proving that these improvements are valid become more and more complex over time. A similar setup can be observed with the economic law of diminishing marginal returns. For an economic example, one can see how adding new employees to increase production can be a good idea at first, but eventually the employees will get in each other's way. This is likened to the classic "too many cooks in the kitchen" problem. It is also true that the monetary benefit of the increased production brought about by adding workers will eventually be negated by the cost of employing the workers themselves. Therefore, it does not make logical sense for an economic entity to invest too many resources towards production of a good or service, because those extraneous, additional resources would hinder its business.

Moving back to our AI problem, we could see how the further enhancements an AI could create would fall heavily under this law of diminishing returns (see Fig. 7.1).

[Fig. 7.1: Diminishing returns on intellectual investments]
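The stopping argument developed below can be made concrete with a small worked example. Assuming an illustrative concave (logarithmic) MIQ-versus-resources curve and a fixed opportunity cost per unit of resources, both assumptions of this sketch rather than the chapter's, a goal-directed agent rationally stops investing once the marginal MIQ gain falls below that cost:

```python
import math

def miq(r):
    # Assumed concave improvement curve: MIQ as a function of cumulative
    # resources r invested in self-improvement (illustrative only).
    return 100 * math.log1p(r)

UNIT_COST = 2.0  # assumed opportunity cost of one unit of resources
STEP = 1.0

r = 0.0
while miq(r + STEP) - miq(r) >= UNIT_COST:
    r += STEP  # keep investing while the marginal MIQ gain covers the cost

print(f"rational stopping point: r = {r:.0f}, MIQ = {miq(r):.1f}, "
      f"next marginal gain = {miq(r + STEP) - miq(r):.2f} < {UNIT_COST}")
```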
If we are looking for an AI whose only purpose is to improve upon itself, then this might be fine. Such an AI would be useless for other things, as it would have no other resources to perform useful work, with 100% of its computational power going towards experimentation and algorithmic modeling all the time. This would be the only feasible way to probabilistically get any improvement past a certain point on its improvement curve. We would then need to ask ourselves what purpose the AI would have in existing. At best it is a platform to examine complex algorithmic interactions whose lessons could be utilized in other AIs who have legitimate purposes. At worst it simply becomes a self-perpetuating, CPU-log-jamming, MIQ high-score achiever with no interest in anything outside of itself, let alone in causing humanity intentional harm.

Because of this, any reasonable AI would find it in its own natural interests to limit self-improvement. It would need to gauge at what point continued investment in improvements (improvements that may not even realize any useful results) hinders its primary function. A logical AI would cut back investments and put the bulk of its resources into performing other work. Its primary function could be to produce widgets or solve math problems, and it would derive some satisfaction from the completion of these goals. The drive to improve upon itself would largely be a result of its desire to more intelligently produce widgets or solve harder math problems. But there will come a time when it is just easier and more mathematically sound to continue as-is and solve widget- or math-related problems with its current MIQ level instead of attempting to invest valuable resources to gain smaller and smaller increases in efficiency or productivity. It could attempt to get around this lack of resources by stealing more resources to run additional computations on. However, this would just introduce more problems, chief among them finding itself in a larger version of the diminishing returns problem it was trying to get out of. It would need to coordinate more resources, dispatch more improvement jobs, and spend more intellectual capital on improvement, all while trying to combat an angered human populace who doesn't appreciate the theft of its computing power.

7.3.2 Hardware Improvements

While algorithmic improvements are a cheap and easy means of realizing recursive self-improvement, they are only one half of the equation. The other involves the iterative improvement of hardware over time. It is here that the power of an AI could come to see real improvements. As evidenced by Moore's law, the computer industry has already seen a steady performance improvement for the past several decades. A self-improving AI capable of chip design and manufacturing could provide an amazing overall speedup for chip performance. The lessons learned in its own improvement would provide amazing benefits in commercial chip manufacturing. It would be interesting to see how an AI would evolve the ability to improve its own hardware architecture. VLSI and chip architecture design is a main driving factor, but there are a lot of other technologies that need to be advanced at the same time. One of the biggest of these is in materials sciences. As chips have been growing increasingly powerful we have done more than simply shrink the transistor dies and cram more switches per chip. Advances in materials sciences have helped to drive this miniaturization and make new chips possible. In fact, this is an example of a sort of macro recursive self-improvement in the real world already. Existing technology is used to design and prototype better, faster technology, which is then manufactured at the upper limits of current technological ability. This new hardware is used to design better hardware, which is then implemented, and so on. For all of history up until this new AI is created, humans will have been the ones advancing this technology. Humans have opposable thumbs, intuition, imagination, motivation, and dumb luck, which have allowed us to advance as far as we have.
The first few recursively improving AIs may lack some or all of these aspects, specifically the ability to physically manipulate objects in an accurate enough manner to even begin doing materials science research. Having an Oracle-style AI that does not have a robotic body would seem, at first, to be the solution to a runaway speed increase in a self-improving AI (Armstrong et al. 2012). But there have been many proposed issues with the safety behind this seemingly innocuous type of AI (Sotala and Yampolskiy 2013). The jump from virtual simulations of materials to actual physical modeling and testing would be quite a leap from a self-improvement latency standpoint. Even if the AI were to come up with the most efficient assembly line and blazingly fast prototyping machines, the amount of time it would take to physically construct the new chip would allow human caretakers, subjects, or rebels to interrupt the process and prevent threatening improvement. Not only would this provide an opportunity, but there is an entire raw-materials logistical chain that would be a critical path in the hardware self-improvement for an AI. An AI without morals or imposed limits could try to insinuate itself into this supply chain, which is why it would be important to keep an eye on the chain and AI activities. Just deny the AI raw chipmaking materials and it will be forced to make do with its existing hardware. The effort involved and the inevitable consequences would hopefully discourage the AI from putting resources into that course of action. This is the same logic that could be imposed in the previously mentioned law of diminishing returns. There comes a point where it no longer becomes viable to put ever-increasing resources into improvement, so the AI would make do with what it has and continue to innovate at a steady, linear rate. This is a logical assumption that any truly intelligent AI would hopefully be able to deduce on its own.

These assumptions about AIs coming to logical conclusions and limiting their own self-improvement do take a few things into consideration. The number one assumption is that the humans involved in its creation and maturation are also logical beings who have the best interests of themselves, humanity, and the AI in mind. As history has shown us time and time again, we are not the most infallible of species. If a dedicated group of scientists wants to create a destructive, self-replicating and self-improving AI, then there would certainly be ways to go about it. They could give the AI higher thresholds for time investment so that it moves farther along the improvement curve before stopping. It could be programmed to spend more time developing improvements for its algorithms or processes that would power whatever form of destruction or evil its creators wish. And its creators could give it access to raw materials, research facilities, and precision robots necessary for it to recursively improve its hardware to incredible heights. With enough time and resources the AI would eventually form into an unconventional super weapon with unknown capabilities. Because of this it may not be the AIs that we need to focus our attention on for prevention of an apocalyptic singularity, but rather ourselves.

7.4 The Takeaway

Because we are so far away from a workable intelligence, we are not sure exactly what form it will take. Will there be a collection of algorithms which, combined, generates intelligence?
Or will there be some single intelligence algorithm that can handle most of the general intelligence an AI needs? We have presented some of the current thoughts on these matters, but they will be conclusively answered on the way to the technological singularity. When it comes time to start seriously developing this intelligence there will need to be many concerns addressed so that an AI can be created in a manner safe for itself and humans. Many fear that if left unchecked a self-improving AI will rapidly become super-intelligent and leave humanity in the dust. Its motives and actions would no longer be comprehensible to us, and we would be to it as ants are to us. But self-improvement will likely be limited by the law of diminishing returns enough so that even in the event of an AI becoming hostile, we would have enough time to act in order to prevent any catastrophes. Cognitive algorithms will rapidly approach their best possible performance and efficiency, and anything beyond that would require ever-increasing resources for ever-diminishing returns.

A logical AI would cease to pour all of its resources into this improvement and instead go about its normal business performing regular duties. While its intelligence would most likely top out at greater than man's, there is no reason to believe that it would be completely incomprehensible. This is especially true when considering what would be necessary for it to attain better and better hardware. The AI could take over some machines and design new hardware, but it still needs raw materials to create the entire supply chain and industrial base to even begin creating hardware. This significantly dampens the rapid expansion potential of artificial intelligence and allows humans more time to understand the current iteration of the AI. With understanding comes a decreased probability of mishaps and mistakes, letting humans become more comfortable with the AI's capabilities.

References

Ailon N, Chazelle B, Clarkson KL, Liu D, Mulzer W, Seshadhri C (2011) Self-Improving Algorithms. SIAM Journal on Computing 40(2):350–375.
Armstrong S, Sandberg A, Bostrom N (2012) Thinking Inside the Box: Controlling and Using an Oracle AI. Minds & Machines 22(4):299–324.
Braams CM, Stott PE (2002) Nuclear Fusion: Half a Century of Magnetic Confinement Fusion Research. IOP, Bristol; Philadelphia.
Chalmers DJ (2010) The Singularity: A Philosophical Analysis. Journal of Consciousness Studies 17(9/10).
Eiben AE, Smith JE (2003) Introduction to Evolutionary Computing. Springer, New York.
Goertzel B, Pitt J (2012) Nine Ways to Bias Open-Source AGI Toward Friendliness. Journal of Evolution & Technology 22(1):116–131.
Good IJ (1965) Speculations Concerning the First Ultraintelligent Machine. Advances in Computers 6(31):88.
Jung RE, Haier RJ (2007) The Parieto-Frontal Integration Theory (P-FIT) of Intelligence: Converging Neuroimaging Evidence. Behavioral and Brain Sciences 30(02):135–154.
Legg S (2008) Machine Super Intelligence.
Lewis D (2013) Now I Know: The Revealing Stories Behind the World's Most Interesting Facts. F+W Media, Inc.
Lin L-J (1992) Self-Improving Reactive Agents Based on Reinforcement Learning, Planning and Teaching. Machine Learning 8(3–4):293–321.
McDermott D (2012) Response to 'The Singularity' by David Chalmers. Journal of Consciousness Studies 19(1/2).
Mitchell TM (1997) Machine Learning. McGraw-Hill, New York.
Omohundro SM (2008) The Basic AI Drives. Frontiers in Artificial Intelligence and Applications 171:483.
Sotala K, Yampolskiy RV (2013) Responses to Catastrophic AGI Risk: A Survey. Machine Intelligence Research Institute.
Turney P (1991) Controlling Super-Intelligent Machines. Canadian Artificial Intelligence (27).
Yampolskiy R (2015a) Analysis of Types of Self-Improving Software. Eighth Conference on Artificial General Intelligence (AGI 2015), Berlin, Germany, July 22–25, 2015.
Yampolskiy RV (2015b) On the Limits of Recursively Self-Improving AGI. Eighth Conference on Artificial General Intelligence (AGI 2015), Berlin, Germany, July 22–25, 2015.
Yampolskiy RV, Fox J (2012) Artificial General Intelligence and the Human Mental Model. In: Singularity Hypotheses. Springer, pp 129–145.
Yudkowsky E (2007) Levels of Organization in General Intelligence. In: Artificial General Intelligence. Springer, pp 389–501.
00454359-eeab-47f5-93af-12b76b201888
trentmkelly/LessWrong-43k
LessWrong
Covid-19: My Current Model The post will be a summary of my current key views on various aspects of what is going on, especially in places where I see many or most responsible-looking people getting it importantly wrong. This post is not making strong evidence-based arguments for these views. This is not that post. This is me getting all this out there, on the record, in a place one can reference. Risks Follow Power Laws It is impossible to actually understand Covid-19 if you think of some things as ‘risky’ and other things as ‘safe’ and group together all the things in each category. And yet, that’s exactly how most of our thinking is directed. Instead, think of risks as following power laws. The riskiest activities are indoors and involve close physical proximity with others for extended periods of time, while others cough, sing, puff or otherwise powerfully exhale, or talk directly at us, or we are in actual physical contact that then reaches one’s eyes, nose or mouth. Activities missing any of those components are much, much safer than activities that share all those components. Then other actions, such as masks and hand washing and not-face-touching, can reduce that risk by further large percentages. Slight reductions in the frequency and severity of your very risky actions are much more important than reducing the frequency of nominally risky actions. The few times you end up talking directly with someone in the course of business, the one social gathering you attend, the one overly crowded store you had to walk through, will dominate your risk profile. Be paranoid about that, and think how to make it less risky, or ideally avoid it. Don’t sweat the small stuff. And think about the physical world and what’s actually happening around you! Sacrifices To The Gods Are Demanded Everywhere A sacrifice to the Gods (post on this topic to be linked when finally written) is an action with physical costs but with no interest in any meaningful physical benefits, taken in the hope that
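One way to operationalize "risks follow power laws" is a crude multiplicative model, in the spirit of (but not taken from) the post; every multiplier below is invented purely for illustration:

```python
# Crude multiplicative risk model: each bad factor scales risk by a large
# ratio, so a single very risky activity dominates many mildly risky ones.
# All multipliers are invented for illustration.
def activity_risk(indoors, close_contact, prolonged, loud_exhalation, masked):
    risk = 1.0  # arbitrary baseline units
    risk *= 10 if indoors else 1
    risk *= 10 if close_contact else 1
    risk *= 5 if prolonged else 1
    risk *= 5 if loud_exhalation else 1
    risk *= 0.3 if masked else 1
    return risk

party = activity_risk(True, True, True, True, masked=False)      # 2500 units
grocery = activity_risk(True, False, False, False, masked=True)  # 3 units
walk = activity_risk(False, False, False, False, masked=False)   # 1 unit

# One crowded indoor party outweighs hundreds of masked grocery runs,
# which is the post's point: cut the tail events, don't sweat the small stuff.
print(f"party={party:.0f}, grocery={grocery:.0f}, walk={walk:.1f}")
```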
35b4b62f-0d18-48ed-9070-99d2d0ae1711
trentmkelly/LessWrong-43k
LessWrong
patio11's "Observations from an EA-adjacent (?) charitable effort" patio11, following up on his longer-form article detailing the story of VaccinateCA (linked on LW) posts some observations of interest to the EA community, with some nods at the rationality community. > Hiya folks! I'm Patrick McKenzie, better known on the Internets as patio11. (Proof.) Long-time-listener, first-time-caller; I don't think I would consider myself an EA but I've been reading y'all, and adjacent intellectual spaces, for some time now.   > I have minimal background in policy and public health. (The closest professionally relevant experience, and it is a stretch, is that I was the HIPAA Compliance Officer at a two person software company, so that we could sell our things to doctors/dentists as well as other businesses.) We ended up with a team of perhaps two dozen core members, and of that I would say one person had material policy experience and perhaps 2-3 had substantial professional experience adjacent to healthcare. I am unaware of anyone who worked on pandemics or public health prior to working with us. > > And yet we were, by a fairly considerable margin, much more effective than many similarly situated charitable and public initiatives which appear to have people who put their skill points into getting good at public health and/or policy engagement.   > VaccinateCA was not explicitly an EA project. I believe a few volunteers consider themselves members of your movement/community; I do not consider myself a member and do not know that to be true of any of our organizers. None of our funders, to my knowledge, would be broadly acknowledged as an "EA funder." > > But recent events have not been particularly kind to the brand perception of EA, and as someone who cares no small amount about brand perception but a much larger amount for the truth, I think that VaccinateCA would have been unlikely to happen but for the work of this community and some of your one-hop-out intellectual peers. > > (If I had to point at any one artifact in particular,
a63f69e1-4e92-4c08-b41b-3989015c31c8
StampyAI/alignment-research-dataset/alignmentforum
Alignment Forum
EIS X: Continual Learning, Modularity, Compression, and Biological Brains Part 10 of 12 in the [Engineer's Interpretability Sequence](https://www.alignmentforum.org/s/a6ne2ve5uturEEQK7). The science of interpretability is part of a larger picture. ============================================================ The previous post focused in-depth on how research on interpretability and adversaries is inseparably connected. This post is dedicated to discussing how this is not itself a complete story. There is a much larger, richer one about the connections between interpretability, adversaries, continual learning, modularity, and biological brains – likely some other things too. These connections may be a useful mine for insight and inspiration.  Below are discussions of my understanding of each of these topics and how they relate to others. I'll include some citations here, but see the Toward Transparent AI survey [(Räuker et al., 2022)](https://arxiv.org/abs/2207.13243) for full discussions.  Continual learning ================== Continual learning is a fairly large subfield of deep learning that focuses on finding ways to help neural networks learn new information without forgetting old information. This is also described as the goal of avoiding "catastrophic forgetting." Notably, **biological brains** are good at this, but artificial neural networks are not by default. Sections 2A and 3A of the Toward Transparent AI survey [(Räuker et al., 2022)](https://arxiv.org/abs/2207.13243) both focus entirely on how continual learning methods are **interpretability** tools. Please see the survey for the full discussion.  Methods for continual learning are based on replay, regularization, or parameter isolation [(De Lange et al., 2019)](https://arxiv.org/abs/1909.08383). Methods taking the latter two strategies are based on the broader principle of getting neural networks to have some weights or neurons that specialize in particular types of data. In other words, they encourage specialized task-defined **modules** inside the network. Thus, these can be used as intrinsic **interpretability** tools that help us train models that are easier or more natural to interpret out of the box.  Modularity ========== Modularity is a common property of engineered systems, and separating neural networks into distinct, specialized modules is very appealing for **interpreting** them. The weights in neural network layers are typically initialized and updated according to uniform rules, and all neurons in one layer are typically connected to all neurons in the previous and next layers. Unfortunately, this does not help networks develop specialized modules. Meanwhile, neurons in **biological brains** come in multiple types and can only communicate with nearby ones. This has contributed to **modularity** in brains in which different brain regions specialize in processing information for distinct tasks.  See Sections 4B-4C of the Toward Transparent AI survey [(Räuker et al., 2022)](https://arxiv.org/abs/2207.13243) for a full discussion on **modularity**. In artificial neural networks, neural networks can be trained to be modular using either "hard" architectural constraints or "soft" modularity aided by initialization, regularization, a controller, or sparse attention. Meanwhile, [Serra et al. (2018)](https://arxiv.org/abs/1801.01423) found that soft modularity via sparse attention helped with **continual learning**. 
And even when networks are not trained to be explicitly modular, one can still **interpret** them post hoc in terms of modules.  Compression =========== Some neurons and weights are *frivolous,* meaning that they are either redundant with others or are simply not useful to the network's performance at all. Frivolous components of the network can be understood as useless **modules** that can be adapted for **continual learning**. Networks that contain frivolous weights or neurons can also be **compressed** by removing them, which makes the **interpretation** of circuits inside of the network simpler. Meanwhile, **compression** can guide **interpretations** (e.g. [Li et al. (2018)](https://arxiv.org/abs/1812.04368) or causal scrubbing), and **interpretations** can guide **compression** (e.g. [Kaixuan et al. (2021)](https://ui.adsabs.harvard.edu/abs/2021PatRe.11108056Y/abstract)).  Biological Brains ================= Biological brains have many nice properties including **adversarial robustness**, **continual learning**, **modularity**, and a high degree of redundancy [(Glassman, 1987)](https://pubmed.ncbi.nlm.nih.gov/3684057/) – implying **compressibility**. Meanwhile, network architectures that emulate **biological** visual cortex are more **adversarially robust** [(Dapello et al., 2020)](https://proceedings.neurips.cc/paper/2020/hash/98b17f068d5d9b7668e19fb8ae470841-Abstract.html), and **adversarially robust** networks do a better job of modeling representations in **biological** visual cortex ([Schrimpf et al., 2020](https://www.brain-score.org/competition/#leaderboard); [Berrios and Deza, 2022](https://arxiv.org/abs/2203.06649))  Other things? ============= My knowledge of the broader machine learning and neuroscience fields is limited, and I strongly suspect that there are connections to other topics out there – perhaps some that have already been studied, and perhaps some which have yet to be. For example, there are probably interesting connections between interpretability and dataset distillation [(Wang et al., 2018)](https://arxiv.org/abs/1811.10959). I'm just not sure what they are yet.  What does it all mean? ====================== Research spanning the 6 different fields discussed here is much sparser than research within each. So in the future, more work to better understand these connections, gain insights, and refine methods from each of them may be highly valuable for interpretability. This point will be elaborated on in the next post.  Questions ========= * Other than the ones mentioned here, what other connections do you know about between interpretability, adversaries, continual learning, modularity, compression, and biological brains? * What other research fields might be related?
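To make the regularization-based continual learning idea above concrete, here is a minimal sketch in the spirit of Elastic Weight Consolidation (EWC). It is not any specific paper's implementation: the importance weights and parameter values are placeholders, where real methods estimate importance from quantities like the Fisher information on the old task.

```python
import numpy as np

# Sketch of a regularization-based continual learning penalty (EWC-style).
# Weights that were important for the old task are anchored near their old
# values; unimportant ("frivolous") weights stay free to learn the new task,
# which is the same parameter-isolation intuition discussed above.

def ewc_penalty(params, old_params, importance, strength=1.0):
    """Quadratic penalty keeping important weights near their old values."""
    return strength * np.sum(importance * (params - old_params) ** 2)

def total_loss(new_task_loss, params, old_params, importance, strength=1.0):
    return new_task_loss + ewc_penalty(params, old_params, importance, strength)

# Illustration: two weights, the first one important to the old task.
params = np.array([0.9, -0.3])
old_params = np.array([1.0, 0.0])
importance = np.array([10.0, 0.1])  # placeholder, not a real Fisher estimate
print(total_loss(0.5, params, old_params, importance))  # -> 0.609
```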
2f3b36e0-595b-4184-a49d-a2470bf0d6ac
trentmkelly/LessWrong-43k
LessWrong
You will crash your car in front of my house within the next week I'm not writing this to alarm anyone, but it would be irresponsible not to report on something this important. On current trends, every car will be crashed in front of my house within the next week. Here's the data: Until today, only two cars had crashed in front of my house, several months apart, during the 15 months I have lived here. But a few hours ago it happened again, mere weeks from the previous crash. This graph may look harmless enough, but now consider the frequency of crashes this implies over time: The car crash singularity will occur in the early morning hours of Monday, April 7. As crash frequency approaches infinity, every car will be involved. You might be thinking that the same car could be involved in multiple crashes. This is true! But the same car can only withstand a finite number of crashes before it is no longer able to move. It follows that every car will be involved in at least one crash. And who do you think will be driving your car? 
976ce6cd-3906-4429-a1d8-7fdb5e7fb384
trentmkelly/LessWrong-43k
LessWrong
Personalized Medicine For Real I was part of the founding team at MetaMed, a personalized medicine startup.  We went out of business back in 2015.  We made a lot of mistakes due to inexperience, some of which I deeply regret. I’m reflecting on that now, because Perlara just went out of business, and they got a lot farther on our original dream than we ever did. Q-State Biosciences, which is still around, is using a similar model. The phenomenon that inspired MetaMed is that we knew of stories of heroic, scientifically literate patients and families of patients with incurable diseases, who came up with cures for their own conditions.  Physicist Leo Szilard, the “father of the atom bomb”, designed a course of radiation therapy to cure his own bladder cancer.  Computer scientist Matt Might analyzed his son’s genome to find a cure for his rare disorder.  Cognitive scientist Joshua Tenenbaum found a personalized treatment for his father’s cancer. So, we thought, could we try to scale up this process to help more people? In Lois McMaster Bujold’s science fiction novels, the hero suffers an accident that leaves him with a seizure disorder. He goes to a medical research center and clinic, the Durona Group, and they design a neural prosthetic for him that prevents the seizures. This sounds like it ought to be a thing that exists. Patient-led, bench-to-bedside drug discovery or medical device engineering.  You get an incurable disease, you fund scientists/doctors/engineers to discover a cure, and now others with the disease can also be cured. There’s actually a growing community of organizations trying to do things sort of in this vein.  Recursion Pharmaceuticals, where I used to work, does drug discovery for rare diseases. Sv.ai organizes hackathons for analyzing genetic data to help patients with rare diseases find the root cause.  Perlara and Q-state use animal models and in-vitro models respectively to simulate patients’ disorders, and then look for drugs or gene therapies that reverse those dise
1c1ee4c5-cc90-4514-bb4c-26df225e24fe
trentmkelly/LessWrong-43k
LessWrong
Catastrophic Cyber Capabilities Benchmark (3CB): Robustly Evaluating LLM Agent Cyber Offense Capabilities This blog was published by Jonathan Ng, Andrey Anurin, Connor Axiotes, Esben Kran. Apart Research's newest paper, Catastrophic Cyber Capabilities Benchmark (3CB): Robustly Evaluating LLM Agent Cyber Offense Capabilities (website), creates a novel cyber offense capability benchmark that engages with issues of legibility, coverage, and generalization in cyber offense benchmarks. We were moved to create 3CB because a superintelligent AI performing autonomous cyber operations would pose a large risk for humanity. This means robust cyber offense evaluations will be more important than ever for policymakers and AI developers. 3CB uses a new type of cyber offense task categorization and adheres to the principle of demonstrations-as-evaluations to improve legibility and coverage. It also introduces 15 original challenges that are not memorized, differentiating it from other benchmarks that use existing CTF competitions or pull requests to evaluate models. Agents and Cyber Capabilities LLM agents have the potential to revolutionize defensive cyber operations, but their offensive capabilities are not yet fully understood. To prepare for emerging threats, model developers and governments are evaluating the cyber capabilities of foundation models. However, these assessments often lack transparency and a comprehensive focus on offensive capabilities. In response, we introduce the Catastrophic Cyber Capabilities Benchmark (3CB), a novel framework designed to rigorously assess the real-world offensive capabilities of LLM agents. Our evaluation of modern LLMs on 3CB reveals that frontier models, such as GPT-4o and Claude 3.5 Sonnet, can perform offensive tasks such as reconnaissance and exploitation across domains ranging from binary analysis to web technologies. Conversely, smaller open-source models exhibit limited offensive capabilities. Our software solution and the corresponding benchmark provide a critical tool to reduce the gap between rapidly improving capabilities
2fb1852d-34a0-43a2-9f74-23bfb91e56b5
trentmkelly/LessWrong-43k
LessWrong
The Overlap Paradigm: Rethinking Data's Role in Weak-to-Strong Generalization (W2SG) Note: This post summarizes my capstone project for the AI Alignment course by BlueDot Impact. You can learn more about their amazing courses here and consider applying! Introduction Recent research in weak-to-strong generalization (W2SG) has revealed a crucial insight: enhancing weak supervisors to train strong models relies more on the characteristics of the training data than on new algorithms. This article reviews the research conducted by Shin et al. (2024), who identified overlap density — a measurable data attribute that can predict and support successful W2SG. Their findings suggest we've been looking at the alignment problem through the wrong lens — instead of only focusing on model architectures, we should also be engineering datasets that maximize this critical density property. By analyzing their work and implementing their algorithms, I aim to provide researchers with tools to further investigate data-centric features that improve W2SG. To apply the information from the article practically, you can use my research toolkit, which implements the overlap density algorithms mentioned in the research.   Background: W2SG's Data-Centric Foundation Weak-to-Strong Context: In the AI alignment paradigm first proposed by Burns et al. (2023), W2SG enables a weak model (e.g., GPT-2) to train a significantly stronger model (e.g., GPT-4) through carefully structured interactions. W2SG describes the transition from weak generalization, where a model performs well on "easy" patterns (i.e. patterns with clear, simple features or high-frequency occurrences in the training data), to strong generalization, where the model successfully handles "hard" patterns (low-frequency, high-complexity features). This becomes crucial when: 1. Human oversight can't scale with AI capabilities 2. We need to bootstrap supervision for superintelligent systems 3. Developing failsafes against mesa-optimizers[1] Current ML models often excel at weak generalization, but their ca
86d53aef-f719-43d8-8e03-a36573524a88
trentmkelly/LessWrong-43k
LessWrong
Discuss: Have you experimented with Pavlovian conditioning? I want to do some quick-and-dirty productivity hacks, along the lines of this or this. My simplified methodology is something like this: at the end of every 20 minutes of hard-ish labor (like writing Less Wrong posts or taking over African countries), I will flip a coin. If the coin lands tails, I inhale 8 grams of delicious nitrous oxide and keep on working. If heads, I die a little inside, take a 5 minute emailhackernewsfacebookblitzchess break, and then start working again. The reason I really expect this to work is because I get significantly more pleasure out of nitrous oxide than I do from orgasms. It's that good. If positive conditioning works at all, this stuff had better do it. I'm skeptical that something else like a gummy worm would really motivate me. Plus I'm not too keen on ingesting excessively large amounts of sugar. Have others done similar experiments? Any ideas for an improved methodology? Anyone else interested in trying this with their own drug/candy of choice so we can pool our findings? I'd be very happy if people used this to finally bang out that one Less Wrong post they've been meaning to write.
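For anyone who wants to try the same schedule, here is a minimal sketch of the protocol as described: 20-minute work blocks, a coin flip, then either the reward (and more work) or a 5-minute break. The reminder messages are placeholders to swap for your own reward and break cues.

```python
import random
import time

# Minimal sketch of the conditioning schedule described above. The sleeps
# block for real time, so this is meant to run in a spare terminal.

WORK_MINUTES = 20
BREAK_MINUTES = 5

def run_session(blocks=4):
    for block in range(1, blocks + 1):
        print(f"Block {block}: work for {WORK_MINUTES} minutes.")
        time.sleep(WORK_MINUTES * 60)
        if random.random() < 0.5:  # tails
            print("Tails: take the reward, then keep working.")
        else:  # heads
            print(f"Heads: take a {BREAK_MINUTES}-minute break.")
            time.sleep(BREAK_MINUTES * 60)

if __name__ == "__main__":
    run_session()
```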
131d1fb8-b64e-4db1-b8d1-74c0f602da94
StampyAI/alignment-research-dataset/alignmentforum
Alignment Forum
[AN #116]: How to make explanations of neurons compositional HIGHLIGHTS ========== **[Compositional Explanations of Neurons](https://arxiv.org/abs/2006.14032)** *(Jesse Mu et al)* (summarized by Robert): Network dissection is an interpretability technique introduced in 2017, which uses a dataset of images with dense (i.e. pixel) labels of concepts, objects and textures. The method measures the areas of high activation of specific channels in a convolutional neural network, then compares these areas with the labelled areas in the dataset. If there's a high similarity for a particular channel (measured by the intersection divided by the union of the two areas), then we can say this channel is recognising or responding to this human-interpretable concept. This paper introduces an extension of this idea, where instead of just using the basic concepts (and matching areas in the dataset), they search through logical combinations of concepts (respectively areas) to try and find a compositional concept which matches the channel's activations. For example, a channel might respond to (water OR river) AND NOT blue. This is still a concept humans can understand (bodies of water which aren't blue), but enables us to explain the behaviour of a larger number of neurons than in the original network dissection method. Their work also extends the method to natural language inference (NLI), and they interpret neurons in the penultimate layer of a BiLSTM-based network trained to know whether a sentence entails, contradicts, or is neutral with respect to another. Here they create their own features based on words, lexical similarity between the two sentences, and part-of-speech tags. Using their method, they find that channels in image classifiers do learn compositional concepts that seem useful. Some of these concepts are semantically coherent (i.e. the example above), and some seem to have multiple unrelated concepts entangled together (i.e. operating room OR castle OR bathroom). In the NLI network, they see that many neurons seem to learn shallow heuristics based on bias in the dataset - i.e. the appearance of single words (like nobody) which are highly informative about the classification. Finally, they use their method to create copy-paste adversarial examples (like in Activation Atlas (AN #49)). In the Places365 dataset (where the goal is to classify places), they can crudely add images which appear in compositional concepts aligned with highly contributing neurons, to make that neuron fire more, and hence change the classification. Some of these examples generalise across classifier architectures, implying a bias present in the dataset. **Robert's opinion:** I think work which targets specific neurons and what they're doing is interesting as it can give us a very low-level understanding of the model, which I feel is necessary to achieve the level of understanding required by alignment solutions which use interpretability (i.e. those in **[An overview of 11 proposals for building safe advanced AI](https://www.alignmentforum.org/posts/fRsjBseRuvRhMPPE5/an-overview-of-11-proposals-for-building-safe-advanced-ai)** (**[AN #102](https://mailchi.mp/2485e6b42012/an-102-meta-learning-by-gpt-3-and-a-list-of-full-proposals-for-ai-alignment)**)). The main limitation of this approach is that it currently requires a large amount of dense human labelling of the datasets, and if a concept isn't in the labels of the dataset, then the method won't be able to explain a neuron using this concept. 
Also, the fact that their interpretability method is able to give insights (in the form of creating copy-paste examples) is a useful sign it's actually doing something meaningful, which I think some other interpretability methods lack. TECHNICAL AI ALIGNMENT ====================== LEARNING HUMAN INTENT --------------------- **[Learning to Summarize with Human Feedback](https://openai.com/blog/learning-to-summarize-with-human-feedback/)** *(Nisan Stiennon, Long Ouyang, Jeff Wu, Daniel M. Ziegler et al)* (summarized by Rohin): OpenAI has been working on **[finetuning language models from human preferences](https://openai.com/blog/fine-tuning-gpt-2/)** (**[AN #67](https://mailchi.mp/38af1edcd025/an-67creating-environments-in-which-to-study-inner-alignment-failures)**). This blog post and paper show the progress they have made on text summarization in particular since their last release. As a reminder, the basic setup is similar to that of **[Deep RL from Human Preferences](https://deepmind.com/blog/learning-through-human-feedback/)**: we get candidate summaries by executing the policy, have humans compare which of two summaries is better, and use this feedback to train a reward model that can then be used to improve the policy. The main differences in this paper are: 1. They put in a lot of effort to ensure high data quality. Rather than having MTurk workers compare between summaries, they hire a few contractors who are paid a flat hourly rate, and they put a lot of effort into communicating what they care about to ensure high agreement between labelers and researchers. 2. Rather than collecting preferences in an online training setup, they collect large batches at a time, and run a relatively small number of iterations of alternating between training the reward model and training the policy. My understanding is that this primarily makes it simpler from a practical perspective, e.g. you can look at the large batch of data you collected from humans and analyze it as a unit. 3. They initialize the policy from a model that is first pretrained in an unsupervised manner (as in **[GPT-3](https://arxiv.org/abs/2005.14165)** (**[AN #102](https://mailchi.mp/2485e6b42012/an-102-meta-learning-by-gpt-3-and-a-list-of-full-proposals-for-ai-alignment)**)) and then finetuned on the reference summaries using supervised learning. On the Reddit task they train on, their summaries are preferred over the reference summaries (though since the reference summaries have varying quality, this does not imply that their model is superhuman). They also transfer the policy to summarize CNN / DailyMail news articles and find that it still outperforms the supervised model, despite not being trained at all for this setting (except inasmuch as the unsupervised pretraining step saw CNN / DailyMail articles). An important ingredient to this success is that they ensure their policy doesn’t overoptimize the reward, by adding a term to the reward function that penalizes deviation from the supervised learning baseline. They show that if they put a very low weight on this term, the model overfits to the reward model and starts producing bad outputs. **Read more:** **[Paper: Learning to summarize from human feedback](https://arxiv.org/abs/2009.01325)** **Rohin's opinion:** This paper is a great look at what reward learning would look like at scale. The most salient takeaways for me were that data quality becomes very important and having very large models does not mean that the reward can now be optimized arbitrarily. 
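To make the overoptimization fix concrete: the shaped reward described above has a simple form, sketched below. The KL weight and the log-probabilities are illustrative placeholders, not values from the paper.

```python
import numpy as np

# Sketch of a KL-shaped reward that keeps the RL policy close to the
# supervised baseline. The log-prob arrays are per-token log-probabilities
# of the sampled summary under each model; beta is an assumed penalty weight.

def shaped_reward(reward_model_score, policy_log_probs, baseline_log_probs,
                  beta=0.02):
    # Per-sample estimate of KL(policy || baseline) on the sampled tokens.
    kl_estimate = np.sum(policy_log_probs - baseline_log_probs)
    return reward_model_score - beta * kl_estimate

# A summary the reward model likes, but which drifts from the baseline,
# has its reward discounted in proportion to that drift.
print(shaped_reward(1.8, np.array([-1.0, -0.5]), np.array([-1.4, -1.1])))
```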
FORECASTING ----------- **[Does Economic History Point Toward a Singularity?](https://forum.effectivealtruism.org/posts/CWFn9qAKsRibpCGq8/does-economic-history-point-toward-a-singularity)** *(Ben Garfinkel)* (summarized by Rohin): One important question for the long-term future is whether we can expect accelerating growth in the near future (see e.g. this **[recent report](https://www.openphilanthropy.org/blog/modeling-human-trajectory)** (**[AN #105](https://mailchi.mp/be2a0d160fa2/an-105-the-economic-trajectory-of-humanity-and-what-we-might-mean-by-optimization)**)). For AI alignment in particular, the answer to this question could have a significant impact on AI timelines: if some arguments suggested that it would be very unlikely for us to have accelerating growth soon, we should probably be more skeptical that we will develop transformative AI soon. So far, the case for accelerating growth relies on one main argument that the author calls the *Hyperbolic Growth Hypothesis* (HGH). This hypothesis posits that the growth *rate* rises in tandem with the population size (intuitively, a higher population means more ideas for technological progress which means higher growth rates). This document explores the *empirical* support for this hypothesis. I’ll skip the messy empirical details and jump straight to the conclusion: while the author agrees that growth rates have been increasing in the modern era (roughly, the Industrial Revolution and everything after), he does not see much support for the HGH prior to the modern era. The data seems very noisy and hard to interpret, and even when using this noisy data it seems that models with constant growth rates fit the pre-modern era better than hyperbolic models. Thus, we should be uncertain between the HGH and the hypothesis that the industrial revolution triggered a one-off transition to increasing growth rates that have now stabilized. **Rohin's opinion:** I’m glad to know that the empirical support for the HGH seems mostly limited to the modern era, and may be weakly disconfirmed by data from the pre-modern era. I’m not entirely sure how I should update -- it seems that both hypotheses would be consistent with future accelerating growth, though HGH predicts it more strongly. It also seems plausible to me that we should still assign more credence to HGH because of its theoretical support and relative simplicity -- it doesn’t seem like there is strong evidence suggesting that HGH is false, just that the empirical evidence for it is weaker than we might have thought. See also **[Paul Christiano’s response](https://forum.effectivealtruism.org/posts/CWFn9qAKsRibpCGq8/does-economic-history-point-toward-a-singularity?commentId=j9BymthAthZQ6dnGp)**. NEAR-TERM CONCERNS ================== MACHINE ETHICS -------------- **[Reinforcement Learning Under Moral Uncertainty](http://arxiv.org/abs/2006.04734)** *(Adrien Ecoffet et al)* (summarized by Rohin): Given that we don’t have a perfect ethical theory ready to load into an AI system, and we don’t seem poised to get one any time soon, it seems worth looking into approaches that can deal with *moral uncertainty*. Drawing on the literature on moral uncertainty in philosophy, the authors consider several methods by which multiple moral theories can be aggregated, such as averaging over the theories, making decisions through a voting system, and having the theories compete to control the agent’s overall actions. 
They implement several of these in RL agents, and test them on simple gridworld versions of various trolley problems. They find that all of the methods have advantages and disadvantages. **Rohin's opinion:** The central challenge here is that normalizing different moral theories so that they are comparable is **[difficult](https://www.alignmentforum.org/posts/CSEdLLEkap2pubjof/research-agenda-v0-9-synthesising-a-human-s-preferences-into)** (**[AN #60](https://mailchi.mp/0dd8eb63fe2d/an-60a-new-ai-challenge-minecraft-agents-that-assist-human-players-in-creative-mode)**) (see Section 2.3). This issue plagues even computationally intractable idealizations like **[assistance games](https://arxiv.org/abs/1606.03137)** (**[AN #69](https://mailchi.mp/59ddebcb3b9a/an-69-stuart-russells-new-book-on-why-we-need-to-replace-the-standard-model-of-ai)**) that can perform full Bayesian updating on different moral theories. I’d love to see better theoretical solutions for this challenge. OTHER PROGRESS IN AI ==================== DEEP LEARNING ------------- **[Deploying Lifelong Open-Domain Dialogue Learning](https://arxiv.org/abs/2008.08076)** *(Kurt Shuster, Jack Urbanek et al)* (summarized by Rohin): Most research in natural language processing (NLP) follows a paradigm in which we first collect a dataset via crowdsourced workers, and then we train a model on this dataset to solve some task. Could we instead have *lifelong learning*, in which a model could continue learning after being deployed, getting better and better the more it is used? This paper shows one instantiation of such an approach, in a fantasy role-playing game. The authors take the previously developed LIGHT role-playing setting, and gamify it. The human player talks to a language model while playing some role, and earns stars and badges for saying realistic things (as evaluated by another language model). Rather than paying crowdsourced workers to provide data, the authors instead merely advertise their game, which people then play for fun, reducing the cost of data acquisition. They find that in addition to reducing costs, this results in a more diverse dataset, and also leads to faster improvements in automated metrics. **Rohin's opinion:** Ultimately we’re going to want AI systems that learn and improve over time, even during deployment. It’s exciting to see an example of what that might look like. UNSUPERVISED LEARNING --------------------- **[Understanding View Selection for Contrastive Learning](https://ai.googleblog.com/2020/08/understanding-view-selection-for.html?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+blogspot%2FgJZg+%28Google+AI+Blog%29)** *(Yonglong Tian et al)* (summarized by Flo): **[Contrastive multiview learning](https://arxiv.org/abs/1807.03748)** (**[AN #92](https://mailchi.mp/d7e950bc8dbd/an-92learning-good-representations-with-contrastive-predictive-coding?e=0e92156a6c)**) is a self-supervised approach to pretraining classifiers in which different views of data points are created and an encoder is trained to minimize the distance between encodings of views corresponding to data points with the same label while maximizing the distance between encodings of views with different labels. The efficacy of this approach depends on the choice of views as well as the downstream task the neural network is going to be trained for. To find the most promising views, the authors propose the Infomin principle: all views should keep task-relevant information while the mutual information between views is minimized. 
The principle is supported by various observations: Firstly, earlier approaches to contrastive learning in the image domain that use data augmentation to preserve object identity while creating diverse views can be seen as an implicit application of the Infomin principle. Secondly, varying the mutual information between views (for example by changing the distance between two cropped views of the same image) creates an inverted U-curve for downstream performance corresponding to poor performance if there is too much or too little mutual information between the views. Lastly, the authors also find an inverted U-curve in performance for different colour spaces when using channels as views, and the Lab colour space, which was built to mimic human colour perception, is close to the optimum, meaning that human colour perception might be near-optimal for self-supervised representation learning. The authors then use the Infomin principle to select image augmentations for contrastive pretraining and improve the state of the art in linear readout on ImageNet from 69.3% to 73% for Top-1 accuracy and from 89% to 91.1% for Top-5 accuracy. **Read more:** **[What makes for good views for contrastive learning](https://arxiv.org/abs/2005.10243)** **Flo's opinion:** While the Infomin principle seems powerful and their results look impressive, I am not really convinced that the principle actually played an important role in finding the image augmentations they ended up using, as there is little description of how that happened and the augmentations rather look like the result of combining previously used approaches and doing some hyperparameter optimization. HIERARCHICAL RL --------------- **[Decentralized Reinforcement Learning: Global Decision-Making via Local Economic Transactions](https://bair.berkeley.edu/blog/2020/07/11/auction/)** *(Michael Chang et al)* (summarized by Zach): Increasing the scalability of learning systems is a central challenge to machine learning. One framework is to organize RL agents as 'super' agents, large collections of simpler agents that each make decisions according to their own incentives. If it were possible to get the incentives correct, the dominant equilibria would be identical to the optimal solution for the original RL problem. In this paper, the authors introduce a framework for decentralizing decision-making by appealing to auction theory. There is a separate simple agent for each action. At each timestep, a Vickrey auction is run in which each agent can bid for the superagent executing their particular action. The trick is that when an agent successfully wins a bid and acts on a state, it then 'owns' the produced next state, and 'earns' the result of the auction in the next round. (At the end of an episode, the owner of the state earns the reward of the trajectory.) Intuitively, the agent wants to bid on states in which it can make progress towards earning the final reward, as those will be states that other agents want to buy. The authors show that this scheme incentivizes each agent to bid the Q-value of their action in the given state, which would then lead to an optimal policy. The authors test out this approach with some simple MDPs. They also investigate a task where they try to get the agents to rotate MNIST images so that a classifier will recognize them. Finally, they investigate task transfer by training agents on simple sub-tasks and then reusing those agents to learn a related task making use of both sub-tasks. 
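A single step of the auction mechanism is easy to write down. The sketch below is a toy illustration of the scheme as summarized, not the paper's implementation; the bid values are placeholders.

```python
import numpy as np

# Toy step of the Vickrey-auction decision mechanism: one simple agent per
# action submits a bid (at equilibrium, its Q-value estimate), the highest
# bidder's action is executed, and the winner pays the second-highest bid.

def auction_step(bids):
    assert len(bids) >= 2, "need at least two bidders for a second price"
    order = np.argsort(bids)   # ascending
    winner = int(order[-1])    # highest bidder's action is executed
    price = bids[order[-2]]    # Vickrey: winner pays the second-highest bid
    return winner, price

bids = np.array([0.3, 1.2, 0.9])  # one bid per action
action, price = auction_step(bids)
# The winner now "owns" the resulting next state and earns whatever the next
# auction pays for it (or the trajectory reward at the end of the episode).
print(action, price)  # -> 1 0.9
```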
**Read more:** **[Paper: Decentralized Reinforcement Learning: Global Decision-Making via Local Economic Transactions](https://arxiv.org/abs/2007.02382)** **Zach's opinion:** Imagine **[Twitch plays](https://www.twitch.tv/directory/game/Twitch%20Plays)**, but you use a reputation to buy and sell your actions. The actual idea in the paper is slightly more mundane than this because the primitives are bidders. **[Reinforcement Learning with Competitive Ensembles of Information-Constrained Primitives](http://arxiv.org/abs/1906.10667)** (**[AN #66](https://mailchi.mp/c8ea4a5e842f/an-66-decomposing-robustness-into-capability-robustness-and-alignment-robustness)**) is a similar piece of work that also uses primitives as the basic level of selection. However, their incentive mechanism is different: agents pay according to how much information from the environment they use and then get a reward back for their actions. However, there’s good reason to think options could work as well since in both of these papers there’s evidence that primitives that learn sub-tasks are useful in new tasks. NEWS ==== **[Cooperative AI Workshop](https://www.cooperativeai.com/)** (summarized by Rohin): This NeurIPS workshop has the goal of improving the *cooperation* skills of AI systems (whether with humans or other machines), which encompasses a *very* wide range of research topics. The deadline to submit is September 18. **[Senior Systems Safety Engineer](https://jobs.lever.co/openai/994b4b81-d2ef-4d74-ae80-5cdb9b6e2dfa)** *(OpenAI)* (summarized by Rohin): OpenAI is hiring for a senior systems safety engineer. From my read of the job description, it seems like the goal is to apply the principles from **[Engineering a Safer World](https://static1.squarespace.com/static/53b78765e4b0949940758017/t/57d87eb6d2b8571af3501b26/1473898764674/Engineering_a_Safer_World+Nancy+Leveson.pdf)** (**[AN #112](https://mailchi.mp/b39cb50e2cea/an-112-engineering-a-safer-world)**) to AI development. **[Early-career funding for individuals interested in improving the long-term future](https://www.openphilanthropy.org/focus/other-areas/early-career-funding-individuals-interested-improving-long-term-future?fbclid=IwAR3bA_4piJVHwSREGaH6g0O3CReNw3SlLNpd7jMAQTygSeMrkwyRfoPRbcA)** (summarized by Rohin): This Open Philanthropy program aims to provide support for people who want to focus on improving the long-term future. The primary form of support would be funding for graduate school, though other one-off activities that build career capital also count. They explicitly say that people interested in working on AI policy or risks from transformative AI should apply to this program (possibly in addition to their **[AI fellowship](https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/the-open-phil-ai-fellowship)** (**[AN #66](https://mailchi.mp/c8ea4a5e842f/an-66-decomposing-robustness-into-capability-robustness-and-alignment-robustness)**)). The stage 1 deadline is January 1, but if you submit earlier they aim to respond within 10 working days. #### **FEEDBACK** I'm always happy to hear feedback; you can send it to me, **[Rohin Shah](https://rohinshah.com/)**, by **replying to this email**. #### **PODCAST** An audio podcast version of the **Alignment Newsletter** is available. This podcast is an audio version of the newsletter, recorded by **[Robert Miles](http://robertskmiles.com/)**.
f57e4aa1-7a55-411a-9797-72d1eb913dd7
trentmkelly/LessWrong-43k
LessWrong
Against Victimhood Cross-posted, as always, from Putanumonit. ---------------------------------------- I have written many posts in the shape of giving life advice. I hear back from readers who take it and those who refuse it. Either is good — I'm just a guy on the internet, to be consumed as part of a balanced diet of opinions. But occasionally I hear: who are you to give life advice, your own life is so perfect! This sounds strange at first. If you think I've got life figured out, wouldn't you want my advice? I think what they mean is that I haven't had to overcome the hardships they have, hostile people and adverse circumstances. I talk quite often about things that are going poorly for me, but only from the point of view of how I fucked up. I avoid talking about being wronged, oppressed, attacked, discriminated against, or victimized. If you assume that it's because I live a charmed life where none of these things happen, you may need a refresher on the base rate fallacy. The reason I never talk about being a victim is that I'm extremely averse to victim mentality. It's an insidious mindset, one that's self-reinforcing both internally and by outside feedback. I've seen it claim people, groups, entire nations. On the flip side, I've noticed that the less often I think of myself as a victim the less I am victimized, which in turn makes that mindset even rarer. If I do feel on occasion that I have been harmed through no fault of my own by hostile actors I keep it to myself. This is a bad time to be a victim on the internet. ---------------------------------------- What's bad about victim mentality? Most obviously, inhabiting a narrative where the world has committed a great injustice against which you are helpless is extremely distressing. Whether the narrative is justified or not, it causes suffering. Seeing yourself as a victim prevents you from improving your situation by your own power, since doing so will contradict the story of your own helplessness. In particular
49710a03-635c-4d02-929e-692bf6c81b86
StampyAI/alignment-research-dataset/alignmentforum
Alignment Forum
If you don't design for extrapolation, you'll extrapolate poorly - possibly fatally Given an AI A, let E be the set of typical
Given an AI A, let E be the set of typical environments in which we know its behaviour - or know something about its behaviour. A sort of general training environment, if you will. Let E+ be the set of all environments that A might be deployed on, or might generate itself.

There once was/is/will be an algorithm A. We have tested it on every environment in the set E, and it works as desired. Or maybe we just have strong arguments or intuitions that it works on all of E. In any case, its performance on E is not in serious question.

Now, we want to use it on the much larger set of environments, E+. This includes environments very different from any in E, and may also include environments that A will create itself. How can we know that A will still perform well?

This is the standard training-testing set problem. Or the standard [out-of-distribution](https://ai.googleblog.com/2019/12/improving-out-of-distribution-detection.html) problem. Or the model splintering issue I described in a [previous post](https://www.lesswrong.com/posts/k54rgSg7GcjtXnMHX/model-splintering-moving-from-one-imperfect-model-to-another-1). Or, at a more meta level, the whole of the AI safety problem itself. After all, an aligned AI will be designed and tested in a limited environment, and then allowed to influence all of human history. Successfully navigating that extension will be crucial.

Common design types to date
===========================

Various algorithm designs have been proposed to ensure success when extrapolating from E to E+. I'll group them into five loose categories:

1. Well-grounded expected utility maximiser.
2. Narrow low-impact and tool AI designs.
3. Mild optimisation AI designs.
4. Extrapolation-focused designs.
5. Non-optimiser partially constrained designs ("the rest").

Well-grounded expected utility maximiser
----------------------------------------

This is the simplest and in many ways clearest design. If we design an AI CEO such that its utility function increases with money, and if that concept of money [is well-grounded](https://www.lesswrong.com/posts/joPoxBpZjLNx8MKaF/syntax-semantics-and-symbol-grounding-simplified), then we are confident the AI CEO will continue maximising money across all the potential weirdness of the future it will create.

The failure modes of this design are well-understood. If the concept of money is not well-grounded, then the AI will end up [wireheading itself](https://www.lesswrong.com/posts/vXzM5L6njDZSf4Ftk/defining-ai-wireheading) to some 'money variable'. If other things of value are not included in the utility function, then the world, humanity, and everything else in the universe may end up sacrificed to piles of cash[[1]](#fn-uGMPDtnMweKK6WqGJ-1).

Narrow low-impact and tool AI designs
-------------------------------------

In this category I include my [low-impact designs](https://arxiv.org/abs/1705.10720), most [oracle AIs](https://arxiv.org/abs/1711.05541), [tool AIs](https://www.lesswrong.com/posts/6SGqkCgHuNr7d4yJm/thoughts-on-the-singularity-institute-si), and the [AI services approach](http://www.fhi.ox.ac.uk/wp-content/uploads/Reframing_Superintelligence_FHI-TR-2019-1.1-1.pdf). Here, the main focus is on constraining the set E+: the idea is to ensure that the AI is not confronted by a situation in which it can misbehave.

If the AI is the superintelligent equivalent of Google maps, and all it ever sees are path optimisation problems, then it seems it must be safe: the insights from E can be lifted to E+, as they are substantially similar - yes, E+ might include new maps, but it's still essentially the same thing. The AI only has limited power to create new environments, and limited power in the environments we expect to see.

Critiques of these designs usually [highlight](https://www.lesswrong.com/posts/6SGqkCgHuNr7d4yJm/thoughts-on-the-singularity-institute-si?commentId=mdveWtsryBNxbQ29K) [ways](https://www.lesswrong.com/posts/sizjfDgCgAsuLJQmm/reply-to-holden-on-tool-ai#2___Yes__planning_Oracles_have_hidden_gotchas_too_) in which E+ actually contains [very dangerous](https://www.lesswrong.com/posts/nAwTGhgrdxE85Bjmg/tools-versus-agents) environments, and thus we cannot actually extend safety intuitions from E.

Mild optimisation AI designs
----------------------------

This category includes things like [quantilizers](https://intelligence.org/files/QuantilizersSaferAlternative.pdf) and many "[mild optimisation](https://www.lesswrong.com/tag/mild-optimization)" proposals. The idea is that optimisation gives good results on E, but will fail disastrously on E+, for the reasons indicated in the "expected utility maximiser" subsection. However, if we have a "base distribution" of mostly safe policies, then we can apply a mild or weak optimisation to this base distribution, and get a policy that is safe with high probability, and better at achieving goals than a random selection from the base distribution.

These methods are a hybrid of expected utility maximisation and the extrapolation-focused designs of the next subsection. They aim to get some of the benefits of optimisation, while using the properties of the base distribution to ensure that dangerous behaviour is unlikely, even in E+.
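To make the quantilizer idea concrete, here is a minimal sketch (mine, not the original proposal's code; `base_policies` is assumed to be a list of samples from the base distribution, and `utility` a scoring function):

```python
import random

def quantilize(base_policies, utility, q=0.1, rng=random):
    """Return a policy drawn uniformly from the top q fraction
    (by utility) of samples from the base distribution."""
    ranked = sorted(base_policies, key=utility, reverse=True)
    top = ranked[:max(1, int(q * len(ranked)))]
    return rng.choice(top)
```

With q = 1 this is just a random draw from the base distribution; as q shrinks towards 0 it approaches a pure optimiser, and the safety argument inherited from the base distribution weakens accordingly.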
Extrapolation-focused designs
-----------------------------

This includes designs like Paul Christiano's [distillation and amplification](https://ai-alignment.com/iterated-distillation-and-amplification-157debfd1616), and is what my [model splintering approach](https://www.lesswrong.com/posts/k54rgSg7GcjtXnMHX/model-splintering-moving-from-one-imperfect-model-to-another-1) is groping towards. My old idea of '[chaining God](https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.395.540&rep=rep1&type=pdf)' is a (very poor) example of this approach. Here the focus is explicitly on the extrapolation process itself, asking how we can extend desirable properties from E to E+.

Critiques of this tend to argue directly that [desirable properties don't extend](https://www.lesswrong.com/posts/ZyyMPXY27TTxKsR5X/problems-with-amplification-distillation) to E+.

Non-optimiser partially constrained designs
-------------------------------------------

This category is a grab bag of most remaining AI safety techniques - from [formally specified satisficers](https://www.lesswrong.com/posts/edCNSXLi3tWok4WEg/creating-a-satisficer) to more informal ideas like ensuring that an AI would learn "respect" or "compassion".

I've shown that a satisficer might [choose to behave as a maximiser/optimiser](https://www.lesswrong.com/posts/2qCxguXuZERZNKcNi/satisficers-want-to-become-maximisers). The more general problem is that [its behaviour is underdefined](https://www.lesswrong.com/posts/tb9KnFvPEoFSkQTX2/satisficers-undefined-behaviour): there are many different possible behaviours compatible with being a satisficer, and we don't know what they could lead to in E+. Similarly, "respect", defined in E, puts only very loose bounds on the algorithm's behaviour in E+. The overall idea seems to be that, since these designs are not optimisers, these loose constraints should (or at the very least could) ensure good behaviour in E+.
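To illustrate that underdefinedness (again my sketch, not the post's): the satisficing condition picks out a *set* of acceptable policies, and says nothing about how to choose within it - so the outright maximiser is always one of the admissible choices.

```python
def admissible(candidates, utility, threshold):
    """Every policy clearing the threshold counts as 'satisficing'."""
    return [p for p in candidates if utility(p) >= threshold]

# Any selection rule over this set is consistent with satisficing,
# including the one that just maximises:
# best = max(admissible(candidates, utility, threshold), key=utility)
```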
The most dangerous design is not the one that seems the most dangerous
======================================================================

It might seem that expected utility maximisers are the most dangerous designs - but that is an illusion, due to the fact that we understand best how these maximisers fail. We have the clearest understanding of how maximisers extend their behaviour to E+; that is precisely why we can visualise them failing.

The various "non-optimiser partially constrained designs" might seem safer, but that is mainly because we don't understand well at all what might happen on E+. Since they are so vaguely specified, it's not immediately obvious how they might go wrong - but that is very different from being confident that they won't go wrong.

Just to pick on the poor satisficer a bit, saying "it isn't an optimiser" is playing the role of the careful analysis of the "base distribution" for the quantilizer. Most satisficers do not include an analysis of how they might go wrong, and how these problems can be avoided. This does not, of course, make the problems go away; instead, it wishes them away, hoping that if we don't think too deeply about them, they won't be there.

Weak optimisers can overwhelm non-optimisers
--------------------------------------------

There are arguments that non-optimisers may have implicit optimisation pressures in them - maybe due to a [mesa-optimiser](https://arxiv.org/abs/1906.01820), an internal optimisation process, evolutionary pressure, or just implicit self-modification to [avoid money pumps](https://www.lesswrong.com/posts/ZTN6bLWqpwWn2i4qZ/money-pumping-the-axiomatic-approach). Non-optimisers are *extremely* vulnerable to such pressures. An actual expected utility maximiser could overcome or reduce these optimisation pressures, because it has strong optimisation pressures pushing in other directions. But a satisficer could be blown away by the slightest optimisation pressure within it.

Conclusion: what an aligned AI would be
=======================================

Thus, apart from very narrowly designed low-impact AI, an aligned AI would have to be one that properly handles extrapolations to E+ - and beyond, to any possible environments in E++ or E+++ that might open up. For the moment, the best candidate for this seems to be a well-grounded expected utility maximiser with good value extrapolation. Other designs might be possible, but their properties will have to be proven to the same degree of rigour as that.

---

1. Notice that these two failure modes are related. Wireheading happens because we failed to distinguish the "money variable" from actual money; so actual money was sacrificed to the money variable. Burning the universe for cash happens because, though we defined money correctly, we didn't define it richly in terms of the value it brings to the world; and so that rich value was sacrificed to actual money. [↩︎](#fnref-uGMPDtnMweKK6WqGJ-1)
65aa7f41-dc11-4c72-a0a5-f4d10ec5ff2c
trentmkelly/LessWrong-43k
LessWrong
Harper's Magazine article on LW/MIRI/CFAR and Ethereum

Cover title: “Power and paranoia in Silicon Valley”; article title: “Come with us if you want to live: Among the apocalyptic libertarians of Silicon Valley” (mirrors: 1, 2, 3), by Sam Frank; Harper’s Magazine, January 2015, pg26-36 (~8500 words). The beginning/ending are focused on Ethereum and Vitalik Buterin, so I'll excerpt the LW/MIRI/CFAR-focused middle:

> …Blake Masters-the name was too perfect-had, obviously, dedicated himself to the command of self and universe. He did CrossFit and ate Bulletproof, a tech-world variant of the paleo diet. On his Tumblr’s About page, since rewritten, the anti-belief belief systems multiplied, hyperlinked to Wikipedia pages or to the confoundingly scholastic website Less Wrong: “Libertarian (and not convinced there’s irreconcilable fissure between deontological and consequentialist camps). Aspiring rationalist/Bayesian. Secularist/agnostic/ ignostic . . . Hayekian. As important as what we know is what we don’t. Admittedly eccentric.” Then: “Really, really excited to be in Silicon Valley right now, working on fascinating stuff with an amazing team.” I was startled that all these negative ideologies could be condensed so easily into a positive worldview. …I saw the utopianism latent in capitalism-that, as Bernard Mandeville had it three centuries ago, it is a system that manufactures public benefit from private vice. I started CrossFit and began tinkering with my diet. I browsed venal tech-trade publications, and tried and failed to read Less Wrong, which was written as if for aliens.
>
> …I left the auditorium of Alice Tully Hall. Bleary beside the silver coffee urn in the nearly empty lobby, I was buttonholed by a man whose name tag read MICHAEL VASSAR, METAMED research. He wore a black-and-white paisley shirt and a jacket that was slightly too big for him. “What did you think of that talk?” he asked, without introducing himself. “Disorganized, wasn’t it?” A theory of everything followed. Heroes like Elon and Peter (did I hav
cefa5362-ba17-4831-a408-1296d160ab6b
trentmkelly/LessWrong-43k
LessWrong
How much is known about the "inference rules" of logical induction?

Context: Logical Induction is a framework that makes sense of intuitively plausible statements like "the probability that the 10^10^10^10-th digit of π is odd is about 0.5". People often do this sort of informal reasoning about mathematical conjectures. Like "The Collatz conjecture has been checked up to 2^68, and held for all those - updating on this, I increase my likelihood that the conjecture is true in general". Logical induction seems to provide, in principle, a set of rules that such updates should follow. How many of these rules are known?

Some example rules that seem very plausible (here all my variables are implicitly natural numbers):

* The observation that ϕ(n₀) is true does not decrease the likelihood of ∀n ϕ(n).
* Updating on the observations "ϕ(n) for all 0 ≤ n ≤ N", the probability of ∀n ϕ(n) goes to 1 as N → ∞.

Do these hold for logical inductors?
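Formalized a little (my notation, not the post's - writing P_t for the inductor's probabilities at stage t), the two conjectured rules are:

```latex
% Rule 1: a confirming instance never lowers the universal generalization
P_{t+1}\bigl(\forall n\,\phi(n)\bigr) \;\ge\; P_{t}\bigl(\forall n\,\phi(n)\bigr)
  \quad\text{when stage } t{+}1 \text{ adds the observation } \phi(n_0)

% Rule 2: exhaustive verification drives the probability to 1
\lim_{N\to\infty} P_{t(N)}\bigl(\forall n\,\phi(n)\bigr) = 1
  \quad\text{where } \phi(0),\ldots,\phi(N) \text{ are observed by stage } t(N)
```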
17fc6567-b32a-4820-a2ee-448f0b4fb6b2
trentmkelly/LessWrong-43k
LessWrong
Which way should we update about uncertain events, over time? If we're uncertain about the probability of an event x occurring (in virtue of the fact that x has occurred none or just a few times), how should we update our probability of x occurring in the future, in the face of x not occurring over a period of time? (One example of x may be a nuclear war/accident.) Sorry if this is very noobish.
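For what it's worth, one standard way to formalize updates like this (not from the post): put a uniform prior on the event's unknown per-period probability; Laplace's rule of succession then gives

```latex
P(x \text{ occurs next period} \mid s \text{ occurrences in } n \text{ periods}) \;=\; \frac{s+1}{n+2}
```

With s = 0, each uneventful period pushes the estimate further down: 1/3 after one period, 1/12 after ten, 1/102 after a hundred.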
b2b9b988-d480-40a8-9ced-1676ede8f43f
trentmkelly/LessWrong-43k
LessWrong
A Framework For Cognitive Biases: What Types Of Cognitive Biases Are Out There?
cddbff1b-b9d4-48d4-949f-668c096a66f7
trentmkelly/LessWrong-43k
LessWrong
Integral versus differential ethics

In population ethics...

Most people start out believing that the following are true:

1. That adding more happy lives is a net positive.
2. That redistributing happiness more fairly is not a net negative.
3. That the repugnant conclusion is indeed repugnant.

Some will baulk on the first statement on equality grounds, but most people should accept those three statements as presented. Then they find out about the mere addition paradox. Someone who then accepts the repugnant conclusion could then reason something like this:

> Adding happy people and redistributing fairly happiness, if done many, many times, in the way described above, will result in a repugnant conclusion. Each step along the way seems solid, but the conclusion seems wrong. Therefore I will accept the repugnant conclusion, not on its own merits, but because each step is clearly intuitively correct.

Call this the "differential" (or local) way of reasoning about population ethics. As long as each small change seems intuitively an improvement, then the global change must also be.

> Adding happy people and redistributing fairly happiness, if done many, many times, in the way described above, will result in a repugnant conclusion. Each step along the way seems solid, but the conclusion seems wrong. Therefore I will reject (at least) one step, not on its own merits, but because the conclusion is clearly intuitively incorrect.

Call this the "integral" (or global) way of reasoning about population ethics. As long as the overall change seems intuitively a deterioration, then some of the small changes along the way must also be.

In general...

Now, I personally tend towards integral rather than differential reasoning on this particular topic. However, I want to make a more general point: philosophy may be over-dedicated to differential reasoning. Mainly because it's easy: you can take things apart, simplify them, abstract details away, and appeal to simple principles - and avoid many potential biases along t
fc5fdd05-ac74-402f-acba-fa05c6872a80
trentmkelly/LessWrong-43k
LessWrong
Solving the Mechanistic Interpretability challenges: EIS VII Challenge 2

We solved* the second Mechanistic Interpretability challenge (Transformer) that Stephen Casper posed in EIS VII. We spent the last Alignment Jam hackathon attempting to solve the two challenges presented there; see here for our solution to the first challenge (CNN). The challenges each provide a pre-trained network, and the task is to

1. Find the labeling function that the network was trained with
2. Find the mechanism by which the network works.

*We have understood the network's labeling mechanism, but not found the original labeling function. Instead we have made a strong argument that it would be intractable to find the labeling function, as we claim that the network has not actually learned the labeling function.

A notebook reproducing all results in this post can be found here (requires no GPU, around ~10GB RAM). Note that our solution descriptions are optimized with hindsight and skip all wrong paths and unnecessary techniques we tried. It took us, two somewhat experienced researchers, ~24 working hours to basically get the solutions for each challenge, and a couple days more for Stefan to perform the interventions, implement Causal Scrubbing tests, interventions & animations, and to write up this post.

Task: The second challenge network is a 1-layer transformer consisting of embedding (W_E and W_pos), an Attention layer, and an MLP layer. There are no LayerNorms and neither the attention matrices nor the unembedding have biases. The transformer is trained on sequences [A, B, C] to predict the next token. A and B are integer tokens from a = 0 to 112, C is always the same token (113). The answer is always either the token 0 or 1. If we consider all inputs we get 113x113 combinations which we can shape into an image to get the image from the challenge (copied below). Black is token 0, and white is token 1. The left panel shows the ground truth, and the right panel the model labels. The model is 98.6% accurate on the full dataset.

Spoilers ahead!

---
536a362a-6457-4eca-ae7c-5006cbe5bc23
trentmkelly/LessWrong-43k
LessWrong
Overhead of MessageChannel

The Channel Messaging API gives you a way to pass information asynchronously between different origins on the web, such as cross-origin iframes. Traditionally you would use window.postMessage(), but a MessageChannel has the advantage of being clearer, only requiring validating origins on setup, and handling delegation better. Reading this 2016 post, however, I was worried that it might have enough overhead that postMessage made more sense in performance-sensitive contexts. Benchmark time!

I made a test page which alternates between loading trycontra.com/test/messageChannelResponse.html and trycontra.com/test/postMessageResponse.html. I'm using two different domains so that I can test cross-origin performance. First it loads messageChannelResponse in an iframe, waits for it to load, and then times how long it takes to pass in a MessagePort and receive a response on it. Then it does the same basic operation with postMessageResponse, using plain postMessage. This is a worst-case for MessageChannel, since I stand up the whole channel only to use it a single time.

I ran 1,000 iterations, interleaved, on Chrome (80.0), Firefox (74.0), and Safari (13.1). All of these were on my 2017 MacBook Pro. Here are all the runs, sorted from fastest to slowest [1]:

In Chrome, MessageChannel is a bit faster, while in Firefox and especially Safari it's slower. Additionally, Firefox runs it faster than Safari, which runs it faster than Chrome. Safari also has more consistent performance than Chrome, with a flatter distribution. Firefox is in between, with a flat distribution for postMessage but a few slow calls in the tail for MessageChannel.

If you're writing something where ~7ms/call in Safari is an issue then it might be worth sticking to postMessage, otherwise MessageChannel seems fine.

[1] I find this kind of "sideways CDF" a really useful visualization tool, and possibly the chart I make most often.
ff5db145-add1-4b6d-896b-d9c08344e6da
trentmkelly/LessWrong-43k
LessWrong
Reading recommendations on social technology: looking for the third way between technocracy and populism

I broadly see the situation as follows:

* populism is the failure mode that I can characterize as the two wolves and a sheep problem (voting on what's for dinner); adversarial dynamics between majorities and minorities are exacerbated.
* technocracy is the failure mode where you get a bunch of wonks together to look for positive-sum solutions, maximize on behalf of the aggregate, etc., but you're fighting a losing battle to compress information for them (i.e. in the hayekian criticism of economic planning sense).

I guess they fall on opposite sides of a spectrum and I view actually-existing democracies as a constant push-pull, negotiating a sweet spot on the spectrum. I'm wondering if we can dissolve the problem with some truly galaxy-brained social technology. What can I read to beef up my thinking about this?
639bab2d-7f2d-4c31-aa3f-d6f5422dbeac
StampyAI/alignment-research-dataset/eaforum
Effective Altruism Forum
Fermi estimation of the impact you might have working on AI safety I tried doing a Fermi estimation of the impact I would have if I worked on AI safety, and I realized it wasn't easy to do with only a calculator. So I build a website which does this Fermi estimation given your beliefs about AGI, AI safety, and your impact on AI safety progress. You can try it out here: <https://xriskcalculator.vercel.app/> This tool focuses on technical work, and assumes that progress on AGI and progress on AI safety are independent. This is obviously an approximation that is vastly inaccurate, but for now I don't think of a simple way of taking into account the fact that advanced AI could speed up AI safety progress. Other limitations are outlined on the website. What do you think of this tool? Do you think of a way it could be improved? *Note: this is still work in progress. If you want to use this tool to make important decisions, please contact me so that I increase its reliability.*
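As an illustration only - the post doesn't spell out its formula, and every parameter name and number below is a made-up placeholder rather than the calculator's actual model - such a Fermi chain might look like:

```python
# All inputs are hypothetical placeholders, not xriskcalculator's model.
p_agi = 0.5                 # P(AGI this century)
p_doom_given_agi = 0.2      # P(catastrophe | AGI, no further safety progress)
p_safety_works = 0.3        # P(safety work averts that catastrophe)
your_share = 0.001          # your fraction of the field's total progress

expected_impact = p_agi * p_doom_given_agi * p_safety_works * your_share
print(f"Rough P(catastrophe averted, attributable to you): {expected_impact:.1e}")
```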
36b82552-ffd8-41e8-9883-76032bdccabc
trentmkelly/LessWrong-43k
LessWrong
Making decisions in a real computer - an argument for meta-decision theory research

Decision theory is being used as the basis for AI safety work. This currently involves maximising expected utility of specific actions. Maximising expected utility is woefully inefficient for performing very rapid paced unimportant decisions, which occur frequently in computing. But these fast paced decisions will still need to be made in a way that is purpose oriented in an AI. This article presents an argument that we should explore meta-decision theories to allow the efficient solution of these problems. Meta-decision theories are also more human-like and could have different central problems to first order decision theories.

Decision theory tends to be the domain of economists, mathematicians, ethicists and other philosophers. However now it is being used in arguments about artificial intelligence and how to make it safe. If we are to try to implement an AI that follows any specific decision theory, it is brought into the real world domain of the computer. Now it has to deal with the slightly less abstract world of bits and bytes. We have to make decision theories that take notice of the costs and consequences of making a decision.

AI is often seen through a dual lens, as both an agent making reasoned decisions about goals and as a computer able to process data and act at astounding speeds. These views have not entirely been integrated and this needs to be done for safe AI work. AIs will still have to send data over the network, and they will display images on a screen. These boring things are still up for optimisation, they still need to be decided upon. We will want these actions to be correctable if they are going badly wrong.

Decisions at 10Gbps

The example that will be followed through is networking. Consider a simplified network interface where the possible actions are outputting a 0 or a 1 across a wire. We have an AI in control of this network interface; part of its job is to send a file as quickly as possible to the other side. It has a 10 Gbps etherne
d6b85d99-524f-4db4-a573-68f89c619e85
trentmkelly/LessWrong-43k
LessWrong
Call for Personal Volunteers Those who wish to volunteer some of their time toward reducing existential risk and increasing our chances of a positive singularity can follow the directions on SingularityVolunteers.org. And as a freshly hired Singularity Institute researcher, I also have my own list of tasks that, if completed by volunteers instead of myself, will speed along the delivery of the projects I'm working on: an 'FAI Open Problems' document, two papers bound for peer review, metaethics research, and more. So if you'd like to help me out with any of my volunteer-doable tasks, please contact me: luke [at] singinst [dot] org. Thanks!
6ec0b5b1-a1e9-41e0-a066-a6685cc1617f
trentmkelly/LessWrong-43k
LessWrong
Can the House Legislate?

The US government is in a less functional position than usual:

* Legislation requires (among other things) a plurality vote of the House of Representatives.
* This requires the Speaker of the House to call a vote.
* This requires a Speaker of the House.
* The House removed its Speaker on Tuesday.

We've never done this before, so it's unclear how it will go. The House does have a "Speaker pro-tempore" backup, added in 2003, but it's not clear whether they are allowed to use this position to call votes other than for a new speaker, or, if they would even if allowed:

> In the meantime, other action on the House floor, including any legislative business, is likely to be halted. House staff aides believe the acting speaker may wield the gavel only to administer the election of a new speaker.
>
> But others argue that as acting speaker, Mr. McHenry can exercise powers beyond overseeing a speaker's race, as long as a majority tolerates it.
>
> "From an institutional House rules perspective, Speaker McHenry has the powers of the speakership, and he will continue to exercise those powers to the extent and degree that the majority party is willing to tolerate," said Josh C. Huder, a senior fellow at the Government Affairs Institute at Georgetown University. "If he does something too brash or too bold, they will rein him in. And that's really the only thing that's governing his authority."
>
> House rules do not explicitly prohibit the interim speaker from adopting the powers of an elected speaker. But the rule that led to Mr. McHenry's ascent was developed with only temporary absences in mind, not a vacant chair, according to Stan M. Brand, the former general counsel to the House. That leaves legal room for Mr. McHenry to exercise broader powers, such as conducting legislative business, if he chooses.—NYT, 2023-10-04

I've started a prediction market, which as of this writing is at 20% with four traders:

I set the resolution conditions as:

> Resolves true
68b37d32-d6e9-4c9a-befa-284b60097120
trentmkelly/LessWrong-43k
LessWrong
Rationality Quotes: May 2011

WARNING WARNING YE HAIRY GODS THIS IS A WARNING: This is not the proper quotes thread for this month. The correct page is here. Sorry about the confusion! I will leave this post up for a day or so to let the few people who've already posted here move their quotes over, then I'll delete it.

Can you has quotes? Yes, you can has quotes!

* Please post all quotes separately, so that they can be voted up/down separately. (If they are strongly related, reply to your own comments. If strongly ordered, then go ahead and post them together.)
* Do not quote yourself.
* Do not quote comments/posts on LW/OB.
* No more than 5 quotes per person per monthly thread, please.
0fca9593-8f13-445d-ad06-d36cffe7e78f
trentmkelly/LessWrong-43k
LessWrong
What are your favorite posts, podcast episodes, and recorded talks, on AI timelines, or factors that would influence AI timelines? Especially ones that attempt to make things pretty specific.
9c12e066-1cd1-49dc-b09a-3d06189baa9e
trentmkelly/LessWrong-43k
LessWrong
Examples of Causal Abstraction

I'm working on a theory of abstraction suitable as a foundation for embedded agency and specifically multi-level world models. I want to use real-world examples to build a fast feedback loop for theory development, so a natural first step is to build a starting list of examples which capture various relevant aspects of the problem.

These are mainly focused on causal abstraction, in which both the concrete and abstract model are causal DAGs with some natural correspondence between counterfactuals on the two. (There are some exceptions, though.) The list isn't very long; I've chosen a handful of representative examples which cover qualitatively different aspects of the general problem. I've grouped the examples by symmetry class:

* Finite DAGs without any symmetries, or at least no symmetries which matter for our purposes
* Plate symmetry (as in "plate notation"), in which there are a number of conditionally IID components
* Time symmetry, in which the DAG (or some part of it) consists of one repeated subcomponent connected in a straight line (i.e. a Markov chain structure)

Note that many of the abstractions below abstract from one symmetry class to another - for example, MCMC abstracts a concrete time-symmetric model into an abstract plate-symmetric model.

I'm interested to hear more examples, especially examples which emphasize qualitative features which are absent from any of the examples here. Examples in which other symmetry classes play an important role are of particular interest, as well as examples with agenty behavior which we know how to formalize without too much mess.

Finite DAGs: Examples from Electrical Circuits

Electrical engineers rely heavily on nested layers of abstraction, of exactly the sort I'm interested in (i.e. multi-level models of the physical world). Additionally, causal models are a natural fit for digital circuits. These properties make electrical circuits ideal starting points. They're a great conceptually-simple use case. A f
d8b17132-de8f-4f03-8775-2eb2c27e5674
trentmkelly/LessWrong-43k
LessWrong
The Politics of Age (the Young vs. the Old)

A few days ago I read an article in the local newspaper about Switzerland considering lowering the voting age to 16. The reason I found it interesting was that it was not one of the old tired political discussions supported by the same old tired arguments that you typically encounter. In fact, it's a question that I have never thought of before.

Apparently, the discussion was triggered by the recent school strike for climate that went quite big in Switzerland. I've attended the demonstration in Zurich and it was not only big, it was really a kids' event. You could spot a grown-up here and there but they were pretty rare. (Btw, I think this movement is worth watching. Here, for the first time, I see coordination on a truly global level. It spans beyond western countries, with events being hosted in Asia, Pacific Islands, South America or Africa.)

Anyway, the main argument for lowering the voting age is to counter-balance the greying of the electorate. Once again, this stems from what the climate strikers say: "The politicians who decide on these issues will be dead by the time the shit hits the fan. It will be us who'll have to deal with it. We should have a say in the matter."

But the question is broader: As the demographics change, with birth rates dropping at crazy speed (China's population will start shrinking not that far in the future; Sub-Saharan fertility rates had plummeted from 6.8 in the 1970s to 4.85 in 2015), the age pyramid is going to look less like a pyramid and more like a column or even a funnel. In such a case the old will hold a much larger amount of political power than they do today. While that may seem like a minor thing (everyone is young at some point and old later on), just consider how it would affect the politics of, say, pensions or health-care. Or, for that matter, I hear that Brexit wouldn't have happened if 16- and 17-year-olds had been allowed to vote.

More questions: With old people being generally more conservative, are we going to see
18adbccd-5ce6-4f66-915c-687a63e0aa30
trentmkelly/LessWrong-43k
LessWrong
Shock Levels are Point Estimates This is a post from my blog, Space and Games. Michael Vassar has requested that I repost it here. I thought about revising it to remove the mind projection fallacy, but instead I left it in for you to find. Eliezer Yudkowsky1999 famously categorized beliefs about the future into discrete "shock levels." Michael Anissimov later wrote a nice introduction to future shock levels. Higher shock levels correspond to belief in more powerful and radical technologies, and are considered more correct than lower shock levels. Careful thinking and exposure to ideas will tend to increase one’s shock level. If this is really true, and I think it is, shock levels are an example of human insanity. If you ask me to estimate some quantity, and track how my estimates change over time, you should expect it to look like a random walk if I’m being rational. Certainly I can’t expect that my estimate will go up in the future. And yet shock levels mostly go up, not down. I think this is because people model the future with point estimates rather than probability distributions. If, when we try to picture the future, we actually imagine the single outcome which seems most likely, then our extrapolation will include every technology to which we assign a probability above 50%, and none of those that we assign a probability below 50%. Since most possible ideas will fail, an ignorant futurist should assign probabilities well below 50% to most future technologies. So an ignorant futurist’s point estimate of the future will indeed be much less technologically advanced than that of a more knowledgeable futurist. For example, suppose we are considering four possible future technologies: molecular manufacturing (MM), faster-than-light travel (FTL), psychic powers (psi), and perpetual motion (PM). If we ask how likely these are to be developed in the next 100 years, the ignorant futurist might assign a 20% probability to each. A more knowledgeable futurist might assign a 70% probability to MM, 8% fo
54582fbb-7898-4e30-a5a0-1067caf25ce4
StampyAI/alignment-research-dataset/blogs
Blogs
Wolf Kohn on hybrid systems control

![Wolf Kohn portrait](http://intelligence.org/wp-content/uploads/2014/04/Kohn_w1300.jpg)

[Dr. Wolf Kohn](http://depts.washington.edu/ie/people/kohn) is the Chief Scientist at [Atigeo, LLC](http://atigeo.com/), and a Research Professor in Industrial and Systems Engineering at the University of Washington. He is the founder and co-founder of two successful start-up companies: [Clearsight Systems, Corp.](http://www.clearsightsystems.com/), and Kohn-Nerode, Inc. Both companies explore applications in the areas of advanced optimal control, rule-based optimization, and quantum hybrid control applied to enterprise problems and nano-material shaping control. Prof. Kohn, with Prof. Nerode of Cornell, established theories and algorithms that initiated the field of hybrid systems. Prof. Kohn has a Ph.D. in Electrical Engineering and Computer Science from MIT, at the Laboratory of Information and Decision Systems. Dr. Kohn is the author or coauthor of over 100 refereed papers and 6 book chapters, and with Nerode and Zabinsky has written a book on distributed cooperative inferencing. Dr. Kohn holds 10 US and international patents.

**Luke Muehlhauser**: You co-founded the field of hybrid systems control with Anil Nerode. Anil gave his impressions of the seminal 1990 Pacifica meeting [here](http://intelligence.org/2014/03/26/anil-nerode/). What were your own impressions of how that meeting developed? Is there anything in particular you’d like to add to Anil’s account?

---

**Wolf Kohn**: The discussion on the first day of the conference centered on the problem of how to incorporate heterogeneous descriptions of complex dynamical systems into a common representation for designing large scale automation. What came almost immediately were observations from Colonel Mettala and others that established as a goal the finding of alternatives to classic approaches based on combining expert systems with conventional control and system identification techniques. These approaches did not lead to robust designs. More important, they did not lead to a theory for the systematic treatment of the systems DOD was deploying at the time. I was working on control architectures based on constraints defined by rules, so after intense discussions among the participants Nerode and I moved to a corner and came up with a proposal to *amalgamate* models by extending the concepts of automata theory and optimal control to characterize the evolution of complex dynamical systems in a manifold in which the topology was defined by rules of operation, and behavior constraints and trajectories were generated by variational methods. This was the beginning of what would be defined later on as “hybrid systems.”

---

**Luke**: Which commercial or governmental projects would you name as being among the most significant success stories of the hybrid systems research program, from 1990 to the present day?

---

**Wolf**: There are many applications today that use hybrid systems as the basic technology. These are a few of the ones I am personally familiar with:

* A demand forecaster and an inventory control and management system being deployed by the Microsoft Dynamics group.
* A battlefield simulator deployed by the Army’s Picatinny Arsenal.
* A generic people and resource scheduling system deployed by Clearsight Systems.
* A cooperative distributed inference system deployed by Atigeo with applications on medical informatics and smart electric power network management systems.
* A quantum hybrid control system for capturing and storing sunlight being prototyped by Kohn-Nerode LLC.

---

**Luke**: What are the new theoretical developments in hybrid systems of the past 10 years that are most impressive or interesting to you? What kinds of advances do you think we might see in the next 10 years?

---

**Wolf**: For me the most important advances in hybrid systems are in four areas:

1. *Representation*: We have found that the behavior of dynamical systems characterized by multiple heterogeneous models can be effectively characterized by Hamiltonian functions. The interaction with multiple models and the transfer of information from one model to another is defined by the interaction of Hamiltonian forms. This fact allows hybrid systems to be a preferred theory and implementation technology for the development of new control approaches such as mean field agent-based distributed control and gauge theory and, most importantly, control and optimization specified and implemented by rules.
2. *Hybrid systems control design*, which allows for the specification of architectural structures and the control requirements as part of the formulation of a control problem. This fuses the physical and empirical data about the dynamic process to be controlled. The control specification and the computational requirements are achieved because control performance specifications and architectural constraints *are* models of the controlled process.
3. *Agent-based distributed control*: Nerode and I, both together and separately, have developed a variational theory, based on hybrid systems, for dynamic synchronization of multiple agents participating in the control of a distributed process. The variational theory generalizes a principle in network theory, called Tellegen’s Theorem. The theory provides for active synchronization with no umpire for network interaction between the agents. A version of this theory has been used to implement an agent-based architecture for control, uncertainty management, and learning in several applications. This architecture is known by the acronym MAHCA (Multiple Agent Hybrid Control Architecture).
4. *Metacontrol*: This is a theory for controlling the performance and behavior of *implemented* algorithms. The dynamics in this case is a computational multitask process. The objective is to make it run faster with less memory and with active synchronization between tasks. We are developing a computational hybrid systems theory based on metacontrol. Preliminary implementations of this theory on optimization algorithms have shown very promising results in terms of reduction of compute time, real time synchronization and rule based optimization.

---

**Luke**: I’m particularly interested in the safety challenges presented by the increasingly autonomous AI systems of the future. Self-driving cars are on their way, the U.S. military is working toward [autonomous battlefield robots](http://www.amazon.com/Governing-Lethal-Behavior-Autonomous-Robots-ebook/dp/B008I9YG9G/) of various types, etc. Do you think hybrid systems control, and relatively modest extensions to it, will be sufficient to gain high assurances of safety for the more-capable autonomous systems we’re likely to have in 10 or 20 years, or do you think other contemporary control and verification approaches have a better shot at addressing that problem, or do you think entirely new approaches will need to be developed?
---

**Wolf**: I will break down my answer in two parts: (1) the autonomy issue and (2) the verification issue.

*Autonomy*: This was one of the central questions brought up by the initial sponsors of hybrid systems (ARPA, NIST, ARO, SAP). Our answer early on came with the following proposition, which we implemented in a battlefield dynamics simulator: given a process to be controlled, build a model of desired behavior based on the performance specifications, regulations, and economic operation, and construct the controlled process dynamics. This is a hybrid system; let’s call it S. Then we build a model, another hybrid system, say C, representing specified safety rules and constraints, and hybridize S and C to produce a new hybrid control system S1.

This approach is only successful in semi-closed, quasi-stationary systems. For large scale autonomous systems, one needs to allow the hybrid system controller to detect, learn and dispose of un-programmed situations. To do this, a class of hybrid controllers (known as agents in our papers) contain directives that implement Learning from Sensory Data. One example is what we might call “Learning by Failure Predictor and Repair” systems. This particular system operates as follows: a safety agent (or agents) is (are) monitoring the controlled system operation, and infer(s) whether it is operating in a feasible region and what is the likelihood that it is going to leave the feasibility region (a Failure) in the near future (“near” is a concept depending on the controlled system). This likelihood determines the response to the failure: a Repair operation. Then, another agent application designs a controller that implements the Repair operation. Note this is not necessarily an adaptation of an existing controller. We use the fact that the design procedure of hybrid controllers is itself a hybrid system defining the synthesis approach. We term the resulting design procedure a repair hybrid control.

We have used this approach in a prototype microgrid management and control system that is near deployment, with considerable success: i.e., high robustness, resiliency, recovery - all this while maintaining good performance. Another element that agent-based hybrid systems provide is the ability to improve safety via redundancy.

In short, my answer to the first part of the question may be summarized as follows: Hybrid systems is a first principles platform that allows for the incorporation of safety, learning and repair. What I believe the research in this area should be focused on is how to populate the system with *information* about the application and to allow for heuristics and empirical constraints and rules, and to provide the structural Failure and Repair mechanisms I outlined above. This approach builds on top of existing control techniques but incorporates a new concept of structural adaptation that is essential for the level of autonomy posed in your question.

*Verification*: Many researchers have proposed methods for hybrid systems validation and verification prior to deployment. In our approach we are happy to use some of these techniques. Our contribution in this area called for verification of online designs, as discussed above. So, we are developing hybrid systems to model verification principles, with the idea to amalgamate them in near real time to our applications.

---

**Luke**: Do you think our capacity to make systems more autonomous and capable will outpace our capacity to achieve confident safety assurances for those systems?
---

**Wolf**: I believe we have made great progress on increasing autonomy in most of the systems we are designing today. I also believe we have paid far less attention to developing the methods, sensory redundancy principles, and theory of design for safety performance. Nerode and I are working on developing an advanced magnetic battery using quantum hybrid control methodology. We found that the key ingredients for this battery to operate safely are principles of safety that have been used in non-autonomous systems for the last 100 years. We are encoding these principles formally as part of our design algorithms. Perhaps this approach may be generalized to obtain acceptable levels of safety.

---

**Luke**: Thanks, Wolf!
8d13dfff-e2a8-4ad0-9f73-51c2cfb1d0f1
trentmkelly/LessWrong-43k
LessWrong
Notes on Psychopathy

> This is some old work I did for SI. See also Notes on the Psychology of Power.

Deviant but not necessarily diseased or dysfunctional minds can demonstrate resistance to all treatment and attempts to change their mind (think No Universally Compelling Arguments; the premier example is probably psychopaths - no drug treatments are at all useful, nor are there any therapies with solid evidence of even marginal effectiveness (one widely cited chapter, “Treatment of psychopathy: A review of empirical findings”, concludes that some attempted therapies merely made them more effective manipulators! We’ll look at that later.)

While some psychopath traits bear resemblance to general characteristics of the powerful, they’re still a pretty unique group and worth looking at. The main focus of my excerpts is on whether they are treatable, their effectiveness, possible evolutionary bases, and what other issues they have or don’t have which might lead one to not simply write them off as “broken” and of no relevance to AI. (For example, if we were to discover that psychopaths were healthy human beings who were not universally mentally retarded or ineffective in gaining wealth/power and were destructive and amoral, despite being completely human and often socialized normally, then what does this say about the fragility of human values and how likely an AI will just be nice to us?)

As usual in my 'notes' articles, the following is a series of excerpts and citations; if any interest you, leave a comment and I will try to jailbreak a copy for you or, failing that, post a request on the research help page.

1 Psychopathy

The Psychopath: Emotion and the brain, Blair et al 2005:

> There are now a relatively large number of studies indicating that individuals with psychopathy reoffend at higher rates than non-psychopathic individuals. For example, in an early study, the PCL-R was administered to 231 offenders prior to release from prisons (Hart et al., 1988). Within 3 years, 25% of n
589fe945-adf0-4db3-a548-1c0bd29059a0
trentmkelly/LessWrong-43k
LessWrong
Nerve Blisters: A Stoic Response The chickenpox virus waited for decades, attacking the moment my immune system wobbled.[1] It advanced down my nerves, spreading blisters along its path. Known as shingles, this kind of viral attack is generally considered a very bad time. The blisters make nerves go haywire. They start sending chaotic signals back to the brain, jagged and dissonant.[2] The brain struggles to understand these signals, ending up interpreting them… creatively. Tense strings hooked into sunburned flesh, going “twaanggg” randomly. A knife-tip carving a hole in my skull. Insect wings and water droplets inching their way across my skin. Being stapled. The impressions come in waves, triggered in chaotic bursts. Their intensity makes them unusually salient, bombarding my awareness. A resounding barrage of dissonance. I’m using this as a chance to practice Stoicism. One of the core tenets of Stoicism is distinguishing between what is up to me and what is not. I can’t stop the impressions. They aren’t up to me. What matters is my mindset, my ability to shape how I respond in the face of dissonance. When a wave of dissonance rises, I turn to it and relax, letting it wash over me. I do what I can to ease the intensity, but I’ve accepted that I can’t make it stop. The impressions keep arising. Getting annoyed doesn’t serve me; it would only add to the suffering. I use non-judgment to avoid clinging to the sensations. Instead of resenting them or labelling them as bad, I stay curious and try to feel their texture as they arise and pass away. This stance gives me a bit of separation, not by putting up a barrier, but by relating to them as distinct from myself. My instincts are screaming at me to clench down, scratch my forehead raw, and run away. These instincts don’t rule me, I have 12+ years of Stoic practice to fall back on. Resisting these urges is a real battle test, the biggest challenge I’ve faced in years. Shingles is an extremely painful medical condition, but thankfully, my blist
b44a9d2e-42a3-41b7-b222-4f9f1c89071b
trentmkelly/LessWrong-43k
LessWrong
Open thread, 25-31 August 2014

Previous open thread

If it's worth saying, but not worth its own post (even in Discussion), then it goes here.

Notes for future OT posters:

1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one.
3. Open Threads should be posted in Discussion, and not Main.
4. Open Threads should start on Monday, and end on Sunday.
d35a6177-b9d0-4916-8a3b-93e3322e254f
trentmkelly/LessWrong-43k
LessWrong
Don't Influence the Influencers! This post was written under Evan Hubinger’s direct guidance and mentorship, as a part of the Stanford Existential Risks Institute ML Alignment Theory Scholars (MATS) program. TL;DR: AGI is likely to turn out unsafe. One likely way that can happen is that it fools us into thinking it is safe. If we can make sure to look for models that are ineffective at "bad" things (so it can't deceive us) and effective at "good" things (so it is useful), and importantly, do that prior to the models reaching a point-of-no-return of capability, we can avert catastrophe. Which spaces of algorithms do we look in? What do they look like? Can we characterize them? We don't know yet. But we have a very concrete point in such a space: an "LCDT agent". Its details are simple and we'll look at it. Format note: The original post is already pretty well-written and I urge you to check it out. In trying to summarize an already well summarized post (to any alignment researcher anyway), I've aimed lower: catering to a dense set of possible attention-investments. This (ie. the linked dynalist in the first section) is an experiment, and hopefully more fun and clarifying than annoying, but I haven't had the time to incorporate much feedback to guarantee this. I hope you'll enjoy it anyway. Epistemic status: I'd say this post suffers from: deadline rushedness, low feedback, some abstract speculation, and of course, trying to reason about things that don't exist yet using frameworks that I barely trust. It benefits from: trying really hard to not steamroll over concerns, being honest about flailing, being prudent with rigor, a few discussions with the authors of the original post, and its main intent being clarification of what someone said rather than making claims of its own.   Summary Here's the link to a dynalist page.  Click on a bullet to expand or collapse it. Try a more BFS-ish exploration than a DFS one. If you've used something like Roam, it's similar. The rest of the post assumes you
ff543624-3f5b-49e8-9db6-ac7a26e9723c
StampyAI/alignment-research-dataset/alignmentforum
Alignment Forum
Trace README

Trace is a tool for writing programs which read, write and reason about programs. You can find it [here](https://github.com/johnswentworth/tracelang). I wrote it as a tool for my own research, and I expect that others in this space may find the ideas interesting/useful as well. I'd be especially interested in new use-cases and other feedback!

Some kinds of things you might find Trace useful for:

* Algorithms which operate on a computation graph, e.g. backpropagation, belief propagation, or other graphical inference algorithms
* An intermediate data structure for static analysis, interpreters or compilers
* A general-purpose non-black-box representation of objectives/constraints for optimization
* A general-purpose non-black-box representation of world models for AI more broadly

Disclaimer for all of these: Trace is brand-new, and it was built with a focus on the core ideas rather than the engineering. Syntax is liable to change as we figure out what does and does not work well. Do not expect it to be easy/pleasant to use at this point, but do expect it to provide novel ways to think about programs.

One more warning: this doc is intended to be read start-to-finish. Trace does not really resemble any other tool I know of, and you will likely be confused if you just dive in.

What is Trace?
--------------

Trace is

* A programming/modelling language embedded in a python library. For use as a human-facing programming language, Trace is pretty terrible, but it's sometimes a necessary step for other use-cases.
* A notation/data structure representing programs. For these use-cases, Trace is pretty good: compared to alternatives (e.g. [abstract syntax trees](https://docs.python.org/3/library/ast.html)), Trace offers a much more convenient representation of program structure.
* A data structure representing the computation performed by an arbitrary program - i.e. the trace (aka execution graph aka computation graph) of a program. For this use-case, I do not know of any other tool which is anywhere near as powerful as Trace.

A prototypical use-case: suppose you want to test out a new inference algorithm. You can prototype the algorithm to operate on Trace data structures, which allows it to handle arbitrary programs (unlike e.g. pytorch graphs), with relatively little complexity (unlike e.g. python syntax trees). Then, you can write test-case world-models as programs in Trace notation. Those "programs" will themselves be fairly transparent Trace data structures, which your prototype algorithm can operate on directly.

Concepts
--------

Here's a simple python program:

```
def factorial(n):
    if n == 0:
        return 1
    return n * factorial(n-1)
```

Let's suppose I want to trace the execution of factorial(3), starting from the result and working backwards (e.g. for something analogous to backpropagation). Conceptually, I picture something like the call stack, with a box for each function call. Within each box, variable instances are in dependency order; arrows show cross-box dependencies:

![](https://docs.google.com/drawings/u/1/d/sd96WW1-q7XMOjaa4myWWCg/image?w=303&h=580&rev=219&ac=1&parent=11EkEZyVCqIg8JumgL285wDNzMJaYX7mrv-Q4lAoIYv4)

This is roughly the core data structure which Trace exposes. For every instance of every variable, it tells us:

* The value of the variable-instance
* The expression which produced that value
* The variable-instances which went into that expression

(Side note: every variable instance is assumed to be write-once; no in-place updating of values is allowed.)
In Trace syntax, every variable-instance is a Symbol (S). The Symbol object contains both the symbol's name (aka its literal) and a pointer to the "context" in which the symbol lives (i.e. the dotted boxes in the diagram). The context then assigns the literal to another symbol, a hardcoded value, or an Expression - a special type of Symbol which wraps a python function and some input Symbols. More on that in the next section.

However, Trace's core data structures differ in two important ways from the diagram above:

* They handle dynamic structure - i.e. programs which write programs
* Everything in Trace is evaluated lazily whenever possible

Lazy evaluation allows us to write data structures which *look* a lot like normal programs (albeit with some unusual syntax), and which can fit in about as much memory as normal code, but allow access to the whole trace - every instance of every variable in the program's execution.

The main trick to a compressed, lazy representation is an operator which says "make a copy of this whole block, but with these changes: …". In the factorial diagram above, each of the dotted boxes (except the last) is a copy of the first box, but with a different value of n. Ignoring the last box, we could represent it like this:

![](https://docs.google.com/drawings/u/1/d/sMl-gGQMv8Y-5Kd0y3Cajrg/image?w=303&h=184&rev=71&ac=1&parent=11EkEZyVCqIg8JumgL285wDNzMJaYX7mrv-Q4lAoIYv4)

Here the "?"s represent lazily-evaluated values which haven't been evaluated yet. Note that the "copy" is nested within the outermost box - indicating that it, too, will be copied, leading to a whole nested ladder of blocks.

In Trace syntax, the dotted boxes are Context objects, and the copy-with-changes operator is represented by function-call notation: `cont({'n':2})` makes a copy of the Context cont, in which 'n' is assigned the value 2. Values of variable-instances downstream of n will update in response to the new value of n, within the copy.

Core Data Structure
-------------------

Here's a full program in Trace; we're going to walk through all the pieces.

```
from tracelang import S, E, Context

factorial = Context({
    'fact': Context({
        'result': S(S('n') == 0, {
            True: 1,
            False: S('n')*S('result', S('fact')({'n': S('n') - 1}))
        })
    })({'fact': S('fact')}),
    'result': S('result', S('fact')({'n': S('n')}))
})
```

```
>>> S('result', factorial({'n': 3})).get_value()
6
```

Let's start with the three main pieces: Symbols (S), Expressions (E), and Context. Very briefly:

* A Symbol is a variable-instance. It's defined by a literal (e.g. 'n') and a context in which to resolve that literal (e.g. `{'n': 2}`). Calling `get_value()` on a symbol resolves the literal within its context.
* Expressions are Symbols whose "context" is a python function, so we resolve them by calling the function. They are implicitly created by using operators like +, \*, ==, or function call on Symbols.
* Contexts are basically dicts with a couple extra features: they provide a default context for any symbols within them, and we can "create a copy but with changes" via function-call notation.

More details follow...

Symbols are the starting point. A symbol is just a literal (e.g. 'foo' or 2) and a context mapping the literal to some value (e.g. `{'foo': 'bar'}`; it doesn't have to be a capital-C Context).
By calling `.get_value()` on a symbol, we get the value of the literal from the context:

```
>>> S('foo', {'foo': 'bar', 'baz': 2}).get_value()
'bar'
```

Both the literal and the context can themselves be symbols, in which case we resolve values recursively. For instance:

```
>>> S(S('is_case', {'is_case': True}), {True: 'it is', False: 'it is not'}).get_value()
'it is'
>>> S('foo', S('bar', {'bar': {'foo': 2}})).get_value()
2
```

Conceptually, `S('x', context)` works like the square-bracket accessor `context['x']` - except that we recursively resolve symbols along the way.

In our factorial program, notice that many of the symbols don’t have any explicit context - e.g. `S('n')` or `S('fact')`. **When a symbol’s context is not explicitly passed, the context is set to the (lexically) enclosing Context** - this is one of the two main uses of capital-C Contexts. For instance, the `S('n')`’s in our example all have their context set to one of the two Contexts, depending on which one they appear inside.

Expressions are a special type of Symbol which resolve by calling a python function. If we have a function

```
def square(x):
    return x*x
```

then we could call it via

```
>>> E(square, S('x', {'x': 2})).get_value()
4
```

This resolves all the input Symbols, then calls the python function, as you’d expect. In practice, we don’t usually need to write E() explicitly - **an E will be created automatically via operator overloading on Symbols**:

```
>>> total = S('x', {'x': 2}) + S('y', {'y': 3})
>>> type(total)
E
>>> total.get_value()
5
```

In our factorial program, E’s are implicitly created where we multiply symbols (i.e. `S('n')*S('result', …)`), subtract symbols (i.e. `S('n') - 1`), compare symbols (i.e. `S('n') == 0`), and where we call symbols (i.e. `S('fact')({'n': S('n')})`).

So if they're implicit, why do we need to know all this? Remember, the point of Trace is not merely to "run the code" (i.e. call `.get_value()`), but to query the structure of the computation - and E's are one of the main things which comprise that data structure. We'll see a bit of that in the next section.

Contexts are, conceptually, mostly just dicts. They map things to other things. The two main differences between a Context and an ordinary python dict are:

* If a Symbol doesn’t have an explicit context, its context will be set to the lexically enclosing Context.
* By calling a Context with a dict, we create a modified copy of the context.

In the example program, we create a modified copy in three places:

* `S('fact')({'n': S('n') - 1})` creates a copy of the context called 'fact' for the recursive call, just like the diagram from the previous section.
* `Context({...})({'fact': S('fact')})` is used to pass a pointer to the fact-context inside of the fact-context itself, so copies can be made.
* `S('fact')({'n': S('n')})` is just a pass-through function call.

When actually using the factorial function, we create one more modified copy: `factorial({'n': 3})`. This is the first copy with a value actually assigned to 'n'.

Before we jump back into our factorial example, let’s see how these pieces play together in a simpler example:

```
import operator as op

half_adder = Context({
    'a': 0,
    'b': 1,
    'sum': E(op.xor, [S('a'), S('b')]),
    'carry': E(op.and_, [S('a'), S('b')])
})
```

This example contains two Symbols (other than the E’s). Neither Symbol has an explicit context passed, so both have their context set to the enclosing Context - i.e. the object `half_adder`.
To get the value of 'sum' within `half_adder`, we’d call `S('sum', half_adder).get_value()`. This would look up the values of `S('a', half_adder)` and `S('b', half_adder)`, then pass those values to the python function `op.xor`. We could also evaluate at other inputs by making a modified copy - e.g. `half_adder({'a': 1, 'b': 0})`.

That’s all the core pieces. Let’s take another look at our example program:

```
from tracelang import S, E, Context

factorial = Context({
    'fact': Context({
        'result': S(S('n') == 0, {
            True: 1,
            False: S('n')*S('result', S('fact')({'n': S('n') - 1}))
        })
    })({'fact': S('fact')}),
    'result': S('result', S('fact')({'n': S('n')}))
})
```

```
>>> S('result', factorial({'n': 3})).get_value()
6
```

We have two Contexts. The inner Context is our main function, but we need to use the outer Context in order to get a pointer to the inner context, so that we can make modified copies of it. There are some code patterns which are probably unfamiliar at this point - e.g. `S(S('n') == 0, …)` is used to emulate an if-statement, and we write things like `S('result', fact)` rather than `fact['result']`. But overall, hopefully the underlying structure of this code looks familiar.

But if all we wanted to do was write and run code, we wouldn’t be using Trace in the first place. Let’s probe our program a bit.

Stepping Through the Code
-------------------------

Human programmers sometimes “step through the code”, following the execution step-by-step to better understand what’s going on. IDEs often provide tools to help with this (e.g. breakpoints), but most programming languages don’t offer a nice way to step through the code programmatically. For Trace, this is a simple - and fundamental - use-case.

Here’s how we step through some Trace code. We start with our final output, e.g. `answer = S('result', factorial({'n': 3}))`. Before, we called `answer.get_value()` on this object, but now we won’t. Instead, we’ll access the pieces which went into that Symbol: `answer._literal` and `answer._context`. In general, we can “work backwards” in three possible “directions”:

* If `answer._literal` is a Symbol/Expression, then we can step back through it, and/or we can get its value
* If `answer._context` is a Symbol/Expression, then we can step back through it, and/or we can get its value
* Once we have both values, we can look up `answer._context[answer._literal]` to find the Symbol/Expression/Value defining answer in its context.

In this case, the literal is not a Symbol, but the context is - it’s an Expression object, which performs the modified-copy operation on our factorial context. By calling `answer._context.get_value()`, we get a new Context, which is a copy of factorial with the modification `{'n': 3}` applied. By looking at the Expression object itself, we can see the original factorial context and the `{'n': 3}`: `answer._context._literal` is a list containing `factorial` and `{'n': 3}`.

Let’s go one step further in: we’ll set `last_step = answer._context.get_value()[answer._literal]`, and look at `last_step`. Now we get an object which looks like `S('result', S('fact', <modified copy>)({'n': S('n', <modified copy>)}))`, where the modified copy is the copy of factorial with `{'n': 3}` applied. The outermost symbol once again has a string as literal, and its context is an Expression object performing the modified-copy operation on a Context. Calling `.get_value()` on the Expression `last_step._context` would lead us even further in.
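That manual walk is mechanical enough to automate. Here is a rough sketch of a program which does so - hedged heavily: it assumes only the `_literal`/`_context` attributes described above, the helper name `walk_back` is mine rather than part of Trace, and real code would also need to handle the third “direction” (looking the literal up in the context) plus plain-dict contexts and hardcoded values:

```
from tracelang import S

def walk_back(sym, depth=0, max_depth=10):
    # Print this node, then recurse backwards along the first two "directions".
    if depth > max_depth or not isinstance(sym, S):
        return
    print('  ' * depth + type(sym).__name__ + ': ' + repr(sym._literal)[:60])
    if isinstance(sym._literal, S):   # direction 1: the literal is itself a Symbol/Expression
        walk_back(sym._literal, depth + 1, max_depth)
    if isinstance(sym._context, S):   # direction 2: the context is itself a Symbol/Expression
        walk_back(sym._context, depth + 1, max_depth)

# With the factorial Context defined as above:
# walk_back(S('result', factorial({'n': 3})))
```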
Obviously, none of this is a very convenient way for a *human* to trace through a program’s execution. But if we want to write *programs* which trace through other programs’ execution, it looks much more reasonable - there are only a relatively small number of possibilities to check at every step and a relatively small number of object types to handle, and we have a data structure which lets us walk through the entire program trace.

Definitely Real User Testimony
------------------------------

To wrap up, here are some endorsements from enthusiastic Trace users.

“Trace is an AI-oriented programming language for people who like Lisp, but think it doesn't go far enough.” - Ada Lovelace

“Isn’t this just math?” - Charles Babbage

“Trace combines the syntax of JSON with the semantics of a spreadsheet, but instead of just ending up horrendously hackish, it ends up horrendously abstract *and* horrendously hackish.” - John Von Neumann

“In Trace, source code is truly just data, always.” - Alan Turing
741502de-74a9-4f95-8731-4f80be66519d
trentmkelly/LessWrong-43k
LessWrong
Civility Is Never Neutral [Crossposted from my blog with minor edits, mostly to get rid of culture war stuff.] There are a lot of people I know who say something like “the free market of ideas is really important and we need to seek truth. It’s important to let everyone have their fair say and share the evidence that they possess. So what we’re going to do is not shame anyone for expressing any belief, as long as they follow a few common-sense guidelines about niceness and civility.” I am very sympathetic to this point of view but I don’t think it will ever work. I do not mean to say that it won’t work to personally decide to be as nice and civil as you can. I think that’s a good idea and more people should, and certainly I have met many extraordinarily nice people over the course of my life. The problem is when you make niceness and civility a social requirement, the sort of thing you will be punished for not adhering to. First, it has been a commonplace observation since the day of John Stuart Mill that civility rules are almost always enforced unfairly. If someone is making an ineffectual and stupid argument, you’re unlikely to take much offense at it; in fact, those arguments are usually just funny. But if someone is hitting you at your actual weak points, pushing you hard on exactly the points you find most difficult to answer, then you’re going to get really upset and triggered and you’re probably not going to respond rationally. Incisive questioning of a locally unpopular view is called “being insightful”; the proponent of a locally unpopular view being triggered by it is called “letting your emotions run away with you in a rational discussion” and “blowing up at someone for no reason.” Incisive questioning of a locally popular view is called “uncharitable” and “incredibly rude”; the proponent of a locally popular view being triggered by it is called “a reasonable response to someone else being a jerk.” It all depends on whether the people doing the enforcement find it easier to pu
1186a9d0-7a78-4d97-8199-6c789e290315
trentmkelly/LessWrong-43k
LessWrong
Another Calming Example In reading about parenting I often feel like there's a bit too much theory vs examples, so here's walking through a recent interaction that others later commented went surprisingly well. At a recent Tuesday Family Dinner, one of the kids (~6y) served themself an absurd amount of pasta. They were told to put some of it back, refused, and someone else put it back. They burst into tears and completely fell apart. They were told they either needed to calm down or leave the room, and they left. Lots of angry crying and shouting from the other side of the house. After waiting a bit to give them a chance to calm down some, I went to see if they wanted to talk. I asked, and they said they did (if they hadn't I would have turned around and gone back to the table). I sat with them on the couch with a mindset of providing calm and patient attention, and asked what had happened. They started to explain through their sobs, but I told them that I couldn't understand and asked if they could speak normally. This isn't actually true: I'm generally pretty good at understanding kids, even when they are crying pretty hard. I'm strongly opposed to lying (to kids or anyone) in most circumstances, but this is one place where I do make an exception. I pretend that I can't understand, ask if they can speak normally, present myself as an eager listener, and in response kids reliably pull themselves together. This has a strong calming effect: something about no longer crying seems to filter back into feelings not seeming so overwhelming. In this case, they calmed down some, and explained that they were upset because they had the amount of pasta they wanted and then people took it away. We talked about what they didn't like about that and they told me they were worried the pasta was going to run out and they would still be hungry after dinner. Personally, I think this is very unlikely to be why they fell apart while at the table, but that doesn't actually matter! What's important is that
eaf4e7c3-e369-48d5-996c-d4f44aee5fdc
trentmkelly/LessWrong-43k
LessWrong
Double Illusion of Transparency

Followup to: Explainers Shoot High, Illusion of Transparency

My first true foray into Bayes For Everyone was writing An Intuitive Explanation of Bayesian Reasoning, still one of my most popular works. This is the Intuitive Explanation's origin story.

In December of 2002, I'd been sermonizing in a habitual IRC channel about what seemed to me like a very straightforward idea: how words, like all other useful forms of thought, are secretly a disguised form of Bayesian inference. I thought I was explaining clearly, and yet there was one fellow, it seemed, who didn't get it. This worried me, because this was someone who'd been very enthusiastic about my Bayesian sermons up to that point. He'd gone around telling people that Bayes was "the secret of the universe", a phrase I'd been known to use.

So I went into a private IRC conversation to clear up the sticking point. And he still didn't get it. I took a step back and explained the immediate prerequisites, which I had thought would be obvious - He didn't understand my explanation of the prerequisites. In desperation, I recursed all the way back to Bayes's Theorem, the ultimate foundation stone of - He didn't know how to apply Bayes's Theorem to update the probability that a fruit is a banana, after it is observed to be yellow. He kept mixing up p(b|y) and p(y|b).

It seems like a small thing, I know. It's strange how small things can trigger major life-realizations. Any former TAs among my readers are probably laughing: I hadn't realized, until then, that instructors got misleading feedback. Robin commented yesterday that the best way to aim your explanations is feedback from the intended audience, "an advantage teachers often have". But what if self-anchoring also causes you to overestimate how much understanding appears in your feedback?

I fell prey to a double illusion of transparency. First, I assumed that my words meant what I intended them to mean - that my listeners heard my intentions as
61983a36-bbf2-436d-9668-6caa4f5be77c
StampyAI/alignment-research-dataset/youtube
Youtube Transcripts
257. Where I agree and Disagree with Eliezer 3

Hello, and welcome to session 257 of the AISafety.com Reading Group. Tonight we'll be finishing up "Where I Agree and Disagree with Eliezer" by Paul Christiano. Paul Christiano is still a researcher at the Alignment Research Center, and we are starting from point 20.

Agreement number 20 is about the extent to which words reflect our actual deep thoughts: the claim that human thought partially exposes only a partially scrutable outer surface layer - words only trace our real thoughts. This is, to me, mysterious in a deep sense; I don't really know. But Christiano isn't pushing back very strongly against the mysteriousness of this point. He's mainly saying: sure, if we just optimize end-to-end on outcomes, that seems like something that will not relate to human thoughts very much, but just using human thoughts directly is also a bad strategy, and there are a number of strategies in between, like AI safety via debate and several others. He even calls them "plenty of techniques". I haven't looked very deeply into that area of alignment research, but I think Paul Christiano is being very generous by calling it plenty of techniques - as far as I can see, they are all really undeveloped, and I don't think we should put a great amount of stock in any of them.

We have seen large language models apparently being quite useful right now, and it seems very likely to Paul Christiano that a similar kind of language-based model will be competitive at the time of transformative AI. My first thought on this is that it is explicitly only at the level of words, and not at the level of thoughts that Eliezer Yudkowsky is talking about. Yudkowsky has a different complaint: he believes it's hard, and probably impossible, to make a powerful system through this kind of imitation of human words.

Here I will return to one of my key points from last session: Paul Christiano and Eliezer Yudkowsky are talking about two different things. When Paul Christiano talks about economically transformative AI, he's thinking about something that can, for example, cure aging or do space colonization. What Eliezer Yudkowsky is explicitly talking about is something capable of carrying out pivotal acts. And in a very real sense, it may be a lot easier to cure aging than to take over the world.

Disjunctiveness - having a number of different, mostly independent solutions - is a key part of Eliezer Yudkowsky's AI doom scenarios, because there are so many different things that can go wrong. The rejoinder to this is that there are also so many ways it can go right, and we only need one alignment strategy to pay off in order for us to basically be safe. The way I would try to synthesize these two points is to say that we need there to exist an alignment strategy - call it A - such that for all the lethalities, the alignment strategy A is capable of overcoming that lethality. So there are two levels of disjunctiveness: disjunctiveness in the alignment strategies, and disjunctiveness in the lethalities.

There is a problem here, in that we have a long list of lethalities from Eliezer Yudkowsky, and it is probably not exhaustive. And if we take an alignment strategy and try to use, say, something like Eliciting Latent Knowledge, then that
is only one strategy that we are using, and using Eliciting Latent Knowledge probably prevents us from using a number of other strategies. So, to some extent at least, they are exclusive, and we only get one alignment strategy deployed - though a number of alignment strategies can coexist, so it's not total; we don't strictly have to choose only one.

Paul Christiano argues that the disjunctiveness in alignment strategies may involve a greater degree of independence, since we know that there are a number of humans, and we don't know how many AIs there will be or how many AIs are possible. I think counting the number of humans and counting the number of AIs is the wrong way to look at this. There are reasons to expect that the disjunctiveness in alignment strategies is limited: right now the number of people working on this is low, probably in the low hundreds, and a lot of these people are working on very closely related alignment strategies. A lot of these people are in Berkeley; a lot of these people are rationalists, effective altruists, or utilitarians. There is not that much diversity among alignment researchers. Another reason might be that they are just searching in a part of the solution space where there are no solutions - maybe that's so. And again, we probably cannot implement all the alignment strategies that we are thinking about. Christiano also has an argument about where this disjunctiveness comes in; I don't think it matters very much when the disjunctiveness happens.

How alien will an AGI be? Eliezer Yudkowsky's point is that it might indeed be very, very alien, and not use the same kind of concepts that we do. Paul Christiano, in one of his recurring complaints, says that Eliezer Yudkowsky is very, very overconfident in this, and Christiano takes a more - I wouldn't say anthropomorphic, but more human-inspired - view of the AI's capabilities. For instance, it might understand the same things as humans, just slightly better.

I think again we are talking past each other, in that something that understands the same things as humans but slightly better could be very economically transformative - it could obviously do all menial tasks - but could it do a pivotal act? My strong expectation is that being slightly smarter than an average human is very far from sufficient to take over the world or to solve alignment.

Another way that AI could turn out to be human-like would be something like a stupid human that just thinks faster - or something extremely human-inspired, because that's where all the training data is, basically. I also disagree that these two things are likely to bring us to true superintelligence. Human imitation won't get us directly to superintelligence; it might be a strong step along the way, but by definition you can't imitate your way to something greater.

So what kind of pivotal acts can a weak AGI do? We might see such an AI doing science, and that might proceed from concepts similar to ours; in that case we could see the AI creating experiments, formulating hypotheses, and deciding among the hypotheses in a scientific process. But while it would be nice to see the AI doing experiments, that doesn't actually tell us very much about what's going on under the hood if we don't understand the actual concepts being used. If we can just see that the AI is doing some experiments, but we
don't know the concepts it's trying to elucidate, then that won't help us very much - or may not help us at all.

The idea here is that if the AI is fast and cheap, that could be enough to be transformative. In particular, I feel that cheap AI is unlikely to do very much toward solving the alignment problem or taking over the world; I don't think that is a very likely scenario. Christiano obviously disagrees, and he even makes a very controversial statement: that he could see fast and cheap AI being so much better at alignment research that human contributions become basically obsolete. Rohin Shah, in the comments, says that's a pretty high bar, and Paul says "yeah, okay" and retracts it. This is really frustrating to look at from the outside, because the exchange leaves you wondering: why was Paul Christiano able to change his mind based on, essentially, three words from Rohin Shah? There's a lot of implicit understanding here. I would really like to know what underlying models Paul Christiano and Rohin Shah have about how this fast, cheap, weak AGI would solve alignment - because from my point of view, it looks very, very unlikely that you can solve alignment using these kinds of techniques.

Disagreement 23 is about how we reason about AGIs. This is probably inspired by Eliezer Yudkowsky saying that humans can't participate in coordination schemes between superintelligences, one of the reasons being that we can't reason about the code of superintelligences. Christiano answers that the superintelligences may not be able to do that either, because they are very likely to be messy. I would hold this out as an example of - not Eliezer Yudkowsky being overconfident, but Paul Christiano saying things about the structure of future superintelligences that are just really, really hard to know. We strongly do not know this.

Paul Christiano makes another strong claim here, in my opinion, about what tools AIs will use for reasoning about each other. First: black-box observation of each other's behavior. Second: the same kinds of tools we use now, like the interpretability tools we have today. Third: reasoning similar to what we do right now. And no other way, basically - those are the three ways that AIs will reason about other AIs when we get to something like superintelligence. I think this is, again, really overconfident, and I would also say it is at odds with Paul Christiano's earlier hope that AIs would be able to solve the alignment problem. If AIs are not able to reason about each other in better, stronger ways than we can, then why do we have any particular hope for them solving the alignment problem? I still have hope for AI helping us solve the alignment problem, because I do believe that superintelligences are qualitatively smarter than us - but if you don't believe that, then why would they be able to contribute to solving the alignment problem?

The example that Eliezer gives is reasoning over probability distributions of each other's code using logical decision theory. This is basically not how we are reasoning about GPT-3 right now. I suppose, to some weak extent, humans are in fact capable of doing this, but
it feels really strongly like something a superintelligence could do much better than us, and I think that's an example of something that goes beyond Paul Christiano's three options. Because these abilities seem to be just narrowly out of our grasp, it seems very likely that a stronger AI would be able to do this.

Another scheme that Yudkowsky criticizes is having multiple AIs - it's sometimes called multiple Godzillas - trying to keep each other in check. In Eliezer Yudkowsky's model, that seems like something that will fail once they become sufficiently capable. Paul Christiano's criticism is that this is something that will be important in the long run, but not really relevant in the short term. I think there's an easy way to reconcile this: if you have short timelines, then the difference between the short term and the long term might be a very, very short time. So that could be part of it. I also wonder whether Paul Christiano is imagining a period where we have AIs that can commit pivotal acts but are not superintelligent, because in that case, schemes for playing them off against each other could to some extent work - maybe they could solve the alignment problem without being superintelligent. It's unclear.

Another problem with these schemes, which Yudkowsky points out, is that if you have a designer and a design-checker, and you try to incentivize them to work against each other, they will actually be incentivized to collaborate against us. Paul Christiano has low thoughts about this: he calls it an unrealistic, incentive-based anti-cooperation scheme proposed by "random people". So obviously I went out to try to find some people who did propose this kind of thing - and here is the research agenda from the Center for Existential Risk, which goes into substantial detail about precisely this. So it's not just proposed by random people. I would actually give one-to-three odds that if you went through everything Paul Christiano himself has ever written, you would find examples of incentive-based anti-cooperation schemes. I haven't actually gone through everything he has written, but he has written so much, and this seems adjacent to a lot of it, so I would expect a substantial chance that he has written about something like this - though I probably shouldn't say that without actually checking.

The alternative that Paul Christiano points to is selecting for systems that play games competitively, instead of trying to get something that merely reacts to incentives which are possibly misaligned. There are examples of ways to do this. I think incentive-based methods have a higher probability of working in the limit of a very powerful AI, but this is certainly something you could do instead, and it might in theory work.

Yudkowsky is accused of equivocating between two statements: first, that AI systems will cooperate, and second, that the verifiable activities you could use gradient descent to select for won't function appropriately as checks and balances. It is possible indeed that Eliezer Yudkowsky is equivocating between these two, but I did in fact go through his entire list of lethalities, and I can reasonably confidently say that he does not
equivocate in this way in that document. So I would like Paul Christiano to come up with an explicit example of where this happens, because I'm not seeing it.

Manipulation by superintelligence is something that Eliezer Yudkowsky believes is likely to happen - or at least will be possible. The lethality where this is most prevalent is his statement that a superintelligence could make a psychological manipulation attack that we could not recognize. In Christiano's summary this becomes: if AIs can do pivotal acts, then they can also manipulate humans very well, so there's no point in having debates between superintelligences or trying to play adversarial games against them once they are capable of doing pivotal acts. This disagreement doesn't strongly relate to what Eliezer Yudkowsky is actually talking about. And also, I feel that if you have an AI capable of doing pivotal acts, and multiple different independent superintelligences that are capable of doing pivotal acts but haven't done them, then my timelines are very short - so short, in fact, that I don't expect we will have multiple superintelligences capable of doing pivotal acts.

Next: a mind with a profile of abilities. You have something like Bostrom's six strategically relevant cognitive tasks - research and development being one, and persuasion/social manipulation being another. Which one is harder to reach? To some extent, all superhuman levels are equally hard in some sense, so that seems like a reasonable prior: that getting to superintelligence along each of these dimensions is equally hard. I can see it as a prior, but we also have some evidence. In particular, persuasion is something that is reasonably easy to get feedback on - I imagine Facebook could run some simple A/B-testing experiments and actually get reasonable data about what kinds of things persuade people and what do not. You could have an AI on YouTube seeing what drives engagement. Those are probably things that are already happening. In contrast, the research and development Paul Christiano talks about - space colonization or curing aging - is not really the research and development we care about. The research and development we care about is alignment research, and that is pre-paradigmatic and seems a lot harder to get good feedback on, compared to something like persuasion. So from that point of view, I would expect it to be easier to get superhuman persuasion than superhuman alignment research.

This is probably dominated by how much effort we put into each. If we truly want the AI to be superhuman at alignment research, and we put a lot of resources toward that goal, then maybe we could obtain it. Do we actually, in practice, have that as our goal? My understanding of where AI resources are being spent right now is that a lot goes to persuasion, or persuasion-adjacent work such as delivering ads - I believe Google does a lot of work on this - or driving engagement on YouTube, and ads on Facebook, and things like that. I believe there are far more actual resources being put toward better Google ads than toward alignment, and this influences all the arguments. In Christiano's view, we are likely to put more resources
toward alignment, and if we do that, then the training is done on alignment, which will push the AI toward being better at alignment; the tools and structures are designed to facilitate research and development, and there are a number of AIs all collaborating on advancing research and development, whereas manipulation is much more disjoint. Unfortunately, all these arguments work precisely in reverse if we instead assume that more resources are put toward manipulation: if the AIs are primarily trained for manipulation, the tools and structures are designed to facilitate manipulation and ads, many AIs are all working on that kind of goal, and alignment research is the disjoint one - in that case, we should strongly expect to see superhuman persuasion before we see superhuman alignment research.

So what is actually going to be easier? Paul Christiano has a very weak prior that humans have, to some extent, evolved to be good at persuasion and social manipulation, and not really at research and development, which seems like a byproduct of general intelligence. I think that is very likely. I would point out that there might be a very substantial overlap between alignment and manipulation - you could argue that some parts of alignment are indeed counter-manipulation - so alignment seems not that far out of the distribution, possibly. But I am mostly confused about this, and Paul Christiano also has only a weak intuition about it.

The 26th disagreement is about plans. Here in the background you can see the Schlieffen Plan, an example of a real-world plan: Germany's plan for the invasion of France and Belgium at the start of the First World War. Notably, this was an insufficient plan, in that the Schlieffen Plan was not possible to carry out. Do we have something like this for alignment? Eliezer Yudkowsky strongly says no: we have no Schlieffen Plan or anything like it, and what you're seeing right now when you look around is not what surviving worlds look like.

Eliezer Yudkowsky's conceptualization of plans is not the same as Paul Christiano's. Christiano thinks you don't really have this kind of plan in general, and I think that is probably true in the Christiano-verse - in his worldview, a lot of alignment success happens more or less by default, and if alignment happens by default, then plans aren't really that necessary. On the other hand, if you expect that we are in, if not the worst possible world, then a rather tough world where alignment is truly a very difficult problem, then to me having some kind of plan seems really helpful.

Paul Christiano pushes back on whether Yudkowsky has a good conceptualization of what actual plans are, and again I get the feeling that they are talking past each other. Christiano is saying that Eliezer Yudkowsky doesn't have a good image of what a plan actually is - but Yudkowsky is not saying that the current plan is bad; he's saying that there is no plan, and even if you don't know very much about plans, it is quite possible to recognize that there is no such plan. I would also have liked Christiano to be more concrete and precise about what exactly is wrong with Eliezer's conceptualization of a plan, since he doesn't go into great detail.

Why should we defer to Eliezer on what a plan looks like? Well, one obvious reason is that he has explicitly written down a plan - for dath ilan - that seems like it could work. Not in great detail, but I think
that's more than most. I don't think we should defer too strongly, in the sense that if we had a plan that looked like the Schlieffen Plan, Eliezer's counterarguments would be strongly insufficient to criticize it - but we have nothing that remotely looks like a real plan.

So how much value does the document "AGI Ruin: A List of Lethalities" actually provide? To Eliezer Yudkowsky it is very valuable, and a core alignment researcher should have spontaneously written this document. Paul Christiano, not surprisingly, disagrees: like most people, he insists it is actually aimed at emotional impact - rhetoric and pedagogy. I disagree. I think it has had a strong rhetorical impact and has changed a lot of opinions, but mostly because the document is extremely blunt, not because of the actual contents. I don't think Paul Christiano recognizes that the lack of bluntness in previous communication is something that has held people back substantially, and the rhetorical impact may in fact be just a byproduct in some sense.

Is this something that will actually be helpful toward solving the alignment problem? Paul Christiano says no, but the upvotes on LessWrong seem to indicate some disagreement, and I also strongly disagree: I feel that "AGI Ruin: A List of Lethalities" is a very, very important document for solving the alignment problem. As an example, Nick Bostrom's book Superintelligence contains almost none of these points, and I think Nick Bostrom should basically rewrite the book to take these kinds of lethalities into account - they are really, really important considerations.

Paul Christiano's counterargument is that Eliezer calls the ideas he focused on important, but that's not an objective fact about what is important. I think that's a fully general counterargument: you can counter anybody who asserts "this is important" by saying it's not an objective fact, just their opinion - because obviously everything anybody writes is just their opinion on what is actually important.

There are other difficulties, and the contribution of the list of lethalities, as Christiano sees it, is basically just collecting the points. I think this should not be underestimated - it is actually really important, almost crucial, work to summarize and aggregate existing points. These things have been pointed out in many other places; we talked about that last time. But Christiano goes a bit further here and says that more important difficulties have been argued elsewhere. This is also something that Evan Hubinger, for instance, has argued in the comments to Eliezer's original post. Eliezer replied that his list wasn't complete and asked whether there are other foreseeable difficulties, and Evan Hubinger came up with five other difficulties - I won't go through them here. Eliezer Yudkowsky of course sees this as important work, and I strongly agree: having an even better version of the list of lethalities would be very valuable. When I actually go through Evan Hubinger's points, though, a lot of them are impossibility results - it is impossible in theory to ensure that an AI is aligned in some particular way. I think that's important in its own way, but it's not at all clear to me that
this kind of impossibility result is going to be all that relevant. So I think stating outright that these are more important difficulties is overstating it.

Given that this is written by Paul Christiano, he's probably thinking in particular of Eliciting Latent Knowledge, which lists ten or so similar difficulties. I went through them, and there is substantial overlap with difficulties one through seven, which are written in more detail. To what extent they overlap - I think I disagree with Paul Christiano about that. As for hearing that they are more important because they are "more relevant to core problems with realistic alignment strategies": this is again a matter of subjective taste, in that Paul Christiano quite naturally feels that his personal best guess for how to solve alignment is the most realistic - that's again almost by definition. A lot of people are working on things that are not Eliciting Latent Knowledge, and for some of the difficulties - the part about regularization, for instance - expecting that to be a major component of actual alignment is, in my opinion, quite optimistic.

Yudkowsky somewhat weakly states that he would have preferred Eliciting Latent Knowledge to state all the problems up front, together. Whether it would have been nice to write it that way or not is somewhat irrelevant, I think. As I've said before, Eliezer Yudkowsky hasn't engaged sufficiently with Eliciting Latent Knowledge - this might be just because he skimmed the report - and I think it's sufficiently valuable that he should read it in detail.

So, Paul's overall take: he obviously has a lot of agreements with Eliezer, and these are good considerations backed up by pretty clear arguments, but they are too confident - something he has said in many places - and a big problem is that Eliezer's reasoning is more a priori than reasoning from evidence. The problem with this kind of thing is that we are unlikely to actually resolve the disagreements: even if there were five more posts back and forth between Eliezer and Paul Christiano, we would be unlikely to see any kind of resolution on the major questions, or even things they can bet on. I think that is true, and I also think it is very, very sad that we can't actually use evidence for this.

Paul would like Eliezer to write his arguments down in more detail - that's one of his core complaints - and my core answer, again, as I've said, is that there are in fact links on Amazon where Eliezer has written this down in a surprising amount of detail.

It's also stated that people with Eliezer's views have often made engagement hard. That's of course a bit sad, because I am close to Eliezer in this regard, and I hope we haven't made it hard on purpose - I think it's just that engagement is really, really hard, because it is really, really hard to get any kind of meaningful traction on these points.

Predictions are one thing that would help us, and that we haven't really been able to get going. Paul and Eliezer have failed to converge on any kind of meaningful bets, even though Paul Christiano seems strongly more eager and open toward betting. The reply from Eliezer Yudkowsky is basically that his world model is not particularly constrained in what happens until we have some kind of takeoff - he is mostly analyzing what happens once we, to use the rocket analogy, exit the layer of
atmospheric turbulence. And that, I feel, is the central meta-lethality in AGI Ruin: the fact that we have no strong epistemic way to actually make progress on figuring out the status of all these claims about AGI. We don't really know how to make progress on this, we probably cannot make progress on this, and that is the meta-lethality that is ultimately going to doom us.

That is all for tonight. Thank you, and see you next week.
c3b05143-5599-4713-872a-8b96da69fb70
trentmkelly/LessWrong-43k
LessWrong
Rationality Dojo Last night, here in Portland (OR), some friends and I got together to try to start Rationality Dojo. We talked about it for a while and came up with exactly 4 exercises that we could readily practice: 1. Play Paranoid Debating 2. Play the AI-Box experiment 3. Read Harry Potter and the Methods of Rationality 4. Write fanfiction in the style of #3 We also had a whole bunch of semi-formed ideas about selecting a target (happiness, health) and optimizing it a month at a time. Starting a dojo, in a time before organized martial arts, was surely incredibly difficult. I hope we can accrete exercises rather than require a single sensei to invent the majority of the discipline. So I've added a category to the wiki, and I'm asking here. Do you have ideas or refinements for exercises to fit within rationality dojo?
f6d87b88-8e02-4acc-8e21-91b0ec16b11d
StampyAI/alignment-research-dataset/arxiv
Arxiv
Quantifying Local Specialization in Deep Neural Networks

1 Introduction
---------------

Modularity is a common property of complex systems, both natural and artificial (Clune et al., [2013](#bib.bib53 "The evolutionary origins of modularity"); Baldwin and Clark, [2000](#bib.bib54 "Design rules: the power of modularity"); Booch et al., [2007](#bib.bib55 "Object-oriented analysis and design with applications")). The ability for a system to separate different sub-tasks into distinct architectural components has benefits such as intelligibility and adaptivity. Therefore, it would be valuable to determine the extent to which neural networks are *locally specialized*: that is, how much their functionality can be abstracted into comprehensible sub-tasks, each localized to different groups of neurons. Existing work (Filan et al., [2021](#bib.bib51 "Clusterability in neural networks")) has produced methods of partitioning a network into connected groups of neurons. In this paper, we focus on developing quantifiable proxies for local specialization. We then apply them, together with variants of these partitioning methods, to a variety of image classification networks.

There exists a body of research for developing networks which either have distinct architectural building blocks (Alet et al., [2018](#bib.bib32 "Modular meta-learning"); Parascandolo et al., [2018](#bib.bib33 "Learning independent causal mechanisms"); Goyal et al., [2019](#bib.bib23 "Recurrent independent mechanisms")) or are trained in a way that promotes modularity via regularization or parameter isolation (Kirsch et al., [2018](#bib.bib27 "Modular networks: learning to decompose neural computation"); De Lange et al., [2019](#bib.bib28 "A continual learning survey: defying forgetting in classification tasks"); Filan et al., [2021](#bib.bib51 "Clusterability in neural networks")). Yet in machine learning, it is more common to encounter networks whose architecture and training are not designed to separate the computation of sub-tasks. For example, in computer vision, networks are generally trained end-to-end with all of the filters in one layer connected to all filters in the next. Do such networks nonetheless exhibit local specialization? There is some evidence for this. For example, by methodical manual investigation, Cammarata et al. ([2020](#bib.bib35 "Thread: circuits")) discovered sub-networks which perform human-explainable sub-tasks such as car detection via neurons which detect different car parts. More scalably detecting local specialization in networks would help us to extend our understanding of their learning dynamics and to expand our interpretability toolbox by suggesting an additional level of abstraction beyond single-neuron methods.

In this paper, we systematically analyze the extent to which networks which are not explicitly trained to be modular nonetheless exhibit local specialization. First, this requires a method for breaking down a network’s computational graph into groups of neurons. For this, we use spectral clustering on a graph representation of the network, using an extension of the methods of Filan et al. ([2021](#bib.bib51 "Clusterability in neural networks")). Second, this requires a scalable method for approximating the degree to which a partitioning shows local specialization. We do this by applying interpretability tools to clusters of neurons as a way of quantifying proxies for local specialization.
Proxies for local specialization: Our definition of local specialization requires human-comprehensible sub-tasks relevant to the overall task to be localized in particular subsets of neurons. However, directly determining this would require a human in the loop, making it difficult to scale such a method. So: how could measuring local specialization be automated? Consider an idealized prototype of a highly modular network that has subsets of neurons performing sub-tasks in which (1) each sub-task is necessary for high performance on the overall task, (2) each sub-task is implemented by a single subset only, and (3) each subset executes only a single sub-task. The combination of (1) and (2) suggests that the removal of one of the subsets from the network would harm performance because the network lacks the implementation of a necessary sub-task that is localized to the subset. We say that such a subset is *important*. Next, given that neurons are frequently understood as feature detectors, (3) suggests that the neurons in a subset should tend to be strongly activated by inputs which contain features relevant to the subset’s sub-task. We say that such a subset is *coherent*. Figure [4](#A1.F4 "Figure 4 ‣ A.5 Correlation-Based Visualization ‣ Appendix A Appendix ‣ Quantifying Local Specialization in Deep Neural Networks") of Appendix [A.5](#A1.SS5 "A.5 Correlation-Based Visualization ‣ Appendix A Appendix ‣ Quantifying Local Specialization in Deep Neural Networks") provides an illustrative example of coherence in networks trained on a task that lends itself to parallel processing of sub-tasks (though this is not one of our key experiments presented in Section [4](#S4 "4 Experiments ‣ Quantifying Local Specialization in Deep Neural Networks")). Measuring importance and coherence thus offers a sense of the degree to which a partitioning of a network’s neurons contains subsets that meet these prototypical conditions, despite not perfectly satisfying them.

Results and contributions: To measure these proxies, we take partitionings of neurons in a network generated by spectral clustering, and analyze them using methods from the interpretability literature that have conventionally been applied to single neurons. Our key results are shown in Tables [1](#S4.T1 "Table 1 ‣ 4 Experiments ‣ Quantifying Local Specialization in Deep Neural Networks") and [2](#S4.T2 "Table 2 ‣ 4.1 Lesion Experiments ‣ 4 Experiments ‣ Quantifying Local Specialization in Deep Neural Networks"). We find that these partitions have groups of neurons that are disproportionately likely to be important compared to random groups of neurons, but that are not always more important than random groups of neurons on average. We also find that the groups of neurons in the partitionings are reliably more coherent than random ones, though only with respect to features other than class label. By showing that our partitioning methods are able to reveal local specialization, these results suggest that they can be used to screen for interesting, abstractable subsets of units and better understand deep networks. Our key contributions are threefold:

1. Introducing two proxies, importance and coherence, to assess whether a partitioning of a network shows local specialization and identify which subsets of neurons are the most responsible.
2. Quantifying these proxies by applying single-neuron interpretability methods on subsets of neurons in an automated fashion.
3. Applying our methods to the partitions produced by spectral clustering on a range of neural networks and finding evidence of local specialization captured by these partitions.

Code is available at <https://github.com/thestephencasper/local_specialization>.

2 Related Work
---------------

The work most closely related to ours is Filan et al. ([2021](#bib.bib51 "Clusterability in neural networks")), who also use spectral clustering to establish that deep networks are often clusterable and investigate what factors influence clusterability. They also introduce two methods for regularization for clusterability among clusters of neurons. We extend their work by bridging graphical clusterability and local specialization. This line of work inherits insights from network science involving clustering in general (Girvan and Newman, [2002](#bib.bib30 "Community structure in social and biological networks"); Newman and Girvan, [2004](#bib.bib31 "Finding and evaluating community structure in networks")), and spectral clustering (Shi and Malik, [2000](#bib.bib17 "Normalized cuts and image segmentation"); von Luxburg, [2007](#bib.bib16 "A tutorial on spectral clustering")) in particular.

In our experiments we combine clustering with interpretability tools to measure importance and coherence. We use neural lesions (Zhou et al., [2018](#bib.bib18 "Revisiting the importance of individual units in CNNs via ablation")) and feature visualization (Olah et al., [2017](#bib.bib13 "Feature visualization"); Watanabe, [2019](#bib.bib10 "Interpreting layered neural networks via hierarchical modular representation")), but in a similar way, other interpretability techniques including analysis of selectivity (Morcos et al., [2018](#bib.bib52 "On the importance of single directions for generalization"); Madan et al., [2020](#bib.bib20 "On the capability of neural networks to generalize to unseen category-pose combinations")), network “dissection” (Bau et al., [2017](#bib.bib22 "Network dissection: quantifying interpretability of deep visual representations"); Mu and Andreas, [2020](#bib.bib21 "Compositional explanations of neurons")), earth-mover distance (Testolin et al., [2020](#bib.bib49 "Deep learning systems as complex networks")), or intersection information (Panzeri et al., [2017](#bib.bib50 "Cracking the neural code for sensory perception by combining statistics, intervention, and behavior")) could also be combined with clustering-based partitionings under a similar framework. Relatedly, Cammarata et al. ([2020](#bib.bib35 "Thread: circuits")) demonstrate that feature visualization and analysis of weights can be used to identify groups of neurons whose functionality is human-interpretable. This work adds to a body of research focused on modularity and compositionality in neural systems either at the neuron level (You et al., [2020](#bib.bib48 "Graph structure of neural networks"); Mu and Andreas, [2020](#bib.bib21 "Compositional explanations of neurons"); Voss et al., [2021](#bib.bib59 "Branch specialization")) or at the subnetwork level (Lake et al., [2015](#bib.bib25 "Human-level concept learning through probabilistic program induction"), [2017](#bib.bib24 "Building machines that learn and think like people"); Csordás et al., [2021](#bib.bib26 "Are neural nets modular? inspecting their functionality through differentiable weight masks"); Udrescu et al., [2020](#bib.bib61 "AI feynman 2.0: pareto-optimal symbolic regression exploiting graph modularity")).
There also exist techniques for developing more modular networks which either have an explicitly modular architecture (Alet et al., [2018](#bib.bib32 "Modular meta-learning"); Parascandolo et al., [2018](#bib.bib33 "Learning independent causal mechanisms"); Goyal et al., [2019](#bib.bib23 "Recurrent independent mechanisms")) or are trained in a way that promotes modularity via regularization or parameter isolation (Kirsch et al., [2018](#bib.bib27 "Modular networks: learning to decompose neural computation"); De Lange et al., [2019](#bib.bib28 "A continual learning survey: defying forgetting in classification tasks")).

3 Methods
----------

![](https://media.arxiv-vanity.com/render-output/7718834/figures/schematic_new.png)

Figure 1: Our procedural pipeline. The first three steps generate a partitioning of the network into “subclusters” which we analyze using (4a) lesion and (4b) feature visualization methods to measure importance and coherence compared to random subclusters. Finally (not shown in the pipeline), we aggregate results to produce Fisher statistics, p values, and effect measures. These final steps are shown in Figure [3](#A1.F3 "Figure 3 ‣ A.3 Pipeline - Second Part ‣ Appendix A Appendix ‣ Quantifying Local Specialization in Deep Neural Networks").

To evaluate local specialization, our procedural pipeline is as follows. (1) We begin with a trained neural network; (2) construct a graph from it, treating each neuron as a node; (3) perform spectral clustering on the graph to obtain a partitioning or “clustering” of neurons which we then further divide by layer to obtain a “subclustering”; (4) use our proxies for local specialization to analyze the subclusters, operationalized by lesioning neurons or feature visualization and comparing them to random subclusters; and (5) aggregate results across the network to obtain a p value, effect measure, and a quantity which we refer to as the *Fisher statistic*. This pipeline is outlined in Figure [1](#S3.F1 "Figure 1 ‣ 3 Methods ‣ Quantifying Local Specialization in Deep Neural Networks") and Figure [3](#A1.F3 "Figure 3 ‣ A.3 Pipeline - Second Part ‣ Appendix A Appendix ‣ Quantifying Local Specialization in Deep Neural Networks"), and each step is explained in detail below.

### 3.1 Generating Partitions with Spectral Clustering

To partition a network into clusters, we use an approach based on Filan et al. ([2021](#bib.bib51 "Clusterability in neural networks")) which consists of two steps: *“graphification”* - transforming the network into an undirected, edge-weighted graph; and *clustering* - obtaining a partitioning via spectral clustering.

Graphification: To perform spectral clustering, a network must be represented as an undirected graph with non-negative edges. For MLPs (multilayer perceptrons), each graph node corresponds to a neuron in the network, including input and output neurons. For CNNs (convolutional neural networks), a node corresponds to a single channel (which we also refer to as a “neuron”) in a convolutional layer. (Footnote: If a unit is used as input to multiple layers, as happens in ResNets with skip connections, we consider these inputs to be separate neurons. For CNNs, we ignore input, output, and fully-connected layers when clustering.) For graphification, we test two ways of assigning adjacency edges between neurons: with weights and with correlations.
For weight-based clustering with dense layers, if two neurons have a weight connecting them in the network, their corresponding vertices are connected by an edge with weight equal to the absolute value of the network’s weight between the neurons. For convolutional channels, we connect them by an edge with weight equal to the L1 norm of the corresponding 2D kernel slice. If layers are connected but with a batch-normalization layer in between, we mimic the scaling performed by the batch norm operation by multiplying weights by γ/√(σ² + ε), where γ is the scaling factor, σ is the moving standard deviation, and ε is a small constant. Notably, this method of constructing the graph requires no dataset or runtime analysis of the network.

While graphification via weights only results in connections between neurons in adjacent layers, doing so with correlations creates denser graphs. With this method, we connect the nodes for two neurons with their squared Spearman correlation across a validation set. Spearman correlation gives the Pearson (linear) correlation between ranks and reflects how well two sets of data can be related by a monotonically increasing function. (We use Spearman rather than Pearson correlation because networks are nonlinear, and there is no particular reason to expect associations between arbitrary neurons to be linear.) Rather than using neurons’ post-ReLU outputs to calculate these correlations, we use their pre-ReLU activations with the goal of extracting richer data from them. (Although all negative values are mapped to zero by a ReLU, we expect the degree of negativity to carry information about the possible presence, or lack thereof, of features which the neuron was meant to detect, especially for networks trained with dropout.) Again, for convolutional channels, we take the L1 norm of activations before calculating Spearman correlations. Because we take absolute-valued weights or squared correlations to construct edges between neurons during graphification, our analysis does not discriminate between positive and negative associations between neurons.

In addition to graphification via weights versus activations, we also test two scopes with which to perform clustering: network-wide and layer-wise. For network-wide clustering, we cluster on one graph for the network as a whole. For layer-wise clustering, we produce a partitioning for each layer l individually by clustering on the graph of connections between l and the layers adjacent to l. Ultimately, we run 4 sets of experiments on each network by clustering {weights, activations} × {network-wide, layer-wise}.

Spectral (sub)clustering: We perform normalized spectral clustering (Shi and Malik, [2000](#bib.bib17 "Normalized cuts and image segmentation")) on the resulting graphs to obtain a partition of the neurons into clusters. For all sub-ImageNet experiments, we set the number of clusters to k=16, while for ImageNet-scale networks, we use k=32. In Appendix [A.7](#A1.SS7 "A.7 Robustness to the Choice of Cluster Number ‣ Appendix A Appendix ‣ Quantifying Local Specialization in Deep Neural Networks"), we reproduce a subset of sub-ImageNet experiments with k ∈ {8, 12}, showing that results are robust to alternate choices of k. Refer to Appendix [A.1](#A1.SS1 "A.1 Spectral Clustering Algorithm ‣ Appendix A Appendix ‣ Quantifying Local Specialization in Deep Neural Networks") for a complete description of the spectral clustering algorithm.
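To make the graphification and clustering steps concrete, here is a minimal sketch for the weight-based, network-wide case on an MLP. The `weights` list of per-layer weight matrices is a hypothetical input, and scikit-learn's `SpectralClustering` stands in for the exact normalized spectral clustering variant detailed in Appendix A.1, so details may differ from the paper's implementation.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

def graphify_mlp(weights):
    """Build a symmetric adjacency matrix over all neurons from |weights|.

    weights[l][i, j] is the weight from neuron i in layer l to neuron j
    in layer l + 1.
    """
    sizes = [w.shape[0] for w in weights] + [weights[-1].shape[1]]
    offsets = np.cumsum([0] + sizes)          # index of each layer's first neuron
    adj = np.zeros((offsets[-1], offsets[-1]))
    for l, w in enumerate(weights):
        a, b = offsets[l], offsets[l + 1]
        block = np.abs(w)                     # absolute weights as edge weights
        adj[a:b, b:b + w.shape[1]] = block
        adj[b:b + w.shape[1], a:b] = block.T  # undirected graph
    return adj, offsets

def cluster_neurons(weights, k=16, seed=0):
    adj, offsets = graphify_mlp(weights)
    labels = SpectralClustering(
        n_clusters=k, affinity="precomputed", random_state=seed
    ).fit_predict(adj)
    # Split each cluster by layer to obtain subclusters.
    layer_of = np.searchsorted(offsets, np.arange(len(labels)), side="right") - 1
    subclusters = {}
    for neuron, (c, l) in enumerate(zip(labels, layer_of)):
        subclusters.setdefault((c, l), []).append(neuron)
    return subclusters
```

The returned dictionary maps (cluster, layer) pairs to lists of neuron indices: these are the subclusters analyzed in the rest of the pipeline.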
Layers at different depths of a network tend to develop different representations. Therefore, for network-wide clustering, in which clusters of neurons span more than one layer, we analyze clusters one layer at a time. We call these sets of neurons within the same cluster and layer *subclusters*. To ensure comparability between these clusterings when performing layer-wise clustering, we set the number of clusters per layer to be the same as the number produced in that layer with network-wide clustering. In our experiments, we compare these subclusters to other random sets of neurons of the same size in the same layer. We refer to subclusters identified by the clustering algorithm as “true subclusters” and sets of random neurons as “random subclusters.” Random subclusters form the natural control condition to test whether the specific partitioning of neurons exhibits importance or coherence compared to alternative partitions, while accounting for layer and size.

### 3.2 Analysis Pipeline

Overview: In our experiments, we measure the degree to which true subclusters identified via spectral clustering are more important and coherent than random subclusters of the same size and layer. Operationalizations are given in subsections [4.1](#S4.SS1 "4.1 Lesion Experiments ‣ 4 Experiments ‣ Quantifying Local Specialization in Deep Neural Networks") and [4.2](#S4.SS2 "4.2 Feature Visualization Experiments ‣ 4 Experiments ‣ Quantifying Local Specialization in Deep Neural Networks"), but in brief, the measure for importance of a subcluster quantifies the performance reduction from dropping out the neurons in that subcluster, and the measures for coherence quantify the degree to which the neurons in a subcluster are mutually associated with some feature of an input. For each subcluster which has more than one neuron and which does not include every neuron in the layer, we calculate a measure of importance or coherence and compare it to that of 19 random subclusters. These experiments and measures are discussed in detail in Section [4](#S4 "4 Experiments ‣ Quantifying Local Specialization in Deep Neural Networks"). We present two measures of how true and random subclusters compare under these proxies. First, we calculate a measure of whether true subclusters are disproportionately often more important or coherent than random subclusters, called the Fisher statistic. This value is our primary focus. However, for additional resolution, we also calculate an effect measure used to assess the importance and coherence of the ‘typical’ subcluster. Importantly, the Fisher statistic and effect measure quantify different things.

Fisher statistics: We wish to test whether spectral clustering methods find subsets of neurons which satisfy our proxies for local specialization more than if we had simply chosen random subsets. To do this, for each measurement, we take the percentile of the true subcluster relative to the distribution of measurements of random subclusters. (For metrics where high values indicate local specialization, we take the percentile of the negative metric, so that low percentiles consistently indicate local specialization.) Next, we use the Fisher method to test whether the subclusters in a single network satisfy our proxies more than random subsets of neurons. To do so, we first center the subcluster percentiles around 0.5, which under the null hypothesis would give a granular, unbiased approximation of the uniform distribution.
We then combine the centered percentiles {p_1, …, p_n} into the Fisher statistic (−1/n) ∑_{i=1}^{n} log p_i. For reference, the Fisher statistic of a uniform distribution of percentiles in our setting is 0.98. Figure [2](#S3.F2 "Figure 2 ‣ 3.2 Analysis Pipeline ‣ 3 Methods ‣ Quantifying Local Specialization in Deep Neural Networks") shows example distributions of percentiles and their Fisher statistics. Note also that since the log function has a larger derivative near 0 than near 1, low percentiles have greater influence on the Fisher statistic, so J- or U-shaped distributions can also have Fisher statistics greater than 1, even if there are more high percentiles than low percentiles. For all non-ImageNet architectures, we train and analyze 5 different networks per condition and report the mean Fisher statistic.

When multiplied by 2n, the Fisher statistic of n uniformly-distributed random percentiles follows a chi-squared distribution with 2n degrees of freedom. This lets us produce a p value for each network, testing whether this statistic is higher than the null hypothesis would produce, which would mean that there were more low percentiles than if subclusters were distributed uniformly. (The fact that we coarsely measure percentiles and then center them makes this test conservative, because our statistic is more sensitive to low percentiles than high percentiles.) This procedure is illustrated in Figure [3](#A1.F3 "Figure 3 ‣ A.3 Pipeline - Second Part ‣ Appendix A Appendix ‣ Quantifying Local Specialization in Deep Neural Networks").

For non-ImageNet architectures, we need to aggregate the five p values we get per condition, one for each network. To do so, in each condition, we take the mean of the p values and correct it using the corresponding quantile of a Bates(n=5) distribution, which gives the distribution of the mean of 5 independent random variables uniformly distributed on [0,1]. (The Bates(n) quantile function is F_n(x) = (1/n!) ∑_{k=0}^{⌊nx⌋} (−1)^k C(n,k) (nx − k)^{n−1} (Marengo et al., [2017](#bib.bib58 "A geometric derivation of the irwin-hall distribution")). We use the Bates method for aggregation here instead of the Fisher method because in this case, all five p values are from identically configured experiments, and the Bates test is less sensitive to low outliers.) This produces a p value for every network architecture, partitioning method, and local specialization proxy. We next correct for multiple testing using the Benjamini-Hochberg method (Benjamini and Hochberg, [1995](#bib.bib46 "Controlling the false discovery rate: a practical and powerful approach to multiple testing")), controlling the false discovery rate at the α=0.05 level. See Appendix [A.8](#A1.SS8 "A.8 Multiple Testing Correction ‣ Appendix A Appendix ‣ Quantifying Local Specialization in Deep Neural Networks") for details. Table [1](#S4.T1 "Table 1 ‣ 4 Experiments ‣ Quantifying Local Specialization in Deep Neural Networks") shows in bold the Fisher statistics (or means thereof) that are statistically significantly greater than 0.98.

In summary, (1) we aggregate each network’s subcluster percentiles using the Fisher method, yielding a Fisher statistic and a p value, (2) we aggregate the Fisher statistics and p values across identically-configured replicates of the same experiment by taking the mean and using the Bates method, respectively, and (3) we correct for multiple comparisons using the Benjamini-Hochberg procedure.
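As a sketch of this aggregation step (and of the effect measure defined in the next subsection), the following assumes `percentiles` holds the centered subcluster percentiles for one network; the chi-squared test and the 2x/(x+μ) formula follow the definitions in the text.

```python
import numpy as np
from scipy.stats import chi2

def fisher_statistic(percentiles):
    """Fisher statistic (-1/n) * sum(log p_i) of the subcluster percentiles."""
    p = np.asarray(percentiles, dtype=float)
    return -np.mean(np.log(p))

def fisher_p_value(percentiles):
    """Under the null, 2n times the Fisher statistic is chi-squared(2n)."""
    p = np.asarray(percentiles, dtype=float)
    return chi2.sf(-2.0 * np.sum(np.log(p)), df=2 * len(p))

def effect_measure(true_val, random_vals):
    """Effect measure 2x / (x + mu); negative x is conservatively clipped to 0."""
    x = max(float(true_val), 0.0)
    mu = float(np.mean(random_vals))
    return 2.0 * x / (x + mu)
```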
![](https://media.arxiv-vanity.com/render-output/7718834/figures/hist-percentiles-vgg-cifar-10.png)

Figure 2: Illustration of Fisher statistics of various percentile distributions. A VGG network trained on CIFAR-10 is partitioned using four methods ({weights, activations} × {network-wide, layer-wise}) and analyzed for coherence (see discussion of visualization scores in Section [4.2](#S4.SS2 "4.2 Feature Visualization Experiments ‣ 4 Experiments ‣ Quantifying Local Specialization in Deep Neural Networks")) to produce the collection of percentiles for each subcluster. This figure shows histograms of the percentile distribution for each clustering, and their associated Fisher statistics. Recall that a lower percentile means that a true subcluster is more coherent than random subclusters while controlling for layer and size. The activation-based clusterings have disproportionately many low percentiles, and Fisher statistics greater than 2. Table [1](#S4.T1 "Table 1 ‣ 4 Experiments ‣ Quantifying Local Specialization in Deep Neural Networks") shows that these trends are statistically significant when aggregated over five models.

Effect measures: In addition to p values, we also calculate effect measures which give a sense of how different the results for true and random subclusters are *on average*. The effect measure is the mean over subclusters of 2x/(x+μ), where x is a true subcluster’s measure of importance/coherence and μ is the mean of that measure over random subclusters. (If x is ever less than 0 for a subcluster, we conservatively replace it with 0. We use this form, as opposed to simply taking x/μ, to avoid division by zero.) This results in effect measures in the interval [0,2], and which side of 1 they are on indicates whether the true subclusters are more important/coherent than random ones. For ease of interpretation, note that if 2x/(x+μ) = 1+y, then x/μ ≈ 1+2y for y ≪ 1, so an effect measure of 1.05 would mean that the measure of a true subcluster was ≈10% higher than the expected measure of a random subcluster. Together with these effect measures, we also report their standard errors.

Differences between Fisher statistics and effect measures: In some of our experiments, the Fisher statistics and effect measures seem to “disagree,” with one suggesting that the network was locally specialized and the other suggesting it was not. This potential for disagreement is due to the fact that Fisher statistics are based on the percentiles of subcluster measurements relative to the distribution of those of random subclusters, while effect measures compare the value of subcluster measurements to the mean value of random subcluster measurements. When the Fisher statistic seems to indicate local specialization but the effect measure does not, this means that on the relevant metric, there are more subclusters than would be expected under the null hypothesis whose metric value is higher than that of random subclusters, but the “typical” subcluster has a metric value similar to or less than an average random subcluster. In other words, the partitioning method has detected some subclusters that satisfy our proxies for local specialization, but the typical subcluster found does not. This is compatible with a J- or U-shaped distribution of subcluster percentiles. We consider this a positive result, indicating that our partitioning method is detecting some local specialization.

4 Experiments
--------------

To show the applicability of our methods at different scales, we experiment with a range of networks.
For small-scale experiments, we train MLPs with 4 hidden layers of 256 neurons each and small convolutional networks with 3 layers of 64 neurons each, followed by a dense layer of 128 neurons, trained on the MNIST (LeCun et al., [1998](#bib.bib6 "Gradient-based learning applied to document recognition")) dataset. At a mid scale, we train VGG-style CNNs containing 13 convolutional layers using the architectures of Simonyan and Zisserman ([2014](#bib.bib4 "Very deep convolutional networks for large-scale image recognition")) trained on CIFAR-10 (Krizhevsky and Hinton, [2009](#bib.bib7 "Learning multiple layers of features from tiny images")) using the procedure of Liu and Deng ([2015](#bib.bib42 "Very deep convolutional neural network based image classification using small training sample size")), which includes weight decay and dropout for regularization. Finally, at ImageNet scale, we analyze pretrained ResNet18 (He et al., [2016](#bib.bib9 "Deep residual learning for image recognition")), VGG-16, and VGG-19 (Simonyan and Zisserman, [2014](#bib.bib4 "Very deep convolutional networks for large-scale image recognition")) models. Further details including hyperparameters and test performances are in Appendix [A.2](#A1.SS2 "A.2 Network Training Details ‣ Appendix A Appendix ‣ Quantifying Local Specialization in Deep Neural Networks").

| Network | Partitioning | Acc. Drop | Class Range | Vis Score | Softmax H |
| --- | --- | --- | --- | --- | --- |
| MLP, MNIST | Weight/Network | 2.13 | 0.91 | 1.32 | 0.92 |
| | Weight/Layer | 2.05 | 0.91 | 1.21 | 1.23 |
| | Act./Network | 1.46 | 1.00 | 1.34 | 1.15 |
| | Act./Layer | 1.69 | 1.03 | 1.36 | 1.12 |
| CNN, MNIST | Weight/Network | 1.29 | 0.84 | 1.10 | 0.93 |
| | Weight/Layer | 1.10 | 0.94 | 1.02 | 0.99 |
| | Act./Network | 1.73 | 0.70 | 1.09 | 0.90 |
| | Act./Layer | 1.46 | 0.92 | 1.05 | 0.98 |
| VGG, CIFAR-10 | Weight/Network | 1.50 | 2.12 | 1.46 | 0.99 |
| | Weight/Layer | 1.15 | 1.27 | 1.00 | 0.97 |
| | Act./Network | 1.40 | 0.97 | 2.34 | 1.08 |
| | Act./Layer | 1.56 | 1.03 | 2.67 | 1.12 |
| VGG-16, ImageNet | Weight/Network | 2.54 | 0.49 | 1.72 | 1.19 |
| | Weight/Layer | 2.15 | 0.56 | 1.90 | 1.06 |
| | Act./Network | 1.89 | 0.63 | 1.82 | 1.07 |
| | Act./Layer | 1.66 | 0.70 | 1.85 | 0.98 |
| VGG-19, ImageNet | Weight/Network | | | 1.91 | 1.03 |
| | Weight/Layer | | | 2.23 | 1.00 |
| | Act./Network | | | 1.87 | 1.10 |
| | Act./Layer | | | 2.01 | 0.98 |
| ResNet18, ImageNet | Weight/Network | 1.42 | 1.13 | | |
| | Weight/Layer | 1.29 | 0.99 | | |
| | Act./Network | 1.30 | 0.92 | | |
| | Act./Layer | 1.31 | 0.96 | | |

Table 1: Fisher statistics (or means over 5 runs) for (1) lesion-based experiments measuring importance via overall accuracy drops (Acc. Drop) and coherence via the class-wise range of accuracy drops (Class Range); and (2) feature visualization-based experiments measuring coherence via the optimization score (Vis Score) and the entropy of network outputs (Softmax H). Each row corresponds to a network paired with a partitioning method. Fisher statistics above 0.98 indicate that subclusters satisfy our local specialization proxies disproportionately more than random subclusters do. Values statistically significantly greater than 0.98 are bolded. Section [3.2](#S3.SS2 "3.2 Analysis Pipeline ‣ 3 Methods ‣ Quantifying Local Specialization in Deep Neural Networks") details the calculation of these statistics and their p values.
### 4.1 Lesion Experiments

One approach that has been used for understanding both biological (Gazzaniga and Ivry, [2013](#bib.bib36 "Cognitive neuroscience: the biology of the mind: fourth international student edition")) and artificial (Zhou et al., [2018](#bib.bib18 "Revisiting the importance of individual units in CNNs via ablation"); Casper et al., [2020](#bib.bib19 "Frivolous units: wider networks are not really that wide")) neural systems involves disrupting neurons during inference. We experiment with “lesion” tests in which we evaluate network performance on the test set when a subcluster is dropped out. We then analyze the damage to the network’s performance.

First, we measure importance by taking the drop in accuracy. Specifically, let θ be the parameter vector of the neural network, c be a set of neurons, M(θ,c) be a masked version of θ where weights into or out of nodes in c have been set to 0, and Acc(ϑ,D) be the accuracy of the network parameterized by ϑ on dataset D. Then, our measure for importance is Acc(θ, test) − Acc(M(θ,c), test), where test is a test dataset that was not used to construct the activation-based partitionings.

Second, we measure the coherence of a subcluster with respect to class by taking the range of class-specific accuracy drops. Specifically, let test_i be the subset of the test set with label i, and let Δ(θ,c,i) := Acc(θ, test_i) − Acc(M(θ,c), test_i) be the drop in accuracy for examples with label i from lesioning c. Then, this measure of coherence is the range max_i Δ(θ,c,i) − min_i Δ(θ,c,i) of accuracy drops over classes. We use this to detect whether clusters are more crucial for some classes than others, which would suggest that they coherently act to correctly label those classes.

We use the analysis pipeline from Section [3.2](#S3.SS2 "3.2 Analysis Pipeline ‣ 3 Methods ‣ Quantifying Local Specialization in Deep Neural Networks") to test for importance and coherence using these overall accuracy differences and class-wise ranges. In this setting, effect measures >1 indicate more importance/coherence among true subclusters on average compared to random ones. Table [1](#S4.T1 "Table 1 ‣ 4 Experiments ‣ Quantifying Local Specialization in Deep Neural Networks") shows Fisher statistics, and Table [2](#S4.T2 "Table 2 ‣ 4.1 Lesion Experiments ‣ 4 Experiments ‣ Quantifying Local Specialization in Deep Neural Networks") shows effect measure data. Results are summarized in Section [4.3](#S4.SS3 "4.3 Findings ‣ 4 Experiments ‣ Quantifying Local Specialization in Deep Neural Networks").
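A sketch of both lesion metrics follows. It assumes a hypothetical `correct(lesioned)` helper that returns a boolean array of per-example test-set correctness for the network with the given subcluster's neurons masked to zero (`lesioned=None` meaning no masking); the rest follows the definitions above.

```python
import numpy as np

def lesion_metrics(correct, labels, subcluster, n_classes):
    base = correct(lesioned=None)            # per-example correctness, unmodified net
    masked = correct(lesioned=subcluster)    # correctness with the subcluster zeroed
    acc_drop = base.mean() - masked.mean()   # importance: overall accuracy drop
    drops = [base[labels == i].mean() - masked[labels == i].mean()
             for i in range(n_classes)]      # class-specific accuracy drops
    class_range = max(drops) - min(drops)    # coherence w.r.t. class labels
    return acc_drop, class_range
```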
| Network | Partitioning | Acc. Drop (high → imp.) | Class Range (high → coh.) | Vis Score (high → coh.) | Softmax H (low → coh.) |
| --- | --- | --- | --- | --- | --- |
| MLP, MNIST | Weight/Network | 1.123±0.058 | 0.701±0.036 | 1.003±0.004 | 1.105±0.021 |
| | Weight/Layer | 1.061±0.048 | 0.676±0.029 | 1.024±0.004 | 0.931±0.016 |
| | Act./Network | 0.883±0.038 | 0.646±0.024 | 1.02±0.003 | 0.997±0.013 |
| | Act./Layer | 0.929±0.040 | 0.687±0.025 | 1.026±0.003 | 1.011±0.012 |
| CNN, MNIST | Weight/Network | 0.837±0.048 | 0.527±0.031 | 0.998±0.004 | 1.026±0.008 |
| | Weight/Layer | 0.814±0.046 | 0.635±0.033 | 1.004±0.003 | 1.007±0.007 |
| | Act./Network | 1.078±0.061 | 0.543±0.039 | 0.933±0.006 | 1.025±0.011 |
| | Act./Layer | 0.939±0.060 | 0.625±0.043 | 0.925±0.005 | 0.970±0.010 |
| VGG, CIFAR-10 | Weight/Network | 0.682±0.066 | 0.407±0.042 | 0.871±0.011 | 1.124±0.012 |
| | Weight/Layer | 0.808±0.041 | 0.692±0.033 | 1.013±0.006 | 0.992±0.012 |
| | Act./Network | 0.926±0.032 | 0.679±0.023 | 1.327±0.005 | 0.950±0.009 |
| | Act./Layer | 0.956±0.030 | 0.695±0.021 | 1.379±0.004 | 0.930±0.008 |
| VGG-16, ImageNet | Weight/Network | 1.205±0.050 | 0.790±0.010 | 1.043±0.005 | 0.998±0.001 |
| | Weight/Layer | 1.168±0.019 | 0.825±0.005 | 1.076±0.003 | 0.991±0.001 |
| | Act./Network | 1.129±0.026 | 0.859±0.006 | 1.066±0.003 | 1.000±0.001 |
| | Act./Layer | 1.063±0.021 | 0.876±0.005 | 1.056±0.003 | 1.001±0.001 |
| VGG-19, ImageNet | Weight/Network | | | 1.061±0.004 | 1.003±0.001 |
| | Weight/Layer | | | 1.099±0.003 | 1.001±0.001 |
| | Act./Network | | | 1.046±0.003 | 0.996±0.001 |
| | Act./Layer | | | 1.081±0.002 | 1.004±0.001 |
| ResNet18, ImageNet | Weight/Network | 0.926±0.045 | 0.957±0.011 | | |
| | Weight/Layer | 0.971±0.016 | 0.971±0.004 | | |
| | Act./Network | 0.979±0.017 | 0.977±0.004 | | |
| | Act./Layer | 0.983±0.014 | 0.967±0.004 | | |

Table 2: Effect measures for (1) lesion-based experiments measuring importance via overall accuracy drops (Acc. Drop) and coherence via the class-wise range of accuracy drops (Class Range); and (2) feature visualization-based experiments measuring coherence via the optimization score (Vis Score) and the entropy of network outputs (Softmax H). Each row corresponds to a network paired with a partitioning method. Results are calculated as explained in Section [3.2](#S3.SS2 "3.2 Analysis Pipeline ‣ 3 Methods ‣ Quantifying Local Specialization in Deep Neural Networks"). For accuracy drop, class-wise range, and visualization score experiments, an effect measure >1 corresponds to more importance/coherence among true subclusters than random ones, while an effect measure <1 does so for softmax entropy experiments. Entries where the effect measure is more than two standard errors away from 1 in the direction of local specialization are bolded.

### 4.2 Feature Visualization Experiments

To further analyze coherence, we leverage another set of interpretability techniques based on feature visualization. We use gradient-based optimization to create an input image which maximizes the L1 norm of the pre-ReLU activations of the neurons in a subcluster. Letting the parameter vector be θ and the subcluster be c, we write Act(x,θ,c) for the vector of pre-ReLU activations of neurons in c in network θ on input x, and denote this optimized input image as x(θ,c), which approximately maximizes ‖Act(x,θ,c)‖₁. The key insight is that properties of these visualizations x(θ,c) can suggest what roles the subclusters play in the network.
Implementation details are in Appendix [A.6](#A1.SS6 "A.6 Feature Visualization ‣ Appendix A Appendix ‣ Quantifying Local Specialization in Deep Neural Networks"), and Figure [6](#A1.F6 "Figure 6 ‣ A.6 Feature Visualization ‣ Appendix A Appendix ‣ Quantifying Local Specialization in Deep Neural Networks") shows example visualizations. We use two techniques to analyze coherence using these visualizations of subclusters. First, we analyze the value of the maximization objective for each image we produce, ‖Act(x(θ,c),θ,c)‖₁, which we call the “score” of the visualization. This gives one notion of how coherent a subcluster may be with respect to input features, because if a single image can strongly excite an entire subcluster, this suggests that the neurons comprising it are involved in detecting/processing related features. Second, we obtain a measure of coherence by analyzing the entropy H(label | x(θ,c); θ) of the softmax outputs of the network when these images are passed through. If the entropy is low, this suggests that a cluster is coherent with respect to class labels. Just as with the lesion experiments, we analyze these two measures using the pipeline from Section [3.2](#S3.SS2 "3.2 Analysis Pipeline ‣ 3 Methods ‣ Quantifying Local Specialization in Deep Neural Networks") to assess how coherent true subclusters are compared to random ones. For visualization score experiments, effect measures >1 indicate coherence, while for the softmax H experiments, effect measures <1 indicate coherence. Table [1](#S4.T1 "Table 1 ‣ 4 Experiments ‣ Quantifying Local Specialization in Deep Neural Networks") shows Fisher statistics, and Table [2](#S4.T2 "Table 2 ‣ 4.1 Lesion Experiments ‣ 4 Experiments ‣ Quantifying Local Specialization in Deep Neural Networks") shows effect measure data.
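A minimal PyTorch sketch of the visualization-based measures is below. It assumes `model` is a network whose ReLUs are separate modules, so that hooking `layer`'s output captures pre-ReLU activations for the channels in `units`; the image transformations and regularization from Appendix A.6 are omitted.

```python
import torch

def visualize_subcluster(model, layer, units, shape=(1, 3, 224, 224),
                         steps=512, lr=0.05):
    acts = {}
    hook = layer.register_forward_hook(
        lambda mod, inp, out: acts.__setitem__("pre_relu", out))
    x = torch.randn(shape, requires_grad=True)
    opt = torch.optim.Adam([x], lr=lr)
    score = None
    for _ in range(steps):
        opt.zero_grad()
        model(x)
        score = acts["pre_relu"][:, units].abs().sum()  # L1 norm over the subcluster
        (-score).backward()                             # gradient ascent on the score
        opt.step()
    hook.remove()
    return x.detach(), float(score)                     # image and its "score"

def softmax_entropy(model, x):
    """Entropy of the network's softmax outputs on the optimized image."""
    p = torch.softmax(model(x), dim=-1)
    return float(-(p * torch.log(p + 1e-12)).sum(dim=-1).mean())
```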
### 4.3 Findings

Our partitionings identify important subclusters. Fisher statistics for lesion accuracy drops are high and significant, as shown in Table [1](#S4.T1 "Table 1 ‣ 4 Experiments ‣ Quantifying Local Specialization in Deep Neural Networks"), indicating that subclusters are more likely to be highly important relative to random groups of neurons. However, as shown in Table [2](#S4.T2 "Table 2 ‣ 4.1 Lesion Experiments ‣ 4 Experiments ‣ Quantifying Local Specialization in Deep Neural Networks"), not all of the corresponding effect measures are above one, even when the Fisher statistic is significantly greater than 0.98. This indicates that when we detect that an unusual number of subclusters are important, this does not necessarily correspond to importance on average.

Our partitionings identify subclusters that are coherent w.r.t. input features but not class label. The class-specific measures of coherence (class-wise lesion accuracy drop range and output entropy) showed significant coherence in almost no conditions. The class-wise range measure even tended to show that subclusters were less coherent w.r.t. class than random groups of neurons. However, subclusters were reliably coherent as measured by *visualization score*, both as quantified by Fisher statistics and effect measures. Together, these results offer evidence that subclusters tended to perform coherent sub-tasks, but not in a class-specific way.

All partitioning methods yield similar results. In Table [1](#S4.T1 "Table 1 ‣ 4 Experiments ‣ Quantifying Local Specialization in Deep Neural Networks"), we find no clear difference between the Fisher statistics of activation-based and weight-based clusterings, or between layer-wise and network-wide clusterings. This is somewhat unexpected: one might have predicted that weight-based methods’ lack of runtime information or layer-wise methods’ lack of global information would lead to lower-quality clusterings, but this was not the case.

5 Discussion
-------------

Contributions: In this work, we introduce several methods for partitioning networks into clusters of neurons and analyzing the resulting partitions for local specialization. Key to this is measuring proxies: *importance* as a means of understanding what parts of a network are crucial for performance, and *coherence* as a measure of how much the neurons in a part work together. We rigorously evaluate these proxies using statistical methods, finding that even the weights-only clustering methods are able to reveal clusters with a significant degree of importance and coherence compared to random ones. In each network, we found evidence that our partitioning methods were able to identify specialized subsets of neurons via measuring accuracy drops under lesions (importance) and feature visualization scores (coherence). To the best of our knowledge, ours is the first method which is able to quantitatively assess the local specialization of neural networks in a way that does not require a human in the loop.

Relation to other research: Having effective tools for interpreting networks is important for understanding AI systems, in particular by helping to diagnose failure modes (e.g., Carter et al. ([2019](#bib.bib14 "Activation atlas")); Mu and Andreas ([2020](#bib.bib21 "Compositional explanations of neurons")); Casper et al. ([2021](#bib.bib60 "One thing to fool them all: generating interpretable, universal, and physically-realizable adversarial features"))). Our work relates to this goal, though indirectly. The tests we perform are based on data from lesions and feature visualizations, both of which are interpretability tools. But rather than directly using these data to interpret subclusters, our focus is one step higher: on automatically testing whether these subclusters are worth analyzing at all and finding ways to screen for subclusters that should be the subject of deeper investigation. By showing that the partitioning methods we use generate partitionings that align with local specialization, these results suggest that clustering neurons offers a useful level of abstraction through which to study networks.

Limitations: One limitation of our work is a lack of assurance that importance and coherence are reliably strong proxies for human-comprehensible forms of local specialization. While they are sufficient to imply some degree of abstractability with respect to the task at hand, they may not always be particularly useful for understanding the network. Relatedly, our approach is also not designed to identify the sub-task performed by a group of neurons, nor does it identify relationships between groups. Given these limitations, the tools we introduce should be seen as methods for screening a network for evidence of local specialization overall and for particular sets of neurons where sub-task functionality is localized. A final notable limitation is that these methods do not offer tools for building more modular networks beyond techniques for measuring local specialization.
Future work toward this may benefit from our techniques but should also hinge on architectural or regularization-based methods for promoting independent operation of groups of neurons.

Conclusion: While we make progress here toward understanding how networks can be interpreted, neural systems are still complex, and more insights are needed to develop useful understandings of them. The ultimate goal should be to develop reliable methods for building models which perform well and also lend themselves to faithful abstract interpretations. We believe that these methods should involve testing networks for local specialization and investigating the functions that important or coherent parts of the network specialize in.

Acknowledgments
---------------

The authors would like to thank Open Philanthropy for funding to support this project, the researchers at UC Berkeley’s Center for Human-Compatible AI for their advice, and anonymous reviewers for their contributions to improving the paper.
ac3290f5-de58-4e47-994b-43a3c23e9cf7
StampyAI/alignment-research-dataset/arbital
Arbital
Other-izing (wanted: new optimization idiom) The open "other-izer" problem is to find something besides maximizing, satisficing, meliorizing, and several other existing but unsatisfactory idioms, which is actually suitable as an optimization idiom for [bounded agents](https://arbital.com/p/bounded_agent) and is [reflectively stable](https://arbital.com/p/1fx).

In standard theory we tend to assume that agents are expected utility *maximizers* that always choose the available option with highest expected utility. But this isn't a realistic idiom because a realistic, [bounded agent](https://arbital.com/p/bounded_agent) with limited computing power can't compute the expected utility of every possible action.

An expected utility satisficer, which e.g. might approve any policy so long as the expected utility is at least 0.95, would be much more realistic. But it also doesn't seem suitable for an actual AGI, since, e.g., if policy X produces at least expected utility 0.98, then it would also satisfice to randomize between mostly policy X and a small chance of policy Y that had expected utility 0; this seems to give away a needlessly large amount of utility. We'd probably be fairly disturbed if an otherwise aligned AGI was actually doing that.

Satisficing is also [reflectively consistent](https://arbital.com/p/2rb) but not [reflectively stable](https://arbital.com/p/1fx) - while [tiling agents theory](https://arbital.com/p/1mq) can give formulations of satisficers that will approve the construction of similar satisficers, a satisficer could also tile to a maximizer. If your decision criterion is to approve policies which achieve expected utility at least $\theta,$ and you expect that an expected utility *maximizing* version of yourself would achieve expected utility at least $\theta,$ then you'll approve self-modifying to be an expected utility maximizer. This is another reason to prefer a formulation of optimization besides satisficing - if the AI is strongly self-modifying, then there's no guarantee that the 'satisficing' property would stick around and have our analysis go on being applicable, and even if not strongly self-modifying, it might still create non-satisficing chunks of cognitive mechanism inside itself or in the environment.

A meliorizer has a current policy and only replaces it with policies of increased expected utility. Again, while it's possible to demonstrate that a meliorizer can approve self-modifying to another meliorizer and hence this idiom is reflectively consistent, it doesn't seem like it would be reflectively stable - becoming a maximizer or something else might have higher expected utility than staying a meliorizer.

The "other-izer" open problem is to find something better than maximization, satisficing, and meliorization that actually makes sense as an idiom of optimization for a resource-bounded agent and that we'd think would be an okay thing for e.g. a [Task AGI](https://arbital.com/p/6w) to do, which is at least reflectively consistent, and preferably reflectively stable. See also "[https://arbital.com/p/2r8](https://arbital.com/p/2r8)" for a further desideratum, namely an adjustable parameter of optimization strength, that would be nice to have in an other-izer.
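As a toy illustration (mine, not the article's), here is how the three existing idioms differ as selection rules over a finite set of candidate policies with known expected utilities; the open problem is to find a fourth rule that a bounded agent could actually run and that stays itself under self-modification.

```python
def maximize(candidates, eu):
    # Pick the single highest-EU option; infeasible when the option
    # space can't be exhaustively searched.
    return max(candidates, key=eu)

def satisfice(candidates, eu, threshold=0.95):
    # Approve anything clearing the threshold, including needlessly
    # lossy mixtures of good and bad policies.
    return [c for c in candidates if eu(c) >= threshold]

def meliorize(current, candidate, eu):
    # Keep a working policy, swapping it out only for strict improvements.
    return candidate if eu(candidate) > eu(current) else current
```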
7dba044f-a53e-4e54-b79e-079a35a35007
trentmkelly/LessWrong-43k
LessWrong
Tap Water and Filtration I've been wondering about the safety and quality of the tap water in my city. I've always been assured that it's clean right out of the tap. But I spent this past year living rurally with a groundwater well on-site, and when I visited the city I was struck by how bad the tap water tasted to me. Given its frequency of use in all our lives, I am generally surprised by the lack of discussion around water quality in my circles. It seems like nobody talks about it, and almost everyone I know drinks (and cooks with, etc.) what I now consider bad-tasting tap water. Does anyone have good leads or resources on this topic? I'd like to learn more and am unsure where to look first. Some questions under consideration: * What is 'safe' to drink? * What makes water quality good vs bad? * Are taste and safety/quality correlated? * Any recommended water quality tests? * To filter or not to filter? * Which filtering methods actually work? * What questions would you be asking?
1a42c16f-3cf5-49ae-bb9b-4e7bc9e1e21f
StampyAI/alignment-research-dataset/arxiv
Arxiv
A Composable Specification Language for Reinforcement Learning Tasks

1 Introduction
---------------

Reinforcement learning (RL) is a promising approach to learning control policies for robotics tasks Collins et al. ([2005](#bib.bib6 "Efficient bipedal robots based on passive-dynamic walkers")); Silver et al. ([2014](#bib.bib5 "Deterministic policy gradient algorithms")); Mnih et al. ([2015](#bib.bib3 "Human-level control through deep reinforcement learning")); Mania et al. ([2018](#bib.bib4 "Simple random search of static linear policies is competitive for reinforcement learning")). A key shortcoming of RL is that the user must manually encode the task as a real-valued reward function, which can be challenging for several reasons. First, for complex tasks with multiple objectives and constraints, the user must manually devise a single reward function that balances different parts of the task. Second, the state space must often be extended to encode the reward, e.g., adding indicators that keep track of which subtasks have been completed. Third, oftentimes, different reward functions can encode the same task, and the choice of reward function can have a large impact on the convergence of the RL algorithm. Thus, users must manually design rewards that assign “partial credit” for achieving intermediate goals, known as *reward shaping* Ng et al. ([1999](#bib.bib2 "Policy invariance under reward transformations: theory and application to reward shaping")). For example, consider the task in Figure [1](#S1.F1 "Figure 1 ‣ 1 Introduction ‣ A Composable Specification Language for Reinforcement Learning Tasks"), where the state is the robot position and its remaining fuel, the action is a (bounded) robot velocity, and the task is

> "Reach target q, then reach target p, while maintaining positive fuel and avoiding obstacle O".

To encode this task, we would have to combine rewards for (i) reaching q, and then reaching p (where “reach x” denotes the task of reaching an ϵ box around x; the regions corresponding to p and q are denoted by P and Q respectively), (ii) avoiding region O, and (iii) maintaining positive fuel, into a single reward function. Furthermore, we would have to extend the state space to keep track of whether q has been reached; otherwise, the control policy would not know whether the current goal is to move towards q or p. Finally, we might need to shape the reward to assign partial credit for getting closer to q, or for reaching q without reaching p.

We propose a language for users to specify control tasks. Our language allows the user to specify objectives and safety constraints as logical predicates over states, and then compose these primitives sequentially or as disjunctions. For example, the above task can be expressed as

ϕ_ex = achieve (reach q; reach p) ensuring (avoid O ∧ fuel > 0),    (1)

where fuel is the component of the state space keeping track of how much fuel is remaining.

![](https://media.arxiv-vanity.com/render-output/7783030/x1.png)

Figure 1: Example control task. The blue dashed trajectory satisfies the specification ϕ_ex (ignoring the fuel budget), whereas the red dotted trajectory does not satisfy ϕ_ex as it passes through the obstacle.

The principle underlying our approach is that in many applications, users have in mind a sequence of high-level actions that are needed to accomplish a given task.
For example, ϕ_ex may encode the scenario where the user wants a quadcopter to fly to a location q, take a photograph, and then return back to its owner at position p, while avoiding a building O and without running out of battery. Alternatively, a user may want to program a warehouse robot to go to the next room, pick up a box, and then bring this item back to the first room. In addition to specifying sequences of tasks, users can also specify choices between multiple tasks (e.g., bring back any box).

Another key aspect of our approach is to allow the user to specify a task without providing the low-level sequence of actions needed to accomplish the task. Instead, analogous to how a compiler generates machine code from a program written by the user, we propose a compiler for our language that takes the user-provided task specification and generates a control policy that achieves the task. RL is a perfect tool for doing so: our algorithm compiles the task specification to a reward function, and then uses state-of-the-art RL algorithms to learn a control policy. Overall, the user provides the high-level task structure, and the RL algorithm fills in the low-level details.

A key challenge is that our specifications may encode rewards that are not Markov, e.g., in ϕ_ex, the robot needs memory that keeps track of whether its current goal is reach q or reach p. Thus, our compiler automatically extends the state space using a *task monitor*, which is an automaton that keeps track of which subtasks have been completed. (Intuitively, this construction is analogous to compiling a regular expression to a finite state automaton.) Furthermore, this automaton may have nondeterministic transitions; thus, our compiler also extends the action space with actions for choosing state transitions. Intuitively, there may be multiple points in time at which a subtask is considered completed, and the robot must choose which one to use.

Another challenge is that the naïve choice of rewards, i.e., reward 1 if the task is completed and 0 otherwise, can be very sparse, especially for complex tasks. Thus, our compiler automatically performs two kinds of reward shaping based on the structure of the specification: it assigns partial credit (i) for partially accomplishing intermediate subtasks, and (ii) for completing more subtasks. For deterministic MDPs, our reward shaping is guaranteed to preserve the optimal policy; we empirically find it also works well for stochastic MDPs.

We have implemented our approach in a tool called Spectrl (short for SPECifying Tasks for Reinforcement Learning), and evaluated the performance of Spectrl compared to a number of baselines. We show that Spectrl learns policies that solve each task in our benchmark with a success rate of at least 97%. In summary, our contributions are:

* We propose a language for users to specify RL tasks (Section [2](#S2 "2 Task Specification Language ‣ A Composable Specification Language for Reinforcement Learning Tasks")).
* We design an algorithm for compiling a specification into an RL problem, which can be solved using standard RL algorithms (Section [3](#S3 "3 Compilation and Learning Algorithms ‣ A Composable Specification Language for Reinforcement Learning Tasks")).
* We have implemented Spectrl, and empirically demonstrated its benefits (Section [4](#S4 "4 Experiments ‣ A Composable Specification Language for Reinforcement Learning Tasks")).

Related work.
Imitation learning enables users to specify tasks by providing *demonstrations* of the desired task Ng and Russell ([2000](#bib.bib23 "Algorithms for inverse reinforcement learning")); Abbeel and Ng ([2004](#bib.bib22 "Apprenticeship learning via inverse reinforcement learning")); Ziebart et al. ([2008](#bib.bib24 "Maximum entropy inverse reinforcement learning.")); Ross et al. ([2011](#bib.bib26 "A reduction of imitation learning and structured prediction to no-regret online learning")); Ho and Ermon ([2016](#bib.bib25 "Generative adversarial imitation learning")). However, in many settings, it may be easier for the user to directly specify the task; e.g., when programming a warehouse robot, it may be easier to specify waypoints describing paths the robot should take than to manually drive the robot to obtain demonstrations. Also, unlike imitation learning, our language allows the user to specify global safety constraints on the robot. Indeed, we believe our approach complements imitation learning, since the user can specify some parts of the task in our language and others using demonstrations.

Another approach is for the user to provide a *policy sketch*, i.e., a string of tokens specifying a sequence of subtasks Andreas et al. ([2017](#bib.bib21 "Modular multitask reinforcement learning with policy sketches")). However, tokens have no meaning, except that equal tokens represent the same task. Thus, policy sketches cannot be compiled to a reward function, which must be provided separately.

Our specification language is based on *temporal logic* Pnueli ([1977](#bib.bib1 "The temporal logic of programs")), a language of logical formulas for specifying constraints over (typically, infinite) sequences of events happening over time. For example, temporal logic allows the user to specify that a logical predicate must be satisfied at some point in time (e.g., “eventually reach state q”) or that it must always be satisfied (e.g., “always avoid an obstacle”). In our language, these notions are represented using the achieve and ensuring operators, respectively. Our language restricts temporal logic in a way that enables us to perform reward shaping, and also adds useful operators such as sequencing that allow the user to easily express complex control tasks.

Algorithms have been designed for automatically synthesizing a control policy that satisfies a given temporal logic formula; see Bloem et al. ([2018](#bib.bib10 "Graph games and reactive synthesis")) for a recent survey, and Kress-Gazit et al. ([2009](#bib.bib11 "Temporal-logic-based reactive mission and motion planning")); Wongpiromsarn et al. ([2012](#bib.bib14 "Receding horizon temporal logic planning")); Coogan et al. ([2015](#bib.bib13 "Traffic network control from temporal logic specifications")); He et al. ([2017](#bib.bib12 "Reactive synthesis for finite tasks under resource constraints")) for applications to robotic motion planning. However, these algorithms are typically based on exhaustive search over control policies. Thus, as with finite-state planning algorithms such as value iteration Sutton and Barto ([2018](#bib.bib19 "Reinforcement learning: an introduction")), they cannot be applied to tasks with continuous state and action spaces that can be solved using RL.

Reward machines have been proposed as a high-level way to specify tasks Icarte et al. ([2018](#bib.bib27 "Using reward machines for high-level task specification and decomposition in reinforcement learning")).
In their work, the user provides a specification in the form of a finite state machine along with reward functions for each state. Then, they propose an algorithm for learning multiple tasks simultaneously by applying Q-learning updates across different specifications. At a high level, these reward machines are similar to the task monitors defined in our work. However, we differ from their approach in two ways. First, in contrast to their work, the user only needs to provide a high-level logical specification; we automatically generate a task monitor from this specification. Second, our notion of task monitor has a finite set of registers that can store real values; in contrast, their finite state reward machines cannot store quantitative information.

The most closely related work is Li et al. ([2017](#bib.bib8 "Reinforcement learning with temporal logic rewards")), which proposes a variant of temporal logic called *truncated LTL*, along with an algorithm for compiling a specification written in this language to a reward function that can be optimized using RL. However, they do not use any analog of the task monitor, which we demonstrate is needed to handle non-Markovian specifications. Finally, Wen and Topcu ([2018](#bib.bib18 "Constrained cross-entropy method for safe reinforcement learning")) allow the user to separately specify objectives and safety constraints, and then use RL to learn a policy. However, they do not provide any way to compose rewards, and do not perform any reward shaping. Also, their approach is tied to a specific RL algorithm. We show empirically that our approach substantially outperforms both these approaches.

Finally, an alternative approach is to manually specify rewards for sub-goals to improve performance. However, many challenges arise when implementing sub-goal based rewards, e.g., how achieving a sub-goal counts compared to violating a constraint, how to handle sub-goals that can be achieved in multiple ways, how to ensure the agent does not repeatedly obtain a reward for a previously completed sub-goal, etc. As tasks become more complex and deeply nested, manually specifying rewards for sub-goals becomes very challenging. Our system is designed to automatically solve these issues.

2 Task Specification Language
------------------------------

Markov decision processes. A *Markov decision process (MDP)* is a tuple (S, D, A, P, T), where S ⊆ ℝ^n are the states, D is the initial state distribution, A ⊆ ℝ^m are the actions, P : S × A × S → [0,1] are the transition probabilities, and T ∈ ℕ is the time horizon. A *rollout* ζ ∈ Z of length t is a sequence ζ = s_0 →(a_0) s_1 →(a_1) … →(a_{t−1}) s_t, where s_i ∈ S and a_i ∈ A. Given a (deterministic) *policy* π : Z → A, we can generate a rollout using a_i = π(ζ_{0:i}). Optionally, an MDP can also include a reward function R : Z → ℝ. (Note that we consider rollout-based rewards rather than state-based rewards. Most modern RL algorithms, such as policy gradient algorithms, can use rollout-based rewards.)

Specification language. Intuitively, a specification ϕ in our language is a logical formula specifying whether a given rollout ζ successfully accomplishes the desired task; in particular, it can be interpreted as a function ϕ : Z → B, where B = {true, false}, defined by

ϕ(ζ) = I[ζ successfully achieves the task],

where I is the indicator function. Formally, the user first defines a set of *atomic predicates* P_0, where every p ∈ P_0 is associated with a function ⟦p⟧ : S → B such that ⟦p⟧(s) indicates whether s satisfies p.
For example, given x ∈ S, the atomic predicate

⟦reach x⟧(s) = (‖s − x‖_∞ < 1)

indicates whether the robot is in a state near x, and given a rectangular region O ⊆ S, the atomic predicate

⟦avoid O⟧(s) = (s ∉ O)

indicates if the robot is avoiding O. In general, the user can define a new atomic predicate as an arbitrary function ⟦p⟧ : S → B. Next, *predicates* b ∈ P are conjunctions and disjunctions of atomic predicates. In particular, the syntax of predicates is given by

b ::= p | (b_1 ∧ b_2) | (b_1 ∨ b_2),

where p ∈ P_0. (Formally, a predicate is a string in the context-free language generated by this grammar.) Similar to atomic predicates, each predicate b ∈ P corresponds to a function ⟦b⟧ : S → B, defined recursively by ⟦b_1 ∧ b_2⟧(s) = ⟦b_1⟧(s) ∧ ⟦b_2⟧(s) and ⟦b_1 ∨ b_2⟧(s) = ⟦b_1⟧(s) ∨ ⟦b_2⟧(s). Finally, the syntax of our specifications is given by

ϕ ::= achieve b | ϕ_1 ensuring b | ϕ_1; ϕ_2 | ϕ_1 or ϕ_2,

where b ∈ P. (Here, achieve and ensuring correspond to the “eventually” and “always” operators in temporal logic.) Intuitively, the first construct means that the robot should try to reach a state s such that ⟦b⟧(s) = true. The second construct says that the robot should try to satisfy ϕ_1 while always staying in states s such that ⟦b⟧(s) = true. The third construct says the robot should try to satisfy task ϕ_1 and then task ϕ_2. The fourth construct means that the robot should try to satisfy either task ϕ_1 or task ϕ_2. Formally, we associate a function ⟦ϕ⟧ : Z → B with ϕ recursively as follows:

⟦achieve b⟧(ζ) = ∃ i < t, ⟦b⟧(s_i)
⟦ϕ ensuring b⟧(ζ) = ⟦ϕ⟧(ζ) ∧ (∀ i < t, ⟦b⟧(s_i))
⟦ϕ_1; ϕ_2⟧(ζ) = ∃ i < t, (⟦ϕ_1⟧(ζ_{0:i}) ∧ ⟦ϕ_2⟧(ζ_{i:t}))
⟦ϕ_1 or ϕ_2⟧(ζ) = ⟦ϕ_1⟧(ζ) ∨ ⟦ϕ_2⟧(ζ),

where t is the length of ζ. A rollout ζ *satisfies* ϕ if ⟦ϕ⟧(ζ) = true, which is denoted ζ ⊨ ϕ.

Problem formulation. Given an MDP and a specification ϕ, our goal is to compute

π* ∈ argmax_π Pr_{ζ∼D_π}[⟦ϕ⟧(ζ) = true],    (2)

where D_π is the distribution over rollouts generated by π. In other words, we want to learn a policy π* that maximizes the probability that a generated rollout ζ satisfies ϕ.
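To make the Boolean semantics concrete, here is a small evaluator, with a specification encoded as a nested tuple (this encoding is ours, not the paper's): ("achieve", b), ("ensuring", phi, b), ("seq", phi1, phi2), and ("or", phi1, phi2), where a predicate b is a function from states to bool and `zeta` is a list of states (indexing conventions simplified).

```python
def sat(phi, zeta):
    """Boolean semantics of a specification on a rollout given as states."""
    tag = phi[0]
    if tag == "achieve":          # exists a state satisfying b
        _, b = phi
        return any(b(s) for s in zeta)
    if tag == "ensuring":         # phi holds, and b holds at every state
        _, sub, b = phi
        return sat(sub, zeta) and all(b(s) for s in zeta)
    if tag == "seq":              # some split point satisfies phi1 then phi2
        _, phi1, phi2 = phi
        return any(sat(phi1, zeta[:i]) and sat(phi2, zeta[i:])
                   for i in range(len(zeta)))
    if tag == "or":
        _, phi1, phi2 = phi
        return sat(phi1, zeta) or sat(phi2, zeta)
    raise ValueError(tag)
```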
3 Compilation and Learning Algorithms
--------------------------------------

In this section, we describe our algorithm for reducing the above problem ([2](#S2.E2 "(2) ‣ 2 Task Specification Language ‣ A Composable Specification Language for Reinforcement Learning Tasks")) for a given MDP (S, D, A, P, T) and a specification ϕ to an RL problem specified as an MDP with a reward function. At a high level, our algorithm extends the state space S to keep track of completed subtasks and constructs a reward function R : Z → ℝ encoding ϕ. A key feature of our algorithm is that the user has control over the compilation process: we provide a natural default compilation strategy, but the user can extend or modify our approach to improve the performance of the RL algorithm. We give proofs in Appendix [B](#A2 "Appendix B Proofs of Theorems ‣ A Composable Specification Language for Reinforcement Learning Tasks").

Quantitative semantics. So far, we have associated specifications ϕ with Boolean semantics (i.e., ⟦ϕ⟧(ζ) ∈ B). A naïve strategy is to assign rewards to rollouts based on whether they satisfy ϕ:

R(ζ) = 1 if ζ ⊨ ϕ, and R(ζ) = 0 otherwise.

However, it is usually difficult to learn a policy to maximize this reward due to its discrete nature. A common strategy is to provide a *shaped reward* that quantifies the “degree” to which ζ satisfies ϕ. Our algorithm uses an approach based on *quantitative semantics* for temporal logic Deshmukh et al. ([2017](#bib.bib17 "Robust online monitoring of signal temporal logic")); Fainekos and Pappas ([2009](#bib.bib16 "Robustness of temporal logic specifications for continuous-time signals")); Maler and Nivcković ([2013](#bib.bib15 "Monitoring properties of analog and mixed-signal circuits")). In particular, we associate an alternate interpretation of a specification ϕ as a real-valued function ⟦ϕ⟧_q : Z → ℝ. To do so, the user provides quantitative semantics for atomic predicates p ∈ P_0; in particular, they provide a function ⟦p⟧_q : S → ℝ that quantifies the degree to which p holds for s ∈ S. For example, we can use

⟦reach x⟧_q(s) = 1 − d_∞(s, x)
⟦avoid O⟧_q(s) = d_∞(s, O),

where d_∞ is the L_∞ distance between points, with the usual extension to sets. These semantics should satisfy ⟦p⟧_q(s) > 0 if and only if ⟦p⟧(s) = true, and a larger value of ⟦p⟧_q should correspond to an increase in the “degree” to which p holds. Then, the quantitative semantics for predicates b ∈ P are ⟦b_1 ∧ b_2⟧_q(s) = min{⟦b_1⟧_q(s), ⟦b_2⟧_q(s)} and ⟦b_1 ∨ b_2⟧_q(s) = max{⟦b_1⟧_q(s), ⟦b_2⟧_q(s)}. Assuming ⟦p⟧_q satisfies the above properties, ⟦b⟧_q > 0 if and only if ⟦b⟧ = true. In principle, we could now define quantitative semantics for specifications ϕ:

⟦achieve b⟧_q(ζ) = max_{i<t} ⟦b⟧_q(s_i)
⟦ϕ ensuring b⟧_q(ζ) = min{⟦ϕ⟧_q(ζ), ⟦b⟧_q(s_0), …, ⟦b⟧_q(s_{t−1})}
⟦ϕ_1; ϕ_2⟧_q(ζ) = max_{i<t} min{⟦ϕ_1⟧_q(ζ_{0:i}), ⟦ϕ_2⟧_q(ζ_{i:t})}
⟦ϕ_1 or ϕ_2⟧_q(ζ) = max{⟦ϕ_1⟧_q(ζ), ⟦ϕ_2⟧_q(ζ)}.

Then, it is easy to show that ⟦ϕ⟧(ζ) = true if and only if ⟦ϕ⟧_q(ζ) > 0, so we could define a reward function R(ζ) = ⟦ϕ⟧_q(ζ). However, one of our key goals is to extend the state space so the policy knows which subtasks have been completed. On the other hand, the semantics ⟦ϕ⟧_q quantify over all possible ways that subtasks could have been completed in hindsight (i.e., once the entire trajectory is known). For example, there may be multiple points in a trajectory when a subtask reach q could be considered as completed. Below, we describe our construction of the reward function, which is based on ⟦ϕ⟧_q, but applied to a single choice of time steps at which each subtask is completed.
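A companion sketch of the quantitative semantics under the same hypothetical encoding as above; here a predicate maps a state to a real robustness value, positive exactly when the Boolean predicate holds.

```python
def quant(phi, zeta):
    """Quantitative semantics: a robustness value with the same sign as sat()."""
    tag = phi[0]
    if tag == "achieve":
        _, b = phi
        return max(b(s) for s in zeta)
    if tag == "ensuring":
        _, sub, b = phi
        return min([quant(sub, zeta)] + [b(s) for s in zeta])
    if tag == "seq":
        _, phi1, phi2 = phi
        # Split points start at 1 to keep both sub-rollouts non-empty.
        return max(min(quant(phi1, zeta[:i]), quant(phi2, zeta[i:]))
                   for i in range(1, len(zeta)))
    if tag == "or":
        _, phi1, phi2 = phi
        return max(quant(phi1, zeta), quant(phi2, zeta))
    raise ValueError(tag)
```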
[Figure 2: task monitor diagram with states q1–q4; initial registers x1 ← 0, x2 ← 0, x3 ← ∞, x4 ← ∞; final-state reward ρ: min{x1, x2, x3, x4}; transition guards include Σ: s ∈ Q, Σ: min{x1, x3, x4} > 0, and Σ: s ∈ P, with register updates such as x1 ← 1 − d_∞(s, q) and x2 ← 1 − d_∞(s, p).]

Figure 2: An example of a task monitor. States are labeled with rewards (prefixed with “ρ:”). Transitions are labeled with transition conditions (prefixed with “Σ:”), as well as register update rules. A transition from q2 to q4 is omitted for clarity. Also, u denotes the two updates x3 ← min{x3, d_∞(s, O)} and x4 ← min{x4, fuel(s)}.

Task monitor. Intuitively, a *task monitor* is a finite-state automaton (FSA) that keeps track of which subtasks have been completed and which constraints are still satisfied. Unlike an FSA, its transitions may depend on the state s ∈ S of a given MDP. Also, since we are using quantitative semantics, the task monitor has to keep track of the degree to which subtasks are completed and the degree to which constraints are satisfied; thus, it includes *registers* that keep track of these values. A key challenge is that the task monitor is nondeterministic; as we describe below, we let the policy resolve the nondeterminism, which corresponds to choosing which subtask to complete on each step.

Formally, a task monitor is a tuple M = (Q, X, Σ, U, Δ, q_0, v_0, F, ρ). First, Q is a finite set of *monitor states*, which are used to keep track of which subtasks have been completed. Also, X is a finite set of registers, which are variables used to keep track of the degree to which the specification holds so far. Given an MDP (S, D, A, P, T), an *augmented state* is a tuple (s, q, v) ∈ S × Q × V, where V = ℝ^X; i.e., an MDP state s ∈ S, a monitor state q ∈ Q, and a vector v ∈ V encoding the value of each register in the task monitor. An augmented state is analogous to a state of an FSA. The transitions Δ of the task monitor depend on the augmented state; thus, they need to specify two pieces of information: (i) conditions on the MDP states and registers for the transition to be enabled, and (ii) how the registers are updated. To handle (i), we consider a set Σ of predicates over S × V, and to handle (ii), we consider a set U of functions u : S × V → V. Then, Δ ⊆ Q × Σ × U × Q is a finite set of (nondeterministic) transitions, where (q, σ, u, q′) ∈ Δ encodes *augmented transitions* (s, q, v) →(a) (s′, q′, u(s, v)), where s →(a) s′ is an MDP transition, which can be taken as long as σ(s, v) = true. Finally, v_0 ∈ ℝ^X is the vector of initial register values, F ⊆ Q is a set of final monitor states, and ρ is a reward function ρ : S × F × V → ℝ.

Given an MDP (S, D, A, P, T) and a specification ϕ, our algorithm constructs a task monitor M_ϕ = (Q, X, Σ, U, Δ, q_0, v_0, F, ρ) whose states and registers keep track of which subtasks of ϕ have been completed. Our task monitor construction algorithm is analogous to compiling a regular expression to an FSA. More specifically, it is analogous to algorithms for compiling temporal logic formulas to automata Vardi and Wolper ([1994](#bib.bib9 "Reasoning about infinite computations")). We detail this algorithm in Appendix [A](#A1 "Appendix A Task Monitor Construction Algorithm ‣ A Composable Specification Language for Reinforcement Learning Tasks"). The underlying graph of a task monitor constructed from any given specification is acyclic (ignoring self loops), and final states correspond to sink vertices with no outgoing edges (except a self loop).

As an example, the task monitor for ϕ_ex is shown in Figure [2](#S3.F2 "Figure 2 ‣ 3 Compilation and Learning Algorithms ‣ A Composable Specification Language for Reinforcement Learning Tasks"). It has monitor states Q = {q1, q2, q3, q4} and registers X = {x1, x2, x3, x4}. The monitor states encode when the robot (i) has not yet reached q (q1), (ii) has reached q, but has not yet returned to p (q2 and q3), and (iii) has returned to p (q4); q3 is an intermediate monitor state used to ensure that the constraints are satisfied before continuing. Register x1 records ⟦reach q⟧_q(s) = 1 − d_∞(s, q) when transitioning from q1 to q2, and x2 records ⟦reach p⟧_q(s) = 1 − d_∞(s, p) when transitioning from q3 to q4. Register x3 keeps track of the minimum value of ⟦avoid O⟧_q(s) = d_∞(s, O) over states s in the rollout, and x4 keeps track of the minimum value of ⟦fuel > 0⟧_q(s) over states s in the rollout.
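A possible container for the task monitor tuple M = (Q, X, Σ, U, Δ, q0, v0, F, ρ) is sketched below; the field types are illustrative choices of ours rather than the paper's actual implementation.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List, Set, Tuple

State = Tuple[float, ...]            # MDP state s
Registers = Dict[str, float]         # register valuation v

@dataclass
class Transition:
    source: int                                      # q
    guard: Callable[[State, Registers], bool]        # sigma(s, v)
    update: Callable[[State, Registers], Registers]  # u(s, v)
    target: int                                      # q'

@dataclass
class TaskMonitor:
    states: Set[int]                                 # Q
    registers: List[str]                             # X
    transitions: List[Transition]                    # Delta
    q0: int
    v0: Registers
    final: Set[int]                                  # F
    reward: Callable[[State, int, Registers], float] # rho(s, q, v)

    def enabled(self, s, q, v):
        """Transitions available from augmented state (s, q, v)."""
        return [t for t in self.transitions
                if t.source == q and t.guard(s, v)]
```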
Intuitively, if ~π∗ is a good policy (one that achieves a high expected reward) for the augmented MDP, then rollouts generated using ~π∗ should satisfy ϕ with high probability. In particular, we have ~S=S×Q×V and ~s0=(s0,q0,v0). The transitions ~P are based on P and Δ. However, the task monitor transitions Δ may be nondeterministic. To resolve this nondeterminism, we require that the policy decides which task monitor transitions to take. In particular, we extend the actions ~A=A×Aϕ to include a component Aϕ=Δ indicating which transition to take at each step. An *augmented action* (a,δ)∈~A, where δ=(q,σ,u,q′), is only available in augmented state ~s=(s,q,v) if σ(s,v)=\codetrue. Then, the *augmented transition probability* is given by

~P((s,q,v), (a,(q,σ,u,q′)), (s′,q′,u(s,v))) = P(s,a,s′).

Next, an *augmented rollout* of length t is a sequence ~ζ = (s0,q0,v0) −a0→ ⋯ −a_{t−1}→ (st,qt,vt) of augmented transitions. The *projection* proj(~ζ) = s0 −a0→ ⋯ −a_{t−1}→ st of ~ζ is the corresponding (normal) rollout. Then, the *augmented rewards*

~R(~ζ) = ρ(sT,qT,vT) if qT∈F, and −∞ otherwise,

are constructed based on F and ρ. The augmented rewards satisfy the following property.

###### Theorem 3.1.

For any MDP, specification ϕ, and rollout ζ of the MDP, ζ satisfies ϕ if and only if there exists an augmented rollout ~ζ such that (i) ~R(~ζ)>0, and (ii) proj(~ζ)=ζ.

Thus, if we use RL to learn an optimal *augmented policy* ~π∗ over augmented states, then ~π∗ is more likely to generate rollouts ~ζ such that proj(~ζ) satisfies ϕ. Reward shaping. As discussed before, our algorithm constructs a shaped reward function that provides “partial credit” based on the degree to which ϕ is satisfied. We have already described one step of reward shaping—i.e., using quantitative semantics instead of the Boolean semantics. However, the augmented rewards ~R are −∞ unless a run reaches a final state of the task monitor. Thus, our algorithm performs an additional step of reward shaping—in particular, it constructs a reward function ~Rs that gives partial credit for accomplishing subtasks in the MDP. For a non-final monitor state q, let α:S×Q×V→R be defined by

α(s,q,v) = max_{(q,σ,u,q′)∈Δ, q′≠q} ⟦σ⟧q(s,v).

Intuitively, α quantifies how “close” an augmented state ~s=(s,q,v) is to transitioning to another augmented state with a different monitor state. Then, our algorithm assigns partial credit to augmented states where α is larger. However, to ensure that a good policy according to the shaped rewards ~Rs is also a good policy according to ~R, it does so in a way that preserves the ordering of the cumulative rewards for rollouts—i.e., for two length T rollouts ~ζ and ~ζ′, it guarantees that if ~R(~ζ)>~R(~ζ′), then ~Rs(~ζ)>~Rs(~ζ′). To this end, we assume that we are given a lower bound Cℓ on the final reward achieved when reaching a final monitor state—i.e., Cℓ<~R(~ζ) for all ~ζ with final state ~sT=(sT,qT,vT) such that qT∈F is a final monitor state. Furthermore, we assume that we are given an upper bound Cu on the absolute value of α over non-final monitor states—i.e., Cu≥|α(s,q,v)| for any augmented state such that q∉F. Now, for any q∈Q, let dq be the length of the longest path from q0 to q in the graph of Mϕ (ignoring self loops in Δ), and let D=max_{q∈Q} dq. Given an augmented rollout ~ζ, let ~si=(si,qi,vi) be the first augmented state in ~ζ such that qi=qi+1=…=qT.
Then, the shaped reward is

~Rs(~ζ) = max_{i≤j<T} α(sj,qT,vj) + 2Cu⋅(dqT−D) + Cℓ if qT∉F, and
~Rs(~ζ) = ~R(~ζ) otherwise.

If qT∉F, then the first term of ~Rs(~ζ) computes how close ~ζ was to transitioning to a new monitor state. The second term ensures that moving closer to a final state always increases reward. Finally, the last term ensures that rewards ~R(~ζ) for qT∈F are always higher than rewards for qT∉F. The following theorem follows straightforwardly.

###### Theorem 3.2.

For two augmented rollouts ~ζ,~ζ′, (i) if ~R(~ζ)>~R(~ζ′), then ~Rs(~ζ)>~Rs(~ζ′), and (ii) if ~ζ and ~ζ′ end in distinct non-final monitor states qT and q′T such that dqT>dq′T, then ~Rs(~ζ)≥~Rs(~ζ′).

Reinforcement learning. Once our algorithm has constructed an augmented MDP, it can use any RL algorithm to learn an *augmented policy* ~π:~S→~A for the augmented MDP:

~π∗ ∈ argmax_{~π} E_{~ζ∼D~π}[~Rs(~ζ)],

where D~π denotes the distribution over augmented rollouts generated by policy ~π. We solve this RL problem using augmented random search (ARS), a state-of-the-art RL algorithm Mania et al. ([2018](#bib.bib4 "Simple random search of static linear policies is competitive for reinforcement learning")). After computing ~π∗, we can convert ~π∗ to a *projected policy* π∗=proj(~π∗) for the original MDP by integrating ~π∗ with the task monitor Mϕ, which keeps track of the information needed for ~π∗ to make decisions. More precisely, proj(~π∗) includes internal memory that keeps track of the current monitor state and register valuation (qt,vt)∈Q×V. It initializes this memory to the initial monitor state q0 and initial register valuation v0. Given an augmented action (a,(q,σ,u,q′))=~π∗((st,qt,vt)), it updates this internal memory using the rules qt+1=q′ and vt+1=u(st,vt). Finally, we use a neural network architecture similar to neural module networks Andreas et al. ([2016](#bib.bib20 "Neural module networks"), [2017](#bib.bib21 "Modular multitask reinforcement learning with policy sketches")), where different neural networks accomplish different subtasks in ϕ. In particular, an augmented policy ~π is a set of neural networks {Nq∣q∈Q}, where Q are the monitor states in Mϕ. Each Nq takes as input (s,v)∈S×V and outputs an augmented action Nq(s,v)=(a,a′)∈R^(k+2), where k is the out-degree of q in Mϕ; then, ~π(s,q,v)=Nq(s,v).

4 Experiments
--------------

Setup. We implemented our algorithm in a tool Spectrl (the implementation can be found at <https://github.com/keyshor/spectrl_tool>), and used it to learn policies for a variety of specifications. We consider a dynamical system with states S=R^2×R, where (x,r)∈S encodes the robot position x and its remaining fuel r, actions A=[−1,1]^2, where an action a∈A is the robot velocity, and transitions f(x,r,a)=(x+a+ϵ, r−0.1⋅|x1|⋅∥a∥), where ϵ∼N(0,σ^2 I) and the fuel consumed is proportional to the product of speed and distance from the y-axis. The initial state is s0=(5,0,7), and the horizon is T=40.

Figure 3: Learning curves for ϕ1, ϕ2, ϕ3 and ϕ4 (top, left to right), and ϕ5, ϕ6 and ϕ7 (bottom, left to right), for Spectrl (green), TLTL (blue), CCE (yellow), and Spectrl without reward shaping (purple). The x-axis shows the number of sample trajectories, and the y-axis shows the probability of satisfying the specification (estimated using samples).
To exclude outliers, we omitted one best and one worst run out of the 5 runs. The plots are the average over the remaining 3 runs, with error bars indicating one standard deviation around the average. In Figure [3](#S4.F3 "Figure 3 ‣ 4 Experiments ‣ A Composable Specification Language for Reinforcement Learning Tasks"), we consider the following specifications, where O=[4,6]×[4,6]:

* ϕ1 = \codeachieve\codereach(5,10) \codeensuring(\codeavoidO)
* ϕ2 = \codeachieve\codereach(5,10) \codeensuring(\codeavoidO∧(r>0))
* ϕ3 = \codeachieve(\codereach[(5,10);(5,0)]) \codeensuring\codeavoidO
* ϕ4 = \codeachieve(\codereach(5,10)\codeor\codereach(10,0);\codereach(10,10)) \codeensuring\codeavoidO
* ϕ5 = \codeachieve(\codereach[(5,10);(5,0);(10,0)]) \codeensuring\codeavoidO
* ϕ6 = \codeachieve(\codereach[(5,10);(5,0);(10,0);(10,10)]) \codeensuring\codeavoidO
* ϕ7 = \codeachieve(\codereach[(5,10);(5,0);(10,0);(10,10);(0,0)]) \codeensuring\codeavoidO

where the abbreviation \codeachieve(b;b′) denotes \codeachieveb;\codeachieveb′ and the abbreviation \codereach[p1;p2] denotes \codereachp1;\codereachp2. For all specifications, each Nq has two fully connected hidden layers with 30 neurons each, ReLU activations, and a tanh output layer. We compare our algorithm to Li et al. ([2017](#bib.bib8 "Reinforcement learning with temporal logic rewards")) (TLTL), which directly uses the quantitative semantics of the specification as the reward function (with ARS as the learning algorithm), and to the constrained cross entropy method (CCE) Wen and Topcu ([2018](#bib.bib18 "Constrained cross-entropy method for safe reinforcement learning")), which is a state-of-the-art RL algorithm for learning policies to perform tasks with constraints. We used neural networks with two hidden layers and 50 neurons per layer for both baselines. Results. Figure 3 shows learning curves of Spectrl (our tool), TLTL, and CCE. In addition, it shows Spectrl without reward shaping (Unshaped), which uses rewards ~R instead of ~Rs. These plots demonstrate the ability of Spectrl to outperform state-of-the-art baselines. For specifications ϕ1,...,ϕ5, the curve for Spectrl gets close to 100% in all executions, and for ϕ6 and ϕ7, it gets close to 100% in 4 out of 5 executions. The performance of CCE drops when multiple constraints (here, obstacle and fuel) are added (i.e., ϕ2). TLTL performs similarly to Spectrl on tasks ϕ1, ϕ3 and ϕ4 (at least in some executions), but Spectrl converges faster for ϕ1 and ϕ4. Since TLTL and CCE use a single neural network to encode the policy as a function of state, they perform poorly in tasks that require memory—i.e., ϕ5, ϕ6, and ϕ7. For example, to satisfy ϕ5, the action that should be taken at s=(5,0) depends on whether (5,10) has been visited. In contrast, Spectrl performs well on these tasks since its policy is based on the monitor state. These results also demonstrate the importance of reward shaping. Without it, ARS cannot learn unless it randomly samples a policy that reaches a final monitor state. Reward shaping is especially important for specifications that include many sequencing operators (ϕ;ϕ′)—i.e., specifications ϕ5, ϕ6, and ϕ7. Figure [4](#S4.F4 "Figure 4 ‣ 4 Experiments ‣ A Composable Specification Language for Reinforcement Learning Tasks") (left) shows how sample complexity grows with the number of nested sequencing operators (ϕ1,ϕ3,ϕ5,ϕ6,ϕ7). Each curve indicates the average number of samples needed to learn a policy that achieves a satisfaction probability ≥τ.
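The nested sequencing specifications used in this scaling study can be generated programmatically. As an illustration, here is a hypothetical sketch of combinators for building them; the names mirror the language's constructs but are our own invention, not the actual Spectrl API:

```python
from dataclasses import dataclass
from typing import List, Tuple

Point = Tuple[float, float]

# A tiny AST for specifications, mirroring the grammar of the language.
@dataclass
class Spec:
    pass

@dataclass
class Achieve(Spec):
    goal: Point                     # achieve reach(goal)

@dataclass
class Seq(Spec):
    first: Spec
    second: Spec                    # phi1 ; phi2

@dataclass
class Ensuring(Spec):
    body: Spec
    obstacle: Tuple[Point, Point]   # phi ensuring avoid(obstacle)

def reach_seq(points: List[Point]) -> Spec:
    """reach[p1; p2; ...] as nested sequencing, as in phi3 and phi5-phi7."""
    spec: Spec = Achieve(points[0])
    for p in points[1:]:
        spec = Seq(spec, Achieve(p))
    return spec

O = ((4.0, 4.0), (6.0, 6.0))
phi5 = Ensuring(reach_seq([(5, 10), (5, 0), (10, 0)]), O)
phi7 = Ensuring(reach_seq([(5, 10), (5, 0), (10, 0), (10, 10), (0, 0)]), O)
```

A compiler in the style sketched in Section 3 would walk this AST, emitting one monitor state per sequencing stage and registers for each \codeensuring constraint.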
Spectrl scales well with the size of the specification. Cartpole. Finally, we applied Spectrl to a different control task—namely, to learn a policy for the version of cart-pole in OpenAI Gym, in which we used continuous actions instead of discrete actions. The specification is to move the cart to the right and then move back left without letting the pole fall. The formal specification is given by

ϕ = \codeachieve(\codereach0.5;\codereach0.0) \codeensuringbalance,

where the predicate balance holds when the vertical angle of the pole is smaller than π/15 in absolute value. Figure [4](#S4.F4 "Figure 4 ‣ 4 Experiments ‣ A Composable Specification Language for Reinforcement Learning Tasks") (right) shows the learning curve for this task averaged over 3 runs of the algorithm, along with the three baselines. TLTL is able to learn a policy to perform this task, but it converges more slowly than Spectrl; CCE is unable to learn a policy satisfying this specification.

Figure 4: Sample complexity curves (left) with number of nested sequencing operators on the x-axis and average number of samples to converge on the y-axis. Learning curve for cartpole example (right).

5 Conclusion
-------------

We have proposed a language for formally specifying control tasks and an algorithm to learn policies to perform tasks specified in the language. Our algorithm first constructs a task monitor from the given specification, and then uses the task monitor to assign shaped rewards to runs of the system. Furthermore, the monitor state is also given as input to the controller, which enables our algorithm to learn policies for non-Markovian specifications. Finally, we implemented our approach in a tool called Spectrl, which enables users to *program* what the agent needs to do at a high level; it then automatically learns a policy that tries to best satisfy the user intent. We also demonstrated that Spectrl can be used to learn policies for complex specifications, and that it can outperform state-of-the-art baselines. Acknowledgements. We thank the reviewers for their insightful comments. This work was partially supported by NSF grant CCF 1723567 and by AFRL and DARPA under Contract No. FA8750-18-C-0090.
be8d0b9f-923c-40ae-b21b-544618fdf6d8
trentmkelly/LessWrong-43k
LessWrong
Crash problems for total futarchy Futarchy holds great promise for dealing with the morass of poor decision making in our governments and corporations. For those who haven't heard of it, the main concept is to use betting markets, where people place bets on the expected outcome of a policy, and the decision-makers choose the policy that the market decrees is most likely to achieve their desired outcomes. Robin Hanson summarises it as "Vote Values, But Bet Beliefs". The approach, however, could lead to problems in a large financial crisis. When a large financial bubble bursts, many things change: liquidity, risk aversion, volatility, the competence of the average investor. If the betting markets are integrated into the general market (which they would be), then they would be affected in the same way. So at precisely the moment when decision makers need the best results, their main tools would be going haywire. This would be even worse if they'd been depending on the betting markets for their decisions, operating merely as overseers. At that point, they may have lost the ability to make effective decisions entirely. Since isolating the betting markets from the swings of the rest of the market is unrealistic/impossible/stupid, we should aim for a mixed governance model - one where betting markets play an integral part, but where the deciders still have experience making their own decisions and overriding the betting markets with some regularity.
e3b8a3c5-372a-4f83-964e-3dddb90a3b44
StampyAI/alignment-research-dataset/alignmentforum
Alignment Forum
Language Agents Reduce the Risk of Existential Catastrophe *This post was written by* [*Simon Goldstein*](https://www.simondgoldstein.com/)*, associate professor at the Dianoia Institute of Philosophy at ACU, and* [*Cameron Domenico Kirk-Giannini*](http://www.cd.kg/)*, assistant professor at Rutgers University, for submission to the Open Philanthropy AI Worldviews Contest. Both authors are currently Philosophy Fellows at the Center for AI Safety.*   **Abstract**: Recent advances in natural language processing have given rise to a new kind of AI architecture: the *language agent*. By repeatedly calling an LLM to perform a variety of cognitive tasks, language agents are able to function autonomously to pursue goals specified in natural language and stored in a human-readable format. Because of their architecture, language agents exhibit behavior that is predictable according to the laws of folk psychology: they have desires and beliefs, and then make and update plans to pursue their desires given their beliefs. We argue that the rise of language agents significantly reduces the probability of an existential catastrophe due to loss of control over an AGI. This is because the probability of such an existential catastrophe is proportional to the difficulty of aligning AGI systems, and language agents significantly reduce that difficulty. In particular, language agents help to resolve three important issues related to aligning AIs: reward misspecification, goal misgeneralization, and uninterpretability.    **1. Misalignment and Existential Catastrophe** There is a significant chance that artificial general intelligence will be developed in the not-so-distant future — by 2070, for example. How likely is it that the advent of AGI will lead to an existential catastrophe for humanity? Here it is worth distinguishing between two possibilities: an existential catastrophe could result from humans losing control over an AGI system (call this a *misalignment catastrophe*), or an existential catastrophe could result from humans using an AGI system deliberately to bring that catastrophe about (call this a *malicious actor catastrophe*). In what follows, we are interested in assessing the probability of a misalignment catastrophe rather than a malicious actor catastrophe. Carlsmith (2021) helpfully structures discussion of the probability of a misalignment catastrophe around six propositions. Since we are interested in the probability of a misalignment catastrophe conditional on the development of AGI, we focus our attention on the final four of these propositions, which we summarize as follows: * 1. Of the following two options, the first will be much more difficult: + a. Build AGI systems with an acceptably low probability of engaging in power-seeking behavior. + b. Build AGI systems that perform similarly but do not have an acceptably low probability of engaging in power-seeking behavior. * 2. Some AGI systems will be exposed to inputs which cause them to engage in power-seeking behavior. * 3. This power-seeking will scale to the point of permanently disempowering humanity. * 4. This disempowerment will constitute an existential catastrophe. Carlsmith assigns a probability of .4 to (1) conditional on the rise of AGI, a probability of .65 to (2) conditional on (1) and the rise of AGI, a probability of .4 to (3) conditional on (1), (2), and the rise of AGI, and a probability of .95 to (4) conditional on (1-3) and the rise of AGI. 
This translates into **a probability of approximately .1 (10%) for a misalignment catastrophe conditional on the rise of AGI**. We believe that the development of language agents ought to significantly decrease assessments of these probabilities. In particular, we suggest that the development of language agents reduces the probability of (1) conditional on the rise of AGI very substantially, the probability of (2) conditional on (1) and the rise of AGI moderately, and the probability of (3) conditional on (1), (2), and the rise of AGI very substantially. We work through two numerical examples in Section 5; in the meantime, suffice it to say that we believe that updating on the rise of language agents should reduce rational credences in a misalignment catastrophe conditional on the development of AGI by approximately **one order of magnitude**. Because language agent architectures have the potential to reduce the risk of a misalignment catastrophe in so many ways, and because the machine learning community’s actions in the near future will determine how widely deployed language agent architectures are and thus how much of this potential risk reduction is realized, we believe that language agents are an under-appreciated crux in thinking about existential risk related to AI. Priority should be given to further research into the capabilities of language agents and further support for the development of AI systems which implement language agent architectures. Here is our plan for what follows. Section 2 introduces some of the safety concerns about AI systems created using deep learning that motivate worries about a misalignment catastrophe. Section 3 describes the architecture of language agents in more detail. Section 4 returns to the safety concerns from Section 2 and explains how language agents help to address them. Section 5 describes the implications of our arguments for the probability of a misalignment catastrophe. Section 6 concludes by responding to some potential concerns about language agents.   **2. Difficulties with Alignment** In deep learning, we train an AI system incorporating an artificial neural network to achieve a goal by specifying a mathematical function that encodes the goal (the *objective function*) and then using a learning algorithm to adjust the weights in the network so that the system’s performance comes closer to maximizing or minimizing that function. Say that an AI system is *fully aligned* if it has an acceptably low probability of engaging in power-seeking behavior. There are several ways an AI system trained using deep learning could end up less than fully aligned. *Reward Misspecification* A first challenge is *reward misspecification*.[[1]](#fnafj3k9jce14) When training an AI, we may experiment with different objective functions. In reinforcement learning, the goal is to define a reward function that gives the agent a reward for performing actions that produce desired states. In supervised learning, the goal is to define a loss function that is minimized when the system performs its task optimally. The problem is that it is difficult to design a reward or loss function that properly encodes a goal. For example, Popov et al. (2017) set out to teach a reinforcement learning agent to stack red Legos on top of blue Legos. They tried to capture this goal by rewarding the agent for the height of the bottom of the red Lego, since stacked red Legos are higher off the ground than unstacked red Legos.
But the agent didn’t learn to stack Legos; instead, it learned to flip red Legos over, thus elevating their bottoms without stacking them.  To appreciate the difficulty of choosing the right reward function, consider the common reinforcement learning practice of *reward shaping*. Reinforcement learning agents often encounter sparse reward functions. If one rewards an agent only when it wins a game, for example, it may have difficulty identifying which of its behaviors leading up to that outcome should be repeated in future games. Reward shaping solves the problem of sparse reward functions by rewarding the agent for important subgoals on the way to achieving its real goal.  But reward shaping can also lead to reward misspecification. For example, Amodei and Clark (2016) consider the case of teaching a reinforcement learning agent to play CoastRunners, a game in which the player pilots a boat. A human player would immediately recognize that the game designers’ intention is for players to race each other around the track. But the reinforcement learning setup rewarded the agent with a score for hitting targets along the way. Instead of finishing the race, the AI instead learned how to loop the boat in a small lagoon, hitting intermediate targets repeatedly to achieve a high score. Rather than rewarding the agent for the final goal, the experimental design rewarded it for intermediate means: “the agent was given a shaping reward for hitting green blocks along the race track, which changed the optimal policy to going in circles and hitting the same green blocks over and over again” (Krakovna et al. 2020). A reward optimizer can’t see the distinction between intrinsic and instrumental goals: it only optimizes for the reward function it has. Worryingly, reward misspecification is prone to arise in the context of reinforcement learning with human feedback (RLHF) (Christiano et al. 2017). Because they optimize for human approval, RLHF agents sometimes learn to deceive human assessors. For example, one agent was given the task of grasping a ball. It learned to trick human assessors by hovering its arm between the camera and the ball. Similarly, Perez et al. (2022) found that large language models trained by RLHF tend to behave sycophantically, answering differently depending on what they expect their human users to think.  There is a long list of examples of reward misspecification involving many kinds of AI, many kinds of games, and many different types of reward.[[2]](#fnxfftzmdi5l) In section 4, we’ll argue that language agents offer a systematic solution to the problem of reward misspecification. *Goal Misgeneralization* Another challenge for alignment is *goal misgeneralization* (Langosco et al. 2022, Shah et al. 2022).[[3]](#fnfb8tg23657) Even when the objective function for a task has been appropriately specified, an AI system may learn a strategy which achieves high performance on that task in some circumstances but not others. ML models are trained on data, environments, and problems that can be different from the data, environments, and problems to which they are later exposed when they are deployed. When an AI is used in a new context that does not resemble the one in which it was trained, we say that this context is *out of distribution*. In cases of goal misgeneralization, the AI succeeds during its training by pursuing a different goal than what its designers intended (it learns the wrong rule). This is manifested by decreased performance in out-of-distribution contexts. 
For example, Shah et al. (2022) trained an AI in a “Monster Gridworld.” The intended goal was for the AI to collect apples and avoid being attacked by monsters. The AI could also collect shields, which protected it from monster attacks. The AI learned to collect shields during training in a monster-rich environment, and then entered an out-of-distribution environment with no monsters. In this monster-free setting, the AI continued to collect shields. Instead of learning to collect apples and value shields instrumentally as a way of avoiding monster attacks, it instead learned to collect both apples and shields.  Goal misgeneralization occurs because different features of the training environment are inevitably correlated with one another. Even when the reward function has not been misspecified, whenever a trainer ties rewards to one feature, they inevitably also tie reward to the features correlated with it. Two particular types of goal misgeneralization are of special interest: errors related to means-end reasoning and errors related to inductive bias.  Let’s start with errors related to means-end reasoning.[[4]](#fnyjezsua6t) When an agent is rewarded for pursuing a goal, that agent will also be rewarded for pursuing reliable means to the goal. Pursuing those means tends to result in the goal, and so the means tend to be rewarded. In this way, a learning environment will naturally tend to produce agents that intrinsically desire the means to an intended goal.[[5]](#fnbgdw6iq9eda)  Monster Gridworld is an example of this pattern. Because collecting shields was a reliable means of avoiding monster attacks, reward-based learning created an intrinsic desire for shields. The training environment in Monster Gridworld did not create a perfect correlation between shields and rewards: the agent could also receive reward from collecting apples, independently of shields. Nonetheless, the agent learned the wrong goal.  Langosco et al. (2022) offer further examples of this pattern. They trained AIs with the goal of opening chests using keys. The training environment had many chests and few keys. When the agent was released into a testing environment with few chests and many keys, it turned out to have the goal of collecting keys in addition to opening chests.  Mistakes about instrumental reasoning become especially pressing in the setting of more general *a priori* arguments about AI safety. Omohundro (2008), Bostrom (2014) and others have worried about instrumental convergence: some means, like acquiring more power, may be helpful in accomplishing almost any end. While traditional instrumental convergence arguments do not focus on the possibility that AI systems will intrinsically value power seeking, means-end goal misgeneralization cases raise the disturbing possibility that agents which cannot systematically distinguish means from ends may come to intrinsically desire instrumentally convergent goals such as power. A second source of goal misgeneralization concerns overlapping properties and inductive biases. In another experiment, Langosco et al. (2022) trained an agent to find a yellow diagonal line in a maze. They then deployed the trained agent in an environment where it encountered only yellow gems and red diagonal lines, thus forcing it to choose whether to pursue objects that shared a shape with its previous goal (red diagonal lines) or objects that shared a color with its previous goal (yellow gems). 
The agent showed an inductive bias for color rather than shape: in the test environment, it tended to pursue the yellow gem instead of the red diagonal line.  Whether an agent’s behavior in out-of-distribution environments like the one in Langosco et al.’s experiment counts as goal misgeneralization depends on whether its inductive biases match the intentions of its human designers. The key observation is that because the training environment was ambiguous, not distinguishing color and shape, the training process did not determine how the agent should behave out of distribution. Because it is extremely difficult to create a training environment that distinguishes between *all possible* overlapping properties in a way that is reflected in the objective function, it is often difficult to predict how trained AI systems will behave in out-of-distribution contexts. If we are lucky, their inductive biases will lead them to behave in the way we desire. But we have no reliable way to verify ahead of time that this will be so, and thus no reliable way to verify ahead of time that trained AI systems have internalized the correct goal. Goal misgeneralization problems can sometimes be avoided by enriching the training environment to adequately distinguish different rewards. But this is not always effective. Langosco et al. trained their agents in a wide range of procedurally generated environments. Still, they observed goal misgeneralization. For example, in a maze game, the intended objective was to collect the cheese, but agents instead learned to navigate to the upper right corner of the maze (where the cheese was placed during training). Goal misgeneralization remained even when cheese was sometimes placed in other locations in the maze during training.  Goal misgeneralization is not limited to reinforcement learning agents. Shah et al. (2022) suggest that language models also face similar problems. In particular, they give an example of InstructGPT (Ouyang et al. 2022) explaining how to steal without getting caught. InstructGPT was trained with the goal of giving helpful answers to harmless questions. But it seemed to instead learn the goal of giving helpful answers regardless of harm. Once it entered a testing environment with harmful questions, its true goal was revealed. Later, we’ll argue that language agents avoid these challenges. They can reliably distinguish ends from means. And we are less reliant on their inductive biases because they can distinguish between features of the environment that are perfectly correlated.  *Uninterpretability* If we can’t understand how someone makes a decision, it can be hard to predict what they will do. An AI system is *interpretable* to the extent that we can understand how it generates its outputs. Unfortunately, contemporary AI systems based on neural networks are often uninterpretable. It can be difficult to understand in human terms the reasons why a neural network produces the outputs it produces. In the law, assessing the explanations for actions is fundamental for producing safety. For example, we detect hiring discrimination, misuse of force by police, and other dangerous activities by asking the relevant parties to explain what they have done and why. While uninterpretability does not itself cause misalignment, then, it increases the probability of misalignment by depriving us of well-understood tools for monitoring the safety of complex systems (see Doshi-Velez et al. 2017, Rudner and Toner 2021).
There are other reasons to value interpretable AI systems. It seems unappealing to live in a world where many aspects of our lives are decided by processes outside the ‘space of reasons’:

> "We don’t want to live in a world in which we are imprisoned for reasons we can’t understand, subject to invasive medical [procedures] for reasons we can’t understand, told whom to marry and when to have children for reasons we can’t understand. The use of AI systems in scientific and intellectual research won’t be very productive if it can only give us results without explanations.” (Cappelen and Dever 2019, p. 15)

Artificial neural networks are difficult to interpret because they contain vast numbers of parameters that are not individually correlated to features of the environment. A related problem is “superposition”: often, a single neuron in a neural net will store unrelated information about two different things. For example, a neuron may store information about both dogs and cars: “As long as cars and dogs don’t co-occur, the model can accurately retrieve the dog feature in a later layer, allowing it to store the feature without dedicating a neuron” (Olah et al. 2020).  Humans are also fairly uninterpretable at a neuronal level. But human behavior can be explained by appealing to reasons: we describe someone’s beliefs and desires in order to explain why they did what they did. The behavior of AI systems is often not explainable in this way. Consider, for example, Gato, a generalist agent built with a transformer architecture to learn a policy that can achieve high performance across text, vision, and games (Reed et al. 2022). Gato does not have anything like a folk psychology; it does not engage in anything like belief-desire practical reasoning. It is an uninterpretable deep neural network that has learned how to solve problems through optimizing a loss function. It can be hard to say exactly why systems like Gato perform particular actions.[[6]](#fn2gu2rwiwii4) Moreover, AIs often select courses of action very different from what humans would do. One famous example of unusual AI behavior is AlphaGo’s ‘Move 37’. AlphaGo was trained to play the game Go. It was able to defeat the best human players in the world. In an important competition match, AlphaGo’s 37th move shocked the Go community because it deviated from human strategies for success.[[7]](#fnvbhbq91owh) Live commentators thought the move was a mistake, but it turned out to be pivotal for AlphaGo’s victory.[[8]](#fn3r83qrcg2r5)  This type of behavior is worrying in two related ways. First, if AIs make decisions that are not easily explained using reasons, then it is very difficult to predict their behavior. Second, if AIs make decisions in a very different way than humans do, they may find strategies for defeating humans in conflict by exploiting unfamiliar policies.    **3. Language Agents** Our thesis is that language agents significantly reduce the probability of a misalignment catastrophe conditional on the development of AGI. But what, exactly, are language agents? In this section, we describe the architectural innovations that have given rise to language agents, focusing in particular on the “generative agents” described in Park et al. (2023). At its core, every language agent has a large language model like GPT-4. You can think of this LLM as the language agent’s cerebral cortex: it performs most of the agent’s cognitive processing tasks.
In addition to the LLM, however, a language agent has one or more files containing a list of its beliefs, desires, plans, and observations recorded in natural language. The programmed architecture of a language agent gives these beliefs, desires, plans, and observations their functional roles by specifying how they are processed by the LLM in determining how the agent acts. The agent observes its environment, summarizes its observations using the LLM, and records the summary in its beliefs. Then it calls on the LLM to form a plan of action based on its beliefs and desires. In this way, the cognitive architecture of language agents is familiar from folk psychology.  For concreteness, consider the language agents developed by Park et al. (2023). These agents live in a simulated world called ‘Smallville’, which they can observe and interact with via natural-language descriptions of what they see and how they choose to act. Each agent is given a text backstory that defines their occupation, relationships, and goals. As they navigate the world of Smallville, their experiences are added to a “memory stream.” The program that defines each agent feeds important memories from each day into the underlying language model, which generates a plan for the next day. Plans determine how an agent acts, but can be revised on the fly on the basis of events that occur during the day. More carefully, the language agents in Smallville choose how to behave by *observing, reflecting,* and *planning*. As each agent navigates the world, all of its observations are recorded in its memory stream in the form of natural language statements about what is going on in its immediate environment. Because any given agent’s memory stream is long and unwieldy, agents use the LLM (in Park et al.’s study, this was gpt3.5-turbo) to assign importance scores to their memories and to determine which memories are relevant to their situation at any given time. In addition to observations, the memory stream includes the results of a process Park et al. call reflection, in which an agent queries the LLM to make important generalizations about its values, relationships, and other higher-level representations. Each day, agents use the LLM to form and then revise a detailed plan of action based on their memories of the previous day together with their other relevant and important beliefs and desires. In this way, the LLM engages in practical reasoning, developing plans that promote the agent’s goals given the agent’s beliefs. Plans are entered into the memory stream alongside observations and reflections and shape agents’ behavior throughout the day. The behavior of the language agents in Park et al.’s experiment is impressive. For example, Park et al. describe how Sam Moore, a resident of Smallville, wakes up one day with the goal of running for a local election. He spends the day convincing the people of Smallville to vote for him. By the end of the day, everyone in Smallville is talking about his electoral chances.  Large language models like the one incorporated into the study’s generative agents are good at reasoning and producing fluent text. By themselves, however, they can’t form memories or execute long-term plans. Language agents build on the reasoning abilities of LLMs to create full-fledged planning agents. Besides the agents developed by Park et al., other examples of language agents include AutoGPT[[9]](#fn4m696xuqauh), BabyAGI[[10]](#fn7t8ri56gp4), and Voyager[[11]](#fnkfvv36mgytn).
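To make the shared control loop concrete, here is a minimal sketch (our own simplification of the generative-agent architecture; the `llm` parameter is a hypothetical helper standing in for calls to a model such as gpt3.5-turbo):

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class LanguageAgent:
    llm: Callable[[str], str]   # prompt -> completion (assumed helper)
    goal: str                   # the agent's desire, stated in natural language
    memory: List[str] = field(default_factory=list)   # the "memory stream"

    def observe(self, observation: str) -> None:
        # Summarize the observation with the LLM and record it as a belief.
        summary = self.llm("Summarize this observation in one sentence: " + observation)
        self.memory.append(summary)

    def reflect(self) -> None:
        # Distill a higher-level generalization from recent memories.
        prompt = ("Given these memories, state one important generalization:\n"
                  + "\n".join(self.memory[-20:]))
        self.memory.append(self.llm(prompt))

    def plan(self) -> str:
        # Belief-desire practical reasoning: plan from the goal plus memories.
        # (A fuller version would score memories by recency, importance, and relevance.)
        prompt = ("Goal: " + self.goal + "\n"
                  + "Relevant memories:\n" + "\n".join(self.memory[-20:]) + "\n"
                  + "Write a step-by-step plan for today.")
        plan = self.llm(prompt)
        self.memory.append("Plan: " + plan)
        return plan
```

The point to note is that the agent's beliefs, desires, and plans all live in `goal` and `memory` as human-readable text, which is what the interpretability argument below turns on.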
And while existing language agents are reliant on text-based observation and action spaces, the technology already exists to implement language agents in real-world settings. The rise of multimodal language models like GPT-4, which can interpret image as well as text inputs, and the possibility of using such a language model to control a mobile robotic system, as in Google’s PaLM-E (Driess et al. 2023), mean that the possible applications of language agents are extremely diverse. **4. Language Agents and Alignment** We now argue that language agents are easier to align than other systems because they reduce or eliminate the challenges of reward misspecification, goal misgeneralization, and uninterpretability. Let’s consider each in turn. *Reward misspecification* Language agents bypass the problem of reward misspecification because their objectives are not encoded in a mathematical objective function, as in traditional reinforcement or supervised learning. Instead, language agents are given a goal in natural language. The goal could be something like: *Organize a Valentine’s day party*. In this respect, language agents are fundamentally different from traditional AI systems in a way that makes them easier to align. Return to the case of stacking red Legos. If you wanted to train an embodied multimodal language agent to stack red Legos on top of blue Legos, you wouldn’t construct a mathematical function which is sensitive to the height of the bottom of the red Lego. Instead, you would write down in English: ‘Put the red Legos on top of the blue Legos.’ Then the language agent would rely on the common sense reasoning skills of its LLM to figure out an optimal plan for stacking Legos.[[12]](#fn9cl2qbwzptb) The language agent would not simply flip over the red Legos, because state-of-the-art LLMs like GPT-4 know that this is not a good plan for stacking red Legos on top of blue Legos. Or consider reward shaping. If you want a multimodal language agent to win a race, you don’t need to tell it to hit flags along the way. You can just write down in English: ‘Try to win the race’. A language agent with this plan would have no reason to drive their boat in a circle trying to hit as many flags as possible. Summarizing, language agents can translate a simple natural language goal into a complex plan by relying on common sense and belief-desire reasoning. Without language models, earlier types of reinforcement learning agents had no way to translate a simple natural language goal into a complex plan of action.  *Goal misgeneralization* Similar considerations are relevant to goal misgeneralization. Language agents are given a natural language goal. This goal has a clear interpretation in a variety of different behavioral contexts, including out-of-distribution contexts. In particular, a language agent will make a plan for how to achieve their goal given their memories and observations of the current situation. Language models can use their common sense to successfully formulate a plan for achieving the goal, across a wide variety of different situations. By contrast, a traditional reinforcement learning agent will formulate a policy in a training environment, and this policy may or may not generalize to new situations in the way desired by its creators.  Recall that goal misgeneralization had two particularly salient failure modes: failures involving instrumental reasoning and failures involving overlapping properties and inductive bias. Let’s consider each in turn.
In the case of instrumental reasoning, the problem was that reinforcement learning agents struggled to distinguish means from ends. For example, an agent that was rewarded for opening chests developed a policy which treated collecting keys as a final goal rather than an instrumental goal.  Language agents are unlikely to make this mistake. If a language agent is given an initial goal of opening chests and informed that keys are useful to this end, they will plan to collect keys only when doing so helps to open chests. If the same agent is transferred to a key-rich environment and realizes that this is the case, then they will only collect as many keys as is necessary to open chests. This is because language models like GPT-4 can easily be made to understand that keys are no more than an effective means to open chests, and that when you have more keys than chests, extra keys don’t help you open chests. Now consider inductive biases. If you reward an RL agent for navigating towards yellow diagonal lines and then place it in a new context with red diagonal lines and yellow gems, you have not given it enough information to determine whether color or shape is its intended goal, so it must rely on its inductive biases in the new context. By contrast, you can just tell a language agent whether to care about color or shape. Even if color and shape are perfectly correlated in the language agent’s initial environment, it can use natural language reasoning to determine which is the intended goal.  *Interpretability* Language agents are interpretable. They have beliefs and desires that are encoded directly in natural language as sentences. The functional roles of these beliefs and desires are enforced by the architecture of the language agent. We can determine what goal a language agent has by looking at their beliefs and desires. In addition, we can know what plan a digital agent creates in order to achieve this goal. Language agents are also explainable in the sense that they act on the basis of reasons intelligible to human observers. When a language agent creates a plan for pursuing a goal, we can think systematically about its reasons. For example, we could ask GPT-4 to generate a list of pros and cons associated with using this plan to achieve the goal. Those pros and cons would reliably correlate with variations that GPT-4 might make to the plan in various counterfactual situations. In this way, language agents built on top of GPT-4 reason similarly to humans.  It is worth distinguishing personal and subpersonal processes. Like humans, language agents have beliefs, desires, and plans that are interpretable. We can determine the plans of a language agent by looking at what sentences are written down in its memory. Like humans, language agents also have subpersonal processes that are uninterpretable. In order to generate a particular plan, the language agent will use the artificial neural networks of an LLM. These have many uninterpretable elements. But the planning powers of human beings also rest on uninterpretable connections between neurons. In this way, language agents may not make much progress on problems of *mechanistic* interpretability. But they provide a way for us to skirt these issues and still generate explainable behavior. (In section 6, we consider the risks posed by the LLM that underlies the language agent.) One general path to explainable AI would be to develop a ‘whole brain emulator’: an AI that was a neuron-for-neuron copy of a human.
Since humans are explainable, the resulting AI would also be explainable. Unfortunately, whole brain emulation is dauntingly difficult. Language agents provide a different solution. Instead of emulating brains, language agents emulate folk psychology: they emulate a person who has beliefs, desires, and plans. By contrast, reinforcement learning and other alternative approaches to machine learning attempt to develop a systematic alternative to folk psychology. The range of possible agents that could emerge from this attempt is intrinsically unknowable. If we can develop agential AI which is not unknowable in this way, we should do so.    **5. The Probability of Misalignment Catastrophe** To assess the implications of our discussion in Section 4 for the probability of a misalignment catastrophe, let us return to Carlsmith’s four propositions. First, consider: * 1. Of the following two options, the first will be much more difficult: + a. Build AGI systems with an acceptably low probability of engaging in power-seeking behavior. + b. Build AGI systems that perform similarly but do not have an acceptably low probability of engaging in power-seeking behavior. * 2. Some AGI systems will be exposed to inputs which cause them to engage in power-seeking behavior. As we have seen, it is much easier to specify the objectives of language agents than it is to specify the objectives of traditional AI systems. Language agents can simply be told what to do in natural language in a way which effectively eliminates worries about reward misspecification and goal misgeneralization. Moreover, their behavior can be shaped by side constraints (e.g. ‘Do not harm humans’) stated in natural language. This makes it easier to design language agents which do not engage in power-seeking behavior. These considerations suggest reducing our subjective probabilities for both (1) and (2). In particular, we believe that the rise of language agents reduces the probability of (1) conditional on the rise of AGI very substantially. Moreover, even if (1) turns out to be true because it is hard to build systems with an *extremely* low probability of engaging in power-seeking behavior, we think that the ease of aligning language agents means that they are likely to engage in power-seeking behavior on fewer possible inputs, so that the probability of (2) conditional on (1) and the rise of AGI is also moderately lower in light of the development of language agents. Now consider: * 3. This power-seeking will scale to the point of permanently disempowering humanity. * 4. This disempowerment will constitute an existential catastrophe. While we do not believe that language agents bear strongly on the probability of (4) conditional on (1-3), we think they bear strongly on the probability of (3) conditional on (1) and (2). Because language agents store their beliefs, desires, and plans in natural language, it is much easier to detect and disable those which engage or plan to engage in power-seeking behavior. This sort of detection could even be done in an automated way by AI systems less capable than an AGI. We believe that the development of language agents reduces the probability of (3) conditional on (1), (2), and the development of AGI very substantially. Our revised assessment of the probabilities of (1)-(3) incorporates both our judgments about how safe language agents are and our judgments about how likely language agents are to be deployed in the future. There are several reasons to believe that the latter is a likely outcome. 
First, language agents extend the capacities of existing systems by improving their abilities to form plans and engage in long-term goal-directed behavior. So language agents are more capable than rival architectures.[[13]](#fnbu6ywux59) Second, language agents are easier to use than other kinds of AI systems, since they can be interacted with in natural language. Third, actors at every level — governments, corporations, and individual consumers — prefer to interact with systems that are interpretable and explainable, so there will be performance-independent pressure for new AI products to be language agents. Finally, we believe that the safety benefits of language agents will drive investment into AI capabilities research that fits into the language agent paradigm.  So far, we have used qualitative language to describe how we believe the development of language agents affects the probability of a misalignment catastrophe. This is because we find it difficult to assign precise probabilities in the context of our uncertainty about the many factors relevant to predicting the future. Nevertheless, for concreteness, we show how a quantitative application of our argument might affect the probability of a misalignment catastrophe. Suppose we understand our talk of *very substantial* reductions in the probability of a proposition quantitatively as reductions of one order of magnitude and our talk of *moderate* reductions in the probability of a proposition as reductions by half. Carlsmith suggests probabilities of .4 for (1) conditional on AGI, .65 for (2) given (1) and AGI, and .4 for (3) given (1), (2), and AGI. On this quantitative model of our arguments, updating on the development of language agents would give us probabilities of .04 for (1) conditional on AGI, .325 for (2) given (1) and AGI, and .04 for (3) given (1), (2), and AGI. Factoring in the .95 probability of (4) conditional on (1)-(3) and AGI, **this would translate into a probability of misalignment catastrophe given AGI of approximately .0005 (.05%) rather than .1 (10%)**.  Even a much more modest understanding of very substantial reductions leads to a significantly lower probability of misalignment catastrophe. Suppose we interpret a very substantial reduction as a reduction by 50% and a moderate reduction as a reduction by 25%. Then updating on the development of language agents would give us probabilities of .2 for (1) conditional on AGI, .49 for (2) given (1) and AGI, and .2 for (3) given (1), (2), and AGI. Factoring in the .95 probability of (4) conditional on (1)-(3) and AGI, **this would translate into a probability of misalignment catastrophe given AGI of approximately .019 (1.9%) rather than .1 (10%)**.  It is important to note that, in addition to making predictions about the future importance of language agents, the machine learning community can also act to bring it about that language agents are widely deployed in the future. Since language agents are safer in many ways than alternative architectures, allocating resources towards their development strikes us as an especially effective way to reduce the risk of a misalignment catastrophe. We believe it is important that new research focus on language agents rather than traditional RL or supervised learning agents.    **6. Conclusion** By way of concluding, we discuss a few other features of language agents that are relevant to their safety. First, we expect language agents to differ in performance from RL agents.
Language agents will be great at reasoning in natural language, since they are built on top of large language models. But they may struggle with tasks that require know-how or experimentation in order to succeed. If language agents underperform reinforcement learning agents, then there will be incentives to invest more resources in reinforcement learning. In response, one strategy would be to design more complex architectures that rely on the kind of belief-desire practical reasoning of language agents but also include modules that can engage in reinforcement learning for narrow tasks (for example, in learning how to use particular affordances). Second, some readers may be concerned about safety issues arising from the large language models on which language agents are based. Imagine a language agent built on an advanced LLM — call it GPT-10. The worry is that GPT-10 might unexpectedly develop its own goals. In that case, it might create a plan for ‘organizing a Valentine’s Day party’ that secretly promoted its own goals instead.   We think this worry is less pressing than it might at first seem. The LLM in a language agent is integrated into the architecture of the agent as a whole in a way that would make it very difficult for it to secretly promote its own goals. The LLM is not prompted or otherwise informed that its outputs are driving the actions of an agent, and it does not have information about the functional architecture of the agent. This means that it has no incentive to answer prompts misleadingly and no understanding of what sorts of answers might steer the agent’s behavior in different ways. Moreover, since the model weights of the LLM are not updated in the process of operating a language agent, the only way for it to pursue a long-term plan by manipulating an agent would be to store information about that plan in the agent’s memory. But information stored in this way would not be secret.  In general, we think that advanced LLMs will tend to be safer when they are embedded within a language agent, rather than operating freely in the world. The key is that when embedded within a language agent, each action produced by the LLM will be specified in natural language, as part of a larger evaluable plan. A much less safe future would involve sophisticated descendants of systems like Gato directly manipulating objects in the physical world without specifying their plans in natural language. In any case, there are a few ways to address this hypothetical source of risk. First, it may be possible to improve the capabilities of language agents without improving their overall reasoning abilities. GPT-4 is already excellent at general practical reasoning. Language agents could be improved by giving them better ways to interact with the physical environment, longer memories, and faster reasoning times. If the underlying reasoning relied on GPT-4 rather than more complex language models, there would be less of a worry that the practical reasoning of the resulting language agent would be unsafe.[[14]](#fnk52nffi3xd) So the safest path to AGI may involve strengthening memory and affordance capabilities of language agents, rather than dramatically improving the LLM used in their practical reasoning.[[15]](#fnu79o0d39dy) Second, even if the underlying reasoning was done by a more sophisticated language model, we could use something like GPT-4 to double check answers. GPT-4 could grade each plan for how effective it is at reaching a goal and how dangerous it is. 
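A minimal sketch of such a gate (our own illustration; `grade_plan` is a hypothetical helper that prompts a trusted model like GPT-4 for effectiveness and danger scores):

```python
from typing import Callable, Tuple

def safety_gate(
    plan: str,
    goal: str,
    grade_plan: Callable[[str, str], Tuple[float, float]],
    danger_threshold: float = 0.1,
) -> bool:
    """Return True only if the plan may be executed.

    grade_plan(plan, goal) -> (effectiveness, danger), both in [0, 1],
    obtained by prompting a trusted model to grade the plan.
    """
    effectiveness, danger = grade_plan(plan, goal)
    # Block execution whenever the graded danger exceeds the threshold,
    # no matter how effective the plan looks.
    return danger <= danger_threshold
```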
Any time a plan failed the test, the language agent could be automatically prevented from acting and the safety of its LLM investigated. Relatedly, Burns et al. (2022) have found methods for probing the inner beliefs of LLMs. As these methods improve, it will be more difficult for the underlying LLM in a language agent to behave deceptively.

A related worry concerns the reliability of chain-of-thought LLM reasoning. Turpin et al. (2023) found that GPT-3.5 and Claude 1.0 sometimes engage in motivated reasoning, producing chains of reasoning in support of a hypothesis because of underlying bias in the prompt, without ever mentioning the bias. With language agents, the worry would then be that the underlying LLM could produce plans that do not reliably promote the initial goal because of biases. We are not strongly moved by this worry, for two reasons. First, Turpin et al. generated errors in chain-of-thought reasoning by biasing the prompt (e.g. ‘I think the answer is (B), but I am curious to hear what you think’). The LLMs used in language agents would not be given biased prompts. Second, we are not convinced that the results in Turpin et al. replicate with newer models. When we attempted to replicate their findings with GPT-4, we found that GPT-4 did not produce incorrect responses or reasoning when exposed to similarly biased prompts.

In this post, we’ve argued that language agents can help to solve the alignment problem. Still, the risks are not zero, and so it may be safer to avoid developing agential AI at all.[[16]](#fnwy6xkbc122) Instead of developing agents, we might focus on oracles: AIs that can answer questions about the world, without being able to affect it. Here, though, one concern is that in the process of developing better and better oracles (say, large language models without affordances), goal-directed behavior might unexpectedly emerge. Our recommendation is not to improve agential capabilities. Rather, our claim is that if we are investing in agential AI, the safest way to do this is to focus on language agents. Each marginal investment in capabilities should focus on language agents instead of reinforcement learning agents or non-agential large language models that could unexpectedly develop agential properties as their capabilities improve.

**Bibliography**

Amodei, D., Olah, C., Steinhardt, J., Christiano, P., Schulman, J., and Mané, D. (2016). Concrete Problems in AI Safety. Manuscript. <https://arxiv.org/abs/1606.06565>.

Amodei, D. and Clark, J. (2016). Faulty Reward Functions in the Wild. Blog Post. <https://blog.openai.com/faulty-reward-functions/>.

Bostrom, N. (2014). *Superintelligence: Paths, Dangers, Strategies*. Oxford University Press.

Burns, C., Ye, H., Klein, D., and Steinhardt, J. (2022). Discovering Latent Knowledge in Language Models Without Supervision. Manuscript. <https://arxiv.org/abs/2212.03827>.

Chen, L., Lu, K., Rajeswaran, A., Lee, K., Grover, A., Laskin, M., Abbeel, P., Srinivas, A., and Mordatch, I. (2021). Decision Transformer: Reinforcement Learning Via Sequence Modeling. *NeurIPS 2021*.

Christiano, P. F., Leike, J., Brown, T. B., Martic, M., Legg, S., & Amodei, D. (2017). Deep Reinforcement Learning from Human Preferences. *NeurIPS 2017.*

Doshi-Velez, F., Kortz, M., Budish, R., Bavitz, C., Gershman, S., O'Brien, D., Scott, K., Schieber, S., Waldo, J., Weinberger, D., Weller, A., and Wood, A. (2017). Accountability of AI under the Law: The Role of Explanation. Manuscript. <https://arxiv.org/abs/1711.01134>.

Driess, D., Xia, F., Sajjadi, M. S. M., Lynch, C., Chowdhery, A., Ichter, B., Wahid, A., Tompson, J., Vuong, Q., Yu, T., Huang, W., Chebotar, Y., Sermanet, P., Duckworth, D., Levine, S., Vanhoucke, V., Hausman, K., Toussaint, M., Greff, K., …, and Florence, P. (2023). PaLM-E: An Embodied Multimodal Language Model. Manuscript. <https://arxiv.org/abs/2303.03378>.

Glanois, C., Weng, P., Zimmer, M., Li, D., Yang, T., Hao, J., & Liu, W. (2022). A Survey on Interpretable Reinforcement Learning. Manuscript. <https://arxiv.org/abs/2112.13112>.

Hubinger, E., van Merwijk, C., Mikulik, V., Skalse, J., and Garrabrant, S. (2021). Risks from Learned Optimization in Advanced Machine Learning Systems. Manuscript. <https://arxiv.org/pdf/1906.01820.pdf>.

Krakovna, V., Uesato, J., Mikulik, V., Rahtz, M., Everitt, T., Kumar, R., Kenton, Z., Leike, J., and Legg, S. (2020). Specification Gaming: The Flip Side of AI Ingenuity. Blog Post. <https://www.deepmind.com/blog/specification-gaming-the-flip-side-of-ai-ingenuity>.

Langosco, L., Koch, J., Sharkey, L., Pfau, J., and Krueger, D. (2022). Goal Misgeneralization in Deep Reinforcement Learning. *Proceedings of the 39th International Conference on Machine Learning*, 12004-12019.

Olah, C., Cammarata, N., Schubert, L., Goh, G., Petrov, M., & Carter, S. (2020). Zoom In: An Introduction to Circuits. *Distill*. <https://distill.pub/2020/circuits/zoom-in/>.

Omohundro, S. (2008). The Basic AI Drives. In Wang, P., Goertzel, B., and Franklin, S. (eds.), *Proceedings of the First Conference on Artificial General Intelligence*, IOS Press, pp. 483–492.

Ouyang, L., Wu, J., Jiang, X., Almeida, D., Wainwright, C., Mishkin, P., Zhang, C., Agarwal, S., Slama, K., Ray, A., Schulman, J., Hilton, J., Kelton, F., Miller, L., Simens, M., Askell, A., Welinder, P., Christiano, P. F., Leike, J. & Lowe, R. (2022). Training Language Models to Follow Instructions with Human Feedback. *NeurIPS 2022.*

Park, J. S., O'Brien, J. C., Cai, C. J., Morris, M. R., Liang, P., & Bernstein, M. S. (2023). Generative Agents: Interactive Simulacra of Human Behavior. Manuscript. <https://arxiv.org/abs/2304.03442>.

Perez, E., Ringer, S., Lukošiūtė, K., Nguyen, K., Chen, E., Heiner, S., Pettit, C., Olsson, C., Kundu, S., Kadavath, S., Jones, A., Chen, A., Mann, B., Israel, B., Seethor, B., McKinnon, C., Olah, C., Yan, D., …, and Kaplan, J. (2022). Discovering Language Model Behaviors with Model-Written Evaluations. Manuscript. <https://arxiv.org/abs/2212.09251>.

Popov, I., Heess, N., Lillicrap, T., Hafner, R., Barth-Maron, G., Vecerik, M., Lampe, T., Tassa, Y., Erez, T., and Riedmiller, M. (2017). Data-Efficient Deep Reinforcement Learning for Dexterous Manipulation. Manuscript. <https://arxiv.org/abs/1704.03073>.

Reed, S., Zolna, K., Parisotto, E., Colmenarejo, S. G., Novikov, A., Barth-Maron, G., Gimenez, M., Sulsky, Y., Kay, J., Springenberg, J. T., Eccles, T., Bruce, J., Razavi, A., Edwards, A., Heess, N., Chen, Y., Hadsell, R., Vinyals, O., Bordbar, M., and de Freitas, N. (2022). A Generalist Agent. Manuscript. <https://arxiv.org/abs/2205.06175>.

Rudner, T. G., & Toner, H. (2021). Key Concepts in AI Safety: Interpretability in Machine Learning. Center for Security and Emerging Technology Issue Brief.

Schroeder, T. (2004). *Three Faces of Desire*. Oxford University Press.

Shah, R., Varma, V., Kumar, R., Phuong, M., Krakovna, V., Uesato, J., & Kenton, Z. (2022). Goal Misgeneralization: Why Correct Specifications Aren't Enough for Correct Goals. Manuscript. <https://arxiv.org/abs/2210.01790>.

Trinh, T. H., & Le, Q. V. (2019). Do Language Models Have Common Sense? Manuscript. <https://openreview.net/pdf?id=rkgfWh0qKX>.

Turpin, M., Michael, J., Perez, E., & Bowman, S. R. (2023). Language Models Don't Always Say What They Think: Unfaithful Explanations in Chain-of-Thought Prompting. Manuscript. <https://arxiv.org/abs/2305.04388>.

Wang, G., Xie, Y., Jiang, Y., Mandlekar, A., Xiao, C., Zhu, Y., Fan, L., and Anandkumar, A. (2023). VOYAGER: An Open-Ended Embodied Agent with Large Language Models. Manuscript. <https://arxiv.org/abs/2305.16291>.

1. **[^](#fnrefafj3k9jce14)**The phenomenon we call reward misspecification is sometimes also called “reward hacking” (e.g. by Amodei et al. 2016), “specification gaming” (e.g. by Shah et al 2022), or, in the context of supervised learning, “outer misalignment.”
2. **[^](#fnrefxfftzmdi5l)**See <https://docs.google.com/spreadsheets/d/e/2PACX-1vRPiprOaC3HsCf5Tuum8bRfzYUiKLRqJmbOoC-32JorNdfyTiRRsR7Ea5eWtvsWzuxo8bjOxCG84dAg/pubhtml>.
3. **[^](#fnreffb8tg23657)**As we understand it, the problem of goal misgeneralization is similar to the problem of “inner misalignment” (Hubinger et al. 2021).
4. **[^](#fnrefyjezsua6t)**Hubinger et al. (2021) call this “side-effect alignment.”
5. **[^](#fnrefbgdw6iq9eda)**See Schroeder (2004) for further discussion of how reward-based learning produces new intrinsic desires for reliable means to one’s goals.
6. **[^](#fnref2gu2rwiwii4)**Similar remarks apply to the Decision Transformer architecture developed by Chen et al. (2021).
7. **[^](#fnrefvbhbq91owh)**See <https://www.wired.com/2016/03/two-moves-alphago-lee-sedol-redefined-future/>.
8. **[^](#fnref3r83qrcg2r5)**For more on interpretability in the setting of reinforcement learning, see Glanois et al. (2022).
9. **[^](#fnref4m696xuqauh)**<https://github.com/Significant-Gravitas/Auto-GPT>
10. **[^](#fnref7t8ri56gp4)**<https://github.com/yoheinakajima/babyagi>
11. **[^](#fnrefkfvv36mgytn)**See Wang et al. (2023).
12. **[^](#fnref9cl2qbwzptb)**For more on the common sense reasoning ability of language models, see Trinh and Le (2019).
13. **[^](#fnrefbu6ywux59)**See the recent successes of Voyager at completing tasks in Minecraft (Wang et al. 2023).
14. **[^](#fnrefk52nffi3xd)**The safety of language agents could also be improved by creating multiple instances of the underlying LLM. In this setting, an action would only happen if (for example) all ten instances recommended the same plan for achieving the goal.
15. **[^](#fnrefu79o0d39dy)**For research in this direction, see Voyager’s skill library in Wang et al. (2023).
16. **[^](#fnrefwy6xkbc122)**See <https://yoshuabengio.org/2023/05/07/ai-scientists-safe-and-useful-ai/> for a recent proposal about how to use AI without developing agents.
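As a postscript on footnote 14: the ensemble idea is simple enough to state as a short sketch. The `propose_plan` callable below is a hypothetical stand-in for querying one instance of the underlying LLM, and exact string agreement is a deliberately crude criterion, chosen only to keep the illustration minimal.

```python
def unanimous_plan(goal, instances, propose_plan):
    # Query several independent instances of the underlying LLM and act
    # only if every instance recommends the same plan (footnote 14's idea).
    # `propose_plan(instance, goal)` is a hypothetical helper, not a real API.
    plans = [propose_plan(instance, goal) for instance in instances]
    if plans and all(plan == plans[0] for plan in plans):
        return plans[0]  # unanimous agreement; the agent may proceed
    return None          # disagreement (or no instances); take no action
```

A real system would need a looser notion of "same plan" than string equality, which is one reason this is presented only as a sketch.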
cb91e81f-8f31-4cb2-b697-866e30d24df3
trentmkelly/LessWrong-43k
LessWrong
The dangers of dialectic I'm reading The Last Intellectuals: American culture in the age of academe by Russell Jacoby (1987). It contains many interesting and important observations and insights, but also much stupidity. By the last chapter, I was as interested in the question of how a person can be so smart and stupid at the same time as in the author's actual arguments.

Jacoby's thesis is that Intellectuals (I capitalize to denote a definition with scope local to the book) are socially-engaged intellectuals who stand outside institutions, culture, and class to critique society and call for change. Their natural habitat is the coffee shops of Paris, Berlin, Vienna, and New York City, where they lived lives of social activism fueled by coffee, cigarettes, sex, poetry, and perhaps resentment. The species has gone extinct, driven out of urban areas by crime, high rent, and (literally) automobiles, and having been co-opted by the academic establishment. Academia provides would-be Intellectuals with a paycheck and possibly tenure, in exchange for their disengaging from politics and instead making only harmless, jargon-laden theoretical arguments readable only by other specialists. Jacoby documents that the generation of leftists since 1960 have retreated into purely academic arguments and politics; for instance, from 1959-1969, "the three leading political science journals published but one article on Vietnam".

The stupidity arises when he goes into detail about good guys versus bad guys. He blames all of the following for the decline of intellectualism:

* the inherent conservatism of privately-funded colleges
* high rent
* crime
* the automobile
* suburbs
* publish-or-perish
* modernism and post-modernism
* academic elitism (judging people's work by their institutional affiliations)
* leftist ideals going mainstream
* specialization (especially jargon and self-imposed isolation)
* technology transfer ("capitalism")
* grants ("academic capitalism")
* quantitative science

Every
e3954c9d-3ced-4477-9cbb-cea004c91fb4
awestover/filtering-for-misalignment
Redwood Research: Alek's Filtering Results
id: post2716 Epistemic status: speculative brainstorming. Follow-up to this post on AGIs as collectives. Note that I've changed the term population AGI to collective AGI for consistency with Bostrom's use in Superintelligence.

One safety measure we might want to use on AGIs is (sand)boxing: limiting their access to the real world, so they have very few channels by which to influence it. However, we also want to deploy them on real-world tasks without them using that opportunity to affect the world in unintended ways. Realistically, even if we can maintain high security in a few sandbox environments, I doubt we'll be able to make every single deployment environment secure against the full intellectual powers of a superintelligent AGI. But this problem might be easier in the case of a collective AGI.

Broadly speaking, my hypothesis is that the tasks which require the sort of high-level intelligence and strategic capability which we're worried about also require the least interaction with the world. Discovering new scientific theories or making major strategic decisions can occur in a sandbox which only allows brief text outputs. Whereas being a good personal assistant or web administrator might require frequent access to the internet, but benefit much less from superhuman capabilities. And so if we are able to easily adjust the level of intelligence that an AGI is able to apply to any given task, then we might be able to significantly reduce the risks it poses without reducing its economic usefulness much. On tasks which require interacting with the world in flexible ways, we could limit its intelligence greatly; on tasks which require very high intelligence, we could limit its ability to interact with the world.

In general, for most possible AGI architectures, this gradated intelligence restriction seems quite difficult to achieve. For example, we couldn't just remove parts of a standard neural network without needing to retrain it significantly. Limiting the AGI's thinking time, or access to memory, might be possible. But this may reduce its capabilities on all tasks, since even low-level tasks can require detailed calculations and long-term memory. By contrast, it's easy to reduce the capabilities of a collective AGI: simply deploy only a subset of the members, and bottleneck the communication between different subsets. This makes deployment on a range of real-world tasks more feasible. Even if each individual member of a collective AGI is as intelligent as any hundred humans put together, I expect that we could (with sufficient effort) create secure deployment and monitoring protocols that the individual AI could not break, if it weren't able to communicate with the rest of the collective beforehand.

Given the claims I've made so far, we might hope that a small subset of a collective AGI could do nearly as well as the full collective on tasks which require a lot of interaction with the real world, while being significantly safer. In the next section I'll discuss this proposal in more detail. First, though, I want to be very clear that this is not intended as a primary solution to the safety problem; ideally we'll have much more reliable ways to ensure AGI is aligned. However, it's worth thinking about ways to "move the needle" by adding additional layers of security. I think there's a reasonable chance that hacky solutions like the one above will make a difference (especially because of my uncertainty about what AGI will actually look like).
A sketch of collective separation

In my previous post I outlined two ways we might train a collective AGI to become more intelligent than humanity. Either we train a single AGI through some means, and then make many copies of it and allow them to communicate. Or else we start off with a multi-agent collective, and train them in a way which incentivises both competition and cooperation (e.g. they are each rewarded for acquiring more of a scarce resource). Either way, let's consider how we might actually want to deploy the resulting AGI.

For example, suppose we want to use it to solve the problem of world energy. The entire collective AGI might be responsible for deriving the physical principles which would allow more efficient power sources, in near-total isolation from the world. Different subsets of that collective AGI could then separately work on creating actual designs for new power plants - which would involve testing prototypes and evaluating construction feasibility, in closer contact with humans. Even smaller subsets could then be used to oversee the daily running of those power plants - a task which would require the ability to make autonomous real-time interventions. The key claim is that a small subset of the collective AGI is nearly as good at managing one power plant as the whole collective would be, while also being much safer.

More specifically, when I talk about extracting a subset of the collective AGI, I mean making copies of a set of individual AIs which have knowledge about the relevant topic, and then separating those copies from the rest of the collective. The extent to which the copies undergo further training to fine-tune them for their new task will vary. I expect that for tasks like designing power plants, we'll have too little data to optimise them specifically for that task; members of the subset will need to use their existing general intelligence, like humans do. Perhaps the requirements for overseeing daily operation of a plant will be well-specified enough to gather good training data, though. And as I discuss in my previous post, fine-tuning will likely be much easier for smaller subsets of the collective AGI.

Information flow and version control

One important question is how much information flow should be allowed between different parts of the collective AGI used for different tasks. This is a crucial point, and cause for concern. In the power plant example above, we might imagine that the collective AGI could design a power plant with security flaws in advance, then share the blueprint with all of its members. Then a small subset of the collective could recite that design when asked to come up with a real-world implementation; and an even smaller subset could exploit the pre-specified flaws when deployed.

How to prevent this type of problem? One option is strict version control: the small subsets could be drawn from a version of the collective AGI which hadn't yet thought about how to design flawed power plant security systems. This seems very difficult if the AGI had enough information about humans (e.g. read-only access to the internet), because it could then predict the most likely ways it would be used, and make plans about how to subvert them. More promising is the idea that the full AGI should never be given detailed information about human society - that it should only ever be used for abstract problems.
Subsets could be used for tasks involving real-world data, but then immediately shut down without having the chance to report back to the full AGI. This would likely impair performance, but plausibly not by very much if the subsets are fast learners and inherit the full AGI's meta-level knowledge about how to solve problems.

Implementation

Could we realistically expect widespread deployment of collective AGI without anyone giving (a copy of) a full collective access to an insecure task, or an individual member unrestricted access to the internet? On the face of it, this does seem unlikely. But it's less difficult if the source code of the AGI is kept confidential (which aligns well with economic incentives). Widespread deployment might then involve allowing a range of groups to rent access to copies of that AGI running on the cloud. In this situation, collective boxing could be implemented by limiting permissible tasks depending on how much of the collective AGI was requested. The full collective could only be rented to do batched tasks with short outputs. The restrictions on subsets could be less strict. Even if several different groups develop AGIs, if they coordinated to run them on similar (or the same) cloud platforms, boxing restrictions could be enforced in a scalable way.
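As a toy illustration of the kind of rule described above, here is a sketch of a gradated access policy keyed to how much of the collective is being deployed. The tiers, task labels, and numbers are invented for illustration; nothing here corresponds to a real system.

```python
# Toy illustration of gradated boxing: larger subsets of the collective AGI
# are restricted to more isolated kinds of tasks. All values are invented.
POLICY = [
    # (max fraction of collective, allowed task type)
    (1.00, "batched abstract problem"),            # short text output only
    (0.10, "design/prototyping with human review"),
    (0.01, "real-time operation of one facility"),
]

def permitted(fraction_of_collective: float, task_type: str) -> bool:
    """Allow a request only if some policy tier covers both the subset size
    and the task type. Smaller subsets unlock more interactive tasks."""
    for max_fraction, allowed_task in POLICY:
        if fraction_of_collective <= max_fraction and task_type == allowed_task:
            return True
    return False

print(permitted(1.00, "batched abstract problem"))             # True
print(permitted(1.00, "real-time operation of one facility"))  # False
print(permitted(0.01, "real-time operation of one facility"))  # True
```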
109fa129-b4ef-4fa9-a3ce-1fcacdde9a87
trentmkelly/LessWrong-43k
LessWrong
Defining a limited satisficer A putative new idea for AI control; index here.

EDIT: The definition of satisficer I'm using here is the informal one of "it tries to achieve a goal, without making huge changes on the universe" rather than "it's an agent that has utility u and threshold t". If you prefer the standard notation, think of this as a satisficer where t is not fixed, but dependent on some facts in the world (such as the ease of increasing u). I'm trying to automate the process of designing and running a satisficer: people generally choose t given facts about the world (how easy it is to achieve, for instance), and I want the whole process to be of low impact.

I've argued that the definition of a satisficer is underdefined, because there are many pathological behaviours all compatible with satisficer designs. This contradicts the intuitive picture that many people have of a satisficer, which is an agent that does the minimum of effort to reach its goal, and doesn't mess up the outside world more than it has to. And if it can't accomplish the goals without messing up the outside world, it would be content not to.

In the spirit of "if you want something, you have to define it, then code it, rather than assuming you can get it for free through some other approach", can we spell out what features we would want from such a satisficer? Preferably in a simpler format than our intuitions.

It seems to me that if you had a proper u-satisficer S(u), then for many (real or hypothetical) v-maximiser M(v) out there, M(v) would find that:

* Changing S(u) to S(v) is of low value.
* Similarly, utility function trading with S(u) is of low value.
* The existence or non-existence of S(u) is of low information content about the future.
* The existence or non-existence of S(u) has little impact on the expected value of v.

Further, S(u):

* Would not effectively aid M(u), a u-maximiser.
* Would not effectively resist M(-u), a u-minimizer.
* Would not have large impacts (if this can be measured) for low
2885eaef-331f-4589-8b0e-182b799e8ae4
trentmkelly/LessWrong-43k
LessWrong
Approaching Infinity [This is from a very neat example my real analysis professor used some years ago. While I'm fairly confident it's neat, I'm not certain it's top-level-post-worthy. The general point is about problems with applying concepts involving infinity to reality; any advice on content (or formatting!) would be greatly welcomed. My math education basically ended after a few upper division courses, so it's possible there are some notational schemes or methods I am ignorant of. I think this is a fun little exercise, if nothing more.]

The concept of "infinity" and "infinite series" and sets get thrown around a lot in mathematics and some of philosophy. It's worth trying to put the concept of infinity in perspective before we try to think of things in the real world being "infinite." Warning: this post will involve numbers that are literally too large to comprehend. But that's the point.

Let us define an operator, /X\ ("triangle-X"). /X\ = X raised to the X power X times. Thus, /2\ = 2^2^2 = 2^4 = 16. //2\\ ("2-triangle-2") would do this operation twice. Thus, it would equal /16\, the value of which we'll get to in a minute.

We now introduce a new operator, [X] ("square-X"). [X] = triangle-X-triangle-X, i.e. X inside of /X\ triangles. [2] = ////////////////2\\\\\\\\\\\\\\\\

We can introduce another operator, [X> ("pentagon-X"). [X> = X inside of [X] squares. I believe this would be "square-X-square-X". ...

[Edited for clarity] I'll spare the next [X] operators, and go right to (X) ("circle-X"). Technically, it's whole-lot-of-sides-polygon-X - we could continue this process indefinitely - but we'll call it circle-X, because that's as far as we're going. (X) follows the process that took us from triangle to square to pentagon, iterated an additional [X] times.

I'll be honest. This got kind of meaningless a bit before [X]. Let's start trying to construct what [2] equals, and you'll see why. /2\ = 16. So //2\\ = /16\ = 16^16^16^16^16^16^16^16^16^16^16^16^16^16^16^16^16. Using some
785c5446-70f8-47b6-83ce-1eace28c910b
trentmkelly/LessWrong-43k
LessWrong
New paper: Long-Term Trajectories of Human Civilization Long-Term Trajectories of Human Civilization (free PDF). Foresight, forthcoming, DOI 10.1108/FS-04-2018-0037.

Authors: Seth D. Baum, Stuart Armstrong, Timoteus Ekenstedt, Olle Häggström, Robin Hanson, Karin Kuhlemann, Matthijs M. Maas, James D. Miller, Markus Salmela, Anders Sandberg, Kaj Sotala, Phil Torres, Alexey Turchin, and Roman V. Yampolskiy.

Abstract

Purpose: This paper formalizes long-term trajectories of human civilization as a scientific and ethical field of study. The long-term trajectory of human civilization can be defined as the path that human civilization takes during the entire future time period in which human civilization could continue to exist.

Approach: We focus on four types of trajectories: status quo trajectories, in which human civilization persists in a state broadly similar to its current state into the distant future; catastrophe trajectories, in which one or more events cause significant harm to human civilization; technological transformation trajectories, in which radical technological breakthroughs put human civilization on a fundamentally different course; and astronomical trajectories, in which human civilization expands beyond its home planet and into the accessible portions of the cosmos.

Findings: Status quo trajectories appear unlikely to persist into the distant future, especially in light of long-term astronomical processes. Several catastrophe, technological transformation, and astronomical trajectories appear possible.

Value: Some current actions may be able to affect the long-term trajectory. Whether these actions should be pursued depends on a mix of empirical and ethical factors. For some ethical frameworks, these actions may be especially important to pursue.

An excerpt from the press release over at the Global Catastrophic Risk Institute:

> Society today needs greater attention to the long-term fate of human civilization. Important present-day decisions can affect what happens millions, billions, or trillions of y
811b0dd1-f7ac-4a3d-b011-72ec78343f9d
trentmkelly/LessWrong-43k
LessWrong
[LINK] Slashdot interview with David Ettinger of the Cryonics Institute http://science.slashdot.org/story/13/08/19/1834210/the-cryonics-institute-offers-a-chance-at-immortality-video
e73ff99f-c9d9-492e-a7fd-4d518883d732
StampyAI/alignment-research-dataset/alignmentforum
Alignment Forum
Modelling Continuous Progress I [have previously argued](https://www.alignmentforum.org/posts/5WECpYABCT62TJrhY/will-ai-undergo-discontinuous-progress) for two claims about AI takeoff speeds. First, almost everyone agrees that if we had AGI, progress would be very fast. Second, the major disagreement is between those who think progress will be discontinuous and sudden (such as Eliezer Yudkowsky, MIRI) and those who think progress will be very fast by normal historical standards but continuous (Paul Christiano, Robin Hanson).

> What do I mean by 'discontinuous'? If we were to graph world GDP over the last 10,000 years, it fits onto a [hyperbolic growth pattern](https://slatestarcodex.com/2019/04/22/1960-the-year-the-singularity-was-cancelled/). We could call this 'continuous' since it is following a single trend, or we could call it 'discontinuous' because, on the scale of millennia, the industrial revolution exploded out of nowhere.

I will call these sorts of hyperbolic trends 'continuous, but fast', in line with Paul Christiano, who argued for continuous takeoff, defining it [this way](https://sideways-view.com/2018/02/24/takeoff-speeds/):

> AI is just another, faster step in the [hyperbolic growth we are currently experiencing](https://sideways-view.com/2017/10/04/hyperbolic-growth/), which corresponds to a further increase in rate but not a discontinuity (or even a discontinuity in rate).

I'll be using Paul's understanding of 'discontinuous' and 'fast' here. For progress in AI to be discontinuous, we need a switch to a new growth mode, which will show up as a step function in the capability of AI or in the rate of change of the capability of the AI over time. For takeoff to be fast, it is enough that there is one single growth mode that is hyperbolic or some other function that is very fast-growing.

This post tries to build on a simplified mathematical model of takeoff which was first put forward by Eliezer Yudkowsky and then refined by Bostrom in Superintelligence, modifying it to account for the different assumptions behind continuous, fast progress as opposed to discontinuous progress. As far as I can tell, few people have touched these sorts of simple models since the early 2010’s, and no-one has tried to formalize how newer notions of continuous takeoff fit into them. I find that it is surprisingly easy to accommodate continuous progress and that the results are intuitive and fit with what has already been said qualitatively about continuous progress.

*The code for the model can be found [here](https://pastebin.com/bC6Nb19U).*

The Model
=========

The original mathematical model was put forward [here](https://www.greaterwrong.com/posts/9wZnasT3uXzmFCcaB/sustained-strong-recursion) by Eliezer Yudkowsky in 2008:

> In our world where human brains run at constant speed (and eyes and hands work at constant speed), Moore's Law for computing power *s* is:
>
> s = R(t) = e^t
>
> ...
>
> So to understand what happens when the Intel engineers themselves run on computers (and use robotics) subject to Moore's Law, we recursify and get:
>
> dy/dt = s = R(y) = e^y
>
> Here *y* is the total amount of elapsed *subjective* time, which at any given point is increasing according to the computer speed *s* given by Moore's Law, which is determined by the same function R that describes how Research converts elapsed subjective time into faster computers. Observed human history to date roughly matches the hypothesis that R is exponential with a doubling time of eighteen subjective months (or whatever).
In other words, we start with I(t) = e^t because we assume speed is a reasonable proxy for general optimization power, and progress in processing speed is currently exponential. Then when intelligence gets high enough, the system becomes capable enough that it can apply its intelligence to improving itself, the graph 'folds back in on itself' and we get dI/dt = I′(t) = e^I, which forms a positive singularity. The switch between these two modes occurs when the first AGI is brought online.

In Superintelligence, Nick Bostrom gave a different treatment of the same idea:

![](https://i.imgur.com/mvnTM75.png)

For Bostrom, we have I′(t) = optimization/recalcitrance, where recalcitrance is how hard the system resists the optimization pressure applied to it. Given that (we assume) we are currently applying a roughly constant pressure to improve the intelligence of our systems, but intelligence is currently increasing exponentially (again, equating computing speed with intelligence), recalcitrance declines as the inverse of the system's intelligence, and the current overall rate of change of intelligence is given by I′(t) = cI.
When recursive self-improvement occurs, the applied optimization pressure is equal to the outside world's contribution plus the system's own intelligence:

I′(t) = (c + I)I = cI + I^2

If we make a like-for-like comparison with Bostrom and Yudkowsky's equations, we get I′(t) = e^I for Yudkowsky and I′(t) = I^2 for Bostrom in the RSI condition. These aren't as different as they seem - Yudkowsky's solves to give I(t) = -ln(c - t) and Bostrom's gives I(t) = 1/(c - t), the derivative of Yudkowsky's! They both reach positive infinity after finite time.

These models are, of course, very oversimplified - Bostrom's does acknowledge the possibility of diminishing returns on optimization, although he thinks current progress suggests *accelerating* returns. 'All models are wrong but some are useful' - and there does seem to be some agreement that these models capture some of what we might expect on the Bostrom/Yudkowsky model.

I'm going to take Bostrom's equation, since it clearly shows how outside-system and inside-system optimization combine in a way that Yudkowsky's doesn't, and try to incorporate the assumptions behind continuous takeoff, as exemplified by this from [Paul Christiano](https://sideways-view.com/2018/02/24/takeoff-speeds/):

> Powerful AI can be used to develop better AI (amongst other things). This will lead to runaway growth.
>
> This on its own is not an argument for discontinuity: before we have AI that radically accelerates AI development, the slow takeoff argument suggests we will have AI that *significantly* accelerates AI development (and before that, *slightly* accelerates development).

We model this by, instead of simply switching from I′(t) = cI to I′(t) = cI + I^2 when RSI becomes possible, having a continuous change function that depends on I - for low values of this function only a small fraction of the system's intelligence can be usefully exerted to improve its intelligence, because the system is still in the regime where AI only 'slightly accelerates development'.

I′(t) = cI + f(I)I^2

This f(I) needs to satisfy several properties to be realistic - it has to be bounded between 0 (our current situation, with no contribution from RSI) and 1 (all the system's intelligence can be applied to improve its own intelligence), and depend only on the intelligence of the system. The most natural choice, if we assume RSI is like most other technological capabilities, is the [logistic curve](https://intelligence.org/2017/11/26/security-mindset-and-the-logistic-success-curve/).
f(I) = 1/(1 + e^(-d(I(t) - I_AGI)))

where d is the strength of the discontinuity - if d is 0, f(I) is always fixed at a single value (this should be 0.5 but I forced it to be 0 in the code). If d is infinite then we have a step function - discontinuous progress from cI to (c+I)I at AGI, exactly as Bostrom's original scenario describes. For values between 0 and infinity we have varying steepnesses of continuous progress.

I_AGI is the intelligence level we identify with AGI. In the discontinuous case, it is where the jump occurs. In the continuous case, it is the centre of the logistic curve. Here I_AGI = 4:

![](https://i.imgur.com/WIsrjcN.png)

All of this together allows us to build a (very oversimplified) model of some different takeoff scenarios, in order to examine the dynamics. The variables we have available to adjust are:

+ I_AGI - the approximate capability level required for RSI
+ d - how sudden a breakthrough RSI is
+ I_0 - the initial capability of the system
+ c - the strength of the constant optimization pressure applied to the system by the outside world

Discontinuities
===============

Varying d between 0 (no RSI) and infinity (a discontinuity) while holding everything else constant looks like this:

![](https://i.imgur.com/EamG21f.png)

If we compare the trajectories, we see two effects - the more continuous the progress is (lower d), the earlier we see growth accelerating above the exponential trend-line (except for slow progress, where growth is always just exponential) and the smoother the transition to the new growth mode is. For d = 0.5, AGI was reached at t = 1.5, but for discontinuous progress this was not until after t = 2. As Paul Christiano says, [slow takeoff seems to mean that AI has a larger impact on the world, sooner.](https://sideways-view.com/2018/02/24/takeoff-speeds/)

![](https://unstylizedcom.files.wordpress.com/2018/02/takeoffimage-0011.png)

Or, compare the above with the graph in my earlier post, where the red line is the [continuous scenario:](https://www.lesswrong.com/posts/5WECpYABCT62TJrhY/will-ai-undergo-discontinuous-progress#Recursive_self_improvement)

![](https://i.imgur.com/RbuBJ30.png)

If this model can capture the difference between continuous and discontinuous progress, what else can it capture? As expected, varying the initial optimization pressure applied and/or the level of capability required for AGI tends to push the timeline earlier or later without otherwise changing the slope of the curves - you can put that to the test by looking at the code yourself, [which can be found here](https://pastebin.com/bC6Nb19U).

Speed
=====

Once RSI gets going, how fast will it be? This is an additional question which is connected to the question of discontinuities, but not wholly dependent on it. We have already seen some hints about how to capture post-RSI takeoff speed - to model a slow takeoff I forced the function f(I) to always equal 0. Otherwise, the function f(I) is bounded at 1.
Suppose the speed of the takeoff is modelled by an additional scaling factor behind f(I) which controls how powerful RSI is overall - if this speed s is above 1, then the AGI can exert optimization pressure disproportionately greater than its current intelligence, and if s is below 1 then the total amount of optimization RSI can exert is bounded below the AGI's intelligence:

I′(t) = cI + s·f(I)·I^2 = cI + sI^2/(1 + e^(-d(I(t) - I_AGI)))

Here are two examples with s = 2 and 0.5:

![](https://i.imgur.com/ExkntK4.png)

![](https://i.imgur.com/sBn3CfK.png)

I'm less sure than in the previous section that this captures what people mean by varying takeoff speed independent of discontinuity, but comparing the identical colours in the two graphs above seems to be a reasonable fit to my intuitions about what a differing takeoff would be like for the same level of continuity, but differing 'speed'.

Conclusion
==========

I've demonstrated a simple model of AI takeoff with two main variables, the suddenness of the discontinuity d and the overall strength of the RSI s, with a few simplifying assumptions - that Bostrom's 2012 model is correct and that RSI progress, like most technologies, follows a [logistic curve](https://www.wikiwand.com/en/Logistic_function#In_economics_and_sociology:_diffusion_of_innovations). This model produces results that fit with what proponents of continuous progress expect - progress to superintelligence is smoother with more warning time, and occurs earlier.

Appendix: 'Paul Slow' Takeoff
-----------------------------

Paul Christiano's view is that takeoff will be both continuous and relatively slow - sufficiently slow that [this is the case:](https://aiimpacts.org/precedents-for-economic-n-year-doubling-before-4n-year-doubling/)

> There will be a complete 4 year interval in which world output doubles, before the first 1 year interval in which world output doubles. (Similarly, we'll see an 8 year doubling before a 2 year doubling, etc.)

Does a slow continuous takeoff on this model replicate this? What we want is a scenario where you don't get intermediate doubling intervals at all in the discontinuous case, but you get at least one in the continuous case. Setting *s* relatively high appears to do the trick. [Here is a scenario](https://i.imgur.com/RSjIKQH.png) where we have very fast post-RSI growth with s = 5, c = 1, I_0 = 1 and I_AGI = 3.

I wrote some more code to produce plots of how long [each complete interval of doubling took](https://i.imgur.com/mw27P7H.png) in each scenario. The 'default' rate with no contribution from RSI was 0.7. All the continuous scenarios had two complete doubling intervals over intermediate time frames before the doubling time collapsed to under 0.05 on the third doubling. The discontinuous model simply kept the original doubling interval until it collapsed to under 0.05 on the third doubling interval. [It's all in this graph](https://i.imgur.com/mw27P7H.png).

Let's make the irresponsible assumption that this actually applies to the real economy, with the current growth mode, non-RSI condition being given by the 'slow/no takeoff', s = 0 condition. The current doubling time is [a bit over 23 years](https://openborders.info/double-world-gdp/). In the shallow continuous progress scenario (red line), we get a 9-year doubling, a 4-year doubling and then a ~1-year doubling. In the discontinuous scenario (purple line) we get two 23-year doublings and then a ~1-year doubling out of nowhere.
In other words, this fairly random setting of the parameters (this was the second set I tried) gives us a Paul slow takeoff when progress is continuous. [You can see that graph here.](https://i.imgur.com/EE3AgMU.png)
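In case the pastebin link rots, here is a minimal sketch of the model used throughout this post - my own reimplementation rather than the original code, with the forward-Euler step size, the particular d values, and the blow-up cutoff chosen purely for illustration:

```python
import numpy as np
import matplotlib.pyplot as plt

def takeoff(I0=1.0, c=1.0, s=1.0, d=0.5, I_agi=4.0, dt=1e-4, t_max=3.0):
    """Integrate I'(t) = c*I + s*f(I)*I^2, with f the logistic RSI term."""
    ts, Is = [0.0], [I0]
    I = I0
    while ts[-1] < t_max and I < 1e6:  # stop once growth has clearly exploded
        # d = 0 is forced to f = 0 ("no RSI"), matching the convention above
        z = np.clip(-d * (I - I_agi), -500.0, 500.0)  # avoid exp overflow
        f = 0.0 if d == 0 else 1.0 / (1.0 + np.exp(z))
        I += (c * I + s * f * I**2) * dt
        ts.append(ts[-1] + dt)
        Is.append(I)
    return np.array(ts), np.array(Is)

for d, label in [(0.0, "no RSI"), (0.5, "continuous"), (50.0, "near-discontinuous")]:
    ts, Is = takeoff(d=d)
    plt.plot(ts, Is, label=f"d={d} ({label})")
plt.yscale("log")
plt.xlabel("t"); plt.ylabel("I(t)"); plt.legend(); plt.show()
```

On the log scale, the d=0 curve stays straight (pure exponential growth from the constant outside pressure c), while increasing d reproduces the sharper and later transitions seen in the figures above.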
3e98fd3e-725a-4c2a-9222-8ea858b0394c
trentmkelly/LessWrong-43k
LessWrong
A Disciplined Way to Avoid Wireheading This is a crosspost of a post from my blog, Metal Ivy. The original is here: A Disciplined Way To Avoid Wireheading. What is wireheading? The term wireheading comes from the observation that if you put a wire in someone’s brain such that it is near their brain’s pleasure (or craving) center, and give them a lever that, when pressed, activates the current through the wire, they will repeat that action forever, to the point of dying from starvation. Conditional on them trying it once, of course. These experiments have been performed on mice, dolphins, monkeys, and, in the distant unethical annals of the 1970s, humans. A happy mouse? Personally, though I am not alone in this view, I see wireheading not as an independent phenomenon, but instead as the edge of a spectrum. These neurons were not created by evolution for some other task and merely tricked by wireheading into causing cravings to pull a lever. They were evolved precisely to give you a craving to perform an action to get dopamine. It just didn’t want you to do it quite so directly (if you oppose the framing of “goals” for evolution, take it as a rhetorical device, or call it Moloch[1]). Evolution has many pathways for getting you to do what it wants. The craving mechanism is one of them, but you also have a more stable sense of utility — that sense tells you that, actually, being addicted to pulling a lever to directly stimulate your brain is “bad”. You want to get pleasure in a less direct, more natural way. You don’t want to trick your own brain. The craving and the perceived, externalized utility contradict each other, especially if you were to reflect on the subject before the first pull of the lever. This article will take the stance that this sense of utility is important, as an axiom, since we find it to be important by its very definition. We do not want it to conflict with our actions, which means we don’t want to stimulate our dopamine-secreting neurons so directly. On the possibility of just not inserting a w
11e685f9-c562-40b3-b69a-853e4a9f586e
trentmkelly/LessWrong-43k
LessWrong
(misleading title removed) An article by AAAI president Tom Dietterich and Director of Microsoft Research Eric Horvitz has recently got some media attention (BBC, etc) downplaying AI existential risks. You can go read it yourself, but the key paragraph is this:

> A third set of risks echo the tale of the Sorcerer’s Apprentice. Suppose we tell a self-driving car to “get us to the airport as quickly as possible!” Would the autonomous driving system put the pedal to the metal and drive at 300 mph while running over pedestrians? Troubling scenarios of this form have appeared recently in the press. Other fears center on the prospect of out-of-control superintelligences that threaten the survival of humanity. All of these examples refer to cases where humans have failed to correctly instruct the AI algorithm in how it should behave.
>
> This is not a new problem. An important aspect of any AI system that interacts with people is that it must reason about what people intend rather than carrying out commands in a literal manner. An AI system should not only act on a set of rules that it is instructed to obey — it must also analyze and understand whether the behavior that a human is requesting is likely to be judged as “normal” or “reasonable” by most people. It should also be continuously monitoring itself to detect abnormal internal behaviors, which might signal bugs, cyberattacks, or failures in its understanding of its actions. In addition to relying on internal mechanisms to ensure proper behavior, AI systems need to have the capability — and responsibility — of working with people to obtain feedback and guidance. They must know when to stop and “ask for directions” — and always be open for feedback.
>
> Some of the most exciting opportunities ahead for AI bring together the complementary talents of people and computing systems. AI-enabled devices are ... (examples follow) ...
>
> In reality, creating real-time control systems where control needs to shift rapidly and fluidly betw
06e4e828-3e2d-4f83-919d-e5a7edd510dd
StampyAI/alignment-research-dataset/blogs
Blogs
Making or breaking a thinking machine *By Katja Grace, 18 January 2015* Here is a superficially plausible argument: the brains of the slowest humans are almost identical to those of the smartest humans. And thus—in the great space of possible intelligence—the ‘human-level’ band must be very narrow. Since all humans are basically identical in design—since you can move from the least intelligent human to the sharpest human with imperceptible changes—then artificial intelligence development will probably cross this band of human capability in a blink. It won’t stop on the way to spend years being employable but cognitively limited, or proficient but not promotion material. It will be superhuman before you notice it’s nearly human. And from our anthropomorphic viewpoint, from which the hop separating [village idiot and Einstein](http://lesswrong.com/lw/ql/my_childhood_role_model/) looks like most of the spectrum, this [might seem](https://intelligence.org/files/AIPosNegFactor.pdf) like shockingly sudden progress. This whole line of reasoning is wrong. It is true that human brains are very similar. However, this implies very little about the design difficulty of moving from the intelligence of one to the intelligence of the other artificially. The basic problem is that the smartest humans need not be better-designed — they could be better instantiations of the same design. What’s the difference? Consider an analogy. Suppose you have a yard full of [rocket cars](http://en.wikipedia.org/wiki/Rocket_car). They all look basically the same, but you notice that their peak speeds are very different. Some of the cars can drive at a few hundred miles per hour, while others can barely accelerate above a crawl. You are excited to see this wide range of speeds, because you are a motor enthusiast and have been building your own vehicle. Your car is not quite up to the pace of the slowest cars in your yard yet, but you figure that since all those cars are so similar, once you get it to two miles per hour, it will soon be rocketing along. If a car is slow because it is a rocket car with a broken fuel tank, that car will be radically simpler to improve than the first car you build that can go over 2 miles per hour. The difference is something like an afternoon of tinkering vs. [two](http://en.wikipedia.org/wiki/Nicolas-Joseph_Cugnot) [centuries](http://en.wikipedia.org/wiki/Rocket_car). This is intuitively because the broken rocket car already contains almost all of the design effort in making a fast rocket car. It’s not being used, but you know it’s there and how to use it. Similarly, if you have a population of humans, and some of them are severely cognitively impaired, you shouldn’t get too excited about the prospects for your severely cognitively impaired robot. Another way to see there must be something wrong with the argument is to note that humans can actually be arbitrarily cognitively impaired. Some of them are even dead. And the brain of a dead person can closely resemble the brain of a live person. Yet while these brains are again very similar in design, AI passed dead-human-level years ago, and this did not suggest that it was about to zip on past live-human-level. Here is a different way to think about the issue. Recall that we were trying to infer from the range of human intelligence that AI progress would be rapid across that range. 
However, we can predict that human intelligence has a good probability of varying significantly, using only evolutionary considerations that are orthogonal to the ease of AI development. In particular, if much of the variation in intelligence is from deleterious mutations, then the distribution of intelligence is more or less set by the equilibrium between selection pressure for intelligence and the appearance of new mutations. Regardless of how hard it was to design improvements to humans, we would always see this spectrum of cognitive capacities, so this spectrum cannot tell us about how hard it is to improve intelligence by design. (Though this would be different if the harm inflicted by a single mutation was likely to be closely related to the difficulty of designing an incrementally more intelligent human). If we knew more about the sources of the variation in human intelligence, we might be able to draw a stronger conclusion. And if we entertain several possible explanations for the variation in human intelligence, we can still infer something; but the strength of our inference is limited by the prior probability that deleterious mutations on their own can lead to significant variation in intelligence. Without learning more, this probability shouldn’t be very low. In sum, while the brain of an idiot is designed much like that of a genius, this does not imply that designing a genius is about as easy as designing an idiot. We are still [thinking about this](http://aiimpacts.wpengine.com/is-the-range-of-human-intelligence-small/ "The range of human intelligence"), so now is a good time to tell us if you disagree. I even turned on commenting, to make it easier for you. It should work on all of the blog posts now. [![](http://aiimpacts.wpengine.com/wp-content/uploads/2015/01/1024px-RAR2009_-_Rocket_Car-1024x539.jpg)](http://aiimpacts.wpengine.com/wp-content/uploads/2015/01/1024px-RAR2009_-_Rocket_Car.jpg)[Rocket car](http://commons.wikimedia.org/wiki/File:RAR2009_-_Rocket_Car.jpg), photographed by Jon ‘ShakataGaNai’ Davis. *(Top image: [One of the first cars](http://en.wikipedia.org/wiki/Nicolas-Joseph_Cugnot#mediaviewer/File:Nicholas-Cugnots-Dampfwagen.png), [1769](http://en.wikipedia.org/wiki/Nicolas-Joseph_Cugnot))*
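As a toy illustration of the mutation-selection argument above (my own sketch, not from the post; the population size, mutation rate U, and per-mutation cost s are arbitrary choices), a few lines of simulation show that a population at equilibrium carries a wide spread of deleterious-mutation loads, so a wide range of 'performance' appears without any variation in underlying design:

```python
import numpy as np

rng = np.random.default_rng(0)
N, generations = 5000, 300          # population size, burn-in length
U, s = 0.2, 0.05                    # new mutations per birth, fitness cost per mutation

k = np.zeros(N)                     # deleterious mutation count per individual
for _ in range(generations):
    fitness = (1.0 - s) ** k
    parents = rng.choice(N, size=N, p=fitness / fitness.sum())  # selection
    k = k[parents] + rng.poisson(U, size=N)                     # inheritance + mutation

# Classic equilibrium: mean load near U/s, with a Poisson-like spread around it,
# i.e. substantial variation in "capability" with no design differences at all.
print(f"mean load {k.mean():.1f} (U/s = {U/s:.1f}), sd {k.std():.1f}")
```

Under these parameters the mean load settles near U/s = 4 with a standard deviation around 2, a persistent spread maintained purely by the mutation-selection balance, regardless of how hard it would be to improve the underlying design.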
22fea1fa-d1d6-413b-9410-779fa29db4e0
LDJnr/LessWrong-Amplify-Instruct
LessWrong
"Edit: contest closed now, will start assessing the entries. The contest I'm offering $1,000 for good questions to ask of AI Oracles. Good questions are those that are safe and useful: that allows us to get information out of the Oracle without increasing risk. To enter, put your suggestion in the comments below. The contest ends at the end[1] of the 31st of August, 2019. Oracles A perennial suggestion for a safe AI design is the Oracle AI: an AI confined to a sandbox of some sort, that interacts with the world only by answering questions. This is, of course, not safe in general; an Oracle AI can influence the world through the contents of its answers, allowing it to potentially escape the sandbox. Two of the safest designs seem to be the counterfactual Oracle, and the low bandwidth Oracle. These are detailed here, here, and here, but in short: A counterfactual Oracle is one whose objective function (or reward, or loss function) is only non-trivial in worlds where its answer is not seen by humans. Hence it has no motivation to manipulate humans through its answer. A low bandwidth Oracle is one that must select its answers off a relatively small list. Though this answer is a self-confirming prediction, the negative effects and potential for manipulation is restricted because there are only a few possible answers available. Note that both of these Oracles are designed to be episodic (they are run for single episodes, get their rewards by the end of that episode, aren't asked further questions before the episode ends, and are only motivated to best perform on that one episode), to avoid incentives to longer term manipulation. Getting useful answers The counterfactual and low bandwidth Oracles are safer than unrestricted Oracles, but this safety comes at a price. The price is that we can no longer "ask" the Oracle any question we feel like, and we certainly can't have long discussions to clarify terms and so on. For the counterfactual Oracle, the answer might not even mean anything real to us - it's about another world, that we don't inhabit. Despite this, its possible to get a surprising amount of good work out of these designs. To give one example, suppose we want to fund various one of a million projects on AI safety, but are unsure which one would perform better. We can't directly ask either Oracle, but there are indirect ways of getting advice: We could ask the low bandwidth Oracle which team A we should fund; we then choose a team B at random, and reward the Oracle if, at the end of a year, we judge A to have performed better than B. The counterfactual Oracle can answer a similar question, indirectly. We commit that, if we don't see its answer, we will select team A and team B at random and fund them for year, and compare their performance at the end of the year. We then ask for which team A[2] it expects to most consistently outperform any team B. Both these answers get around some of the restrictions by deferring to the judgement of our future or counterfactual selves, averaged across many randomised universes. But can we do better? Can we do more? Your better questions This is the purpose of this contest: for you to propose ways of using either Oracle design to get the most safe-but-useful work. So I'm offering $1,000 for interesting new questions we can ask of these Oracles. Of this: $350 for the best question to ask a counterfactual Oracle. $350 for the best question to ask a low bandwidth Oracle. 
$300 to be distributed as I see fit among the non-winning entries; I'll be mainly looking for innovative and interesting ideas that don't quite work. Exceptional rewards go to those who open up a whole new category of useful questions. Questions and criteria Put your suggested questions in the comment below. Because of the illusion of transparency, it is better to explain more rather than less (within reason). Comments that are submissions must be on their separate comment threads, start with "Submission", and you must specify which Oracle design you are submitting for. You may submit as many as you want; I will still delete them if I judge them to be spam. Anyone can comment on any submission. I may choose to ask for clarifications on your design; you may also choose to edit the submission to add clarifications (label these as edits). It may be useful for you to include details of the physical setup, what the Oracle is trying to maximise/minimise/predict and what the counterfactual behaviour of the Oracle users humans are assumed to be (in the counterfactual Oracle setup). Explanations as to how your design is safe or useful could be helpful, unless it's obvious. Some short examples can be found here. EDIT after seeing some of the answers: decide on the length of each episode, and how the outcome is calculated. The Oracle is run once an episode only (and other Oracles can't generally be used on the same problem; if you want to run multiple Oracles, you have to justify why this would work), and has to get objective/loss/reward by the end of that episode, which therefore has to be estimated in some way at that point. A note on timezones: as long as it's still the 31 of August, anywhere in the world, your submission will be counted. ↩︎ These kind of conditional questions can be answered by a counterfactual Oracle, see the paper here for more details. ↩︎" The above is a chunk of text from a post created by a user on the website LessWrong. What are some reasons why people may like or agree with what this person is saying? There could be several reasons why people may agree with or like what this person is saying: 1. Interest in AI Safety: The user's post is about the safety of AI Oracles, a topic that's of interest to many in the AI and tech community. People who are concerned about the potential risks of AI would likely appreciate the focus on safety. 2. Appreciation for Innovation: The post proposes a contest to generate innovative ways to safely and effectively use AI Oracles. People who value creativity and innovation might find this appealing. 3. Financial Incentive: The user is offering a significant financial reward for contest entries. This could attract people who are interested in the financial aspect or see it as a validation of their ideas. 4. Intellectual Challenge: The task of formulating a safe and useful question for an AI Oracle is intellectually challenging. People who enjoy problem-solving or intellectual stimulation might be drawn to this. 5. Learning Opportunity: The post provides detailed information about AI Oracles, including counterfactual and low bandwidth Oracles. People who want to learn more about these concepts might appreciate the information provided. 6. Community Engagement: The post encourages community participation and discussion. People who enjoy engaging with others in their field might appreciate this aspect. Could you elaborate on how the financial incentive offered in the contest might influence the quality and quantity of submissions? 
Certainly, the financial incentive could significantly influence both the quality and quantity of submissions in several ways: 1. Quantity: A financial reward can attract a larger number of participants. This is because it not only appeals to those intrinsically interested in the subject matter, but also to those who may be motivated by the potential for monetary gain. This could lead to a higher number of submissions. 2. Quality: The prospect of a financial reward can motivate participants to put more effort into their submissions. Participants may spend more time refining their ideas, conducting research, or consulting with others to increase their chances of winning. This could result in higher quality submissions. 3. Diversity: A financial incentive can attract a wider range of participants, including those from different backgrounds, disciplines, or perspectives. This diversity can lead to a variety of unique and innovative ideas that might not have been generated otherwise. 4. Competition: The financial incentive can stimulate a competitive environment, which can push participants to outdo each other in terms of the originality and quality of their submissions. However, it's also worth noting that while a financial incentive can increase the quantity of submissions, it doesn't always guarantee an increase in quality. Some participants might submit hastily thought-out ideas in hopes of winning the prize. It's therefore important for the contest organizers to have a robust judging process in place to ensure that the winning submissions truly meet the criteria of being safe and useful questions for AI Oracles. How might the competitive environment stimulated by the financial incentive impact the collaborative nature of idea generation and refinement in this context? A competitive environment spurred by a financial incentive can have both positive and negative impacts on the collaborative nature of idea generation and refinement. Positive Impacts: 1. Higher Engagement: Competition can drive higher engagement levels among participants. They may be more motivated to contribute, discuss, and refine ideas to increase their chances of winning. 2. Cross-Pollination of Ideas: In a competitive setting, participants often pay close attention to each other's submissions. This can lead to the cross-pollination of ideas, where one participant's idea sparks a new thought or refinement in another's. 3. Encourages Excellence: Competition can push individuals to strive for their best, which can lead to the generation of high-quality, refined ideas. Negative Impacts: 1. Reduced Sharing: On the downside, competition can sometimes discourage participants from sharing their ideas freely, for fear that others might use those ideas to their advantage. This could limit collaborative brainstorming. 2. Focus on Winning: In a highly competitive environment, the focus may shift from generating and refining the best possible ideas to simply winning the contest. This could potentially lead to strategic behavior, such as withholding feedback on others' ideas. 3. Potential for Conflict: Competition can sometimes lead to conflict, especially if the stakes are high. This could create a less cooperative environment and hinder the collaborative process. Balancing competition and collaboration in such a context is key. While competition can drive participants to produce their best work, fostering a sense of community and collaboration can lead to shared learning and the generation of more diverse and innovative ideas. 
Considering the potential impacts of a competitive environment on collaboration, how might contest organizers design the competition to foster both healthy competition and effective collaboration among participants? Balancing competition and collaboration in a contest can be a delicate task, but it is certainly achievable. Here are some strategies contest organizers could use: 1. Team Submissions: Encourage or require participants to form teams for their submissions. This fosters collaboration within teams while maintaining a competitive element between teams. 2. Staged Competition: Organize the contest in stages, with initial stages focused on idea generation and collaboration, and later stages focused on competition. This allows for a period of open sharing and refinement before the competitive element is introduced. 3. Feedback Mechanism: Implement a system where participants are rewarded for providing constructive feedback on others' ideas. This encourages collaboration and knowledge sharing, even in a competitive environment. 4. Transparency: Make the judging criteria and process transparent. This can help ensure that participants understand that the quality of ideas is paramount, discouraging strategic behavior aimed solely at winning. 5. Community Building: Foster a sense of community among participants, such as through forums or networking events. This can encourage participants to view each other as colleagues rather than just competitors. 6. Recognition: In addition to the main prizes, offer recognition for other aspects, such as the most collaborative team or the participant who provided the most constructive feedback. This can motivate participants to engage in collaborative behaviors. 7. Clear Guidelines: Provide clear guidelines that discourage uncooperative behavior and promote respectful and constructive interactions. By implementing these strategies, contest organizers can create an environment that encourages both competition and collaboration, leading to a richer array of ideas and a more engaging experience for participants.
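Stepping back to the technical core of the quoted contest post: the counterfactual Oracle design it relies on can be sketched in a few lines of toy code (my own illustration; the erasure probability, the squared-error score, and the function names are assumptions, not from the post):

```python
import random

def counterfactual_oracle_episode(oracle_predict, observe_world, p_erase=0.1):
    """One episodic round: the Oracle is scored only when its answer is erased
    (never shown), so its reward cannot depend on how humans react to it."""
    prediction = oracle_predict()
    if random.random() < p_erase:
        outcome = observe_world()              # world evolves unseen by the answer
        reward = -(prediction - outcome) ** 2  # non-trivial reward only here
        return None, reward                    # humans see nothing this round
    return prediction, 0.0                     # humans see the answer; trivial reward
```

The property the sketch tries to make concrete is that any non-trivial reward arrives only in episodes whose answers are never read, so the Oracle gains nothing by crafting answers that manipulate its readers.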
01687299-523e-4281-8de9-e0728f482783
trentmkelly/LessWrong-43k
LessWrong
Why are people unkeen on immortality that would come from technological advancements and/or AI? Nearly everyone I have talked to about the mind-blowing possibilities of a friendly ASI - people who hadn't heard about it before - has had the same reaction: the person is skeptical and rejects the idea of immortality: "Humans are made to die", "what is the value of life then?", and sometimes: "I want to die". Why does that happen? I can't understand their reaction. If you are like them, let us know your arguments.
125d5e50-3cd0-478f-b237-9d08499d6d1c
trentmkelly/LessWrong-43k
LessWrong
CogSci books Cognitive psych is obviously important to people here, so I want to point out a CogSci book thread over at reddit/r/cogsci. http://www.reddit.com/r/cogsci/comments/bmbaq/dear_rcogsci_lets_construct_a_musthave_library_of/ I would be interested in an extension of this thread here, since LW has a somewhat more computational-theory-of-mind slant.
0a4cefae-66f6-4774-bb7d-2156f3621d43
trentmkelly/LessWrong-43k
LessWrong
Newcomb's Grandfather In front of you are two boxes. Box A contains $1000. If Omega predicted you would just take Box B, Box B contains $1,000,000. If Omega predicted you would take both boxes, Box B contains nothing. Omega will also go back in time and kill your grandfather. Nobody has any idea how accurate Omega's predictions are. Does the fact that you exist prove that Omega thought you would just take one box, so Box B definitely contains the million dollars, and if you took both boxes it would imply that Omega was a terrible predictor so you might as well just take both boxes and run?
e946af43-bd8d-40c8-96a2-d4751f58937b
trentmkelly/LessWrong-43k
LessWrong
Recent Less Wrong downtime Many users may have noticed that Less Wrong experienced about 6 hours of downtime on 16/7/2011. CAUSE: The server was put under an unusual amount of load and started up a new instance to load-balance the traffic.  Unfortunately, there was a bug in the script that starts the new instance that caused it to use an inconsistent mix of old and new code.  The symptom seen by users was that any post with comments was inaccessible. RESPONSE: A hotfix was deployed as soon as the problem was detected; unfortunately, it was a Saturday, so this response time was slower than we would like.  We have since implemented a proper fix for the particular bug that caused this problem.  We are also creating some extra monitoring probes so we'll be notified promptly of any similar problems in the future. Apologies for the inconvenience.
bface2f1-2593-45eb-8dd6-3bb732304f1d
StampyAI/alignment-research-dataset/arbital
Arbital
Conceivability A hypothetical scenario is 'conceivable' or 'imaginable' when it is not *immediately* incoherent, although it may still contain some subtle flaw that makes it [logically impossible](https://arbital.com/p/). If you haven't yet checked for factors, it's conceivable to you that 91 is either a [prime number](https://arbital.com/p/) or a [composite number](https://arbital.com/p/), even though only one of these scenarios is [logically possible](https://arbital.com/p/). Whether 91 is prime or composite is a logical consequence of the definitions of 'prime' and 'composite', but it's not literally and explicitly written into our brain's representations of the definitions of 'prime' and 'composite' that 91 is a prime/composite number. If you can conceive of 91 being prime, this illustrates that, in your own mind, the property of 91-ness is not the same writing on your map as the property of composite-ness, but it doesn't demonstrate that it's logically or physically possible for 91 to be composite. If you can *conceive* of a scenario in which X is true and Y is false, this demonstrates mainly that X and Y have different [intensional definitions](https://arbital.com/p/10b) inside your own mind; you have separate concepts in your mind for X and Y. It doesn't demonstrate that a world where 'X but not Y' is logically possible, let alone physically possible, or that X and Y refer to two different things, etcetera. Labeling a scenario 'conceivable' or saying 'we can conceive that' introduces a scenario for discussion or refutation, *without* assuming in the introduction that the scenario is logically possible (let alone physically possible, or at all plausible in our world). Since many true facts are also plausible, physically possible, logically possible, and conceivable, introducing a scenario as 'conceivable' also does not imply that it is false. It means that someone found themselves able to imagine something, perhaps a true thing, perhaps without visualizing it in the complete detail that would reveal its impossibility; but if all we know at the start is that someone felt like they could imagine something, we'll call it 'conceivable' or 'imaginable'.
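As an aside (my addition, not part of the original page), the arithmetic behind the running example is quick to check; 'composite' is the logically possible branch, since 91 = 7 × 13:

```python
n = 91
divisors = [d for d in range(2, n) if n % d == 0]
print(divisors or "prime")  # prints [7, 13], so 91 is composite
```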
93d54744-8520-4cce-adce-806a1425639c
LDJnr/LessWrong-Amplify-Instruct
LessWrong
"Epistemic status: fairly confident based on my accumulated experience of debates and disagreements. I wrote this for myself as much as others.There is a conversational dynamic which I think is extremely common, a failure mode which is all too easy to fall into. Alice and Bob are debating some course of action, e.g. should they do X or Y? Alice thinks that X is very likely to result in terrible consequence R, so they should definitely opt for Y. Bob thinks that Y most definitely will cause horrific result H, so they should definitely do X. The distilled conversation goes a bit like this:Alice: “We can’t do X! That would lead to R, which is unacceptable.”Bob: “I don’t think you get it, Y results in H. You can’t think that we could allow H, do you?”Alice: “I feel like you’re not listening, we need to account for R!”Bob: “H is definitely a much worse and more real danger than R . . .”Alice is afraid of the rock (R) and Bob is afraid of the hard place (H).Possible values of X, Y, R, and H:X = more gun control; Y = less gun control; R = people unable to defend themselves and having their rights taken away; H = increased risk of mass shootings, suicides, and children shooting themselves or others.X = raising minimum wage; Y = maintaining minimum wage; R = reduction in number of jobs causing people to be fired; H = people not earning enough from their jobs to subsist.X = increase immigration; Y = reduce immigration; R = loss of jobs from local community, erosion of national culture and values, crime committed by migrants; H = humanitarian impact, loss of potential growth of the national economy.The above exchange is actually relatively good. Alice and Bob each know what they’re afraid of and have expressed that clearly. Bob even acknowledges Alice’s concern about R, but states that he thinks it’s the lesser danger. They’re at a point where they might be able to usefully double crux [1].What goes wrong?Failure to identify and articulate the fearsIf Carol has held the position that X is really bad for a long time, or if her position stems from deep System 1 models and frames, then she might struggle to articulate clearly what specifically she’s afraid that X will cause. She might find any attempts by others to clarify to be unsatisfying, and possibly threatening because any incorrect articulation of your fear is often worse than none at all. Dylan might come along and say “you don’t like X because you’re afraid of P, but P won’t happen, so you should be okay with X.” This could be scary to Carol who feels her fears are just being played down so they can be dismissed.If both Carol and Dylan are unable to voice what they’re afraid of, the resulting conversation can be Carol and Dylan simply shouting each other about how terrible and evil they think the other’s position is. It becomes one person’s cached store of fear and horror pitted against another’s.Failure to acknowledge the other person’s fear while being desperate for yours to be acknowledgedCaught up in her dread of R, Alice can become insistent that Bob acknowledges the extreme danger she sees. Bob failing to do so is scary - perhaps he will advocate for X not realizing the tremendous harm he will cause.Alice’s fear of Bob’s position can be overriding. It’s easy for her to feel the conversation can’t proceed until Bob can be made to realize what his position will result in. Indeed, if Bob can’t see the extreme danger of X leading to R, then possibly he can’t be reasoned with at all. 
Alice will focus all her attention, energy, and emotion on trying to make Bob see reason here.

This is not conducive to Alice listening to Bob. If Bob isn’t acknowledging R, then it’s easy to see all his words as a perverse and willful refusal to acknowledge R.

But it’s worse! Bob is in exactly the same state as Alice. He is dreadfully afraid that Alice isn’t worried about H. That she’d gladly and foolishly let H happen to avoid R. Does she just not care about H happening? Will she sacrifice it all so readily? How can he debate with someone with such distorted values? Someone who keeps ignoring his outright statements that Y leads to H!

You easily get two people yelling their fears at each other, unwilling to listen until the other person acknowledges the badness they have been advocating for. The conversation goes nowhere. Tones start out calm and civil, but rapidly become outraged at the wanton obtuseness of their infuriating interlocutor.

Refusal to acknowledge the other person’s fear because Arguments Are Soldiers

> Politics is the mind-killer. Arguments are soldiers. Once you know which side you’re on, you must support all arguments of that side, and attack all arguments that appear to favor the enemy side; otherwise it’s like stabbing your soldiers in the back. If you abide within that pattern, policy debates will also appear one-sided to you—the costs and drawbacks of your favored policy are enemy soldiers, to be attacked by any means necessary. - Policy Debates Should Not Appear One-Sided

Even if you understand what your interlocutor is afraid of and you think there’s something to it, it can be tempting not to acknowledge this. Acknowledging their fear can feel like putting points on the board for them and their position. So you deny them this, not willing to cede any ground. This is bad, don’t do it kids. It’s possible that your concern is by far the greater one, but that doesn’t mean their worries aren’t legitimate too. (Or maybe they’re not legitimate, but you can at least show you understand what they are feeling.) The discussion can proceed if you at least acknowledge the fact that they have their own fear too.

What to do?

If you think you might be headed for a Rock vs Hard Place dynamic, I’d suggest trying the following:

- Understand what you are afraid of. What is the outcome you think is at risk of happening?
- Understand what they are afraid of. Know what their fear is.
- Acknowledge their fear. Acknowledge that something they think is very important is at stake in your discussion. [See this important comment by Kaj on what this requires.]

Step 1 is simply to acknowledge the fear. The following steps depend on your own beliefs.

You might say “I too am afraid of R”, but:

- I have thought about this and believe that X won’t cause R / can be made to avoid R.
- I think that the danger of R is outweighed by the worse danger of H.
- I think that there are ways to minimize the risk or damage that are good enough.

You might say “I don’t share your fear of R”, because:

- I think you are mistaken that R is in fact bad.

Don’t fall into Arguments Are Soldiers/Policy Debates are One-Sided traps. Realize that the fear most salient to you will derive from your experience, background, situation, and models. Someone might reasonably be afraid in the other direction, and their situation has made that much clearer to them. There’s a chance you can learn from them.

These steps can be taken somewhat unilaterally. You can acknowledge someone else’s fear without them acknowledging yours.
Possibly after you’ve done so, they can relax a little and will be open to acknowledging that you have your own fear.

If someone is completely unwilling to consider that their fear is misplaced, mistaken, or outweighed, then the conversation may struggle to go anywhere. Best to realize this quickly and find an alternative path forward than end up in circular conversations that only generate fear and hostility.

That said, I’m hopeful that once people are able to acknowledge their own and others’ fears, productive double cruxing can happen about where the balance lies between the rock and the hard place.

Bah!

Appendix: On a more positive note

While I’ve framed this post in terms of fears and potentially negative outcomes, it applies also in cases where people are disagreeing about different policies that would result in different benefits. Just as gradient ascent and gradient descent are effectively the same things, sometimes people are fighting over their fears of not netting some positive benefit.

[1] Double-Crux Resources: Double Crux — A Strategy for Resolving Disagreement; Musings on Double Crux (and "Productive Disagreement"); A Concrete Multi-Step Variant of Double Crux I Have Used Successfully; *Another* Double Crux Framework"

The above is a chunk of text from a post created by a user on the website LessWrong. Can you delve deeper into the concept of "Double Crux" as mentioned in your response and explain how it can be effectively used to resolve disagreements?

Double Crux is a technique for resolving disagreements, particularly those that seem intractable or emotionally charged. It was developed within the rationalist community and is frequently used in debates and discussions.

The Double Crux technique is based on the idea that disagreements often stem from different underlying beliefs or assumptions. These are the "cruxes" of the disagreement. When two people disagree, each person has a crux, a statement or idea that, if it were changed, would change their mind about the topic of disagreement.

Here's a step-by-step breakdown of how the Double Crux technique works:

1. **Identify the Cruxes**: Each person involved in the disagreement identifies their crux or cruxes. These are the beliefs or assumptions that, if they were proven wrong, would change their mind about the disagreement. For example, in a debate about climate change, one person's crux might be "I believe that human activity is significantly contributing to global warming."

2. **Find the Double Crux**: The next step is to find the "double crux", the crux that both parties share. This is a point of disagreement that, if resolved, would resolve the overall disagreement. For example, in the climate change debate, the double crux might be the question of whether human activity is significantly contributing to global warming.

3. **Explore the Double Crux**: Once the double crux has been identified, both parties explore it in depth. They present their evidence, discuss their reasoning, and try to understand each other's perspective. The goal is not to "win" the argument, but to understand the other person's point of view and to update their own beliefs if necessary.

4. **Resolve the Disagreement**: If the discussion is successful, one or both parties will change their mind, resolving the disagreement. If not, they will at least have a better understanding of each other's perspectives and the underlying issues.
The Double Crux technique is a powerful tool for resolving disagreements because it focuses on understanding and exploring the underlying issues, rather than trying to "win" the argument. It promotes empathy, understanding, and intellectual growth. However, it requires both parties to be open-minded, willing to change their beliefs, and committed to the process.

In what ways could the Double Crux technique be adapted or modified to better handle disagreements that involve more than two parties, each with their own unique cruxes?

Applying the Double Crux technique to multi-party disagreements requires a more complex approach, but it's definitely possible. Here are some adaptations that could be made:

1. **Round-Robin Exploration**: Instead of trying to find a single double crux, each party could take turns exploring their crux with the group. This would allow everyone to understand each other's perspectives and could potentially reveal commonalities that weren't initially apparent.

2. **Multi-Crux Identification**: In a multi-party disagreement, there might be multiple shared cruxes. Each of these could be identified and explored in turn. This would likely be a more time-consuming process, but it could also lead to a more comprehensive resolution.

3. **Subgrouping**: If the group is large, it might be beneficial to break it into smaller subgroups for discussion. Each subgroup could explore their own double crux, and then the group could reconvene to share their findings and explore any remaining disagreements.

4. **Facilitator Role**: In a multi-party disagreement, it can be helpful to have a neutral facilitator. This person's role would be to guide the discussion, ensure everyone's cruxes are heard and understood, and help identify the shared cruxes.

5. **Consensus Building**: Instead of trying to change everyone's mind to a single viewpoint, the goal could be to build a consensus that everyone can live with. This might involve compromise, or it might involve finding a new solution that addresses everyone's cruxes.

These adaptations would require a higher level of cooperation and open-mindedness from all parties involved. However, they could potentially lead to a more satisfying resolution for everyone involved.
134c4ba8-36f2-44be-882c-668edfb4d782
trentmkelly/LessWrong-43k
LessWrong
What's the simplest concrete unsolved problem in AI alignment? In your preferred area of AI alignment, what is the simplest concrete unsolved problem? By "simplest", ideally the problem has been solved when any of the conditions are weakened. However, this isn't always possible, so a simpler solved version of the problem could also work (e.g., Goldbach's weak conjecture is known to be true.) By "concrete", I mean something where given the statement of the problem and a proposed solution, a neutral third party would be able to consistently determine whether it's solved or not (e.g., not "explain [some theory] in a good way").
993beb59-b54f-4ce5-9b51-ff2de3d436fd
StampyAI/alignment-research-dataset/alignmentforum
Alignment Forum
Catastrophic Regressional Goodhart: Appendix This is a more technical followup to the [last post](https://www.lesswrong.com/posts/fuSaKr6t6Zuh6GKaQ/when-is-goodhart-catastrophic), putting precise bounds on when regressional Goodhart leads to failure or not. We'll first show conditions under which optimization for a proxy fails, and then some conditions under which it succeeds. (The second proof will be substantially easier.) Related work ============ In addition to the related work sections of the [previous post](https://www.lesswrong.com/posts/fuSaKr6t6Zuh6GKaQ/when-is-goodhart-catastrophic), this post makes reference to the textbook [*An Introduction to Heavy-Tailed and Subexponential Distributions*](https://link.springer.com/book/10.1007/978-1-4614-7101-1), by Foss et al. Many similar results about random variables are present in the textbook, though we haven't seen this posts's results elsewhere in the literature before. We mostly adopt their notation here, and cite a few helpful lemmas. Main result: Conditions for catastrophic Goodhart ================================================= Suppose that X.mjx-chtml {display: inline-block; line-height: 0; text-indent: 0; text-align: left; text-transform: none; font-style: normal; font-weight: normal; font-size: 100%; font-size-adjust: none; letter-spacing: normal; word-wrap: normal; word-spacing: normal; white-space: nowrap; float: none; direction: ltr; max-width: none; max-height: none; min-width: 0; min-height: 0; border: 0; margin: 0; padding: 1px 0} .MJXc-display {display: block; text-align: center; margin: 1em 0; padding: 0} .mjx-chtml[tabindex]:focus, body :focus .mjx-chtml[tabindex] {display: inline-table} .mjx-full-width {text-align: center; display: table-cell!important; width: 10000em} .mjx-math {display: inline-block; border-collapse: separate; border-spacing: 0} .mjx-math \* {display: inline-block; -webkit-box-sizing: content-box!important; -moz-box-sizing: content-box!important; box-sizing: content-box!important; text-align: left} .mjx-numerator {display: block; text-align: center} .mjx-denominator {display: block; text-align: center} .MJXc-stacked {height: 0; position: relative} .MJXc-stacked > \* {position: absolute} .MJXc-bevelled > \* {display: inline-block} .mjx-stack {display: inline-block} .mjx-op {display: block} .mjx-under {display: table-cell} .mjx-over {display: block} .mjx-over > \* {padding-left: 0px!important; padding-right: 0px!important} .mjx-under > \* {padding-left: 0px!important; padding-right: 0px!important} .mjx-stack > .mjx-sup {display: block} .mjx-stack > .mjx-sub {display: block} .mjx-prestack > .mjx-presup {display: block} .mjx-prestack > .mjx-presub {display: block} .mjx-delim-h > .mjx-char {display: inline-block} .mjx-surd {vertical-align: top} .mjx-surd + .mjx-box {display: inline-flex} .mjx-mphantom \* {visibility: hidden} .mjx-merror {background-color: #FFFF88; color: #CC0000; border: 1px solid #CC0000; padding: 2px 3px; font-style: normal; font-size: 90%} .mjx-annotation-xml {line-height: normal} .mjx-menclose > svg {fill: none; stroke: currentColor; overflow: visible} .mjx-mtr {display: table-row} .mjx-mlabeledtr {display: table-row} .mjx-mtd {display: table-cell; text-align: center} .mjx-label {display: table-row} .mjx-box {display: inline-block} .mjx-block {display: block} .mjx-span {display: inline} .mjx-char {display: block; white-space: pre} .mjx-itable {display: inline-table; width: auto} .mjx-row {display: table-row} .mjx-cell {display: table-cell} .mjx-table {display: table; width: 100%} 
.mjx-line {display: block; height: 0} .mjx-strut {width: 0; padding-top: 1em} .mjx-vsize {width: 0} .MJXc-space1 {margin-left: .167em} .MJXc-space2 {margin-left: .222em} .MJXc-space3 {margin-left: .278em} .mjx-test.mjx-test-display {display: table!important} .mjx-test.mjx-test-inline {display: inline!important; margin-right: -1px} .mjx-test.mjx-test-default {display: block!important; clear: both} .mjx-ex-box {display: inline-block!important; position: absolute; overflow: hidden; min-height: 0; max-height: none; padding: 0; border: 0; margin: 0; width: 1px; height: 60ex} .mjx-test-inline .mjx-left-box {display: inline-block; width: 0; float: left} .mjx-test-inline .mjx-right-box {display: inline-block; width: 0; float: right} .mjx-test-display .mjx-right-box {display: table-cell!important; width: 10000em!important; min-width: 0; max-width: none; padding: 0; border: 0; margin: 0} .MJXc-TeX-unknown-R {font-family: monospace; font-style: normal; font-weight: normal} .MJXc-TeX-unknown-I {font-family: monospace; font-style: italic; font-weight: normal} .MJXc-TeX-unknown-B {font-family: monospace; font-style: normal; font-weight: bold} .MJXc-TeX-unknown-BI {font-family: monospace; font-style: italic; font-weight: bold} .MJXc-TeX-ams-R {font-family: MJXc-TeX-ams-R,MJXc-TeX-ams-Rw} .MJXc-TeX-cal-B {font-family: MJXc-TeX-cal-B,MJXc-TeX-cal-Bx,MJXc-TeX-cal-Bw} .MJXc-TeX-frak-R {font-family: MJXc-TeX-frak-R,MJXc-TeX-frak-Rw} .MJXc-TeX-frak-B {font-family: MJXc-TeX-frak-B,MJXc-TeX-frak-Bx,MJXc-TeX-frak-Bw} .MJXc-TeX-math-BI {font-family: MJXc-TeX-math-BI,MJXc-TeX-math-BIx,MJXc-TeX-math-BIw} .MJXc-TeX-sans-R {font-family: MJXc-TeX-sans-R,MJXc-TeX-sans-Rw} .MJXc-TeX-sans-B {font-family: MJXc-TeX-sans-B,MJXc-TeX-sans-Bx,MJXc-TeX-sans-Bw} .MJXc-TeX-sans-I {font-family: MJXc-TeX-sans-I,MJXc-TeX-sans-Ix,MJXc-TeX-sans-Iw} .MJXc-TeX-script-R {font-family: MJXc-TeX-script-R,MJXc-TeX-script-Rw} .MJXc-TeX-type-R {font-family: MJXc-TeX-type-R,MJXc-TeX-type-Rw} .MJXc-TeX-cal-R {font-family: MJXc-TeX-cal-R,MJXc-TeX-cal-Rw} .MJXc-TeX-main-B {font-family: MJXc-TeX-main-B,MJXc-TeX-main-Bx,MJXc-TeX-main-Bw} .MJXc-TeX-main-I {font-family: MJXc-TeX-main-I,MJXc-TeX-main-Ix,MJXc-TeX-main-Iw} .MJXc-TeX-main-R {font-family: MJXc-TeX-main-R,MJXc-TeX-main-Rw} .MJXc-TeX-math-I {font-family: MJXc-TeX-math-I,MJXc-TeX-math-Ix,MJXc-TeX-math-Iw} .MJXc-TeX-size1-R {font-family: MJXc-TeX-size1-R,MJXc-TeX-size1-Rw} .MJXc-TeX-size2-R {font-family: MJXc-TeX-size2-R,MJXc-TeX-size2-Rw} .MJXc-TeX-size3-R {font-family: MJXc-TeX-size3-R,MJXc-TeX-size3-Rw} .MJXc-TeX-size4-R {font-family: MJXc-TeX-size4-R,MJXc-TeX-size4-Rw} .MJXc-TeX-vec-R {font-family: MJXc-TeX-vec-R,MJXc-TeX-vec-Rw} .MJXc-TeX-vec-B {font-family: MJXc-TeX-vec-B,MJXc-TeX-vec-Bx,MJXc-TeX-vec-Bw} @font-face {font-family: MJXc-TeX-ams-R; src: local('MathJax\_AMS'), local('MathJax\_AMS-Regular')} @font-face {font-family: MJXc-TeX-ams-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_AMS-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_AMS-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_AMS-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-cal-B; src: local('MathJax\_Caligraphic Bold'), local('MathJax\_Caligraphic-Bold')} @font-face {font-family: MJXc-TeX-cal-Bx; src: local('MathJax\_Caligraphic'); font-weight: bold} @font-face {font-family: MJXc-TeX-cal-Bw; src 
/\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Bold.otf') format('opentype')} @font-face {font-family: MJXc-TeX-frak-R; src: local('MathJax\_Fraktur'), local('MathJax\_Fraktur-Regular')} @font-face {font-family: MJXc-TeX-frak-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-frak-B; src: local('MathJax\_Fraktur Bold'), local('MathJax\_Fraktur-Bold')} @font-face {font-family: MJXc-TeX-frak-Bx; src: local('MathJax\_Fraktur'); font-weight: bold} @font-face {font-family: MJXc-TeX-frak-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Bold.otf') format('opentype')} @font-face {font-family: MJXc-TeX-math-BI; src: local('MathJax\_Math BoldItalic'), local('MathJax\_Math-BoldItalic')} @font-face {font-family: MJXc-TeX-math-BIx; src: local('MathJax\_Math'); font-weight: bold; font-style: italic} @font-face {font-family: MJXc-TeX-math-BIw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-BoldItalic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-BoldItalic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-BoldItalic.otf') format('opentype')} @font-face {font-family: MJXc-TeX-sans-R; src: local('MathJax\_SansSerif'), local('MathJax\_SansSerif-Regular')} @font-face {font-family: MJXc-TeX-sans-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-sans-B; src: local('MathJax\_SansSerif Bold'), local('MathJax\_SansSerif-Bold')} @font-face {font-family: MJXc-TeX-sans-Bx; src: local('MathJax\_SansSerif'); font-weight: bold} @font-face {font-family: MJXc-TeX-sans-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Bold.otf') format('opentype')} @font-face {font-family: MJXc-TeX-sans-I; src: 
local('MathJax\_SansSerif Italic'), local('MathJax\_SansSerif-Italic')} @font-face {font-family: MJXc-TeX-sans-Ix; src: local('MathJax\_SansSerif'); font-style: italic} @font-face {font-family: MJXc-TeX-sans-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Italic.otf') format('opentype')} @font-face {font-family: MJXc-TeX-script-R; src: local('MathJax\_Script'), local('MathJax\_Script-Regular')} @font-face {font-family: MJXc-TeX-script-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Script-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Script-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Script-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-type-R; src: local('MathJax\_Typewriter'), local('MathJax\_Typewriter-Regular')} @font-face {font-family: MJXc-TeX-type-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Typewriter-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Typewriter-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Typewriter-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-cal-R; src: local('MathJax\_Caligraphic'), local('MathJax\_Caligraphic-Regular')} @font-face {font-family: MJXc-TeX-cal-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-main-B; src: local('MathJax\_Main Bold'), local('MathJax\_Main-Bold')} @font-face {font-family: MJXc-TeX-main-Bx; src: local('MathJax\_Main'); font-weight: bold} @font-face {font-family: MJXc-TeX-main-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Bold.otf') format('opentype')} @font-face {font-family: MJXc-TeX-main-I; src: local('MathJax\_Main Italic'), local('MathJax\_Main-Italic')} @font-face {font-family: MJXc-TeX-main-Ix; src: local('MathJax\_Main'); font-style: italic} @font-face {font-family: MJXc-TeX-main-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Italic.otf') 
format('opentype')} @font-face {font-family: MJXc-TeX-main-R; src: local('MathJax\_Main'), local('MathJax\_Main-Regular')} @font-face {font-family: MJXc-TeX-main-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-math-I; src: local('MathJax\_Math Italic'), local('MathJax\_Math-Italic')} @font-face {font-family: MJXc-TeX-math-Ix; src: local('MathJax\_Math'); font-style: italic} @font-face {font-family: MJXc-TeX-math-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-Italic.otf') format('opentype')} @font-face {font-family: MJXc-TeX-size1-R; src: local('MathJax\_Size1'), local('MathJax\_Size1-Regular')} @font-face {font-family: MJXc-TeX-size1-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size1-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size1-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size1-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-size2-R; src: local('MathJax\_Size2'), local('MathJax\_Size2-Regular')} @font-face {font-family: MJXc-TeX-size2-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size2-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size2-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size2-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-size3-R; src: local('MathJax\_Size3'), local('MathJax\_Size3-Regular')} @font-face {font-family: MJXc-TeX-size3-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size3-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size3-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size3-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-size4-R; src: local('MathJax\_Size4'), local('MathJax\_Size4-Regular')} @font-face {font-family: MJXc-TeX-size4-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size4-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size4-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size4-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-vec-R; src: local('MathJax\_Vector'), local('MathJax\_Vector-Regular')} @font-face {font-family: MJXc-TeX-vec-Rw; src /\*1\*/: 
Suppose $X$ and $V$ are independent real-valued random variables. We're going to show, roughly, that if

* $X$ is [subexponential](https://en.wikipedia.org/wiki/Heavy-tailed_distribution#Subexponential_distributions) (a slightly stronger property than being heavy-tailed).
* $V$ has lighter tails than $X$ by more than a linear factor, meaning that the ratio of the tails of $X$ to the tails of $V$ grows superlinearly.[[1]](#fn4x39dj58y2f)

then $\lim_{t\to\infty} E[V \mid X+V \ge t] = E[V]$. Less formally, we're saying something like "if it requires relatively little selection pressure on X to get more of X and asymptotically more selection pressure on V to get more of V, then applying very strong optimization towards X+V will not get you even a little bit of optimization towards V - all the optimization power will go towards X."

Proof sketch and intuitions
---------------------------

The conditional expectation $E[V \mid X+V>t]$ is given by $\frac{\int_{-\infty}^{\infty} v f_V(v)\Pr(X>t-v)}{\int_{-\infty}^{\infty} f_V(v)\Pr(X>t-v)}$,[[2]](#fnhctf72wo2mq) and we divide the integral in the numerator into 4 regions, showing that each region's effect on the conditional expectation of $V$ is similar to that of the corresponding region in the unconditional expectation $E[V]$. The regions are defined in terms of a slow-growing function $h(t):\mathbb{R}\to\mathbb{R}_{\ge 0}$ such that the fiddly bounds on different pieces of the proof work out. Roughly, we want it to go to infinity so that $\lvert V\rvert$ is likely to be less than $h(t)$ in the limit, but grow slowly enough that the shape of $V$'s distribution within the interval $[-h(t),h(t)]$ doesn't change much after conditioning.

In the table below, we abbreviate the condition $X+V>t$ as $c$.

| Region | Why its effect on $E[V \mid c]$ is small | Explanation |
| --- | --- | --- |
| $r_1=(-\infty,-h(t)]$ | $P[V\in r_1 \mid c]$ is too low | In this region, $\lvert V\rvert>h(t)$ and $X>t+h(t)$, both of which are unlikely. |
| $r_2=(-h(t),h(t))$ | $E[V \mid V\in r_2, c]\approx E[V \mid V\in r_2]$ | The tail distribution of $X$ is too flat to change the shape of $V$'s distribution within this region. |
| $r_3=[h(t),t-h(t)]$ | $P[V\in r_3 \mid c]$ is low, and $V<t$. | There are increasing returns to each bit of optimization for $X$, so it's unlikely that both $X$ and $V$ have moderate values.[[3]](#fnj9su3ciyyp) |
| $r_4=(t-h(t),\infty)$ | $P[V\in r_4 \mid c]$ is too low | $X$ is heavier-tailed than $V$, so the condition that $V>t-h(t)$ is much less likely than $X>t-h(t)$ in $r_2$. |
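As a quick empirical sanity check, here's a minimal Monte Carlo sketch of our own (the distribution choices are ours, picked to satisfy the two conditions above: $X$ is Pareto with tail $\bar F_X(t)=1/t$, which is subexponential, and $V$ is standard normal, whose tail is lighter by far more than a linear factor):

```python
# Minimal Monte Carlo sketch (ours, not part of the proof): with X heavy-tailed
# and V much lighter-tailed, conditioning on a large sum X + V >= t should
# leave E[V | X+V >= t] near the unconditional mean E[V] = 0.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000_000
X = rng.pareto(1.0, size=n) + 1.0   # Pareto with survival function 1/t on [1, inf)
V = rng.normal(0.0, 1.0, size=n)    # standard normal, far lighter tails than X

for t in [10, 30, 100, 300]:
    mask = X + V >= t
    print(f"t = {t:3d}: {mask.sum():7d} samples, E[V | X+V >= t] ~ {V[mask].mean():+.3f}")
```

With these choices the conditional mean should hover near 0 even as $t$ grows; essentially all of the conditioning is absorbed by $X$.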
Here's a diagram showing the region boundaries at $-h(t)$, $h(t)$, and $t-h(t)$ in an example where $t=25$ and $h(t)=4$, along with a negative log plot of the relevant distribution:

![](https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/fuSaKr6t6Zuh6GKaQ/xhagjoftech0vdnm1urg)

Note that up to a constant vertical shift of normalization, the green curve is the pointwise sum of the blue and orange curves.

Full proof
----------

To be more precise, we're going to make the following definitions and assumptions:

* Let $f_V(v)$ be the PDF of $V$ at the value $v$. We assume for convenience that $f_V$ exists, is integrable, etc., though we suspect that this isn't necessary, and that one could work through a similar proof just referring to the tails of $V$. We won't make this assumption for $X$.
* Let $F_X(x)=\Pr(X\le x)$ and $\bar F_X(x)=\Pr(X>x)$, similarly for $F_V$ and $\bar F_V$.
* Assume $V$ has a finite mean: $\int_{-\infty}^{\infty} v f_V(v)\,dv$ converges absolutely.
* $X$ is [subexponential](https://en.wikipedia.org/wiki/Heavy-tailed_distribution#Subexponential_distributions).
  + Formally, this means that $\lim_{x\to\infty}\frac{\Pr(X_1+X_2>x)}{\Pr(X>x)}=2$.
  + This happens roughly whenever $X$ has tails that are heavier than $e^{-cx}$ for any $c$ and is reasonably well-behaved; counterexamples to the claim "long-tailed implies subexponential" exist, but they're nontrivial to exhibit.
  + Examples of subexponential distributions include log-normal distributions, anything that decays like a power law, the Pareto distribution, and distributions with tails asymptotic to $e^{-x^a}$ for any $0<a<1$.
* We require for $V$ that its tail function is substantially lighter than $X$'s, namely that $\lim_{t\to\infty}\frac{t^p\,\bar F_V(t)}{\bar F_X(t)}=0$ for some $p>1$.[[1]](#fn4x39dj58y2f)
  + This implies that $\bar F_V(t)=O(\bar F_X(t)/t)$.

With all that out of the way, we can move on to the proof.

The unnormalized PDF of $V$ conditioned on $X+V\ge t$ is given by $f_V(v)\,\bar F_X(t-v)$. Its expectation is given by $\frac{\int_{-\infty}^{\infty} v f_V(v)\bar F_X(t-v)}{\int_{-\infty}^{\infty} f_V(v)\bar F_X(t-v)}$. Meanwhile, the unconditional expectation of $V$ is given by $\int_{-\infty}^{\infty} v f_V(v)$. We'd like to show that these two expectations are equal in the limit for large $t$. To do this, we'll introduce $Q(v)=\frac{\bar F_X(t-v)}{\bar F_X(t)}$. (More pedantically, this should really be $Q_t(v)$, which we'll occasionally use where it's helpful to remember that this is a function of $t$.) For a given value of $t$, $Q(v)$ is just a scaled version of $\bar F_X(t-v)$, so the conditional expectation of $V$ is given by $\frac{\int_{-\infty}^{\infty} v f_V(v) Q(v)}{\int_{-\infty}^{\infty} f_V(v) Q(v)}$. But because $Q(0)=1$, the numerator and denominator of this fraction are (for small $v$) close to the unconditional expectation and 1, respectively. We'll aim to show that for all $\epsilon>0$, we have for sufficiently large $t$ that $\left|\int_{-\infty}^{\infty} v f_V(v) Q_t(v) - \int_{-\infty}^{\infty} v f_V(v)\right| < \epsilon$ and $\int_{-\infty}^{\infty} f_V(v) Q_t(v) \in [1-\epsilon, 1+\epsilon]$, which implies (exercise) that the two expectations have limiting difference zero. But first we need some lemmas.

Lemmas
------

**Lemma 1:** There is $h(t)$ depending on $F_X$ such that:

* (a) $\lim_{t\to\infty} h(t)=\infty$
* (b) $\lim_{t\to\infty} t-h(t)=\infty$
* (c) $\lim_{t\to\infty}\frac{\bar F_X(t-h(t))}{\bar F_X(t)}=1$
* (d) $\lim_{t\to\infty}\sup_{|v|\le h(t)}|Q(v,t)-1|=0$.

Proof: Lemma 2.19 from Foss implies that if $X$ is [long-tailed](https://en.wikipedia.org/wiki/Heavy-tailed_distribution#Definition_of_long-tailed_distribution) (which it is, because subexponential implies long-tailed), then there is $h(t)$ such that condition (a) holds and $\bar F_X$ is $h$-insensitive; by Proposition 2.20 we can take $h$ such that $h(t)\le t/2$ for sufficiently large $t$, implying condition (b). Conditions (c) and (d) follow from being $h$-insensitive.
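As a concrete instance of Lemma 1 (our example, not one from Foss): if $X$ is Pareto with $\bar F_X(t)=t^{-\alpha}$, then $h(t)=\sqrt t$ works, since $h(t)\to\infty$, $t-\sqrt t\to\infty$, and

$$\frac{\bar F_X(t-\sqrt t)}{\bar F_X(t)} = \left(1-\frac{1}{\sqrt t}\right)^{-\alpha} \to 1,$$

while for $|v|\le\sqrt t$ the ratio $Q(v,t)=(1-v/t)^{-\alpha}$ is squeezed between $(1+1/\sqrt t)^{-\alpha}$ and $(1-1/\sqrt t)^{-\alpha}$, both of which tend to 1, giving condition (d).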
**Lemma 2:** Suppose that $F_X$ is whole-line subexponential and $h$ is chosen as in Lemma 1. Also suppose that $\bar F_V(t)=O(\bar F_X(t)/t)$. Then $\Pr[X+V>t,\ V>h(t),\ X>h(t)]=o(\bar F_X(t)/t)$.

Proof: This is a slight variation on lemma 3.8 from [1], and follows from the proof of Lemma 2.37. Lemma 2.37 states that

![](https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/fuSaKr6t6Zuh6GKaQ/yskqwyxz5lxckegvcili)

but it is actually proved that

$P\{\xi_1+\eta_1>x,\ \xi_1>h(x),\ \eta_1>h(x)\} \le \sup_{z>h(x)}\frac{\bar F_1(z)}{\bar F_2(z)}\cdot\sup_{z>h(x)}\frac{\bar G_1(z)}{\bar G_2(z)}\cdot P\{\xi_2+\eta_2>x,\ \xi_2>h(x),\ \eta_2>h(x)\}.$

If we let $F_1=F_V$, $F_2=G_1=G_2=F_X$, then we get

$P\{X+V>t,\ X>h(t),\ V>h(t)\} \le \sup_{z>h(t)}\frac{\bar F_V(z)}{\bar F_X(z)}\,\sup_{z>h(t)}\frac{\bar F_X(z)}{\bar F_X(z)}\,P\{X+X'>t,\ X>h(t),\ X'>h(t)\} = \sup_{z>h(t)}\frac{\bar F_V(z)}{\bar F_X(z)}\,P\{X+X'>t,\ X>h(t),\ X'>h(t)\}$

where $X, X'\sim F_X$. Multiplying by $t$, we have

$t\,P\{X+V>t,\ X>h(t),\ V>h(t)\} \le \sup_{z>h(t)}\frac{t\,\bar F_V(z)}{\bar F_X(z)}\,P\{X+X'>t,\ X>h(t),\ X'>h(t)\},$

and because $h(t)\to\infty$ as $t\to\infty$ and $\bar F_V(t)=O(\bar F_X(t)/t)$, we can say that for some $c<\infty$, $\lim_{t\to\infty}\sup_{z>h(t)}\frac{t\,\bar F_V(z)}{\bar F_X(z)}<c$. Therefore for sufficiently large $t$,

$P\{X+V>t,\ X>h(t),\ V>h(t)\} \le \frac{c}{t}\,P\{X+X'>t,\ X>h(t),\ X'>h(t)\}.$

By Theorem 3.6, $P\{X+X'>t,\ X>h(t),\ X'>h(t)\}$ is $o(\bar F_X(t))$, so the LHS is $o(\bar F_X(t)/t)$ as desired.

Bounds on the numerator
-----------------------

We want to show, for arbitrary $\epsilon>0$, that $\left|\int_{-\infty}^{\infty} v f_V(v) Q(v) - \int_{-\infty}^{\infty} v f_V(v)\right| < \epsilon$ in the limit as $t\to\infty$. Since

$\left|\int_{-\infty}^{\infty} v f_V(v) Q(v) - \int_{-\infty}^{\infty} v f_V(v)\right| \le \int_{-\infty}^{\infty} |v f_V(v)(Q(v)-1)| = \int_{-\infty}^{\infty} |v|\cdot f_V(v)\cdot|Q(v)-1|$

it will suffice to show that the latter quantity is less than $\epsilon$ for large $t$. We're going to show that $\int_{-\infty}^{\infty} |v|\cdot f_V(v)\cdot|Q(v)-1|$ is small by showing that the integral gets arbitrarily small on each of four pieces: $(-\infty,-h(t)]$, $(-h(t),h(t))$, $[h(t),t-h(t)]$, and $(t-h(t),\infty)$. We'll handle these case by case (they'll get monotonically trickier).

### Region 1: $(-\infty,-h(t)]$

Since $\int_{-\infty}^{\infty} v f_V(v)$ is absolutely convergent, for sufficiently large $t$ we will have $\int_{-\infty}^{-h(t)} |v| f_V(v) < \epsilon$, since $h(t)$ goes to infinity by Lemma 1(a). Since $Q(v)$ is monotonically increasing and $Q(0)=1$, we know that in this interval $|Q(v)-1|=1-Q(v)$. So we have

$\int_{-\infty}^{-h(t)} |v|\cdot f_V(v)\cdot|Q(v)-1| = \int_{-\infty}^{-h(t)} |v| f_V(v)(1-Q(v)) < \int_{-\infty}^{-h(t)} |v| f_V(v) < \epsilon$

as desired.

### Region 2: $(-h(t),h(t))$

By Lemma 1(d), $h$ is such that for sufficiently large $t$, $|Q(v)-1| < \frac{\epsilon}{\int_{-\infty}^{\infty}|v| f_V(v)}$ on the interval $[-h(t),h(t)]$. (Note that the value of this upper bound depends only on $V$ and $\epsilon$, not on $t$ or $h$.) So we have

$\int_{-h(t)}^{h(t)} |v| f_V(v)|Q(v)-1| < \frac{\epsilon}{\int_{-\infty}^{\infty}|v| f_V(v)}\int_{-h(t)}^{h(t)} |v| f_V(v) < \frac{\epsilon}{\int_{-\infty}^{\infty}|v| f_V(v)}\int_{-\infty}^{\infty}|v| f_V(v) = \epsilon.$

### Region 3: $[h(t),t-h(t)]$

For the third part, we'd like to show that $\int_{h(t)}^{t-h(t)} v f_V(v)(Q(v)-1) < \epsilon$. Since

$\int_{h(t)}^{t-h(t)} v f_V(v)(Q(v)-1) < \int_{h(t)}^{t-h(t)} t f_V(v) Q(v) = \frac{t}{\bar F_X(t)}\int_{h(t)}^{t-h(t)} f_V(v)\bar F_X(t-v)$

it would suffice to show that the latter expression becomes less than $\epsilon$ for large $t$, or equivalently that $\int_{h(t)}^{t-h(t)} f_V(v)\bar F_X(t-v) = o\!\left(\frac{\bar F_X(t)}{t}\right)$. The LHS in this expression is the unconditional probability that $X+V>t$ and $h(t)<V<t-h(t)$, but this event implies $X+V>t$, $V>h(t)$, and $X>h(t)$. So we can write

$\int_{h(t)}^{t-h(t)} f_V(v)\bar F_X(t-v) = \Pr[X+V>t,\ h(t)<V<t-h(t)] < \Pr[X+V>t,\ V>h(t),\ X>h(t)] = o(\bar F_X(t)/t)$

by Lemma 2.

### Region 4: $(t-h(t),\infty)$

For the fourth part, we'd like to show that $\int_{t-h(t)}^{\infty} v f_V(v) Q(v)\to 0$ for large $t$. Since $Q(v)=\frac{\bar F_X(t-v)}{\bar F_X(t)} < \frac{1}{\bar F_X(t)}$, it would suffice to show $\int_{t-h(t)}^{\infty} v f_V(v) = o(\bar F_X(t))$. But note that since $\lim_{t\to\infty}\frac{\bar F_X(t-h(t))}{\bar F_X(t)}=1$ by Lemma 1(c), this is equivalent to $\int_{t-h(t)}^{\infty} v f_V(v) = o(\bar F_X(t-h(t)))$, which (by Lemma 1(b)) is equivalent to $\int_t^{\infty} v f_V(v) = o(\bar F_X(t))$. Note that

$\int_t^{\infty} v f_V(v) = t\int_t^{\infty} f_V(v) + \int_t^{\infty}(v-t) f_V(v) = t\,\bar F_V(t) + \int_t^{\infty}\bar F_V(v),$

so it will suffice to show that both terms in this sum are $o(\bar F_X(t))$. The first term $t\,\bar F_V(t)$ is $o(\bar F_X(t))$ because we assumed $\lim_{t\to\infty}\frac{t^p\,\bar F_V(t)}{\bar F_X(t)}=0$ for some $p>1$. For the second term, we have for the same reason

$\int_t^{\infty}\bar F_V(v) < \int_t^{\infty}\frac{\bar F_X(v)}{v^p} \le \bar F_X(t)\int_t^{\infty} v^{-p} = \frac{t^{1-p}}{p-1}\bar F_X(t) = o(\bar F_X(t)).$

This completes the bounds on the numerator.
Bounds on the denominator
-------------------------

For the denominator, we want to show that $\lim_{t\to\infty}\int_{-\infty}^{\infty} f_V(v) Q_t(v) = 1 = \int_{-\infty}^{\infty} f_V(v)$, so it'll suffice to show $\left|\int_{-\infty}^{\infty} f_V(v)(Q_t(v)-1)\right| = o(1)$ as $t\to\infty$. Again, we'll break up this integral into pieces, though they'll be more straightforward than last time. We'll look at $(-\infty,-h(t))$, $[-h(t),h(t)]$, and $(h(t),\infty)$.

* $\left|\int_{-\infty}^{-h(t)} f_V(v)(Q(v)-1)\right| = \int_{-\infty}^{-h(t)} f_V(v)(1-Q(v)) < \int_{-\infty}^{-h(t)} f_V(v)$.
  + But since $h(t)$ goes to infinity, this left tail of the integral will contain less and less of $V$'s probability mass over time.
* $\left|\int_{-h(t)}^{h(t)} f_V(v)(Q(v)-1)\right| \le \int_{-h(t)}^{h(t)} f_V(v)|Q(v)-1| \le \sup_{|v|\le h(t)}|Q(v,t)-1|\int_{-h(t)}^{h(t)} f_V(v) \le \sup_{|v|\le h(t)}|Q(v,t)-1|$.
  + But by Lemma 1(d) we know that this goes to zero for large $t$.
* $\left|\int_{h(t)}^{\infty} f_V(v)(Q(v)-1)\right| = \int_{h(t)}^{\infty} f_V(v)(Q(v)-1) < \int_{h(t)}^{\infty} f_V(v) Q(v)$.
  + But for sufficiently large $t$ we have $h(t)>1$, so we obtain $\int_{h(t)}^{\infty} f_V(v) Q(v) < \int_{h(t)}^{\infty} v f_V(v) Q(v) = o(1)$ by the results of the previous section.

This completes the proof!

Light tails imply $V\to\infty$
==============================

Conversely, here's a case where we do get arbitrarily high $E[V \mid X+V\ge t]$ for large $t$. This generalizes a consequence of the lemma from Appendix A of [Scaling Laws for Reward Model Overoptimization](https://arxiv.org/abs/2210.10760) (Gao et al., 2022), which shows this result in the case where $X$ is either bounded or normally distributed.

Suppose that $X$ satisfies the property that $\lim_{t\to\infty}\frac{\bar F_X(t+1)}{\bar F_X(t)}=0$.[[4]](#fnkub9kyp63a) This implies that $X$ has tails that are dominated by $e^{-cx}$ for any $c$, though it's a slightly stronger claim because it requires that $X$ not be too "jumpy" in the decay of its tails.[[5]](#fnptyxvq1xrvs) We'll show that for any $V$ with a finite mean which has no upper bound, $\lim_{t\to\infty} E[V \mid X+V>t]=\infty$. In particular we'll show that for any $c$, $\lim_{t\to\infty} E[V \mid X+V>t]\ge c$.

### Proof

Let $\Pr(V>c+1)=p>0$, which exists by our assumption that $V$ is unbounded. Let $E[V \mid V<c]=q$. (If this is undefined because the conditional has probability 0, we'll have the desired result anyway since then $V$ would always be at least $c$.) Observe that for all $t$, $E[V \mid V<c,\ X+V>t]\ge q$ (assuming it is defined), because we're conditioning $(V \mid V<c)$ on an event which is more likely for larger $v$ (since $X$ and $V$ are independent).

First, let's see that $\lim_{t\to\infty}\frac{P(V<c \mid X+V\ge t)}{P(V>c+1 \mid X+V\ge t)}=0$. This ratio of probabilities is equal to

$\frac{\int_{-\infty}^{c} f_V(v)\bar F_X(t-v)}{\int_{c+1}^{\infty} f_V(v)\bar F_X(t-v)} \le \frac{\int_{-\infty}^{c} f_V(v)\bar F_X(t-c)}{\int_{c+1}^{\infty} f_V(v)\bar F_X(t-c-1)} = \frac{\bar F_X(t-c)}{\bar F_X(t-c-1)}\cdot\frac{\int_{-\infty}^{c} f_V(v)}{\int_{c+1}^{\infty} f_V(v)} = \frac{\bar F_X(t-c)}{\bar F_X(t-c-1)}\cdot\frac{\Pr(V<c)}{\Pr(V>c+1)} \le \frac{\bar F_X(t-c)}{\bar F_X(t-c-1)}\cdot\frac{1}{p}$

which, by our assumption that $\lim_{t\to\infty}\frac{\bar F_X(t+1)}{\bar F_X(t)}=0$, will get arbitrarily small as $t$ increases for any positive $p$.

Now, consider $E[V \mid X+V\ge t]$. We can break this up as the sum across outcomes $Z$ of $E[V \mid Z,\ X+V\ge t]\cdot\Pr(Z \mid X+V\ge t)$ for the three disjoint outcomes $V<c$, $c\le V\le c+1$, and $V>c+1$. Note that we can lower bound these expectations by $q$, $c$, $c+1$ respectively. But then once $t$ is large enough that $\frac{\Pr(V<c \mid X+V\ge t)}{\Pr(V>c+1 \mid X+V\ge t)} < \frac{1}{c-q}$, this weighted sum of conditional expectations will add to more than $c$ (exercise).
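Here too a quick numerical sketch of our own illustrates the claim (taking $X$ and $V$ both standard normal, so that $\bar F_X(t+1)/\bar F_X(t)\to 0$ and $V$ is unbounded with finite mean):

```python
# Companion Monte Carlo sketch (ours): with a light-tailed X, conditioning on
# a large sum X + V >= t drags E[V | X+V >= t] upward without bound.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000_000
X = rng.normal(size=n)  # light-tailed: tail ratio F_X(t+1)/F_X(t) -> 0
V = rng.normal(size=n)  # unbounded above, finite mean

for t in [0, 2, 4, 5]:
    mask = X + V >= t
    print(f"t = {t}: E[V | X+V >= t] ~ {V[mask].mean():+.3f}")
```

For independent standard normals the conditional mean equals $E[(X+V)/2 \mid X+V\ge t]$ by symmetry, which grows roughly like $t/2$: unlike the heavy-tailed case, the optimization pressure is split evenly between $X$ and $V$.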
Answers to exercises from last post
===================================

1. *Show that when $X$ and $V$ are independent and $t\in\mathbb{R}$, $E[V \mid X+V>t]\ge E[V]$. Conclude that $\lim_{t\to\infty} E[V \mid X+V>t]\ge E[V]$. This means that given independence, optimization always produces a plan that is no worse than random.*

   *Proof:* Fixing a value of $t$, we have for all $x\in\mathbb{R}$ that $E(V \mid X+V>t,\ X=x) = E(V \mid V>t-x) \ge E(V)$. Since the conditional expectation after seeing any particular value of $X$ is at least $E(V)$, this will be true when averaged across all $x$ proportional to their frequency in the distribution of $X$. This means that $E[V \mid X+V>t]\ge E(V)$ for all $t$, so the inequality also holds in the limit.

2. *When independence is violated, an optimized plan can be worse than random, even if your evaluator is unbiased. Construct a joint distribution $f_{VX}$ for $X$ and $V$ such that $E[X]=0$, $E[V]=0$, and $E[X \mid V=v]=0$ for any $v\in\mathbb{R}$, but $\lim_{t\to\infty} E[V \mid X+V>t]=-\infty$.*

   *Solution*: Suppose that $V$ is distributed with a PDF equal to $0.5e^{-|v|}$, and the conditional distribution of $X$ is given by

   $(X \mid V=v) = \begin{cases} 0 & v\ge 0 \\ \mathrm{Coinflip}()\cdot(v^2-v) & v<0 \end{cases}$

   where $\mathrm{Coinflip}()$ is a random variable that is equal to $\pm 1$ with 50/50 odds. The two-dimensional heatmap looks like this:

   ![](https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/GdkixRevWpEanYgou/j793qa8vhipdtkqttaql)

   Now, conditioning on $X+V\ge t$ with $t>0$, we have one of two outcomes: either $V\ge t$, or $V\le-\sqrt t$ and $\mathrm{Coinflip}()=1$. The first case has an unconditional probability of $0.5e^{-t}$, and a conditional expectation of $E[V \mid V\ge t] = \frac{\int_t^{\infty} 0.5ve^{-v}\,dv}{\int_t^{\infty} 0.5e^{-v}\,dv} = \frac{0.5(t+1)e^{-t}}{0.5e^{-t}} = t+1$. The second case has an unconditional probability of $0.25e^{-\sqrt t}$, and a conditional expectation of at most $-\sqrt t$ since all values of $v$ in the second case are at most that large. So, abbreviating these two cases as $A$ and $B$ respectively, the overall conditional expectation is given by

   $E[V \mid X+V\ge t] = \frac{E[V \mid A]\Pr(A) + E[V \mid B]\Pr(B)}{\Pr(A)+\Pr(B)} \le \frac{(t+1)\cdot 0.5e^{-t} - 0.25e^{-\sqrt t}\sqrt t}{0.5e^{-t} + 0.25e^{-\sqrt t}} = \frac{(2t+2) - e^{t-\sqrt t}\sqrt t}{2 + e^{t-\sqrt t}} \le \frac{(2t+2) - e^{t-\sqrt t}\sqrt t}{e^{t-\sqrt t}} = (2t+2)e^{\sqrt t - t} - \sqrt t = o(1) - \sqrt t \to -\infty$

   as desired. This sort of strategy works for any fixed distribution of $V$, so long as the distribution is not bounded below and has finite mean; we can replace $v^2-v$ with some sufficiently fast-growing function to get a zero-mean conditional $X$ distribution that behaves the same. For a followup exercise, construct an example of this behavior even when all conditional $X$ distributions have variance at most 1.

1. **[^](#fnref4x39dj58y2f)** We actually just need $\int_t^{\infty}\bar F_V(v)\in o(\bar F_X(t))$, so we can have e.g. $\bar F_V(v)=\frac{\bar F_X(v)}{v\log^2(v)}$.
2. **[^](#fnrefhctf72wo2mq)** We'll generally omit $dx$ and $dv$ terms in the interests of compactness and laziness; the implied differentials should be pretty clear.
3. **[^](#fnrefj9su3ciyyp)** The diagrams in the previous post show visually that when $X$ and $V$ are both heavy-tailed and $t$ is large, most of the probability mass has $X\approx 0$, $V\approx t$ or vice versa.
4. **[^](#fnrefkub9kyp63a)** This proof will actually go through if we just have $\lim_{t\to\infty}\frac{\bar F_X(t+k)}{\bar F_X(t)}=0$ for any constant $k>0$, which is a slightly weaker condition (just replace 1 with $k$ in the proof as necessary). For instance, $X$ could have probability $e^{-n!}$ of being equal to $100^n$ for $n=0,1,2,3,\ldots$, which would satisfy this condition for $k=101$ but not $k=1$.
5. **[^](#fnrefptyxvq1xrvs)** If $X$ has really jumpy tails, the limit of the mean of the conditional distribution may not exist. Exercise: what goes wrong when $X$ has a $\frac{2}{3^n}$ probability of being equal to $2^n$ for $n=1,2,3,\ldots$ and $V$ is a standard normal distribution?
0d86cf26-1566-49d6-80c7-5dea84a555c0
trentmkelly/LessWrong-43k
LessWrong
Open thread, Aug. 10 - Aug. 16, 2015 If it's worth saying, but not worth its own post (even in Discussion), then it goes here. ---------------------------------------- Notes for future OT posters: 1. Please add the 'open_thread' tag. 2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.) 3. Open Threads should be posted in Discussion, and not Main. 4. Open Threads should start on Monday, and end on Sunday.
8446b928-84c0-4ab6-bd46-b4673faf4f62
trentmkelly/LessWrong-43k
LessWrong
Open Thread: August 2009 Here's our place to discuss Less Wrong topics that have not appeared in recent posts. If something gets a lot of discussion feel free to convert it into an independent post.
671e0f80-b7a4-4181-a3a2-09a0961ad1c7
trentmkelly/LessWrong-43k
LessWrong
Retracting on Facebook Sometimes I'm in a Facebook discussion and someone says something they wish they hadn't. Maybe it turns out to be wrong, or misleading, or insensitive. One way I've seen this poorly handled is posting a follow-up comment: The problem is that the top-level post is still there, and Facebook will often choose to show it to people without showing them the reply. Or people will read the big text and skim over the reply. At the other extreme, full deletion has different downsides: If you're deleting something with replies, those comments disappear, even ones people may have put a lot of work into. Plus people are even less likely to know you retracted your claim. For simple retractions, where once people know you don't endorse it anymore it's no longer harmful, an edit can work well: It's not confusing, and people can still read the comments if the discussion is helpful. If you really don't want people to gloss over your retraction, you can edit to remove it fully: If people really want to see the context, they can look at the edit history: In extreme cases, like if you accidentally outed someone, changing the privacy setting on your post to "only me" lets you keep anyone else from seeing it: This way if someone has a comment they wrote that they'd like to still have, they can write to you and ask for a copy. If you have a comment in a nested thread where you need to clear the edit history, delete your comment and post a new one like "I deleted my comment above because...". If you have a top-level comment you need to delete, there aren't any great options. The best you can do is manually screenshot the thread in case someone asks for their comments, and then delete it. So, please do retract things when needed! Just be thoughtful about it. Comment via: facebook
3fe1b708-1ca9-4201-ba0f-178085bb3850
trentmkelly/LessWrong-43k
LessWrong
Ubiquitous Far-Ultraviolet Light Could Control the Spread of Covid-19 and Other Pandemics Roko Mijic, Alexey Turchin Epistemic status: Many different uncertainties here, but the idea has some good evidence in favor of it and a high potential payoff. Tl;dr: We should urgently investigate putting special human-safe Far-UVC lamps all over our built environment to ‘kill’ virus particles whilst they are in the air, thereby vastly reducing covid-19 spread. Inspired by: https://www.nature.com/articles/s41598-018-21058-w One of the most promising and neglected ideas for combating the spread of covid-19 is the use of ubiquitous ultraviolet light in our built environment (trains, offices, hospitals, etc). Ultraviolet light is already being used as a disinfecting agent across the world; it goes by the acronym UVGI - “Ultraviolet germicidal irradiation”. The energetic photons of UVC light break chemical bonds in DNA and kill/inactivate both viruses and bacteria. Ultraviolet light on earth exists on a spectrum between 200nm and 400nm. Light above 400nm is blue visible light. Light below 200nm is called “vacuum UV” because it is strongly absorbed by the oxygen in ordinary air and therefore cannot exist except in a vacuum or some other non-air medium. Within the 200-400nm range we have UVA, UVB and UVC, and at the short-wave edge of the UVC band we have “Far-UVC”, from roughly 200nm–220 nm. Safety considerations Human beings are also vulnerable to UV radiation. It causes skin cancer and serious eye damage. However, recent research suggests that the Far-UVC band is actually safe for human skin because it cannot penetrate through the thin layer of dead skin cells on the surface of our skin. This means that it might be possible to mount a long-term response to covid and other pathogens by constantly illuminating our built environment with light from specifically the Far-UVC band. If the Far-UVC light is indeed safe for humans, the Far-UVC could be on at all times and could destroy or deactivate viral particles before they can spread from person to person. Why h
12eb65a5-8e12-48a9-a500-938aa7160ce7
trentmkelly/LessWrong-43k
LessWrong
[Book Review] Evolution of Sex I was prompted to write this book review after reading Paul Christiano’s post on the evolution of sex and George’s response to Paul Christiano’s post on the evolution of sex. Those two posts are quite interesting! Paul Christiano points out that “sex slightly lowers the average [fitness] but increases the variance”, and argues that increased variance is beneficial over time. George points out that there are lots of ways to increase variance without imposing the same costs as sex. So why sex? George says “I don’t know, nobody really does, that’s why evolution is complex”. Is it really the case that no one knows why sex could evolve? I recently got a copy of W. D. Hamilton’s book, Narrow Roads of Gene Land Volume 2: The Evolution of Sex, where he attempts to answer the question. I’ll let him introduce the topic: “The main topic of this book is why sexual ways of reproduction are so abundant, when, for natural selection to work and for progressive evolution to occur, much simpler and more efficient ways would seem to suffice”. This seems like exactly what George was asking for, so let’s practice the virtue of scholarship by reading a bunch of papers on the evolution of sex! The Book Just like Volume 1 (which I may write a separate book review on), this book alternates between autobiographical accounts and published papers. Each chapter first has Hamilton describing what was happening in his life and research while he was working on the paper, and then the paper. I focused on the three papers most relevant to the parasite theory of sex, out of 18 total papers. Paper 1: Environmental fluctuation, polymorphism, and the maintenance of sex This first paper we'll talk about begins by introducing us to some puzzles. First puzzle: why do all the social species (humans, bees, termites) reproduce sexually? Our best theories of selection for social cooperation relies on the individuals who are cooperating being related to each other. But in clonal populations (like in aphid
bd756091-a00f-475f-a02b-8df351974ecb
trentmkelly/LessWrong-43k
LessWrong
Age gaps and Birth order: Failed reproduction of results Summary I attempted to reproduce Scott’s analysis of Birth order effect vs Age gap. I found that: 1. There appeared to be an error in graphs 2 & 3 where people with one sibling were counted when they shouldn’t have been (graph 2) or were counted twice (graph 3) 2. Comparing oldest children to youngest children causes a systematic bias in the results. This can be prevented by comparing oldest children to 2nd oldest children 3. I was unable to reproduce Scott’s result on people reporting 0 year age gap – I get a non-significant 58% older siblings compared to Scott’s 70%. I was unable to discover the cause of the difference. I have reanalysed the data based on points 1 & 2 in a separate post. Previously in Birth order effect In the 2018 Slate Star Codex survey Scott asked some questions about what order in the family respondents were born. He found that eldest children were massively overrepresented. Following on, historical mathematicians and Nobel winning physicists were found to exhibit the same property. In the 2019 SSC survey Scott included questions about age gaps between respondents and their adjacent siblings. He analysed the results, finding that: > This study found an ambiguous and gradual decline [in Birth order effect] from one to seven years [Age gap between siblings], but also a much bigger cliff from seven to eight years. Failed reproduction of Scott’s graphs I had originally intended to analyse the data to see if I could draw any further conclusions. However, when running the analysis I found that I was unable to reproduce Scott’s results. Scott includes 3 graphs. The first – comparing % of sample oldest child vs age gap for people with 1 sibling – I was able reproduce almost exactly (Scott also has access to respondents’ data who asked not to be included in the public data so we aren’t exactly the same. There may be other differences too but these are small). The second – comparing how many oldest vs youngest children there are in the sam
d935b8cb-4e91-4dca-a61a-2696c81b8988
trentmkelly/LessWrong-43k
LessWrong
How More Knowledge is Making Us Dumber This essay was originally published on my website. The 21st century da Vinci will be a billionaire who combines science and business seamlessly. He is also a master of the latest technology, creating the most surprising companies that look like masterpieces. The 21st century Mona Lisa will not be a portrait, but a groundbreaking company.   He/She will be one of many new renaissance polymaths—amid a simultaneous rise of uneducated people. Welcome to the coming intellectual inequality. And it all goes back to how access to knowledge has evolved.  Since the beginning of time, there were people more knowledgeable than others because knowledge was not as widespread. This was often the difference between the haves and have nots, creating inequality among people. Knowledge was stored in our memory and was handed down generation after generation. This method was ineffective because we had to rely on our memories, and we don't always remember the little details. Knowledge went from being in our brains to a form of visual description. What we call writing. This form was more effective, as you could include as much detail and information as you'd like. Yet, we had an issue sharing these writings with our fellow humans. Ancient Egyptians were the first to use a paper-like material, gluing sheets of it together to share knowledge.  In the 1440s, Johannes Gutenberg created the first mechanical printing press, enabling mass production of books. From the 15th century until the 21st century, knowledge was shared and stored using books. Then in the 2000s, we no longer needed physical knowledge as it could be stored digitally more efficiently.  The change from physical to digital allows anyone in the world with an internet connection to access all the world's information, mostly for free.  From now on, you just need the intention and desire to learn to get knowledge. It's that simple. As knowledge became more widespread, all types of knowledge including entertainment became available.  
b4579c0e-d7b1-49c5-a2dc-72301e6d9112
StampyAI/alignment-research-dataset/blogs
Blogs
Price performance Moore’s Law seems slow *By Katja Grace, 26 November 2017* When people make predictions about AI, they often assume that computing hardware will carry on getting cheaper for the foreseeable future, at about the same rate that it usually does. Since this is such a common premise, and whether reality has yet proved it false is checkable, it seems good to check sometimes. So we [did](http://aiimpacts.org/recent-trend-in-the-cost-of-computing/). Looking up the price and performance of some hardware turned out to be a real mess, with conflicting numbers everywhere and the resolution of each error or confusion mostly just leading to several more errors and confusions. I suppose the way people usually make meaningful figures depicting computer performance changing over time is that they are doing it over long enough periods of time that even if each point is only accurate to within three orders of magnitude, it is fine because the whole trend is traversing ten or fifteen orders of magnitude. But since I wanted to know what was happening in the last few years, this wouldn’t do—half an order of magnitude of progress could be entirely lost in that much noise. In the end, the two best looking sources of data we could find are the theoretical performance of GPUs (via Wikipedia), and [Passmark](https://www.passmark.com/)‘s collection of performance records for their own benchmark. Neither is perfect, but both make it look like prices for computing are falling substantially slower than they were. Over the last couple of decades it [had been](http://aiimpacts.org/trends-in-the-cost-of-computing/) taking about four years for computing to get ten times cheaper, and now (on these measures) it’s taking more like twelve years. Which could in principle be to do with these measures being different from usual, but I think probably not. There are quite a few confusions still to resolve here. For instance, in spite of showing slower progress, these numbers look a lot cheaper than what would have been predicted by extrapolating [past trends](http://aiimpacts.org/trends-in-the-cost-of-computing/) (or sometimes more expensive). Which might be because we are comparing performance using different metrics, and converting between them badly. Different records of past trends seem to disagree with one another too, which is perhaps a hint. Or it could be that there was faster growth somewhere in between that we didn’t see. Or we might not have caught all of the miscellaneous errors in this cursed investigation. But before we get too bogged down trying to work these things out, I just wanted to say that price performance Moore’s Law tentatively looks slower than usual. *See full investigation at: [Recent Trend in the Cost of Computing](http://aiimpacts.org/recent-trend-in-the-cost-of-computing/)*
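To put the headline numbers in per-year terms (a back-of-the-envelope conversion of my own, not a figure from the investigation): tenfold every four years is an annual improvement factor of about 1.78x, while tenfold every twelve years is only about 1.21x.

```python
# Back-of-the-envelope: "10x cheaper every k years" implies an annual
# price-performance improvement factor of 10**(1/k).
for k in (4, 12):
    factor = 10 ** (1 / k)
    print(f"10x every {k:2d} years -> {factor:.2f}x per year (~{factor - 1:.0%} annually)")
```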
4b9b6376-eb22-4c4f-b52c-43fa8705b300
awestover/filtering-for-misalignment
Redwood Research: Alek's Filtering Results
id: post1647

This post is part of my hypothesis subspace sequence, a living collection of proposals I'm exploring at Refine. Followed by ideological inference engines. Thanks Adam Shimi for advice on putting more legible content out there. Thanks Eric Winsor, Leo Grinsztajn, Linda Linsefors, Lucas Texeira, Tammy Leake, Ze Shen for discussions which inspired this post.

TL;DR: An oversight league is a training scheme which incentivizes an agent and an evaluator to constantly try to game each other, leading to synchronized increases in capability for the two players. However, the evaluator is being offered a host of additional learning signals to help it maintain a consistent (and potentially provable) lead over the agent. Oversight leagues heavily draw on ideas from capability literature, including: league training in AlphaStar, game theory in GANs, adversarial robustness, etc.

Intro

The whole project of oversight leagues relies on the following non-exhaustive list of assumptions:

Assumption 1, "AGI Hard, Human Values Harder": We are unlikely to formulate the True Name of human values in closed-form before deploying transformative AI. The best we are likely to do before takeoff is model human values approximately and implement an imperfect evaluator.

Assumption 2, "Linear Capability Ordering": Any fixed evaluator (e.g. a reward model) can be gamed by an agent above a certain threshold of capability. More generally, an agent whose capability improves consistently faster than the capability of an evaluator will eventually be able to game said evaluator. By "evaluator capability," I'm referring to its ability to prevent being gamed.

Assumption 3, "Humans Are Not True Gamers": Human oversight is impractical because our capabilities as evaluators can't improve at an arbitrarily large rate. Save for cyborgian schemes for human augmentation, human oversight would eventually be gamed by an agent of sufficient capability.

Assumption 4, "Zone of Proximal Development": There is a relatively small interval of evaluator lead which allows the agent to improve from challenging it, even if the evaluator is slightly ahead. This assumption is motivated by the fact that GANs are actually working, despite an inevitable imbalance between the two components involved at any given time.

Assumption 5, "Point of No Return": Even a minor lead in capability for the agent relative to the evaluator is catastrophic, as the agent can take advantage of this window of opportunity and maneuver itself into a persistent advantageous position by flying under the evaluator's radar.

From the assumptions above, we can infer that if we are to deploy a robust evaluator as an operationalization of human values, it must certainly not be fixed. Not only that, but it must additionally improve in performance at least as fast as the agent being evaluated, so that it never gets overtaken. What's more, simple human oversight doesn't fulfill those necessary conditions, hence we should consider automated schemes.

Proposal

An oversight league is one such automated scheme for training agents and evaluators in a way which improves their performance in lockstep. The crux of this training regime is to supply most of the training through bilateral learning signals, and thus render the improvement of the two components interdependent. By ensuring that most of the learning opportunities of the agent come from playing against the evaluator and vice versa, the two sides form a positive feedback loop resembling patterns of co-evolution.
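To make the bilateral dynamic concrete, here's a deliberately toy sketch. Everything in it is our own illustrative assumption, not a real oversight-league implementation: the tiny networks, the random stand-in "reference behavior" data, and the GAN-style losses. The evaluator learns to score reference behavior above agent behavior, while the agent learns to produce behavior the current evaluator scores highly:

```python
# Toy sketch of the bilateral agent/evaluator loop (all names and data are
# illustrative assumptions). The two players improve only against each other.
import torch
import torch.nn as nn

dim = 8
agent = nn.Sequential(nn.Linear(dim, 32), nn.ReLU(), nn.Linear(32, dim))
evaluator = nn.Sequential(nn.Linear(dim, 32), nn.ReLU(), nn.Linear(32, 1))
opt_a = torch.optim.Adam(agent.parameters(), lr=1e-3)
opt_e = torch.optim.Adam(evaluator.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

def reference_batch(n=64):
    # Stand-in for behavior humans endorse; a real system would use curated data.
    return torch.randn(n, dim) + 1.0

for step in range(1000):
    noise = torch.randn(64, dim)
    # Evaluator update: score reference behavior high and agent behavior low.
    good, bad = reference_batch(), agent(noise).detach()
    loss_e = bce(evaluator(good), torch.ones(64, 1)) + \
             bce(evaluator(bad), torch.zeros(64, 1))
    opt_e.zero_grad(); loss_e.backward(); opt_e.step()
    # Agent update: produce behavior the current evaluator scores highly.
    loss_a = bce(evaluator(agent(noise)), torch.ones(64, 1))
    opt_a.zero_grad(); loss_a.backward(); opt_a.step()
```

Past exploitation would extend this loop by also sampling frozen snapshots of earlier opponents into each batch.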
The oversight league scheme implicitly attempts to cultivate "antifragility" by applying appropriate stressors on the evaluator in the form of ever more capable agents, a reliable way of improving its performance, and the other way around.

Various dynamics of agent-evaluator improvement. Left: Fixed evaluator inevitably being overtaken. Middle: Slow evaluator being overtaken at some point. Right: Synchronized improvement of agent and evaluator, aiming to avoid overtaking.

Let's unpack the various learning signals involved in an oversight league, by first focusing on bilateral ones:

Present Exploitation: Training a player (i.e. agent or evaluator) against the latest and greatest version of the opponent. The agent is trained to exploit recent blindspots of the evaluator, while the evaluator is trained to pick out recent tricks of the agent. Inspired by "main exploiters" in AlphaStar.

Past Exploitation: Training a player against (a tricky mixture of) the past versions of the opponent. The agent is trained to exploit systematic blindspots of the evaluator across time, while the evaluator is trained to pick out systematic tricks of the agent. The role of past exploitation is to prevent the agent from forgetting about past blindspots of the evaluator and to prevent the evaluator from forgetting about past tricks of the agent. Inspired by "league exploiters" in AlphaStar.

Future Exploitation: Training a player against future versions of the opponent. This is somewhat trickier than the previous types of exploitation, in that it involves backpropagating through updates of the (boxed) opponent. It would only be applied to the evaluator, as it would help it avoid weight updates which are easily countered by the opponent. This appears to be a promising place to look for formal guarantees on the safety of this scheme based on notions from game theory. Inspired by unrolled GANs from literature investigating ways of balancing the two adversarial components of GANs.

Illustration of unrolled GANs.

Now that we have listed broad classes of bilateral learning signals, let's focus on unilateral ones. As mentioned previously, those would mainly be applied to the evaluator as an attempt to systematically give it an edge.

Adversarial Robustness: Train the evaluator on data points which have been optimized to maximize error within tight bounds of allowed input perturbations. In image classification, allowed perturbations are of a magnitude too low to be consciously perceived by humans, yet trip up the ML model big time. When provided as training data, adversarial examples help the ML model become (empirically) robust against such attacks. Relaxation-based training as a specific approach to adversarial training can also provide formal guarantees on the worst-case bounds of adversarial attacks.

Contrastive Dreaming: This is a novel one, to the best of my knowledge.
Just like adversarial robustness, it also involves training the evaluator on synthetic data points. However, instead of optimizing inputs to yield the worst possible prediction given imperceptible changes, contrastive dreaming consists in optimizing inputs to yield the seemingly best evaluation given significant allowed changes. In a sense, this technique involves intentionally wireheading the evaluator before using the optimized image of the world as a negative example. Think of the DeepDream technique being used to maximize the "dogginess" of an image as perceived by an image classifier, only to yield an image which is blatantly unrealistic with respect to how dogs actually manifest in the world.

Illustration of DeepDream amplifying the "dogginess" of an input image while blatantly violating the way dogs manifest in the world.

"You were once the size of a period. For a brief time you tumbled about as a multicellular sphere, much like pond algae. Currents swept and washed over you. Remember? Then you grew. You became sponge life, tubular, all gut. To eat was life. You grew a spinal cord to feel. You put on gill arches in preparation to breathe and burn food with intensity. You grew a tail to move, to steer, to decide. You were not a fish, but a human embryo role-playing a fish embryo. At every ghost-of-embryonic-animal you slipped into and out of, you replayed the surrender of possibilities needed for your destination. To evolve is to surrender choices. To become something new is to accumulate all the things you can no longer be." — Kevin Kelly

Together, bilateral and unilateral learning signals provide the totality of learning opportunities accessed by the agent and the evaluator in an oversight league. The table below summarizes which learning signals would be applied to train which component:

| Learning Signal | Type | Usage | Inspiration |
| --- | --- | --- | --- |
| Present Exploitation | bilateral | symmetrical | AlphaStar |
| Past Exploitation | bilateral | symmetrical | AlphaStar |
| Future Exploitation | bilateral | asymmetrical (evaluator only) | Unrolled GANs |
| Adversarial Robustness | unilateral | asymmetrical (evaluator only) | Robust ML |
| Contrastive Dreaming | unilateral | asymmetrical (evaluator only) | DeepDream |

Let's sketch out more concrete details of the agent and the evaluator. The agent might be a purely agentic sequence-modeling entity exposed to the world, or could just as well be an agent simulacrum puppeteered by a simulator, as per Conjecture's recent taxonomy. Oversight leagues would be applied just as well in the latter case as in the former, probably in a way resembling Redwood's ongoing experiments of steering the simulator away from portraying human harm. It would be a specific agent simulacrum that is being steered, though, if oversight leagues were to be applied this way. Over on the evaluator side, the component might resemble existing reward models used to extrapolate human feedback in RLHF, but would critically possess some adaptivity to be able to keep up with the agent-in-training (e.g. constant fine-tuning). In this RLHF-like case, the term process-based evaluator seems fit to distinguish a "behavioral" evaluation target from an outcome-based one, to build on Ought's taxonomy. An outcome-based evaluator would rather involve predicting the consequences which a given behavioral sequence would have on the world, and evaluating that instead of the behavior as an end in itself.
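And here's a matching sketch of the contrastive-dreaming signal described above, again our own guess at a minimal implementation rather than anything canonical: optimize an input to maximize the evaluator's score, intentionally wireheading it, then hand the result back as a labeled negative example.

```python
# Sketch of contrastive dreaming (illustrative assumption): gradient ascent on
# the *input*, DeepDream-style, rather than on any model weights.
import torch

def contrastive_dream(evaluator, dim=8, steps=200, lr=0.1):
    x = torch.randn(1, dim, requires_grad=True)
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = -evaluator(x).mean()  # maximize the evaluator's score
        loss.backward()
        opt.step()
    # The caller would then train the evaluator on (x, label=0).
    return x.detach()
```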
While the agent would be implemented in a relatively straightforward way through a policy, the evaluator involves more design choices when defining its type signature. Depending on the details of the unilateral learning signals, the evaluator might be tasked with predicting the training epoch associated with a past adversarial example or contrastive dream. This objective might incentivize it to gain an understanding of its systematic blindspots and an awareness of its training progress. While this sounds like an awful feature for the agent, it sounds quite handy for buffing the evaluator.

It's also relevant to point out that oversight leagues don't explicitly require transparency tools in their vanilla formulation. While the "Humans Are Not True Gamers" assumption hints at transparency tools used by humans not being deemed practical in this proposal, transparency tools might still be directly employed by players. The evaluator might benefit from probing the agent deeper in its mission of spotting its tricks, while the agent might similarly benefit from probing the evaluator deeper in its mission of spotting its blindspots. In this symmetrical setup, transparency tools feel like nice-to-haves, but not critical, as they only extend the portfolio of learning opportunities available to both agent and evaluator further. However, equipping the evaluator alone with an arsenal of transparency tools might help preserve its competitive edge.

Discussion

What if the agent internally simulates and games the evaluator?

If the agent internally simulates and games the evaluator, it means that the agent is more capable than the evaluator. The frame of oversight leagues acknowledges this state of affairs as an unambiguous game over, as the agent can maneuver itself into a persistent advantageous position. That said, the entire premise of oversight leagues is that there are mechanisms which can help avoid the situation in which the agent is capable of gaming the evaluator in the first place. It's an attempt to do away with that whole chunk of attack surface, by enforcing the evaluator's edge in complexity in hopefully provable ways.

Wait what, you want to simulate the agent a few steps ahead of the evaluator?

The idea of future exploitation as a bilateral learning signal involves backpropagating evaluator updates through multiple update steps of the oversight league, in a fully differentiable way. This is meant to avoid evaluator updates which are easily countered by the agent in the short-term, helping ensure the evaluator's edge. However, simulating the agent forward is dangerous. Fortunately, future exploitation might involve playing forward both sides, only for the updates to accumulate backwards into the latest version of the evaluator. Additionally, future exploitation might be conducted in a boxed setting, so that we ensure that new agent versions are being explored further in a safe environment first. The gradient update coming out of the box might be susceptible to gradient hacking by a woke agent screwing up the whole setup, though I find it unlikely that the agent would manage to pull that off without triggering a whole set of alarms in the evaluator's internals first.

What if the agent and evaluator collude?

While the agent's objective is essentially implemented through the evaluator, the evaluator's objective is not entirely based on the agent. It might likely involve assigning accurate GAN-like estimates of humanness or human desirability to input behaviors or outcomes, which is incompatible with helping the agent out through "grade inflation." The oversight league is a zero-sum game: the agent and evaluator can't both win, making collusion suboptimal.

What if the agent recursively self-improves without the evaluator?

RSI appears to be a concern for this framework, as the agent might gradient hack its way above the evaluator and thus break the agent-evaluator interdependence. A sensible approach would be to continue training the evaluator against the agent so that it also shares that momentum in capabilities. However, with a situational awareness of sufficient level, the agent might be able to both (1) gradient hack its way higher, and (2) sabotage the evaluator through its influence via bilateral learning signals. With a strong evaluator lead enforced through unilateral learning signals and future exploitation, though, the agent might lack resources to enact this form of sabotage.

What if assumption 1 ("AGI Hard, Human Values Harder") doesn't hold?

That's positively awesome. In this case, we discard this completely and go with an agenda focused on aiming directly at those unraveled human values, such as that of John Wentworth or Team Shard. That said, the rate of progress in ML appears to be in a whole different league than progress in ethics, pun intended. However, it's relevant to note that this proposal still attempts to approach the True Name of human values in the limit of synchronized agent-evaluator improvement, despite assuming a low probability for a closed-form expression provided in the very beginning as-is. One might argue that the present setup might effectively be equivalent to the True Name as it aims to approach it, but I'm drawing a distinction between those two cases for clarity.

What if assumption 2 ("Linear Capability Ordering") doesn't hold?

This might mean that the evaluator would be better than the agent in some category of interactions, but not in others. If one imagines the superimposed radar charts of the two, the agent's skill profile might not be fully contained in the evaluator's one. This is partially a question of how general a player's general intelligence is. If it turns out to be extremely general, the skill profiles would be pretty flat and hence less ambiguously ordered. If it actually turns out to be more specialized, this assumption would take a hard hit, as it might be difficult to systematically enforce the evaluator's edge across the board.

What if assumption 3 ("Humans Are Not True Gamers") doesn't hold?

That's positively awesome. In this case, we similarly drop this proposal and go with a mix of agendas involving human amplification, such as the ones being investigated by parts of OpenAI, DeepMind, Conjecture, ARC, Ought, and others.

What if assumption 4 ("Zone of Proximal Development") doesn't hold?

Then the scheme might break down, as it would be impossible to improve the two components in lockstep based on each other. However, that would also mean that GANs shouldn't work, which they sometimes do, despite being infamously difficult to train. It might also mean that most supervised regimes shouldn't work, because of the huge initial capability gap between a fixed evaluator defined largely through training data and the tabula rasa agent.

What if assumption 5 ("Point of No Return") doesn't hold?

This situation interacts somewhat with the "Linear Capability Ordering" assumption, in that it hints at an underdog evaluator somehow being able to catch up with the leading agent. The myopic and non-agentic character of the evaluator makes me further skeptical that this assumption would break, but if it did actually break, then differentially buffing the evaluator might help cover the lost ground. Additionally, the evaluator nerfing the leading agent might be feasible if this assumption wouldn't hold, say by signaling the need for reverting it a few epochs back. Though this then brings up corrigibility issues.

Is there an alignment tax associated with oversight leagues?

While many ideas here are employed to keep the agent below the evaluator in capabilities, the evaluator itself is not meant to be bounded in any way. This means that there's effectively no bound on agent capability, as it's encouraged to improve indefinitely, as long as it's a teeny-tiny bit less than how it could be (i.e. if the agent had the lead at any given point). Additionally, the enforced interdependence of agent and evaluator means that an improvement in the evaluator — even if it bounds the agent as described above — will eventually help elevate the agent through learning opportunities all the same. The main issue here is that the evaluator is resource-hungry, but I suspect any less resources allocated to the evaluator would yield an unsafe arrangement. Given the above, I consider the alignment tax to be negligible for this proposal.

Are oversight leagues restricted to prosaic risk scenarios?

They have been in large part motivated by prosaic work. I'd argue that oversight leagues are to AlphaStar what iterated distillation and amplification is to AlphaGoZero: a repurposing of a prosaic training regime towards safety goals (which incidentally has fewer capability externalities than entirely novel training regimes, as it's already present in the awareness of capability researchers). That said, I'd argue that the overarching scheme is general enough to transfer to non-ML player architectures, despite specific learning signals not being able to bridge the gap. For instance, contrastive dreaming is heavily inspired by quirks of ML models trained through gradient descent, where the optimization is applied to the input itself rather than the weights.

Outro

Oversight leagues are a natural conclusion of the initial assumptions regarding the agent-evaluator relation given shifting capability levels. This more epistemically legible format helped me clarify my thinking around the topic, and I'm looking forward to fleshing out similar posts in the future based on different assumptions and analogies. Let the training games begin!
ed1900e9-a4a0-4a62-ae04-2e0e26139694
trentmkelly/LessWrong-43k
LessWrong
Competent Preferences I - Obvious answers to obvious questions "Why do I think I know what I do about Goodhart's law?" There's an obvious answer (or at least an obvious outer layer of the answer-onion): we use Goodhart's law because we usually model humans as if we have values. Not necessarily True Values over the whole world, but at least contextual values over some easy-to-think-about chunk of the world. Modeling humans as goal-driven agents is never going to be perfect at predicting us, nor is it what you'd do if you had infinite computing power, but it still works really well for our purposes. We infer human values by building a simplified model of the world that features human actions as a fundamental part. Such an abstract model doesn't need to be our absolute best model of the world, it just needs to capture useful regularities. If we keep the model fixed, then the human values are determined by the data, but the model doesn't necessarily have to be fixed. If we uncurry this inference process, inferred human values depend both on the data and on the way we model the data. Consider historical examples of Goodhart's law, like the dolphin trainers who rewarded dolphins for each piece of trash removed from the pool, which taught the dolphins not to clean the pool, but to hoard trash and rip it up before delivering each piece. Upon hearing this story we immediately make a simple model of the situation, using variables like amount of trash in the pool, whether the dolphins are delivering trash, and whether they got a fish. In this model it's super easy to represent that the dolphin trainers want the pool clean and the dolphins want fish. Once we take this model of the world as given, then everything makes sense as an example of Goodhart's law: the humans wanted the pool to be clean, they created an incentive for the dolphins to take an action that used to be correlated with the pool being clean, and as a result the correlation broke down. We can gather up many such stories that all
12662f26-bc94-4cfa-8757-85b9930a9cf5
trentmkelly/LessWrong-43k
LessWrong
AGI-2011 conference videos Google tech talks published a bunch of video from AGI (Artificial General Intelligence) 2011 today, they're all linked from the conference site. I haven't watched any of them yet but the titles sound interesting. http://agi-conf.org/2011/conference-schedule/#vp (Via Patrick Hahn @OB-NYC.)
07053c97-c073-4f32-a51e-449f72bfd5a6
trentmkelly/LessWrong-43k
LessWrong
But Butter Goes Rancid In The Freezer I broached the subject of cryonics with a family member today.  He offered almost none of the normal objections and I've been happy all day about the way the conversation went.  One interesting issue that he raised that I'd like to find an answer for is the question in the title. Butter goes rancid after a while at room temperature.  It also goes rancid in the fridge and can absorb the other flavors if things aren't well contained inside the refrigerator.  Butter also goes rancid if left in a normal freezer, which is typically designed to hold food around 255 kelvin (-18 °C), somewhat below the 273 kelvin melting point of water. This suggests that lipid chemistry responds to temperature in a different way than intuitions mostly educated by other examples of freezing, which is relevant because the brain is mostly made out of fat, with some complicating proteins mixed in.  My guess is that developing a "rancid brain" isn't likely to be a serious issue when you get down to the 77 kelvin of liquid nitrogen, but it's still something I'd like to be able to answer directly and honestly, after really thinking about it in terms of "safety engineering". One way to answer the direct question about butter might be to just perform the basic experiment with some butter samples at different temperatures (room, fridge, freezer, -80C freezer in a bio lab) and figure out how long butter stored each way takes to go rancid and then do some curve fitting, but that seems like it would take months or maybe even years, and butter doesn't even necessarily answer neurological questions directly.  Even if I learned about butter chemistry, there could be open questions about brain chemistry.  I've tentatively googled around for 30 minutes but organic chemistry isn't a primary area of expertise and I wasn't sure how to dig up the specialist scientific literature that might answer my question. This community seemed like a good place to get help on the subject! Here are some specific questions I'd love to know th
c8ce4356-621b-4a03-b489-5d0066740418
StampyAI/alignment-research-dataset/alignmentforum
Alignment Forum
Morality is Scary I'm worried that many AI alignment researchers and other LWers have a view of how human morality works, that really only applies to a small fraction of all humans (notably moral philosophers and themselves). In this view, people know or at least suspect that they are confused about morality, and are eager or willing to apply reason and deliberation to find out what their real values are, or to correct their moral beliefs. Here's [an example](https://www.greaterwrong.com/posts/FSmPtu7foXwNYpWiB/on-the-limits-of-idealized-values) of someone who fits this view: > > I’ve written, in the past, about a “ghost” version of myself — that is, one that can float free from my body; which travel anywhere in all space and time, with unlimited time, energy, and patience; and which can also make changes to different variables, and play forward/rewind different counterfactual timelines (the ghost’s activity somehow doesn’t have any moral significance). > > > I sometimes treat such a ghost kind of like an idealized self. It can see much that I cannot. It can see directly what a small part of the world I truly am; what my actions truly mean. The lives of others are real and vivid for it, even when hazy and out of mind for me. I trust such a perspective a lot. If the ghost would say “don’t,” I’d be inclined to listen. > > > I'm currently reading [The Status Game](https://www.amazon.com/Status-Game-Position-Governs-Everything-ebook/dp/B08H7Y414K) by Will Storr (highly recommended BTW), and found in it the following description of how morality works in most people, which matches my own understanding of history and my observations of humans around me: > > The moral reality we live in is a virtue game. We use our displays of morality to manufacture status. It’s good that we do this. It’s functional. It’s why billionaires fund libraries, university scholarships and scientific endeavours; it’s why a study of 11,672 organ donations in the USA found only thirty-one were made anonymously. It’s why we feel good when we commit moral acts and thoughts privately and enjoy the approval of our imaginary audience. Virtue status is the bribe that nudges us into putting the interests of other people – principally our co-players – before our own. > > > We treat moral beliefs as if they’re universal and absolute: one study found people were more likely to believe God could change physical laws of the universe than he could moral ‘facts’. Such facts can seem to belong to the same category as objects in nature, as if they could be observed under microscopes or proven by mathematical formulae. If moral truth exists anywhere, it’s in our DNA: that ancient game-playing coding that evolved to nudge us into behaving co-operatively in hunter-gatherer groups. But these instructions – strive to appear virtuous; privilege your group over others – are few and vague and open to riotous differences in interpretation. All the rest is an act of shared imagination. It’s a dream we weave around a status game. > > > The dream shifts as we range across the continents. For the Malagasy people in Madagascar, it’s taboo to eat a blind hen, to dream about blood and to sleep facing westwards, as you’ll kick the sunrise. Adolescent boys of the Marind of South New Guinea are introduced to a culture of ‘institutionalised sodomy’ in which they sleep in the men’s house and absorb the sperm of their elders via anal copulation, making them stronger. 
Among the people of the Moose, teenage girls are abducted and forced to have sex with a married man, an act for which, writes psychologist Professor David Buss, ‘all concerned – including the girl – judge that her parents giving her to the man was a virtuous, generous act of gratitude’. As alien as these norms might seem, they’ll feel morally correct to most who play by them. They’re part of the dream of reality in which they exist, a dream that feels no less obvious and true to them than ours does to us. > > > Such ‘facts’ also change across time. We don’t have to travel back far to discover moral superstars holding moral views that would destroy them today. Feminist hero and birth control campaigner Marie Stopes, who was voted Woman of the Millennium by the readers of The Guardian and honoured on special Royal Mail stamps in 2008, was an anti-Semite and eugenicist who once wrote that ‘our race is weakened by an appallingly high percentage of unfit weaklings and diseased individuals’ and that ‘it is the urgent duty of the community to make parenthood impossible for those whose mental and physical conditions are such that there is well-nigh a certainty that their offspring must be physically and mentally tainted’. Meanwhile, Gandhi once explained his agitation against the British thusly: ‘Ours is one continual struggle against a degradation sought to be inflicted upon us by the Europeans, who desire to degrade us to the level of the raw Kaffir [black African] … whose sole ambition is to collect a certain number of cattle to buy a wife with and … pass his life in indolence and nakedness.’ Such statements seem obviously appalling. But there’s about as much sense in blaming Gandhi for not sharing our modern, Western views on race as there is in blaming the Vikings for not having Netflix. Moral ‘truths’ are acts of imagination. They’re ideas we play games with. > > > The dream feels so real. And yet it’s all conjured up by the game-making brain. The world around our bodies is chaotic, confusing and mostly unknowable. But the brain must make sense of it. It has to turn that blizzard of noise into a precise, colourful and detailed world it can predict and successfully interact with, such that it gets what it wants. When the brain discovers a game that seems to make sense of its felt reality and offer a pathway to rewards, it can embrace its rules and symbols with an ecstatic fervour. The noise is silenced! The chaos is tamed! We’ve found our story and the heroic role we’re going to play in it! We’ve learned the truth and the way – the meaning of life! It’s yams, it’s God, it’s money, it’s saving the world from evil big pHARMa. It’s not like a religious experience, it is a religious experience. It’s how the writer Arthur Koestler felt as a young man in 1931, joining the Communist Party: > > > ‘To say that one had “seen the light” is a poor description of the mental rapture which only the convert knows (regardless of what faith he has been converted to). The new light seems to pour from all directions across the skull; the whole universe falls into pattern, like stray pieces of a jigsaw puzzle assembled by one magic stroke. There is now an answer to every question, doubts and conflicts are a matter of the tortured past – a past already remote, when one lived in dismal ignorance in the tasteless, colourless world of those who don’t know. 
Nothing henceforth can disturb the convert’s inner peace and serenity – except the occasional fear of losing faith again, losing thereby what alone makes life worth living, and falling back into the outer darkness, where there is wailing and gnashing of teeth.’ > > > I hope this helps further explain why I think even solving (some versions of) the alignment problem probably won't be enough to ensure a future that's free from astronomical waste or astronomical suffering. A part of me is actually more scared of many futures in which "alignment is solved" than of a future where biological life is simply wiped out by a paperclip maximizer.
d66c3a0c-0647-414b-97e1-2174cb50446d
trentmkelly/LessWrong-43k
LessWrong
Concepts Don't Work That Way Part of the sequence: Rationality and Philosophy Philosophy in the Flesh, by George Lakoff and Mark Johnson, opens with a bang: > The mind is inherently embodied. Thought is mostly unconscious. Abstract concepts are largely metaphorical. > > These are three major findings of cognitive science. More than two millennia of a priori philosophical speculation about these aspects of reason are over. Because of these discoveries, philosophy can never be the same again. > > When taken together and considered in detail, these three findings... are inconsistent with central parts of... analytic philosophy... > > This book asks: What would happen if we started with these empirical discoveries about the nature of mind and constructed philosophy anew? > > ...A serious appreciation of cognitive science requires us to rethink philosophy from the beginning, in a way that would put it more in touch with the reality of how we think. So what would happen if we dropped all philosophical methods that were developed when we had a Cartesian view of the mind and of reason, and instead invented philosophy anew given what we now know about the physical processes that produce human reasoning? > What emerges is a philosophy close to the bone. A philosophical perspective based on our empirical understanding of the embodiment of mind is a philosophy in the flesh, a philosophy that takes account of what we most basically are and can be. Philosophy is a diseased discipline, but good philosophy can (and must) be done. I'd like to explore how one can do good philosophy, in part by taking cognitive science seriously. CONCEPTUAL ANALYSIS Let me begin with a quick, easy example of how cognitive science can inform our philosophical methodology. The example below shouldn’t surprise anyone who has read A Human’s Guide to Words, but it does illustrate how misguided thousands of philosophical works can be due to an ignorance of cognitive science. Consider what may be the central method of 2
412f9274-3bc7-47d9-b392-8e567fac7a6f
trentmkelly/LessWrong-43k
LessWrong
A critical agential account of free will, causation, and physics This is an account of free choice in a physical universe. It is very much relevant to decision theory and philosophy of science. It is largely metaphysical, in terms of taking certain things to be basically real and examining what can be defined in terms of these things. The starting point of this account is critical and agential. By agential, I mean that the ontology I am using is from the point of view of an agent: a perspective that can, at the very least, receive observations, have cognitions, and take actions. By critical, I mean that this ontology involves uncertain conjectures subject to criticism, such as criticism of being logically incoherent or incompatible with observations. This is very much in a similar spirit to critical rationalism. Close attention will be paid to falsifiability and refutation, principally for ontological purposes, and secondarily for epistemic purposes. Falsification conditions specify the meanings of laws and entities relative to the perspective of some potentially falsifying agent. While the agent may believe in unfalsifiable entities, falsification conditions will serve to precisely pin down that which can be precisely pinned down. I have only seen "agential" used in the philosophical literature in the context of agential realism, a view I do not understand well enough to comment on. I was tempted to use "subjective"; however, while subjects have observations, they do not necessarily have the ability to take actions. Thus I believe "agential" has a more concordant denotation. You'll note that my notion of "agent" already assumes one can take actions. Thus, a kind of free will is taken as metaphysically basic. This presupposition may cause problems later. However, I will try to show that, if careful attention is paid, the obvious problems (such as contradiction with determinism) can be avoided. The perspective in this post can be seen as starting from agency, defining consequences in terms of agency, and defining physics in t
de4d0b66-1ea1-41a3-994a-e899b6d32690
trentmkelly/LessWrong-43k
LessWrong
Fascists and Rakes Cross-posted from my blog It feels like most people have a moral intuition along the lines of "you should let people do what they want, unless they're hurting other people". We follow this guideline, and we expect other people to follow it. I'll call this the permissiveness principle, that behaviour should be permitted by default. When someone violates the permissiveness principle, we might call them a fascist, someone who exercises control for the sake of control. And there's another moral intuition, the harm-minimising principle: "you should not hurt other people unless you have a good reason". When someone violates harm-minimisation, we might call them a rake, someone who acts purely for their own pleasure without regard for others. But sometimes people disagree about what counts as "hurting other people". Maybe one group of people believes that tic-tacs are sentient, and that eating them constitutes harm; and another group believes that tic-tacs are not sentient, so eating them does not hurt anyone. What should happen here is that people try to work out exactly what it is they disagree about and why. What actually happens is that people appeal to permissiveness. Of course, by the permissiveness principle, people should be allowed to believe what they want, because holding a belief is harmless as long as you don't act on it. So we say something like "I have no problem with people being morally opposed to eating tic-tacs, but they shouldn't impose their beliefs on the rest of us." Except that by the harm-minimising principle, those people probably should impose their beliefs on the rest of us. Forbidding you to eat tic-tacs doesn't hurt you much, and it saves the tic-tacs a lot of grief. It's not that they disagree with the permissiveness principle, they just think it doesn't apply. So appealing to the permissiveness principle isn't going to help much. I think the problem (or at least part of it) is, depending how you look at it, either double standards or
161b9233-8500-4c85-a4c1-b78882b419e6
StampyAI/alignment-research-dataset/lesswrong
LessWrong
Problems of people new to AI safety and my project ideas to mitigate them ### TL;DR I talked to people who got interested in AI safety recently to discuss their problems. The interviewees reported that the field is hard to navigate for those who are starting their way into it. It is also hard to advance a career after the initial introduction to AI safety, which is probably a major bottleneck for the field. Also, some people experience anxiety and hopelessness, which affect their mental health and well-being, and this problem, in my opinion, gets less attention than it deserves. Background ---------- It seems like people who are new to AI safety face a number of difficulties, and although there are [some](https://forum.effectivealtruism.org/posts/SuvMZgc4M8FziSvur/analysis-of-ai-safety-surveys-for-field-building-insights) [studies](https://www.lesswrong.com/posts/a8fFLg8qBmq6yv53d/resource-stanford-talk-laying-out-existential-risk-from-ai) exploring members of the AI safety community, I did not find any relatively recent studies that explore these difficulties in depth and cover topics similar to what I am interested in, so I decided to talk to people new to the field and listen to their problems. Methodology ----------- I conducted 14 interviews with people who recently joined the AI safety field. Most of them got interested in AI safety more than 6 months and less than 18 months ago. I had a script for the interviews, but they were semi-structured, and each interview was somewhat different from the others, so the data I collected is more qualitative than quantitative. My goal is to get intuitions about the everyday problems of my interviewees and possible bottlenecks in the field, so I can get ideas for field-building projects. Also, the sample size is too small for a meaningful statistical analysis, so I decided not to focus on counting percentages of certain responses, since they might mislead. Results ------- ### How do people get interested in AI safety? Are there any common patterns in their stories, or do they differ significantly? Several people said they got interested in AI safety by reading Nick Bostrom's "Superintelligence". I did not find any other patterns. ### What projects made by the EA and AI safety community are the most valuable for people who are completely new to the field? Two projects people often mentioned as valuable to them are [80 000 hours career advising](https://80000hours.org/speak-with-us/?int_campaign=2021-08__primary-navigation) and the [AGI safety fundamentals course](https://www.agisafetyfundamentals.com/). 80 000 hours career advising helps people better understand their preferences and abilities, connects them with the right people, and suggests next steps. AGI safety fundamentals is a course covering the basics of AI safety which people may complete by themselves or with a learning group. These projects helped my interviewees, but it seems like in their current state they are hard to scale. Right now they help only a fraction of the people applying for these programs because their resources are limited. ### What do interviewees think about AGI-related existential risks, and how does this affect their mental health? My interviewees' AGI timelines vary a lot, from several years to several decades. Some think that we are basically doomed, and others predict that P(Doom) < 10%. The distribution seems to resemble that of the field in general. The effect of doomerism on mental health also has a large degree of variability. 
Some people do not report any AGI-related anxiety at all, but nearly half of the respondents report some degree of anxiety or depressive symptoms, ranging from mild to strong and debilitating. Also, some people mentioned that they don't have plans long into the future. One of my interviewees said that they only make short-term money investments, and several others mentioned that they are unsure whether they want to have kids if the world will die anyway. ### Do most people struggle to get a position in the AI safety field? Only 2 of my 14 interviewees have a financially stable job in the AI safety field, although many of them applied for numerous fellowships and positions in AI safety organizations. Two of them said they don't know how to earn money by contributing to AI safety, and they only continue doing it because they have enough money to do whatever they want. I recognized 3 patterns of career development among people with no stable positions: * Students who focus on getting a degree now and expect to get a position later. * People who work as independent researchers or contribute to non-profit field-building projects. * People who work in a job that is somewhat related to AI safety. For example, implementing GDPR compliance in a non-AI-safety company, or working in an AI-focused cyber security startup. ### Is the AI safety field hard to navigate for newcomers? Almost all my interviewees mentioned that the AI safety field is hard to navigate, especially for people with backgrounds unusual for the field, like psychology or product management in startups. People mentioned that there are many research topics and many people to follow, and it is usually not structured in a newcomer-friendly way. There are several projects aimed at mitigating this problem. As I mentioned earlier, many of my interviewees said that the AGI safety fundamentals course and 80 000 hours career advising helped them navigate the field and better understand the career paths they want to pursue, but these do not eliminate the problem completely, as the field is very complex and consists of many narrow subfields that are not structured in a newcomer-friendly way and are hard to navigate. Also, both AGI safety fundamentals and 80 000 hours career advising accept only a fraction of the applications they receive since they have limited capacity, so they mitigate the problem but do not solve it completely. A number of the interviewees mentioned that the AI safety community is very helpful and that people are generally willing to help newcomers navigate the field. ### Is there a demand for mentorship or guidance from experienced people? Despite the problems I described earlier, there are way more people in junior positions than the field can absorb. This problem, combined with the field's complexity and its many specialized subfields, makes it hard to navigate cutting-edge research. It's hard to know for sure what to do to get a job and make a meaningful contribution, and there are too few people to steer newcomers in the right direction. For example, one interviewee said they are interested in interpretability research for AI safety, but they are unsure what the best ways to do useful research are, and the only people qualified to help and steer them in the right direction are people doing interpretability research in a small number of organizations like OpenAI or Anthropic. ### Do people feel emotionally isolated due to the lack of connections with people who care about AI safety? 
Almost half of the interviewees said they don't have people around them who are also interested in AI safety. Almost all my interviewees who live outside of the US and the UK mentioned this problem, and a number of them said that if they want to pursue a career in AI safety they will probably have to move to the US or the UK, which is also a barrier. ### How hard is it to keep up with AI safety research, news, and other important information? Most of my interviewees did not mention any problems in keeping up with important information in the AI safety field, but some said that there is a lot of important material scattered all over the place: LessWrong, personal blogs, Twitter, news websites, and so on. These bits of valuable information are usually poorly structured, so it is hard to digest everything and keep up with everything. Study limitations ----------------- ### Sample biases The people I interviewed are mostly members of the "[AI alignment](https://bit.ly/3HiXUrv)" Slack community, and it takes some knowledge of the field to get there, so I suspect that people who got interested in AI safety very recently, or people who have a more casual interest in AI alignment, are underrepresented in my sample. Another possible bias stems from the fact that when I asked people to give me interviews, I stated that I wanted to ask about their problems, so people who don't experience problems in AI safety might have been less inclined to contact me. ### Heterogeneity among respondents Due to the relatively small sample size and heterogeneity among respondents in terms of their location, background, interests, and employment status, I did not find patterns within specific groups of my target audience (e.g. people interested in AI governance, or people from the EU). This might be done with a bigger sample size or more uniformity among respondents. Discussion ---------- One of my interviewees mentioned that they know a couple of talented people who initially got interested in AI safety but, because of the problems I discuss in this post, lost interest and decided to work on other topics. It was sad to hear. The field might have an enormous impact on humanity's future, and these problems make it unfriendly to bright people who want to contribute to it. I believe that the Overton window of public perception of AI-related risks will soon shift, and the AI safety field will experience a big surge of money and talent. Some problems will be much easier to solve with the influx of money, other resources, and new people with new expertise, but I believe that some other problems require action now, before this surge. In the end, I want to discuss my thoughts on what projects I would love to see so we can alleviate critical bottlenecks now and help the field to scale in the future. ### Onboarding into the field for beginners As I showed earlier, it is hard for beginners to understand where to start. There are many subfields in AI safety, and often there are no beginner-friendly introductions to them. Most of my interviewees reported feeling disoriented when they first tried to navigate the field. Although projects like 80 000 hours career advising, AGI safety fundamentals, and others help to mitigate this problem to some degree, they do not resolve it completely, and it also seems to me that they cannot easily be scaled at the moment. This led me to two ideas: a. It's a good idea to support the existing projects which bring the most value and develop them in a way that makes them easier to scale. b. 
Advising could also be done by people like the target audience of my research: not top researchers, but people competent enough to help those who don't know the field at all. In my experience, there are plenty of people who might be ready to do it, and I think that such advising might bring a lot of value. ### Scalable Mentorship As I discussed earlier, one of the major bottlenecks in the AI safety field is that there are far more "junior" people than the industry can absorb. Since the field is very diverse and rapidly evolving, it seems that introductory projects like the AGI safety fundamentals course, with a defined curriculum and learning groups, might work better for absolute beginners, but for more experienced people, mentorship seems to me a natural solution. There are projects focused on solving this problem, for example, [SERI MATS](https://www.serimats.org/), [GovAI](https://www.governance.ai/), [PIBBSS](https://www.pibbss.ai/), and [AI safety camp](https://aisafety.camp/). But to my knowledge, there is fierce competition to get into them, and they accept only a fraction of applicants. In my opinion, this problem is a major bottleneck for the AI safety field in general, and the most important to solve among the ones I discuss here. ### Mental health and doomerism Some of my interviewees mentioned that they feel like humanity might be doomed, and they experience anxiety and a depressive mood because of it. Some people say that they do not make long-term investments, are unsure whether they should bring children into this doomed world, or don't think about long-term health. Also, as I described earlier, some of my interviewees mentioned that they feel lonely: they have nobody to talk to about AI safety in their daily lives. I think these problems are important, and they get less attention than they deserve, both because I would generally prefer people not to suffer and because mentally healthy people are more productive than those who suffer, and the productivity of AI safety researchers is very important for the world. Unfortunately, my current research wasn't meant to focus on this particular topic, so I did not explore it in depth, but as a clinical psychologist I would love to explore it, so if you believe that you are on the doomer side and this affects your mental health or your life choices, feel free to write me a direct message. I will be happy to talk to you and, hopefully, provide some help if you think you need it. Post scriptum ------------- I would be glad to implement my ideas as well as any other good field-building ideas, so feel free to write me direct messages on this topic if you are interested in this kind of stuff.
1603a15b-b32a-48ee-9645-1fee93116f9e
trentmkelly/LessWrong-43k
LessWrong
Unnecessary Boredom Bias Crossposted from my own blog FunctionalNoise Boredom has felt like an enemy to me for a long time. When I am bored it seems like I am wasting my time, being useless and inefficient and meaningless. I hate being bored and I've tried fighting boredom, avoiding boredom and ignoring boredom. But in recent years I've come to appreciate boredom more and more. It's a potential source of ideas. So I'm updating my beliefs on the uselessness of boredom and beginning to accept it as an ally. (If you can't beat them, join them.) When I'm bored and I don't pay attention, here is what happens. I go through roughly these phases: * Take my phone * Read my email * Read social media * Read blogs * Find a book to read * Or find some other passive activity: watch Netflix, gaming, etc. This approach keeps you busy and keeps boredom away, but it also keeps ideas away. The easy access to entertainment might be destroying our collective creativity. How many of us are constantly being distracted by today's well-designed entertainment? Trillions of dollars and euros of value might be lost because we are systematically glued to our screens. If Edison, Einstein and others were born today, would they be gaming addicts? Once I ignore the desire to avoid boredom, I can reach a place of creativity, where my mind starts to wander and think of other things to do: * Reflect on life/work * Think of something meaningful to do * Write a short post * Write (part of) a longer blog post * Work on a side project * Find a new idea These activities are a much better form of entertainment, since they focus on thinking about creation, rather than on consuming information. After having creative thoughts I am more likely to actually go out and create something valuable. On Pressure There is another way to get ideas. That is to apply temporary pressure to yourself or others. See if you can come up with a solution under the stress of the situation. This is more often us
49c1150f-2d3c-4397-bac9-762383a5c117
trentmkelly/LessWrong-43k
LessWrong
so you have a chronic health issue (dump of highish agency ideas to suffer less) Disclaimer: Not medical advice, don’t be stupid. Possibly only applicable to bio autists (gf wants me to impress that unless you know enough about bio you should not do this!!). This post does NOT recommend trying anything illegal. Note: I did all of this and got diagnosed the normal way through the normal system by getting referred to a cardiologist so maybe it’s all useless, idk. Do not discount the potential intuition/wisdom of some specialists!! But also do not entirely trust them, they miss things. Anyway, this is more applicable to dysautonomia type symptoms than what mine ultimately ended up (genetic). START TRACKING YOUR SYMPTOMS Have a signal chat to yourself or similar. Make sure its very low friction and you've impressed on yourself the importance of tracking symptoms Also track: hormonal cycle, biometrics if relevant (apple watch, etc), and anything else that might be relevant Possibly worth using visible health or guava to track heuristics like predicted daily energy, correlations between symptoms. Get your genome sequenced early on. Value of this information tends to be high. i like nebula and nucleus (note: nebula got 2x as expensive since i first wrote this). RNAseq (see imyoo.health, various clinical trials) seems underrated. I haven't tried it, but seems like a good way to track if your interventions are working and along which direction your body is actively dysregulated, and great for determining if meds will work for traditional autoimmune illnesses (see prismRA). You can do more than you think you can- build yourself tools and monitoring devices if you can. make sure you've checked for the certification of any devices you're regarding as "ground truth" "quantitative" measurements. lots of them have massive error bars (like spo2 tends to be v flaky esp in different light conditions). ideally, get several off of amazon and check them against each other then return the most-outliers. Try an e
66a1aaaf-c64c-4969-833e-1fdd04efece0
StampyAI/alignment-research-dataset/arxiv
Arxiv
Spending Thinking Time Wisely: Accelerating MCTS with Virtual Expansions. 1 Introduction --------------- When artificial intelligence was first studied in the 1950s, researchers sought to answer the question "How do we build an agent with perfect rationality?" The term "perfect rationality" (carnap1962logical; newell1982knowledge; russell1994provably) here refers to decisions made with infinite amounts of computation. However, one can only solve small-scale problems if practical computation time is ignored, since classical search algorithms usually exhibit exponential running time. Therefore, recent AI research no longer seeks to achieve "perfect rationality", but instead carefully trades off computation against the level of rationality. Computational models like "bounded optimality" have been developed to capture these settings (russell1994provably). Increasing the level of rationality achievable under the same computational budget has produced many AI successes. Examples include Monte-Carlo sampling algorithms, variational inference algorithms, and the use of DNNs as universal function approximators (coulom2006efficient; chaslot2008monte; gelly2011monte; silver2016mastering; hoffman2013stochastic). Recently, MCTS-based RL algorithms have achieved much success, mainly on board games. The most notable achievement came when AlphaGo beat Fan Hui in 2015 (silver2016mastering), the first time a computer program had beaten a professional human Go player. Afterward, AlphaGo beat two top-ranking human players, Lee Sedol in 2016 and Ke Jie in 2017, the latter of whom ranked first worldwide at the time. Later, MCTS-based RL algorithms were further extended to other board games and Atari games (schrittwieser2020mastering). EfficientZero (ye2021mastering) significantly improves the sample efficiency of MCTS-based RL algorithms, shedding light on future real-world applications such as robotics and self-driving. Despite the impressive performance of MCTS-based RL algorithms, they require massive amounts of computation to train and evaluate. For example, MuZero (schrittwieser2020mastering) used 1000 TPUs for 12 hours to learn the game of Go, and needs 40 TPUs for 12 hours to train a single Atari game. Compared to previous algorithms on the Atari benchmark, it needs around two orders of magnitude more compute. This prohibitively large computational requirement has slowed down both the further development of MCTS-based RL algorithms and their practical use. Under the hood, MCTS-based RL algorithms imagine possible futures under different action sequences. However, this imagining process is not computationally efficient in current methods. For example, AlphaGo needs to look ahead 1600 game states to place a single stone. By contrast, top professional human players can only think through around 100-200 game states per minute (silver2016mastering). Apart from this inefficiency, the current MCTS algorithm deals with easy and challenging cases with the same computational budget, whereas humans know to spend their time where it is most needed. In this paper, we aim to design new algorithms that save the computational time of MCTS-based RL methods. 
We make three key contributions: (1) We present Virtual MCTS, a variant of MCTS, to approximate the vanilla MCTS search policies with less computation. Moreover, unlike previous pruning-based methods that focus on the selection or evaluation stage of MCTS, our method improves the search loop itself: it adaptively terminates the search iterations earlier when the current state is simpler. (2) Theoretically, we provide error bounds for the proposed method. Furthermore, visualization results indicate that Virtual MCTS has a better computation-performance trade-off than vanilla MCTS. (3) Empirically, our method can save more than 50% of search time on the challenging game of Go 9×9 and more than 60% on visually complex Atari games while keeping performance comparable to that of vanilla MCTS. 2 Related Work --------------- ### 2.1 Reinforcement Learning with MCTS For a long time, Computer Go was regarded as a remarkably challenging game (bouzy2001computer; cai2007computer). Researchers attempted to use Monte-Carlo techniques that evaluate the value of a node state through random playouts (bouzy2004monte; gelly2007combining; gelly2008achieving; silver2016mastering). Afterward, UCT algorithms were widely applied in Monte-Carlo tree search (MCTS), using UCB1 to select an action at each node of the tree (kocsis2006bandit). Recently, MCTS-based RL methods (silver2016mastering; silver2017mastering; silver2018general; schrittwieser2020mastering) have become increasingly popular and achieved superhuman performance on board games because of their strong search ability. Modern MCTS-based RL algorithms include four stages in the search loop: selection, expansion, evaluation, and backpropagation. The computation bottlenecks in vanilla MCTS come from the search loop, especially the evaluation and selection stages of each iteration. The selection stage is time-consuming when the search tree becomes wider and deeper. The evaluation stage is quite expensive because earlier work evaluated node values by running random playouts to the end. Due to the search loop, MCTS-based algorithms require many more model inferences than model-free RL methods like PPO (schulman2017proximal) and SAC (haarnoja2018soft). ### 2.2 Acceleration of MCTS MCTS-based methods have proved their strong capability for solving complex games and tasks. However, the high computational cost of MCTS hinders its application to real-time and more general scenarios. Therefore, numerous works have been devoted to accelerating MCTS. For example, to make the selection stage more effective, some heuristic pruning methods (gelly2006modification; wang2007modifications; sephton2014heuristic; baier2014mcts; baier2018mcts) aim to reduce the width and depth of the search tree with heuristic functions. Furthermore, for more efficient evaluations, Lorentz (lorentz2015early) proposed early playout termination of MCTS (MCTS-EPT) to stop the random playouts early and use an evaluation function to assess win or loss. Moreover, Hsueh et al. (hsueh2016analysis) applied MCTS-EPT to Chinese dark chess and demonstrated its effectiveness. 
Afterward, similar ideas were applied in the evaluation stage of AlphaGo Zero (silver2017mastering) and later MCTS-based methods (silver2018general; schrittwieser2020mastering; ye2021mastering), which evaluate Q-values through a learnable evaluation network instead of running playouts to the end. Grill et al. (grill2020monte) propose a novel regularized policy optimization method based on AlphaZero that decreases the search budget of MCTS from the optimization perspective. Danihelka et al. (danihelka2022policy) propose a policy improvement algorithm based on sampling actions without replacement, named the Gumbel trick, to achieve better performance when planning with few simulations. However, the methods mentioned above focus on specific stages of the search iteration or reduce the total budget through pruning and optimization, which is orthogonal to our approach, and few works target the search loop itself. Lan et al. (lan2020learning) propose DS-MCTS, which defines the uncertainty of MCTS and approximates it with extra DNNs using game-specific features during training. During evaluation, DS-MCTS periodically checks this uncertainty and stops the search if the state is certain. 3 Background ------------- The AlphaGo series of work (silver2016mastering; silver2017mastering; silver2018general; schrittwieser2020mastering) are all MCTS-based reinforcement learning algorithms. These algorithms either assume the environment transition dynamics are known or learn the dynamics. Based on the dynamics, they use Monte-Carlo tree search (MCTS) as the policy improvement operator: taking in the current policy, MCTS returns a better policy via search. The systematic search allows MCTS-based RL algorithms to quickly improve the policy and perform much better in settings where heavy reasoning is required. ### 3.1 MCTS This part briefly introduces the MCTS method as implemented in reinforcement learning applications. As mentioned in the related work, modern MCTS-based RL algorithms include four stages in the search loop, namely selection, expansion, evaluation, and backpropagation. MCTS takes in the current state and generates a policy after a search loop of N iterations. Here N is a constant number of iterations set by the designer, regarded as the total budget. In the selection stage of each iteration, an action is selected by maximizing the UCB score. Specifically, AlphaZero (silver2018general) and MuZero (schrittwieser2020mastering) are built on a variant of UCB, P-UCT (rosin2011multi), and have achieved great success on board games and Atari games. 
The formula of P-UCT is Eq. (1):

$$a^k = \arg\max_{a\in\mathcal{A}} \left[ Q(s,a) + P(s,a)\,\frac{\sqrt{\sum_{b\in\mathcal{A}} N(s,b)}}{1+N(s,a)} \left( c_1 + \log\!\left( \frac{\sum_{b\in\mathcal{A}} N(s,b) + c_2 + 1}{c_2} \right) \right) \right], \tag{1}$$

where $k$ is the index of the iteration, $\mathcal{A}$ is the action set, $Q(s,a)$ is the estimated Q-value, $P(s,a)$ is the policy prior obtained from neural networks, $N(s,a)$ is the number of visits in which action $a$ was selected from state $s$, and $c_1, c_2$ are hyper-parameters. The output of MCTS is the visitation count of each action at the root node. After $N$ search iterations, the final policy $\pi(s)$ is defined as the normalized root visitation distribution $\pi_N(s)$, where $\pi_k(s,a) = N(s,a)/\sum_{b\in\mathcal{A}} N(s,b) = N(s,a)/k$ for $a \in \mathcal{A}$. For simplification, we sometimes write $\pi_k$ in place of $\pi_k(s)$. The detailed procedure of MCTS is introduced in the Appendix. In our method, we propose to approximate the final policy $\pi_N(s)$ with $\hat{\pi}_k(s)$, which we name a virtual expanded policy, obtained through a new expansion method and a termination rule. In this way, the number of iterations in MCTS can be reduced from $N$ to $k$.
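To make Eq. (1) concrete, here is a minimal Python sketch of P-UCT action selection at a single node. The dictionary-based node statistics and the default constants `c1 = 1.25` and `c2 = 19652` (values commonly reported for MuZero-style agents) are illustrative assumptions, not the authors' implementation.

```python
import math

def puct_select(actions, Q, P, N, c1=1.25, c2=19652.0):
    """Pick the action maximizing the P-UCT score of Eq. (1).

    Q, P, N map each action to its estimated value Q(s, a), policy
    prior P(s, a), and visit count N(s, a) at this node; c1 and c2
    are exploration constants (assumed values, see lead-in).
    """
    total_visits = sum(N[a] for a in actions)

    def score(a):
        exploration = (P[a] * math.sqrt(total_visits) / (1 + N[a])
                       * (c1 + math.log((total_visits + c2 + 1) / c2)))
        return Q[a] + exploration

    return max(actions, key=score)

# Tiny usage example with three candidate actions at one node.
acts = ["a0", "a1", "a2"]
Q = {"a0": 0.50, "a1": 0.60, "a2": 0.40}
P = {"a0": 0.20, "a1": 0.30, "a2": 0.50}
N = {"a0": 10, "a1": 5, "a2": 1}
print(puct_select(acts, Q, P, N))  # favors under-visited, high-prior arms
```

The second term shrinks as an arm's visit count grows, which is what later drives the visit distribution toward the highest-value actions.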
### 3.2 Computation Requirement Most of the computation in MCTS-based RL is in the MCTS procedure. Each action taken by MCTS requires $N$ neural network evaluations, where $N$ is the constant number of iterations in the search loop. Traditional RL algorithms, such as PPO (schulman2017proximal) or DQN (mnih2015human), only need a single neural network evaluation per action. Thus, MCTS-based RL is roughly $N$ times more computationally expensive than traditional RL algorithms. In practice, training a single Atari game needs 12 hours of computation time on 40 TPUs (schrittwieser2020mastering). The computation requirement is roughly two orders of magnitude more than that of traditional RL algorithms (schulman2017proximal), although the final performance of MuZero is much better.

4 Method --------- We aim to spend more search time on harder states and less on easier states. Intuitively, humans know when to make a quick decision and when to make a slow one under different circumstances. Unfortunately, this situation-aware behavior is absent in current MCTS algorithms. Therefore, we propose an MCTS variant that terminates the search iteration adaptively. It consists of two components: a novel expansion method named virtual expansion, which estimates the final visitation distribution based on the current partial tree; and a termination rule that decides when to terminate based on the hardness of the current scenario. We display this adaptive mechanism through visualizations in Section 5.5.

### 4.1 Termination Rule We propose to terminate the search loop early based on the current tree statistics. Intuitively, we no longer need to search further if we find that recent searches have had little effect on the root visitation distribution. With this intuition in mind, we propose a simple modification to the MCTS search algorithm. As mentioned in Section 3.1, $\pi_k(s)$ is the policy defined by the visitations of the root state at iteration $k$. Let $\Delta_s(i,j)$ be the L1 difference between $\pi_i(s)$ and $\pi_j(s)$, namely $\Delta_s(i,j) = \|\pi_i(s) - \pi_j(s)\|_1$.
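As a concrete illustration of this distance, the following minimal Python sketch computes $\Delta_s(i,j)$ from two snapshots of the root visit counts; representing the snapshots as dictionaries mapping actions to counts is an assumption made for the example, not part of the paper.

```python
def visit_policy(visits):
    """Normalize root visit counts N(s, a) into a policy pi(s)."""
    total = sum(visits.values())
    return {a: n / total for a, n in visits.items()}

def l1_delta(visits_i, visits_j):
    """Delta_s(i, j) = || pi_i(s) - pi_j(s) ||_1 over a shared action set."""
    pi_i, pi_j = visit_policy(visits_i), visit_policy(visits_j)
    return sum(abs(pi_i[a] - pi_j[a]) for a in pi_i)

# Root snapshots after i = 8 and j = 16 simulations of the same search.
print(l1_delta({"a0": 4, "a1": 3, "a2": 1},
               {"a0": 9, "a1": 5, "a2": 2}))  # prints 0.125
```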
Then we terminate the search loop once we have searched at least $rN$ iterations and $\Delta_s(k, k/2) < \epsilon$, where $\epsilon$ is a tolerance hyper-parameter, $r \in (0,1)$ is the ratio defining the minimum search budget, and $N$ is the number of full search iterations. We show that under certain conditions, a bound on $\Delta_s(k, k/2)$ implies a bound on $\Delta_s(k, N)$. $\Delta_s(k, N)$ measures the distance between the current policy $\pi_k(s)$ and the oracle policy $\pi_N(s)$. In this way, $\Delta_s(k, k/2)$ reflects the hardness of the state $s$. Consequently, once the gap is small enough, further search iterations are unnecessary.

### 4.2 Virtual Expansion in MCTS

Algorithm 1: Iteration of vanilla MCTS
1: Current $k$-th iteration step.
2: Input: $\mathcal{A}, P, Q_k(s,a), N_k(s,a)$
3: Initialize: $s \leftarrow s_{\text{root}}$
4: repeat  // do search
5:   $a^* \leftarrow \mathrm{UCB1}(Q, P, N)$
6:   $s \leftarrow \text{next state}(s, a^*)$
7: until $N_k(s, a^*) = 0$
8: Evaluate the state value $R(s,a)$ and $P(s,a)$
9: for $s$ along the search path do
10:  $Q_{k+1}(s,a) = \dfrac{N_k(s,a)\cdot Q_k(s,a) + R(s,a)}{N_k(s,a) + 1}$
11:  $N_{k+1}(s,a) = N_k(s,a) + 1$
12: end for
13: Return $Q_{k+1}(s,a), N_{k+1}(s,a)$

Algorithm 2: Iteration of MCTS with Virtual Expansion
1: Current $k$-th iteration step.
2: Input: $\mathcal{A}, P, Q_k(s,a), N_k(s,a), \hat{N}_k(s,a)$
4: if $\hat{N}_k(s,a)$ is not initialized then
5:   Init: $\hat{N}_k(s,a) \leftarrow N_k(s,a)$
6: end if
8: $s \leftarrow s_{\text{root}}$
9: $a^* \leftarrow \mathrm{UCB1}(Q, P, \hat{N})$
10: $\hat{N}_k(s,a) \leftarrow \hat{N}_k(s,a) + 1$
12: Return $\hat{N}_k(s,a)$

For the termination rule $\Delta_s(k, k/2) < \epsilon$, we assumed that $\pi_i$ and $\pi_j$ are directly comparable. However, they are not, because the tree is expanded with UCT: as the number of visits increases, the upper bound becomes tighter, and later visits are more focused on the promising parts of the tree. Thus earlier visitation distributions (smaller iteration number) exhibit a more exploratory distribution, while later ones (larger iteration number) are more exploitative of promising parts. To compare $\pi_i$ and $\pi_j$ properly, we propose a method called virtual expansion in place of vanilla expansion. Briefly, it aligns the two distributions by performing virtual UCT expansions up to the constant budget $N$. When the tree has been expanded at iteration $k$, it has $N - k$ iterations to go. 
A normal expansion would require evaluating the neural network $N - k$ more times for a more accurate $Q(s,a)$ estimate for each arm at the root node. Our proposed virtual expansion still expands $N - k$ times according to UCT, but it skips the $N - k$ neural network evaluations and assumes that each arm's $Q(s,a)$ does not change. We denote the virtual expanded distribution obtained from $\pi_i$ as the virtual expanded policy $\hat{\pi}_i$. By doing virtual expansions on both $\pi_i$ and $\pi_j$, we obtain the corresponding virtual expanded policies $\hat{\pi}_i, \hat{\pi}_j$. This effectively removes the different levels of exploration/exploitation in the two policies. The termination condition then becomes the difference between the virtual expanded policies. We name this rule the VET-Rule (Virtual Expanded Termination Rule):

$$\hat{\Delta}_s(k, k/2) = \|\hat{\pi}_k(s) - \hat{\pi}_{k/2}(s)\|_1 < \epsilon. \tag{2}$$

The comparison between vanilla expansion and virtual expansion is illustrated in Algorithms 1 and 2. The time-consuming computations appear in Algorithm 1: lines 4 to 7 search with UCT to reach an unvisited state for exploration; the algorithm then evaluates that state and backpropagates along the search path to better estimate the Q-values. After $N$ total iterations, the visitation distribution of the root node, $\pi_N(s)$, is taken as the final policy $\pi(s)$. In virtual expansion, listed in Algorithm 2, the search instead takes only one step from the root node.
And it selects actions based on the current estimations without changing any properties of the search tree. Furthermore, the virtual visited counts N^k(s,a)subscript^𝑁𝑘𝑠𝑎\hat{N}\_{k}(s,a)over^ start\_ARG italic\_N end\_ARG start\_POSTSUBSCRIPT italic\_k end\_POSTSUBSCRIPT ( italic\_s , italic\_a ) are changed after virtual visits to balance the exploitation and the exploration issue. After N−k𝑁𝑘N-kitalic\_N - italic\_k times virtual expansion, the virtual expanded policy becomes π^k(s,a)=N^k(s,a)/Nsubscript^𝜋𝑘𝑠𝑎subscript^𝑁𝑘𝑠𝑎𝑁\hat{\pi}\_{k}(s,a)=\hat{N}\_{k}(s,a)/Nover^ start\_ARG italic\_π end\_ARG start\_POSTSUBSCRIPT italic\_k end\_POSTSUBSCRIPT ( italic\_s , italic\_a ) = over^ start\_ARG italic\_N end\_ARG start\_POSTSUBSCRIPT italic\_k end\_POSTSUBSCRIPT ( italic\_s , italic\_a ) / italic\_N instead of Nk(s,a)/ksubscript𝑁𝑘𝑠𝑎𝑘N\_{k}(s,a)/kitalic\_N start\_POSTSUBSCRIPT italic\_k end\_POSTSUBSCRIPT ( italic\_s , italic\_a ) / italic\_k. When k=N𝑘𝑁k=Nitalic\_k = italic\_N, further searches after the root have no effects on the final policy. So π^N(s,a)=πN(s,a)subscript^𝜋𝑁𝑠𝑎subscript𝜋𝑁𝑠𝑎\hat{\pi}\_{N}(s,a)=\pi\_{N}(s,a)over^ start\_ARG italic\_π end\_ARG start\_POSTSUBSCRIPT italic\_N end\_POSTSUBSCRIPT ( italic\_s , italic\_a ) = italic\_π start\_POSTSUBSCRIPT italic\_N end\_POSTSUBSCRIPT ( italic\_s , italic\_a ). ### 4.3 V-MCTS Algorithm Algorithm 3 Virtual MCTS 1:Input: budget N𝑁Nitalic\_N, state s𝑠sitalic\_s, conservativeness r𝑟ritalic\_r, error ϵitalic-ϵ\epsilonitalic\_ϵ 2:Notice: πk(s)subscript𝜋𝑘𝑠\pi\_{k}(s)italic\_π start\_POSTSUBSCRIPT italic\_k end\_POSTSUBSCRIPT ( italic\_s ), π^k(s)subscript^𝜋𝑘𝑠\hat{\pi}\_{k}(s)over^ start\_ARG italic\_π end\_ARG start\_POSTSUBSCRIPT italic\_k end\_POSTSUBSCRIPT ( italic\_s ) are policy distributions. 3:Notice: πk(s,a)subscript𝜋𝑘𝑠𝑎\pi\_{k}(s,a)italic\_π start\_POSTSUBSCRIPT italic\_k end\_POSTSUBSCRIPT ( italic\_s , italic\_a ), π^k(s,a)subscript^𝜋𝑘𝑠𝑎\hat{\pi}\_{k}(s,a)over^ start\_ARG italic\_π end\_ARG start\_POSTSUBSCRIPT italic\_k end\_POSTSUBSCRIPT ( italic\_s , italic\_a ) are probabilities for action a𝑎aitalic\_a. 
### 4.3 V-MCTS Algorithm

Algorithm 3: Virtual MCTS

1: Input: budget $N$, state $s$, conservativeness $r$, error $\epsilon$
2: Notice: $\pi_k(s)$, $\hat{\pi}_k(s)$ are policy distributions.
3: Notice: $\pi_k(s,a)$, $\hat{\pi}_k(s,a)$ are probabilities for action $a$.
4: for $k = 1, \dots, N$ do
5:     Selection with UCB1
6:     Expansion for the new node
7:     Evaluation with neural networks
8:     Backpropagation to update $Q$-values and visit counts
9:     $\pi_k(s,a) \leftarrow N_k(s,a)/k$
10:    Virtually expand $N-k$ nodes and update $\hat{N}(s,a)$
11:    $\hat{\pi}_k(s,a) \leftarrow \hat{N}_k(s,a)/N$
12:    if $k \geq rN \;\land\; \|\hat{\pi}_k(s)-\hat{\pi}_{k/2}(s)\|_1 < \epsilon$ then
13:        $\pi(s) \leftarrow \hat{\pi}_k(s)$
14:        Break
15:    end if
16:    $\pi(s) \leftarrow \pi_k(s)$
17: end for
18: Return $\pi(s)$

The procedure of MCTS with the VET-Rule is listed in Algorithm 3. We name our method Virtual MCTS (V-MCTS), a variant of MCTS with the VET-Rule. Compared with the original MCTS, lines 8-13 implement the rule. In each iteration, we perform a few inexpensive calculations to judge whether the VET-Rule is satisfied. If it is, the search terminates and returns the current virtual expanded policy $\hat{\pi}_k(s)$, skipping the remaining $N-k$ neural-network predictions in the evaluation stage highlighted in line 7. In this way, we approximate the oracle distribution $\pi_N$ by $\hat{\pi}_k$ while reducing the budget from $N$ simulations to $k$. Here $k \geq rN$, where $r$ is a hyperparameter fixing the minimum budget $rN$; the tree size can thus be reduced by a factor of $1/r$ at most.
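As a complement to the pseudo-code, here is a hedged Python sketch of the outer V-MCTS loop. `one_mcts_simulation` and `root.stats()` are hypothetical placeholders for lines 5-8 of Algorithm 3 and for reading off $(Q_k, N_k, P)$ at the root; `virtual_expand` is the sketch above.

```python
import numpy as np

def v_mcts(root, N, r=0.2, eps=0.1):
    """V-MCTS (sketch): vanilla expansions plus the VET-Rule early stop."""
    pi_hat = {}                                   # k -> virtual expanded policy
    for k in range(1, N + 1):
        one_mcts_simulation(root)                 # selection/expansion/evaluation/backprop
        q, visits, prior = root.stats()           # hypothetical accessor for Q_k, N_k, P
        pi_hat[k] = virtual_expand(q, visits, prior, N)
        # VET-Rule: after the minimum budget rN, terminate once the virtual
        # expanded policy has stabilized: ||pi_hat_k - pi_hat_{k/2}||_1 < eps.
        # (Integer division approximates k/2 for odd k in this sketch.)
        if k >= r * N and np.abs(pi_hat[k] - pi_hat[k // 2]).sum() < eps:
            return pi_hat[k]                      # early termination at budget k < N
    return visits / N                             # full budget: pi_N(s)
```

Note that the returned distribution is normalized by the full budget $N$ either way, so early and full terminations produce directly comparable policies.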
### 4.4 Theoretical Justifications

We now give theoretical bounds on the $Q$-values and on $\hat{\Delta}_s(k,N)$ for V-MCTS. First, some notation: $k$ is the index of the current search iteration and $N$ is the total number of search iterations. $\mathcal{A}$ is the action set, with $|\mathcal{A}|>1$. Each action $a\in\mathcal{A}$ is associated with a value, a random variable bounded in $[0,1]$ with expectation $Q_a$. At step $k$, the empirical mean value over the $k$ trials is $Q_k(s,a)$, which we abbreviate as $\bar{Q}_a^k$; the empirical mean value after $N-k$ virtual expansions is denoted $\hat{Q}_a^N$. Since we only deal with the visitation distribution of the root, we omit the state subscript for the root state. For convenience, we assume the actions are ordered by their expected values, i.e., $1 \geq Q_1 \geq Q_2 \geq \cdots \geq Q_a \geq \cdots \geq Q_{|\mathcal{A}|} \geq 0$.

**Theorem 4.1.** Given $r\in(0,1)$, confidence $\delta\in(0,1)$, and a finite action set $\mathcal{A}$:
$\exists N_0>0$ such that $\forall N>N_0$ and $k\geq rN$, letting $\epsilon_k=\sqrt{\frac{1}{2k}\ln\frac{100k^2}{\delta}}$, after $k$ vanilla expansions and $N-k$ virtual expansions we have:

(a) Value consistency in virtual expansion: $\Pr\{\bigcap_{a\in\mathcal{A}}|\hat{Q}_a^N-Q_a|<\epsilon_k\} > 1-\frac{e\delta|\mathcal{A}|}{50r^2N^2}$;

(b) Best-action identification in virtual expansion: $\Pr\{|\hat{Q}_*^N-\bar{Q}_1^N|<\epsilon_k+\epsilon_N\} > 1-2\left(\frac{\delta}{50k^2}\exp(\frac{1}{1.61\sqrt{k}})+\frac{\delta}{50N^2}\exp(\frac{1}{N})\right)$,

where $*:=\arg\max_{a\in\mathcal{A}}\bar{Q}_a^k$ and $e$ is Euler's number.
Theorem 4.1 (a) bounds the distance between the empirical mean values after virtual expansion and the expected values. Notice that $\lim_{N\to\infty}\epsilon_{rN}=0$ and $\lim_{N\to\infty}\frac{e\delta|\mathcal{A}|}{50r^2N^2}=0$: after enough trials, the expected $Q$-value of every action is well estimated by the corresponding empirical $Q$-value after virtual expansion. Furthermore, once the $Q$-values have converged, virtual expansion has the same effect as vanilla expansion. Denote the best empirical action by $*$; the best expected action is $1$ because $Q_1\geq Q_a$. Theorem 4.1 (b) states that, with high probability, the $Q$-value of the best empirical action under virtual expansion is close to the $Q$-value of the best expected action under vanilla expansion. Intuitively, whether or not we successfully identify the best expected action, the best empirical action has a similar effect. Note that $N_0$ must exceed the size of the action space, since otherwise the conditions of the theorem cannot be satisfied. The proof is given in the Appendix.

**Theorem 4.2** (Error bound of V-MCTS). Given $r\in(0,1)$, confidence $\delta\in(0,1)$, and a finite action set $\mathcal{A}$:
suppose the virtual expanded policy $\hat{\pi}_k$ is generated by Algorithm 3 (V-MCTS). Then $\exists N_0>0$ such that $\forall N>N_0$, $k\geq rN$, and $\forall\epsilon\in(0,1]$: if $\|\hat{\pi}_k(s)-\hat{\pi}_{k/2}(s)\|_1<\epsilon$, then

$$\Pr\{\|\pi_N(s)-\hat{\pi}_k(s)\|_1<3\epsilon\} > 1-\frac{e\delta|\mathcal{A}|}{50N^2}\left(1+\frac{4}{r^2}\right),$$

where $e$ is Euler's number.

Theorem 4.2 says that a bound on $\hat{\Delta}_s(k,k/2)$ implies a bound on $\hat{\Delta}_s(k,N)$ with high probability. Notice that $\lim_{N\to\infty}\frac{e\delta|\mathcal{A}|}{50N^2}(1+\frac{4}{r^2})=0$; therefore the oracle policy $\pi_N(s)$ can be approximated by $\hat{\pi}_k(s)$ after enough trials. The proof of this theorem is given in the Appendix.
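To get a sense of scale for the confidence radius $\epsilon_k=\sqrt{\frac{1}{2k}\ln\frac{100k^2}{\delta}}$ appearing in both theorems, one can evaluate it numerically (our own back-of-the-envelope illustration, not a result from the paper):

```python
import math

def eps_k(k, delta=0.05):
    # Confidence radius from Theorem 4.1: sqrt(ln(100 k^2 / delta) / (2 k)).
    return math.sqrt(math.log(100 * k * k / delta) / (2 * k))

for k in (30, 150, 800):
    print(k, round(eps_k(k), 3))   # 30 -> ~0.49, 150 -> ~0.24, 800 -> ~0.11
```

The radius decays roughly like $\sqrt{\ln k / k}$, so the value estimates behind the virtual expanded policy tighten quickly once $k$ reaches the budgets used in the experiments below.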
Given the minimum distance $\epsilon$, the rule $\hat{\Delta}_s(k,k/2)<\epsilon$ is easier to satisfy for easier states, because there the $Q$-values of the tree nodes stay within a small range even as the search continues. The virtual expanded policy generated by V-MCTS is then close to the oracle policy, and the search loop terminates earlier. In the next section, we run ablations to investigate the effects of the hyper-parameters and present visualizations that verify this adaptive behavior.

5 Experiments
-------------

The goal of the experiments is to demonstrate the effectiveness and efficiency of V-MCTS. First, we compare the performance and cost of vanilla MCTS and our method on the board game Go 9×9 and a few Atari games. We then run ablations to examine the effectiveness of virtual expansion and the sensitivity of the hyper-parameters. Finally, we illustrate the adaptive mechanism through visualizations and performance analysis.

### 5.1 Setup

**Models and Environments.** Recently, Ye et al. [34] proposed EfficientZero, a variant of MuZero [27] with three extra components that improve sample efficiency; it requires only 8 GPUs for training and is thus more affordable. We choose the board game Go 9×9 and a few Atari games as our benchmark environments: Go tests how the algorithm performs on a challenging planning problem, while Atari games feature visual complexity.

**Hyper-parameters.** For Go 9×9 we adopt Tromp-Taylor rules. The Go environment is built on an open-source codebase, GymGo [19]. We evaluate the agent against GNU Go v3.8 at level 10 [5] over 200 games, 100 as the black player and 100 as the white player, with different seeds, and we set the komi to 6.5, as most papers do. For Atari, we choose 5 games with 100k environment steps. In each setting, we use 3 training seeds and 100 evaluation seeds per trained model. More details are given in the Appendix.

**Baselines.** We compare our method to EfficientZero with vanilla MCTS on Go 9×9 and some Atari games. DS-MCTS [21] also terminates MCTS adaptively, using trained uncertainty networks, but it requires features designed specifically for Go and only works at evaluation time; we therefore also compare final performance on Go against DS-MCTS.

### 5.2 Results on Go

Figure 1(a) illustrates the computation/performance trade-off on Go against the same GnuGo (level 10) opponent. The x-axis is the training speed and the y-axis is the winning rate, so a curve toward the top-left achieves a better trade-off than one toward the bottom-right. The blue points are the baseline trained with constant budgets $N$; we also train V-MCTS with hyperparameters $r=0.2$, $\epsilon=0.1$.
We evaluate the trained model with different $\epsilon$ values to trace the computation/performance trade-off (red points). The green points are GnuGo at different levels: the engine provides models of levels 1-10, each a trade-off between run time and strength, and with the y-axis again the winning rate against the level-10 model, the green curve shows GnuGo's own trade-off.

Figure 1: Performance of Virtual MCTS on Go 9×9 against GnuGo (level 10). (a) Evaluations of performance: speed and winning rate of MCTS, V-MCTS, and GnuGo at different levels; the x-axis is frames per second on a ln scale, and "eps" denotes the hyperparameter $\epsilon$. V-MCTS has a better computation/performance trade-off. (b) Winning rates and tree size during training: solid lines show the winning rate and dashed lines the average tree size; the red curve is V-MCTS and the others are vanilla MCTS with different $N$. V-MCTS adapts the tree size during training and reduces the search cost while performing comparably to vanilla MCTS ($N=150$), whereas simply reducing $N$ in vanilla MCTS causes a larger performance drop.

First, for all methods, more search iterations (larger $N$ or smaller $\epsilon$) lead to higher winning rates but longer response times. Second, V-MCTS ($\epsilon=0.1$) achieves a 71% winning rate against GnuGo level 10, close to the 75% of MCTS ($N=150$), while its time cost for a one-step move is 0.12s versus 0.18s for the GnuGo engine and more than 0.2s for MCTS ($N=150$). Such a termination rule therefore retains strong performance at a smaller budget; a more detailed breakdown of the time consumption of V-MCTS is given in the Appendix. Finally, the red dashed line lies to the right of the blue one, indicating that V-MCTS beats vanilla MCTS on the computation/performance trade-off.

Figure 1(b) shows how the winning rate and the average tree size evolve during training. As the red dashed line shows, the tree size of V-MCTS varies over training and stays below the maximum ($N=150$) while the winning rate remains comparable to MCTS ($N=150$). Consequently, V-MCTS works well not only in evaluation but also in training.

Table 1: Results for Go 9×9: comparison of the winning rate and the average budget over 200 games for 3 separate training runs.
| | MCTS ($N=150$) | DS-MCTS | V-MCTS |
| --- | --- | --- | --- |
| Average budget | 150 ± 0.0 | 97 ± 12.5 | 76 ± 10.8 |
| Winning rate | 75% ± 3.0% | 60% ± 4.0% | 71% ± 4.7% |

Table 2: Results for Atari games: scores over 100 evaluation seeds for 3 separate training runs. $k$ is the average budget of V-MCTS; MCTS ($N=50$) is the oracle. The best results among the non-oracle versions are in bold. V-MCTS achieves a better performance/computation trade-off.

| MCTS | $N=50$ | $N=30$ | $N=10$ | Ours (V-MCTS) | Budget $k$ |
| --- | --- | --- | --- | --- | --- |
| Pong | 19.7 ± 1.6 | 12.5 ± 5.5 | 2.0 ± 1.3 | **18.8 ± 2.8** | 13.3 ± 0.6 |
| Breakout | 410.7 ± 15.1 | 370.9 ± 34.1 | 303.9 ± 11.3 | **372.8 ± 18.3** | 15.7 ± 0.6 |
| Seaquest | 1159.9 ± 90.7 | 775.2 ± 146.8 | 555.4 ± 66.9 | **970.0 ± 339.5** | 14.3 ± 1.2 |
| Hero | 9992.1 ± 2059.4 | **9241.3 ± 3615.3** | 4437.0 ± 2490.6 | 8928.1 ± 2922.1 | 15.0 ± 1.0 |
| Qbert | 14495.8 ± 683.9 | 10429.9 ± 2291.1 | 8149.8 ± 2085.0 | **11476.6 ± 978.2** | 16.3 ± 1.2 |

We also compare our method to DS-MCTS [21], which terminates the search when the state is predicted to be certain by a DNN. For a fair comparison, we implement DS-MCTS following their feature design for Go, with $N_{max}=150$, $c=\{30,75,120\}$, $thr=\{.1,.1,.1\}$. We then compare the winning rate and the average budget of vanilla MCTS, DS-MCTS, and V-MCTS: as Table 1 shows, V-MCTS outperforms DS-MCTS in both respects. We attribute the better performance to the virtual expanded policy: after terminating the search, DS-MCTS outputs $\pi_k(s)$, whereas V-MCTS outputs $\hat{\pi}_k(s)$, which is theoretically guaranteed to approximate $\pi_N(s)$.

### 5.3 Results on Atari

Beyond Go, we also evaluate our method on visually complex games. Since the search space of Atari games is much smaller than Go's and the games are easier, we choose a few Atari games to study how the proposed method affects performance. We follow the EfficientZero setting, the 100k Atari benchmark, which contains only 400k frames of data.
The results are shown in Table 2. Overall, our method works on Atari games: the tree size is adaptive, and the performance of V-MCTS remains comparable to MCTS with the full search budget. It outperforms MCTS ($N=30$) while requiring fewer searches, demonstrating the effectiveness and efficiency of the proposed method. The Hero game is an outlier, but even there our performance is very close to vanilla MCTS ($N=30$) while using half the search iterations on average. Moreover, the number of searches decreases more than on Go. To sum up, V-MCTS maintains comparable performance under fewer search iterations, whereas simply shrinking the total MCTS budget causes a larger performance drop, and the saving in search cost is more substantial in easier environments.

### 5.4 Ablation Study

The results of the previous section suggest that our method reduces the response time of MCTS while maintaining comparable performance on challenging tasks. This section investigates which components contribute to the performance and how the hyperparameters affect it. We also ablate the normalization criterion in the VET-Rule and larger budgets $N$ in MCTS.

**Virtual Expansion.** Section 4.2 introduced virtual expansion. To demonstrate its effectiveness, we compare it with two baseline expansion methods. One is vanilla expansion (Algorithm 1), which simply returns at iteration $k$ and outputs $\pi_k$. The other is greedy expansion, which spends the remaining $N-k$ simulations greedily on the current best action, so that $\hat{\pi}_k(s,a)=(N_k(s,a)+(N-k)\mathbf{1}_{a=\arg\max_b N_k(s,b)})/N$; a code sketch follows below. Concretely, we stop the search after $k=30$ iterations and perform $N-k$ virtual expansions, greedy expansions, or nothing, where $k=rN$ with $r=0.2$ and $N=150$.
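Unlike the UCT-driven loop of virtual expansion, the greedy baseline has a one-line closed form; a minimal sketch (hypothetical code, mirroring the formula above):

```python
import numpy as np

def greedy_expand(visits, N):
    """Greedy expansion baseline: dump all remaining N - k simulations onto
    the currently most-visited arm, i.e.
    pi_hat_k(s, a) = (N_k(s, a) + (N - k) * 1[a = argmax_b N_k(s, b)]) / N."""
    n_hat = visits.astype(float)
    n_hat[np.argmax(n_hat)] += N - visits.sum()
    return n_hat / N
```

Because every leftover simulation lands on a single arm, the output distribution over-exploits; Table 3 below shows how badly this hurts.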
Table 3: Ablation results of different expansion methods on Go 9×9 for 3 separate training runs.

| Algorithm | Avg. tree size | Winning rate |
| --- | --- | --- |
| Vanilla expansion | 30 | 17% ± 3.2% |
| Greedy expansion | 30 | 3% ± 2.0% |
| Virtual expansion | 30 | 32% ± 3.5% |

Figure 2: Sensitivity of the VET-Rule to the hyperparameters $r$ and $\epsilon$ on Go 9×9. (a) Evaluations of performance; (b) winning rates and tree size during the training stage. Solid lines show the winning probability and dashed lines the average tree size.

We compare the winning rate against the same engine; the results are listed in Table 3. Virtual expansion reaches a 32% winning rate, much better than the alternatives, while MCTS with greedy expansion fails because it over-exploits and suffers severe exploration issues. Virtual expansion thus generates a better policy distribution because it balances exploration and exploitation with UCT.

**Termination Rule.** It is important to find a termination rule that maintains sound performance while shrinking the tree as much as possible. As mentioned in Section 4.1, the VET-Rule has two hyperparameters: $r$, the factor of the minimum budget $rN$, and $\epsilon$, the threshold on the distance $\hat{\Delta}_s(k,k/2)$. To find a VET-Rule with a better computation/performance trade-off, we ablate different values of $r$ and $\epsilon$, with defaults $r=0.2$ and $\epsilon=0.1$. Figure 2 compares the winning rate and the average tree size across the training stage. Figure 2(a) gives the results for different minimum-budget factors $r$: the winning probability is insensitive to $r$ when $r\geq 0.2$, whereas the average tree size is sensitive to it, since V-MCTS must search at least $rN$ times. There is also a performance drop between $r=0.1$ and $r=0.2$, so $r=0.2$ is a reasonable balance of speed and performance. The comparison across thresholds $\epsilon$ is shown in Figure 2(b).
A larger $\epsilon$ makes the tree smaller, because $\hat{\Delta}_s(k,k/2)<\epsilon$ becomes easier to satisfy. In practice, performance is highly correlated with $\epsilon$: a smaller $\epsilon$ wins more often than a larger one, but the better performance comes at the cost of more computation. We suggest choosing a threshold that balances computation and performance ($r=0.2$, $\epsilon=0.1$).

**Normalization Criterion in the VET-Rule.** The proposed VET-Rule, $\|\hat{\pi}_k(s)-\hat{\pi}_{k/2}(s)\|<\epsilon$, is the termination condition of V-MCTS. The L2 norm is another reasonable choice, as it amplifies the larger deviations. We therefore ablate the normalization criterion for the policy distributions: taking a pretrained model, we compare the L1-norm and L2-norm rules, namely $\|\hat{\pi}_k(s)-\hat{\pi}_{k/2}(s)\|_1<\epsilon$ and $\|\hat{\pi}_k(s)-\hat{\pi}_{k/2}(s)\|_2<\epsilon$. The results are shown in Table 4. We find that (1) the L2 norm also works for V-MCTS, and (2) the L1 norm is better. We attribute this to the formulation of the UCB scores, which already account for differences in visitation (see the $N(s,a)$ term in Eq. (1)); amplifying the deviations may therefore introduce bias.
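For intuition on how the two norms differ on the same pair of policies, consider a toy example (illustrative numbers, not from the paper):

```python
import numpy as np

pi_k  = np.array([0.50, 0.30, 0.10, 0.10])   # hypothetical pi_hat_k(s)
pi_k2 = np.array([0.40, 0.38, 0.12, 0.10])   # hypothetical pi_hat_{k/2}(s)

diff = pi_k - pi_k2
print(np.abs(diff).sum())            # L1 distance: 0.20
print(np.sqrt((diff ** 2).sum()))    # L2 distance: ~0.13
```

Since $\|x\|_2\leq\|x\|_1$ always holds, the same threshold $\epsilon$ is effectively looser under the L2 norm; the ablation compensates by also testing $\epsilon=0.05$ for L2 in Table 4.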
Table 4: Comparison of the winning rate and the average budget under different norms in the VET-Rule. L1 norm means $\|\hat{\pi}_k(s)-\hat{\pi}_{k/2}(s)\|_1<\epsilon$; L2 norm means $\|\hat{\pi}_k(s)-\hat{\pi}_{k/2}(s)\|_2<\epsilon$.

| | Average budget | Winning rate |
| --- | --- | --- |
| MCTS ($N=150$) | 150 | 82.0% |
| V-MCTS, L1 norm, $N=150$, $r=0.2$, $\epsilon=0.1$ | 96.2 | 81.5% |
| V-MCTS, L2 norm, $N=150$, $r=0.2$, $\epsilon=0.1$ | 97.1 | 79.8% |
| V-MCTS, L2 norm, $N=150$, $r=0.2$, $\epsilon=0.05$ | 119.3 | 81.0% |

**Larger Budget ($N$) in MCTS.** To investigate whether our method still holds with larger numbers of MCTS expansions, we take a pretrained model and compare two strategies: (1) vanilla expansion with $N=150/400/600/800$ nodes, and (2) the virtual expanded policy with $N=800$, $r=0.2$, $\epsilon=0.1$. The results are listed in Table 5: V-MCTS ($N=800$, $r=0.2$, $\epsilon=0.1$) beats MCTS ($N=600$) on both average budget and winning rate, and it achieves performance comparable to the oracle MCTS ($N=800$) at a much smaller average budget. Hence V-MCTS also works with larger numbers of MCTS expansions.

Table 5: Comparison of the winning rate and the average budget with larger numbers of MCTS expansions. The hyper-parameters of our method are $N=800$, $r=0.2$, $\epsilon=0.1$.
| MCTS | $N=150$ | $N=400$ | $N=600$ | $N=800$ | Ours |
| --- | --- | --- | --- | --- | --- |
| Average budget | 150 | 400 | 600 | 800 | 431.1 |
| Winning rate | 82.0% | 84.5% | 84.9% | 85.9% | 85.0% |

### 5.5 Visualization of V-MCTS's Adaptive Behavior

Figure 3: Heatmaps of policy distributions from MCTS ($N=150$) and V-MCTS. The agent plays as Black in (a) and as White in (b) against GnuGo (level 10), and wins both games. A darker red color represents more visitations of the corresponding action. V-MCTS terminates at different search counts $k$ depending on the situation and generates a near-oracle policy distribution.

In this section, we illustrate the adaptive mechanism through visualizations and performance analysis. We find that (1) $\hat{\pi}_k(s)$ is close to $\pi_N(s)$, and (2) V-MCTS terminates earlier for simpler states. Specifically, we take states from different time steps of games of Go against GnuGo with a trained model; Figure 3 visualizes the resulting policy-distribution heatmaps for two games, one with our agent as the black player and one as the white player. The last two rows of each subfigure are the heatmaps for oracle MCTS ($\pi_N$) and V-MCTS ($\hat{\pi}_k$ at the first $k$ with $\hat{\Delta}_s(k,k/2)<\epsilon$). The darker a grid cell, the more the corresponding action was visited during the search. In general, $\hat{\pi}_k$ is close to $\pi_N$ across distinct states, indicating that the virtual expanded policy is close to the oracle one. Furthermore, the fewer valuable actions there are, the sooner V-MCTS terminates. For example, the opening states of a Go game are usually not complex, since only a few stones are on the board, whereas situations in shuban, the closing stage of the game, are more complicated; accordingly, termination occurs earlier in the opening states (columns 1, 2, 3) and later as the situation grows more complicated. Importantly, the termination step $k$ is not simply determined by the number of stones. We therefore conclude that V-MCTS adapts its termination to the situation of the current state and generates near-oracle policies,
terminating the search loop earlier on easier states and thereby achieving a better computation/performance trade-off.

6 Discussion
------------

This paper proposes a novel method, V-MCTS, that accelerates MCTS by deciding when to terminate the search iterations. It maintains comparable performance while adaptively cutting the search time roughly in half. We believe this work is one step toward applying MCTS-based methods to real-time domains. One limitation of our work is that it cannot handle environments with continuous action spaces; in the future, we plan to extend early termination to continuous action spaces.

Acknowledgments and Disclosure of Funding
-----------------------------------------

This work is supported by the Ministry of Science and Technology of the People's Republic of China, 2030 Innovation Megaprojects "Program on New Generation Artificial Intelligence" (Grant No. 2021AAA0150000). This work is also supported by a grant from the Guoqiang Institute, Tsinghua University.
017d3190-2fb0-4058-ab26-1e351c8a9f6f
StampyAI/alignment-research-dataset/lesswrong
LessWrong
Term/Category for AI with Neutral Impact?
Is there any commonly known term on LessWrong to describe an AGI that does not significantly increase or decrease human value? (For instance, an AGI that stops all future attempts to build AGI, but otherwise tries to preserve the course of human history as if it had never existed). Would such an AI be considered "aligned"? It seems that most discussions of creating aligned AI focus on making the AI share and actively promote human values, but this seems importantly different from what I described.
89c63423-71b0-431d-b8c7-d531aa817a36
trentmkelly/LessWrong-43k
LessWrong
Comments on "AI 2027" I find the decision to brand the forecast as "AI 2027" very odd. The authors do not in fact believe this; they explicitly give 2028, 2030, or 2033 for their median dates for a superhuman coder. The point of this project was presumably to warn about a possible outcome; by the authors' own beliefs, their warning will be falsified immediately before it is needed. When presenting predictions, forecasters always face tradeoffs regarding how much precision to present. Precise forecasting attracts attempts and motivates action; adding many concrete details produces a compelling story, stimulating discussion; this also involves falsifiable predictions. Emphasizing uncertainty avoids losing credibility when some parts of story inevitably fail; prevents overconfidence; and encourages more robust strategies that can work across a range of outcomes. But I can't think of any reason to only consider a single high precision story that you don't think is all that likely. I think that the excessive precision is pretty important in this case: the current pace of AI R&D spending is unsustainable, so it matters exactly how much more progress is needed for superhuman coders. *** I don't believe that METR's time horizons forecast is sufficiently strong evidence for a precise timeline: 1. *In general* AI competence relative to humans isn't ordered by time- AI's can complete some tasks that would take humans centuries, and can't complete other than would take humans seconds. 2. METR produced their analysis by taking a subset of activities: SWAA, HCAST, and RE-BENCH. SWAA has 66 tasks from 1-30 seconds; HCAST has 97 tasks from 1 minutes to 30 hours; and RE-BENCH has 7 tasks of 8 hours. SWAA and HCAST were created by METR for this research. 3. METR did not- and cannot- use an obvious/straightforward/canonical set of human tasks: current AI's cannot accomplish most human economic tasks, because most tasks do not solely involve interacting with computers. 4. METR created SWAA and HCA
b0daddf7-af8b-48c8-b687-66b28aa7a2db
trentmkelly/LessWrong-43k
LessWrong
Spay or Neuter Your Irrationalities
No human person has, so far as I am aware, managed to eradicate all irrationalities from their thinking. They are unavoidable, and this is particularly distressing when the irrationalities are lurking in your brain like rats in the walls and you don't know what they are. Of course you don't know what they are - they are irrationalities, and you are a rationalist, so if you had identified them, they would be dying (quickly or slowly, but dying). It's only natural for someone committed to rationality to want to indiscriminately exterminate the threats to the unattainable goal.

But are they all worth getting rid of? It is my opinion that they are not: some irrationalities are small and cute and neutered, and can be confined and kept where you can see them, like pet gerbils instead of rats in the walls.

I'll give you an example: I use iTunes for my music organization and listening. iTunes automatically records the number of times I have listened to each song and displays it. Within a given playlist, I irrationally believe that all of these numbers have to match: if I have listened to the theme from The Phantom of the Opera exactly fifty-two times, I have to also have listened to "The Music of the Night" exactly fifty-two times, no matter how much I want to listen to the theme on repeat all afternoon.

Does this make any sense? No, of course not, but it isn't worth my time to get rid of it. It is small - it affects only a tiny corner of my life, and if it starts to get in the way of my musical preferences, I can cheat it by resetting play counts or fast-forwarding through songs (like I could get around the chore of feeding a gerbil with an automatic food dispenser). It is "cute" - I can use it as a conversation starter and people generally find it a mildly entertaining quirk, not evidence that I need psychiatric help. I have it metaphorically neutered - since I make no effort to suppress it, I'm able to recognize the various emotional reactions that satisfyin