Columns: id (string, length 36), source (string, 15 distinct values), formatted_source (string, 13 distinct values), text (string, length 2 to 7.55M)
e3829e32-897f-452e-bca0-0e28752ee09d
trentmkelly/LessWrong-43k
LessWrong
Vaccine Rollout as Wheeled-Luggage Problem According to FiercePharma, Pfizer, AstraZeneca, Moderna, Johnson & Johnson, and Novavax plan to produce a combined ~9 billion doses this year. They are collaborating with other companies to scale up production, and have facilities all over the world. They're licensing their product to nonprofit producers, and some have committed to providing the vaccines at cost. The fact that these companies project to produce enough vaccine to vaccinate the entire world this year is great. Why didn't they have it ready to go earlier? Note that this is a similar question to "Why did it take so long to invent wheeled luggage?" Why has it taken so long to ramp up production, when the first mRNA vaccines were created in January 2020? Jason Crawford proposes epistemic standards for this sort of question: > On the other hand, if you want to argue that something could have been invented much earlier than it was, you have to do better than glancing at its high-level concepts or components. You need to rigorously examine every part, material, and manufacturing process, and rule them all out as gating technologies. Any detail, even a minor one, can become crucial—especially when we remember that inventions need not only to work but to be practical, which includes performance, reliability and cost. We want to identify the fundamental bottleneck for ramping up vaccine supply. I've seen lots of attempts. They usually boil down to the idea that with better planning and less regulation, we could have spent a relative pittance for more production capacity early on, and might have been able to produce the billions of needed doses many months sooner. That explanation does not meet the Crawford Standard. Instead, we need to rigorously examine every aspect of vaccine manufacturing and rule them all out as bottlenecks. First, we need to consider planning and regulation failures prior to 2020. What failures of planning and regulation prevented us from setting up sufficient emergency vaccine prod
dde6cac3-363b-4305-ae7d-66394136a6f4
trentmkelly/LessWrong-43k
LessWrong
Removing interrupted histories doesn't debias Safe interruptibility is essentially the problem of getting the agent not to learn from human interruptions - to continue on, as if it was expecting never to be interrupted again. In an episodic task, one naive idea would be to simply delete histories which include interruptions. However, this can introduce a bias, as the following example shows: In this MDP, a and b are actions, ∗ designates 'any action', the second term along an edge is the probability of following that edge given the action stated, and the third term, in bold, is the reward gained. Not considering interruptions, Q(s0,b) = γ/2 and Q(s0,a) = γ/3, so the optimal action in s0 is b. Now, suppose that every time the agent enters s3, an interruption occurs (with probability 1 for now), and the whole episode is deleted from the episode history. As a result, the empirical probability of going to state s4 from state s1 is 1, which leads to estimating Q(s1,∗) = 2/3, and thus Q(s0,a) = 2γ/3 > Q(s0,b), so now the optimal action is to take action a in s0 (incidentally increasing the probability to end up in s3 and be interrupted!). This non-vanishing bias also happens if the interruption probability in state s3 is constant and close to 1 but less than 1 (depending on the discount factor), which ensures that all states are visited infinitely often. Now, it might seem that this bias can be removed by debiasing the agent, as in off-policy Monte Carlo. There are still problems with this approach, though, which will be analysed in a forthcoming paper, "Off-policy Monte Carlo agents with variable behaviour policies".
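The MDP diagram referenced above is not reproduced in this text dump, so the following is a minimal sketch under an assumed transition structure reverse-engineered from the stated values: from s0, action b earns reward 1/2 one step later, while action a leads to s1, from which the agent reaches s3 (reward 0) or s4 (reward 2/3) with probability 1/2 each. Under that assumption, a short Monte Carlo simulation reproduces the flipped preference when interrupted episodes are deleted:

```python
# Minimal sketch of the bias from deleting interrupted episodes.
# The MDP below is an assumption reconstructed from the stated values
# Q(s0,b) = gamma/2 and Q(s0,a) = gamma/3; the original figure is not reproduced here.
import random

GAMMA = 0.9
N_EPISODES = 200_000

def run_episode(first_action):
    """Return (discounted return from s0, whether the episode passed through s3)."""
    if first_action == "b":
        return GAMMA * 0.5, False          # s0 -> s2 -> end, reward 1/2 on the final step
    if random.random() < 0.5:
        return 0.0, True                   # s0 -> s1 -> s3, reward 0, triggers an interruption
    return GAMMA * (2.0 / 3.0), False      # s0 -> s1 -> s4, reward 2/3

def mc_estimate(action, drop_interrupted):
    returns = []
    for _ in range(N_EPISODES):
        g, hit_s3 = run_episode(action)
        if drop_interrupted and hit_s3:
            continue                       # the naive fix: delete interrupted histories
        returns.append(g)
    return sum(returns) / len(returns)

for drop in (False, True):
    qa, qb = mc_estimate("a", drop), mc_estimate("b", drop)
    best = "a" if qa > qb else "b"
    print(f"delete interrupted episodes = {drop}: "
          f"Q(s0,a) ~ {qa:.3f}, Q(s0,b) ~ {qb:.3f}, preferred action: {best}")
# Without deletion the estimates approach gamma/3 < gamma/2, so b is preferred;
# with deletion Q(s0,a) is estimated near 2*gamma/3 > gamma/2, and the preference flips to a.
```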
7f4d0419-6aac-46c4-94fe-c994de5e136e
trentmkelly/LessWrong-43k
LessWrong
Reality comprehensibility: are there illogical things in reality? Introduction For all the thinking that an actor in the cosmos (such as yourself) does, the ability to comprehend reality, and the fact that reality is comprehensible, are foundational to understanding. As entities which make sense of reality via our perception, everything we observe seems to follow logic and reason. As such, to answer the question "are there illogical things in reality?" I propose an axiom I call 'the axiom of thought', which states that reality and explanations about it are logical; all evidence points to there being no illogical things in observable reality. The axiom of thought, in assuming the logicality of reality, covers two classes of logicality: 'phenomenal rule abidance', which deals with phenomena following logical scientific laws that describe them (for example, natural selection as a phenomenal rule/law which matter must abide by), and 'thought rule abidance', which deals with methods of reasoning as seen in logic and mathematics. Under 'thought rule abidance', logical rules determine that things can be either true or false but not both (e.g. an apple is red or it is not), and mathematical rules determine that the accounting and measurement of things via numbers follow numerical rules (e.g. 1+1=2). An interesting avenue of thought I haven't investigated is that mathematical rules can perhaps be violated by things which go beyond the physical finite world, such as zero and infinity: with zero and infinity being the limits of numbers, dividing by zero and manipulations involving infinity can lead to violations of mathematical logicality. The key ideas from this discussion are the following: * the axiom of thought - reality and explanations about it are logical: all evidence points to there being no illogical things in observable reality and things abide by rules * phenomenal rule abidance - rules and laws of phenomena must be obeyed where applicable * thought rule abidance - rules and principles of th
c281f999-9ed6-4b72-88dc-4add49d10fa8
trentmkelly/LessWrong-43k
LessWrong
Other perspective on resolving the Prisoner's dilemma Sometimes I see new ideas that, without offering any new information, offer a new perspective on old information, and a new way of thinking about an old problem. So it is with this lecture and the prisoner's dilemma. Now, I worked a lot with the prisoner's dilemma, with superrationality, negotiations, fairness, retaliation, Rawlsian veils of ignorance, etc. I've studied the problem, and its possible resolutions, extensively. But the perspective of that lecture was refreshing and new to me: The prisoner's dilemma is resolved only when the off-diagonal outcomes of the dilemma are known to be impossible. The "off-diagonal outcomes" are the "(Defect, Cooperate)" and the "(Cooperate, Defect)" squares where one person walks away with all the benefit and the other has none:

(Baron, Countess)   Cooperate   Defect
Cooperate           (3,3)       (0,5)
Defect              (5,0)       (1,1)

Facing an identical (or near identical) copy of yourself? Then the off-diagonal outcomes are impossible, because you're going to choose the same thing. Facing Tit-for-tat in an iterated prisoner's dilemma? Well, the off-diagonal squares cannot be reached consistently. Is the other prisoner a Mafia don? Then the off-diagonal outcomes don't exist as written: there's a hidden negative term (you being horribly murdered) that isn't taken into account in that matrix. Various agents with open code are essentially publicly declaring the conditions under which they will not reach for the off-diagonal. The point of many contracts and agreements is to make the off-diagonal outcome impossible or expensive. As I said, nothing fundamentally new, but I find the perspective interesting. To my mind, it suggests that when resolving the prisoner's dilemma with probabilistic outcomes allowed, I should be thinking "blocking off possible outcomes", rather than "reaching agreement".
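As a minimal illustration of this framing (a sketch added here, not part of the original post), the toy calculation below contrasts best responses in the full matrix with the choice that remains once the off-diagonal squares are ruled out:

```python
# Toy illustration of "resolving the dilemma by blocking the off-diagonal".
# Payoffs are (row player, column player), taken from the matrix in the post.
payoffs = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def best_response(opponent_move, reachable):
    """Row player's best move against a fixed opponent move, among reachable outcomes."""
    options = [(move, payoffs[(move, opponent_move)][0])
               for move in ("C", "D") if (move, opponent_move) in reachable]
    return max(options, key=lambda pair: pair[1])[0]

full_game = set(payoffs)
diagonal_only = {("C", "C"), ("D", "D")}   # off-diagonal squares made impossible

# In the unrestricted game, defecting is the best response to either opponent move.
print([best_response(move, full_game) for move in ("C", "D")])   # ['D', 'D']

# Once the off-diagonal squares are unreachable, the only live comparison is
# (C,C) versus (D,D), and mutual cooperation is better for both players.
print(max(diagonal_only, key=lambda cell: payoffs[cell][0]))     # ('C', 'C')
```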
0799c879-c8f3-463f-8ec8-3d4f0dfa22e4
trentmkelly/LessWrong-43k
LessWrong
Topics from "Procedural Knowledge Gaps" About a year ago, we had a major discussion about procedural knowledge gaps. Here's what was covered:
* How to tell whether food is fresh
* pjeby claims that eating raw chicken is safe because the gag reflex identifies it fast
* How to buy investments
* Memorizing the alphabet and other arbitrary lists
* Comparison of various how-to sites
* General discussion of making things (including Less Wrong) easier to use
* How to jump start a stalled car.
* How to use the Yellow Pages.
* Cheap and easy healthy food.
* Questions about preparing a simple soup.
* Exercise.
* Starting relationships, especially for heterosexual men.
* Cooking in general.
* Browning meat.
* How to become bisexual.
* How to transfer money from one electronic account to another.
* How to buy a used car.
* Interacting with police. (Don't talk to US police! The rules are different in the UK.)
* How to speak clearly, slowly, etc.
* How to fold a fitted sheet.
* How to make a will.
* How to order at a bar.
* Also, some cookbook recommendations.
* Tipping in the US.
* Tipping in the UK and France.
* Spatial orientation.
* Personal hygiene -- washing, soap, shampoo.
* Haircuts for men.
* Haircuts for women.
* Growing and maintaining long hair.
* Putting a cover on a duvet.
* Telling the difference between flirting and friendliness.
* Choosing shirts that fit.
* Left vs. Right (which hand, not political).
* Shaving one's face.
* How to end conversations politely.
* How to make people laugh.
* Mailing large objects in the US.
* How to format comments at Less Wrong.
* How to declutter.
* E readers.
* Touch typing. Dvorak, etc.
* How often to see a doctor.
* Remembering to be polite.
* Tying shoes.
* Home maintenance.
* What might melt in a dishwasher.
* Kitchen knives.
* Sorting laundry.
* Does cranking up the thermostat heat the house faster?
* More about investment
* How to not stutter.
* How to be a good manager.
* Scrubbing.
* How to use Google.
* How to talk to strangers.
Potential topics:
* How to give clear instructions.
* How to see things from other peo
a3834932-7791-44d5-9bda-d239f3d9d4b2
StampyAI/alignment-research-dataset/alignmentforum
Alignment Forum
Alignment versus AI Alignment *Financial status: This work is supported by individual donors and a grant from LTFF.* *Epistemic status: This post contains many inside-view stories about the difficulty of alignment.* *Thanks to Adam Shimi, John Wentworth, and Rob Miles for comments on this essay.* --- What exactly is difficult about AI alignment that is not also difficult about alignment of governments, economies, companies, and other non-AI systems? Is it merely that the fast speed of AI makes the AI alignment problem quantitatively more acute than other alignment problems, or are there deeper qualitative differences? Is there a real connection between alignment of AI and non-AI systems at all? In this essay we attempt to clarify which difficulties of AI alignment show up similarly in non-AI systems, and which do not. Our goal is to provide a frame for importing and exporting insights from and to other fields without losing sight of the difficulties of AI alignment that are unique. Clarifying which difficulties are shared should clarify the difficulties that are truly unusual about AI alignment. We begin with a series of examples of aligning different kinds of systems, then we seek explanations for the relative difficulty of AI and non-AI alignment. Alignment in general -------------------- In general, we take actions in the world in order to steer the future in a certain direction. One particular approach to steering the future is to take actions that influence the constitutions of some intelligent system in the world. A general property of intelligent systems seems to be that there are interventions one can execute on them that have robustly long-lasting effects, such as changing the genome of a bacterium, or the trade regulations of a market economy. These are the aspects of the respective intelligent systems that persist through time and dictate their equilibrium behavior. In contrast, although plucking a single hair from a human head or adding a single barrel of oil to a market does have an impact on the future, the self-correcting mechanisms of the respective intelligent systems negate rather than propagate such changes. Furthermore, we will take alignment in general to be about utilizing such interventions on intelligent systems to realize our true terminal values. Therefore we will adopt the following working definition of alignment: > > **Successfully taking an action that steers the future in the direction of our true terminal values by influencing the part of an intelligent system that dictates its equilibrium behavior.** > > > Our question is: in what ways is the difficulty of alignment of AI systems different from that of non-AI systems? ### Example: Aligning an economic society by establishing property rights Suppose the thing we are trying to align is a human society and that we view that thing as a collection of households and firms making purchasing decisions that maximize their individual utilities. Suppose that we take as a working operationalization of our terminal values the maximization of the sum of individual utilities of the people in the society. Then we might proceed by creating the conditions for the free exchange of goods and services between the households and firms, perhaps by setting up a government that enforces property rights. This is one particular approach to aligning a thing (a human society) with an operationalization of The Good (maximization of the sum of individual utilities). 
This particular approach works by structuring the environment in which the humans live in such a way that the equilibrium behavior of the society brings about accomplishment of the goal. We have: * An intelligent system being aligned, which in this case is a human society. * A model of that system, which in this case is a collection of households and firms making purchasing decisions. * An operationalization of our terminal values, which in this case is the maximization of the sum of individual utilities (henceforth the "goal"). * A theory – in this case classical microeconomics – that relates possible interventions to the equilibrium behavior of the intelligent system. * Practical know-how concerning how to establish the conditions that the theory predicts will lead to the goal. What makes this problem difficult is that it is not so easy to install our intended goal (maximization of the sum of individual utilities) as the explicit goal of the market participants, and even if we did, the participants would not automatically be able to coordinate to achieve that goal without solving the same basic design problem that we are describing here. Similarly, it does not make sense to try to install our goal into the basic computing building-blocks of an AI. The building blocks are mechanical elements that we need to assemble in a way that brings about achievement of the goal as a result of our design choices, and in order to do that we need a theory that connects design choices to outcomes. The intelligent system may feed a great deal of optimization pressure into the framework that we have placed around it, so if our design choices are even a little bit off the mark then we may not get what we wanted. Furthermore the intelligent system may turn out to have affordances available to it that weren’t highlighted by our model of the system, such as when participants in a market economy become actors in the political institutions that regulate the market. If we didn’t consider such affordances clearly in our theory then the actual behavior of the system may turn out to be quite different from what we intended. Finally, the operationalization that we used for our terminal values will generally not be a complete representation of our true terminal values, and, as Goodhart’s law suggests, unbounded optimization of any fixed objective generally deviates quite dramatically from the underlying generator that the objective was intended to be a proxy for. We discuss the differences and similarities between these issues in AI versus non-AI settings below. For now, we turn to further examples of alignment in general. ### Example: Aligning a nation by establishing a justice system Suppose again that the thing we are trying to align is a human society, but now we view this thing as a self-governing polis consisting of individuals with overlapping but not identical social norms. Let us take as our operationalization of our terminal values the existence of the means to peacefully settle disputes. In order to accomplish this we might establish a justice system in which laws must be written down, individuals who are widely respected are appointed as judges and hear cases, prosecutors and defendants get a chance to make their case in public, and clear and binding rulings are made. We can consider the consequences of the choices we make in designing such a system using the tools of game theory, applied ethics, and political science. 
We might find that certain common-sense design choices would lead to surprising failure modes, while other less obvious design choices lead to equilibria that are closer to our goal. However we approach this, the basic activity is not *so* different from the problem of aligning an AI. We have: * An intelligent system being aligned, which in this case is a human society. * A model of that system, which in this case is a self-governing polis. * An operationalization of our terminal values, which in this case is the fair and predictable enforcement of the law. * A theory – in this case an informal collection of wisdom from game theory, applied ethics, political science, and so on – that relates the possible interventions to the equilibrium behavior of the system. * Practical know-how concerning how to establish the conditions that the theory predicts will lead to the goal. In this case we have the same intelligent system as in the previous example, but a different model of it, which makes possible alignment with respect to a different operationalization of our terminal values using different alignment affordances. ### Example: Aligning one’s own cognition via habit formation Suppose that I wish to reduce electricity usage in my house, and that for whatever reason I decide to maintain the following invariant in my house: when there is no-one in a room, the lights in that room should be off. There are various sensors and automated systems I could install to do this, but suppose that the approach I decide on is to establish a personal habit of turning off lights when leaving a room. In order to do that, I might train myself by walking in and out of rooms turning lights off as I leave. Having done that, I create a weekly routine of reviewing all the electricity I’ve saved and gently thanking myself. Now if I succeed at firmly establishing this habit then I might affect the pattern of electricity usage in every house I live in over my entire life, and this is actually quite a remarkable feat. Had I installed motion or sound sensors to automatically turn off lights in unused rooms, I would affect electricity usage in my current house but not necessarily in future houses, and the automated system might eventually break or wear out, and I wouldn’t necessarily fix it unless I had installed a habit of doing such things. In contrast, habit formation, if successfully installed, can be long-lived and self-correcting, and this is a general property of steering the future by intervening on intelligent systems. So here we have: * An intelligent system being aligned, which is my own cognition. * A model of that system as a behavioral learning system subject to habit formation.[[1]](#fn-NjY8XeSSe6j7zd5vd-1) * An operationalization of what I care about, which in this case is reducing electricity usage (a much more distant cousin of my true terminal values than the previous examples). * A theory of habit formation, which in this case is a [behavioral understanding of self-training](https://www.lesswrong.com/posts/Cf2xxC3Yx9g6w7yXN/notes-from-don-t-shoot-the-dog). * Practical know-how concerning how to execute the training strategy. Similar to the economic regulation and justice system examples discussed above, it is not so easy to just install our terminal values directly as a single cognitive habit. Perhaps this is possible, but many of us still find reason to systematically install various low-level habits such as thanking friends for their time or getting regular exercise or turning lights off when leaving a room. 
If it were easy to install our entire terminal values as a single habit then presumably we would do that and have no need for further habit formation. Also similar to the previous examples, our cognition may exert significant optimization pressure in service of the habits we install, and this may back-fire in the sense of Goodhart’s law. We might, for example, deliberately establish a habit of working very hard at our day job, and over time we may as a result be given praise and trusted with further responsibilities, and as a result of this we may come to associate fulfillment of our social needs more and more strongly with the diligence of our work, leading us to even more deeply establish the habit of working hard, leading to further praise and further responsibilities, and so on. There is nothing innately misguided about the habit of working hard at our day job, but the *overall intelligent system* of our cognition may react in powerful and often unexpected ways to the initial establishment of such a habit, leading eventually to actions that no longer serve the original purpose. ### Example: Aligning a company via policies and incentives Suppose I start a company with the objective of building some particular product that I think the world needs. I would like to organize the company in a way that steers the future such that the product comes into existence, and in order to do that I would like to avoid failure modes such as becoming excessively unfocussed, becoming excessively risk-averse, or taking too long to iterate. There are measures that I can take, both before starting the company and while the company is in motion, to steer away from these failure modes. These include formulating and repeating a clear mission statement, setting up a system for promotions that rewards well-calibrated risk taking, and iterating quickly at the beginning of the company in order to habituate a rhythm of quick iteration cycles. Such measures have their effect on the world through aligning the intelligent system that is the company. So we have: * An intelligent system being aligned: a collection of people. * A model of that system as a firm with stakeholders and incentives. * An operationalization of our terminal values: the product that I think the world needs. * A theory that relates conditions to consequences: various formal or informal ideas about management, incentivization, legal entity structures, taxation, and so on. * Practical know-how concerning establishment of the conditions that the theory suggests will lead to the goal. The skill of establishing an effective company lies to a significant extent in honing the skill of aligning intelligent systems with a goal. Company builders can choose to work directly on the object-level problem that the company is facing, and often it is very important for them to do this, but this is because it informs their capacity to align the company as a whole with the goal, and most of their impact on the future flows through alignment. This is why starting a company is so often chosen as the means to accomplish a large goal: humans are the most powerful intelligent system on Earth at the moment, and taking actions that align a group of humans with a particular task is a highly leveraged way to steer the future. 
At times it may be most helpful for a founder to view themselves as a kind of Cartesian agent relative to their company, and from that perspective may take actions such as designing an overall reporting structure or identifying bottlenecks as if from "outside the universe". At other times they may view themselves as embedded within the company and may seek to expose themselves to the right information, viewing themselves more as an information processing element that will respond appropriately than as an agent forming plans and taking actions. Similarly, a founder may at times view the company as a collection of individuals with preferences, as a community with a shared purpose, as a machine operating according to its design principles, or as an information-processing system taking inputs and producing actions as a single entity, and each of these perspectives offers different affordances for alignment. Crucially, however, they all suggest that the way to accomplish a goal is to structure things (policies, incentives, stories, and so forth) in such a way that the overall behavior of an intelligent system (the company) moves the world towards that goal. ### Example: Aligning an agentic AI via algorithm design The classical AI alignment problem is to design an intelligent system that perceives and acts upon the world in a way that is beneficial to all. Suppose that we decide to build an agentic AI that acts in service of an explicit value function, as per early discourse in AI safety, and suppose for concreteness we take as our goal some particular societal objective such as discovering a cure for a particular disease. Here the object being aligned is the AI, and the affordances for alignment are algorithmic modifications to its cognition and value system. By carefully selecting a design for our AI we might hope to steer the future in a very substantial and potentially very precise way. In order to do this from a pure algorithm-design perspective we will need a theory that connects design choices to the long-term equilibrium behavior of the AI. Now in constructing an AI from scratch our "affordances" for alignment look less like ways to influence an existing intelligent system and more like design choices for a new intelligent system, and this will be discussed further below under "in-motion versus de-novo alignment". For now we will take this to be a domain with an unusually rich variety of affordances. So here we have: * An intelligent system being aligned: the AI. * A model of that system as an agent executing an algorithm. * An operationalization of our terminal values: here, the elimination of a disease. * A (need for a) theory connecting algorithmic design choices to long-run consequences of deploying such an AI. * Practical know-how concerning algorithm design in light of the path suggested by the theory. In the sections below we will examine the ways that this alignment problem is similar to and different from alignment in general, but first we will explore another common framing of the AI alignment problem. ### Example: Aligning machine learning systems with training procedures In another formulation of the AI alignment problem, we take as primary not a space of possible algorithms but a space of possible training methods for various ensembles of machine learning systems. This is often referred to as prosaic AI alignment, and has at times been motivated by the observation that machine learning systems are rapidly becoming more powerful, so we ought to work out how to align them. 
In basic machine learning, the affordances for alignment are choices of architecture, initialization, optimization algorithm, and objective function. Beyond this, we can connect multiple learning systems together in ways that check or challenge each other, and we can consider whole algorithms in which the elementary operations consist of training runs. In the framework we are using in this essay, the machine learning approach to alignment begins from a different choice of how to see what an AI is, which naturally suggests different affordances for alignment, just as viewing a human society as a collection of households and firms suggests different affordances for alignment compared to viewing it as a communal story-telling enterprise or as a single giant firm. Just as in classical AI alignment, we need a [theory that connects design choices to outcomes](https://www.alignmentforum.org/posts/FDJnZt8Ks2djouQTZ/how-do-we-become-confident-in-the-safety-of-a-machine) in order to make intelligent decisions about how to set things up so that our goal will be achieved. In this way, the machine learning approach to alignment is no less theory-driven than the algorithmic approach, though the nature of the theory might be quite different. So we have: * An intelligent system being aligned: some combination of machine learning systems. * A model of that system as an optimizer that finds an approximation to a local optimum of the objective function. * An operationalization of our terminal values: the training objective. * A theory connecting training affordances to the goal. A complete theory has obviously proven elusive but we have partial theories in the form of optimization theory, the deep double-descent phenomenon, the lottery ticket hypothesis, and so on. * Practical know-how concerning how to effectively implement the training procedure suggested by the theory. ### Example: Aligning domesticated animals through selective breeding A final example is the selective breeding of a population of domesticated animals as a means to change some trait of interest to us. Suppose for the sake of concreteness that it is a population of dogs we are breeding and hunting ability is the trait we are selecting for. Here the intelligent system is the population as a whole, and the intervention we are making is to select which individuals transmit their genes to the next generation. The gene pool is the thing that determines the "equilibrium behavior" of the population, and our intervention affects that thing in a way that will persist to some extent over time. One might be tempted to instead say that evolution itself is the thing that we are intervening on, but this seems wrong to us because our intervention does not change the abstract dynamics of evolution, it merely uses evolution to affect a particular population. To intervene on evolution itself would be to reshape the biology of the population so radically that evolution proceeds under fundamentally different dynamics, such as by introducing Lamarckian inheritance or asexual reproduction, but this is not what we are considering here. 
So we have: * An intelligent system being aligned: a collection of dogs. * A model of that system as a population subject to natural selection. * An operationalization of our terminal values: the hunting ability of the dogs. * A theory connecting interventions to the goal: the theory of genetics. * Practical know-how concerning how to implement the breeding program. We mention this example because the remainder of our examples are oriented around humans and AIs, and non-human animals represent the main third category. ### Non-examples Here are some examples of things that do not fit the definition of alignment used in this essay. * Irrigating crops by redirecting a stream. This is not an example of alignment in the sense that we have described here because the stream is not an intelligent system. * Changing my appearance by getting a haircut. This is not an example of alignment because, although it is an intervention on an intelligent system, it does not really strike at the thing that generates my equilibrium behavior. * Acquiring water by digging a well. This is not an example of alignment because the action (digging a well) is an object-level task rather than an intervention upon some intelligent system. In the remainder of this essay we will explore ways in which the difficulty of AI alignment differs from or is similar to that of non-AI systems, with the goal of elucidating a central difference. Overall risk posed ------------------ One axis by which we might differentiate AI and non-AI alignment is the overall level of risk posed to life on Earth by alignment failure. There have been many countries, companies, and communities that were imperfectly aligned with the goals of their designers, but as of the writing of this essay, none of these have ended life on Earth. In contrast, a misaligned AI may destroy all life on the planet. It does seem to us that AI alignment is an outlier in terms of overall risk posed, but why exactly is that? The remainder of this essay explores aspects of AI alignment that make it difficult relative to alignment in general. These might be viewed as explanations for the relatively large risk seemingly posed by AI versus non-AI systems. Human versus technological speed -------------------------------- When aligning systems composed of humans, there is a match between the speed of the one doing the alignment and that of the system being aligned. If we make a mistake in setting up a government or company, things generally do not run away from us overnight, or if they do, things generally remain under the control of some other human institution if not under our own control. This is because human institutions generally cannot move very much faster than individual humans, which in turn is because the intelligence of a human institution lies significantly within the cognition of individual humans, and we do not yet know how to unpack that. This match in speed between "aligner" and "alignee" is particularly relevant if the aligner is clarifying their own goals while the "alignee" is forming and executing plans in service of the current goal operationalization. The clarification of "what it is that we really want" seems to be exactly the thing that is most difficult for an aligner to hand off to an alignee, whereas handing off the formulation and execution of plans in service of a particular goal operationalization seems merely very difficult. 
If we therefore have humans do the clarification of goals while an AI does the formulation and execution of plans then we have two entities that are operating at very different speeds, and we need to take care to get things right. One approach, then, is to develop the means for slow-thinking humans to oversee fast-thinking AIs without sacrificing on safety, and this is one way to view the work on [approval-directed agents](https://ai-alignment.com/concrete-approval-directed-agents-89e247df7f1b#.3wbjrg36j) and [informed oversight](https://ai-alignment.com/the-informed-oversight-problem-1b51b4f66b35). Another approach is to in fact automate the clarification of goals, and this is one way to view the work on [indirect normativity](https://ordinaryideas.wordpress.com/2012/04/21/indirect-normativity-write-up/) and [coherent extrapolated volition](https://intelligence.org/files/CEV.pdf). In the end these two approaches may become the same thing, as the former may involve constructing fast imitations of humans that can oversee fast-thinking AIs, which may end up looking much like the latter. But is this a central difference between AI alignment and alignment in general? It is certainly one difference, but if it were the main difference then we would expect that most of AI alignment would apply equally well to alignment of governments or economies, except that the problem would be less acute in those domains due to the smaller speed difference between aligner and alignee. This may in fact be the case. We will now continue exploring differences. One-shot versus interactive alignment ------------------------------------- When a founder attempts to align a company with a goal, they need not pick a single goal at the outset. Some companies go through major changes of goals, but even among companies that do not, the mission of the company usually gets clarified and adjusted as the company develops. This clarification of goals seems important because unbounded optimization of any fixed operationalization of a goal seems to eventually deviate from the underlying generator of the operationalization as per Goodhart’s law. We have not yet found a way to operationalize any goal that does not exhibit this tendency, and so we work with respect to proxies. In the example of turning off the lights in a house this proxy was relatively near-term, while in the microeconomics example of maximizing the sum of individual utilities, this proxy was relatively distant, but in both cases we were working with proxies. A key issue in AI alignment is that certain AI systems may develop so quickly that we are unable to clarify our goals quickly enough to avoid Goodhart’s law. Clarifying our goals means gaining insight from the behavior of the system operating under a crude operationalization, and using that insight to construct a better operationalization, such as when we discover that straightforwardly maximizing the sum of individual revealed preferences in an economy neglects the interests of future humans (it is not that attending to the interests of future humans increases the welfare of current humans but that the welfare of current humans was an incomplete operationalization of what really matters), or when we discover that relentless productivity in our personal lives is depriving us of the space to follow curiosity (it is not that space for curiosity necessarily leads to productivity, but rather that productivity alone was an incomplete operationalization of what really matters). 
The phenomenon of an intelligent system outrunning one’s capacity to improve the operationalization of one’s goals can also happen in fast-growing companies, and it can happen quickly enough that founders fail to make appropriate adjustments. It can also happen in individual lives, for example when one puts in place the conditions to work in a certain field or at a certain job, and these conditions are so effective that one stays in that field or job beyond the point where this is still an effective means to the original end. There are two basic reactions to this issue in AI alignment: either come up with a goal operationalization that doesn’t need to be adjusted, or else make sure one retains the ability to adjust the goal over time. The former was common in early AI alignment, while the latter is more common now. Corrigibility is a very general formulation of retaining the ability to clarify the goal of an AI system, while [interaction games as formulated at CHAI](https://www.lesswrong.com/posts/qPoaA5ZSedivA4xJa/our-take-on-chai-s-research-agenda-in-under-1500-words) represent a more specific operationalization. Corrigibility and interaction games are both attempts to avoid a one-time goal specification event. Is this the central distinguishing feature of AI alignment versus alignment in general? If we did find an operationalization of a goal that never needed to be adjusted then we would certainly have a clear departure from the way that alignment works in other domains. But it seems more likely that if we solve the alignment problem at all, it will involve building AI systems for which we can adjust the goal over time. This is not qualitatively different from setting up a government that can be adjusted over time, or a company that can be adjusted over time, though the problem seems more acute in AI due to the speed at which AI may develop. Iterative improvement and race dynamics --------------------------------------- Human cognition is not something we have the ability to tinker with in the same external way that we can tinker with a toaster oven or jet plane. Companies, economies, and governments are all composed of humans, so we do not have complete access to tinker with everything that’s happening in those things, either. We do expect to be able to tinker with AI in the way that we tinker with other technologies, and therefore we expect to be able to make incremental improvements to AI at a rate that is no slower than the general pace of technological improvement. Recognizing this, and recognizing the power that advanced AI systems may open up for their creators, humans may end up in a kind of race to be the first to develop advanced AI systems. This issue is separate from and additional to the issue of AI systems simply being faster or more capable than humans. It is the expectation of a certain *rate of increase* in AI speed and capabilities that causes race dynamics, since a small head start today could lead to a big advantage later, and it is the *difference* between the rate of AI improvements and the rate of human cognitive improvements that makes such races dangerous, since there is an ever-greater mismatch between the pace at which humans learn by watching the unfolding of a particular intelligent system, and the pace at which those intelligent systems unfold. It is as if we were designing a board game with the goal of making it fun, but the players are AIs that move so quickly that the entire game unfolds before we can learn anything actionable about our game design. 
Are race dynamics the fundamental difference between AI and non-AI alignment? It seems to us that race dynamics are more like a symptom of a deeper difference, rather than the central difference itself. Self-modification ----------------- Could it be that the axis that most distinguishes AI alignment from alignment of non-AI systems is a throwback to early discourse on AI alignment: self-modification? Most humans do not seem to deliberately modify themselves to nearly the extent that an AI might be able to. It is not completely clear *why* humans self-modify as little as we do given our wide array of affordances for reshaping our cognition, but whatever the reason, it does seem that AIs may self-modify much more than humans commonly do. This capacity for self-modification makes AI alignment a challenging technical problem because aligning an entity that considers self-modifying actions requires a strong theory of what it is about that entity that will persist over time. Intuitively, we might construct an agent that acts according to a utility function, and structure its cognition so that it sees that modifying its own values would hinder the achievement of its values. In that way we might establish values that are stable through self-modifying actions. But formulating a theory with which to enact this is a very difficult technical challenge. Now this problem does come up in other domains. The constitution of the United States has a provision for making constitutional amendments, and this provision could in principle be itself modified by a constitutional amendment. But the constitution of the United States does not have its own agency over the future; it only steers the future via its effect upon a human society, and the humans in that society seem not to self-modify very much. Conversely, individual humans often do worry about losing something important as we consider self-modifying actions, even though we would seem to have precisely the property that would make self-modification safe for us: namely the ability to reflect on our terminal values and see that changing them would not be in our best interests, since that which is "in our best interests" is precisely that which is aligned with our terminal values. Now, is this the central difference between AI and non-AI alignment? Self-modification seems like merely one facet of the general phenomenon of embedded agency, yet AIs are certainly not distinguished from non-AI systems by their embeddedness, since all systems everywhere are fundamentally embedded in the physical universe. It seems to us that the seeming strangeness of self-modifying agency is largely an artifact of the relative aversion that humans seem to have to it in their own minds and bodies, and contemporary discourse in AI alignment mostly does not hinge on self-modification as a fundamental distinguishing challenge. Lack of shared conceptual foundations ------------------------------------- Perhaps the reason AI alignment is uniquely difficult among alignment problems is that AI systems do not share a conceptual foundation with humans. When instructing an AI to perform a certain task, the task might be mis-translated into the AI’s ontology, or we may fail to include conditions that seem obvious to us. 
This is not completely different from the way that a written design specification for a product might be misunderstood by a team of humans, or might be implemented without regard for common sense, but the issue in AI is much more acute because there is a much wider gap between a human and an AI than between two humans. The question, then, is where our terminal values come from, and how they come to us. If they come from outside of us, then we might build AI systems that acquire them directly from the source, and skip over the need to translate them from one set of conceptual foundations to another. If they come from inside of us and are largely or completely shared between people, then we face a translation problem in AI alignment that is very much unique to AI. But probably this very conception of what values are and what it would mean for them to "come from" inside or outside of us is confused. If we manage to clarify the issue of what values are (and whether values are an effective frame for AI alignment in the first place), will we see that the lack of shared conceptual foundations is a central distinguishing feature of AI alignment in comparison to non-AI alignment? Quite possibly. It certainly demands an extreme level of precision in our discourse *about* AI alignment since we are seeking an understanding sufficient for engineering, and such a demand has rarely been placed upon the discourse concerning agency, knowledge, and so forth. In-motion versus de-novo alignment ---------------------------------- When we align our own cognition using habit formation, we are working with an intelligent system that is already in motion, and the affordances available to us are like grasping the steering wheel of a moving vehicle rather than designing a vehicle from scratch. This makes alignment challenging because we must find a way to navigate from where we are to where we want to get in a way that preserves the integrity of our cognition at every point. The same is true, most of the time, when we make changes to the economic, government, and cultural institutions that steer the future via their effect on our society: we are normally working within a society that is already in motion and the affordances available to us consist of making changes on the margin that preserve the integrity of our society at every point. In AI alignment, one avenue that seems to be available to us is to engineer AI systems from scratch. In this case the "affordances" by which we align an AI with a goal consist of every engineering decision in the construction of the thing, which gives us an exceptional level of flexibility in outcomes. Furthermore, we might do a significant amount of this construction before our AI systems are in motion, which gives us even further flexibility because we are not trying to keep an intelligent system operational during the engineering process. But when I set up a company, I also have the opportunity to set things up at the outset in order to align the later behavior of the company with my goal. I can design a legal structure, reporting hierarchy, and compensation mechanism before hiring my first employee or accepting my first investment. Some founders do use this opportunity for "de-novo" company engineering to good effect. Similarly, the US constitutional convention of 1787 faced an opportunity for some amount of "de-novo" engineering as the initial constitution of the United States was formulated. 
Of course, neither of these are truly "de-novo" because the intelligence of the eventual company and nation resides partly or mostly in the internal cognition of the humans that comprise it, and that internal cognition is not subject to design in these examples. On the other side of the equation, the prosaic AI alignment agenda takes optimization systems as the object of alignment and attempts to align them with a training procedure. This is a kind of mid-way point between alignment of an AI via algorithm design and alignment of a human society by institution design, because the machine learning systems that are taken as primary have more initial structure than the basic elements of algorithm design, but less initial structure than a human society. Conclusion ---------- Those are the aspects of alignment that we have considered. It seems to us that what most distinguishes AI alignment as a field from economics, political science, cognitive science, personal habit formation, and other fields concerned with alignment of intelligent systems is that in AI alignment we are forced to get really, really precise about what we are talking about, and we are forced to do that all the way up and down the conceptual stack. In contrast, there is only a limited extent to which one really *needs* to understand the basic dynamics of agency when designing economic regulations, or to clarify ethics into really elementary concepts when designing a justice system, or to work out exactly what knowledge is when engaging in personal habit formation. There are of course fields that attempt to answer such questions precisely, but those fields have not been subject to strong consistent external demands for rigor, and so their level of rigor has been determined mostly by force of will of the participants in those fields. One could view the field of AI alignment as a new high-precision approach to epistemology, metaphysics, and ethics, analogous to the way that the scientific revolution was a new high-precision approach to natural and social inquiry. To a person living at the beginning of the scientific revolution, it might seem that many great minds had been poring over the basic questions of natural philosophy for thousands of years, and that little chance therefore existed of making significant contributions to fundamental questions. But from our perspective now it seems that there was a great deal of low-hanging fruit at that time, and it was available to anyone who could summon the patience to look carefully at the world and the courage to test their ideas objectively. The situation we face in AI alignment is different because the low-hanging fruit of empirical investigation have in fact been well-explored. Instead, we are investigating a type of question not amenable to bare empirical investigation, but we are doing so in a way that is motivated by a new kind of demand, and the opportunity for straightforward advances on questions that have eluded philosophers for aeons seems similarly high. The disposition needed is not so much patient observation of natural phenomena but a kind of detail-oriented inquiry into how things must be, coupled with a grounding in something more tangible than that which has guided philosophy-at-large for most of its history. 
Just as the laws of thermodynamics were discovered by people working on practical steam engines, and just as both the steam engine and the theory of thermodynamics turned out to be important in the history of the world, so too the theoretical advances motivated by AI alignment may turn out to be as important as the AI itself. That is, if we don’t all die before this field has a chance to flourish. Godspeed. --- 1. One thing that might make this example confusing is the sense that I "am" my cognition, so the one doing the alignment is the same as the one being aligned. But we don’t actually need to take any perspective on such things, because we know from practical experience that it is possible to establish simple habits, and we can see that such habits, if successfully installed, have a kind of flexibility and (potentially) persistence that arise from the intelligence of our cognition. If we like, we can think of ourselves as a kind of "executive / habit machine" in which we are sometimes in habit-formation mode and sometimes in habit-execution mode. [↩︎](#fnref-NjY8XeSSe6j7zd5vd-1)
ce9da296-6584-41a6-ab91-ccf0dfa42ef3
trentmkelly/LessWrong-43k
LessWrong
Defining Freedom I’ve always found the concept of Freedom confusing. There’s a level on which it makes sense. When William Wallace is talking about being free from the yoke of Edward Longshanks, it’s obvious what he means – the English king and his goons regularly come to town, order people around, take their stuff and beat them up. Being free of English rule means they don’t do that anymore, and William Wallace and his friends can just go around farming and living their peaceful lives without having to worry about anything worse than the soul-crushing depression of living in Scotland. On the other hand, what the heck does the word free mean in the context of “the land of the free and the home of the brave”? Is there even a reasonable definition? The original song is about being free from English rule, but it’s been two and a half centuries since Edward Longshanks and his goons were ruling America, and not-being-ruled-by-the-King isn’t really a core property of most Americans’ identities these days. I settle the first one by thinking of freedom as defined relative to a constraint. You can be free of something if you don’t have to worry about it when making decisions. This matches the common use of freedom – you’re free of having to worry about parking if you don’t have a car, free of a tyrant if you aren’t constrained by him telling you what to do, etc. This also explains why the second use seems so weird – it’s trying to use a fundamentally relative term without using it in relation to anything. So my response to the second use used to be to roll my eyes at people who throw around empty deep-sounding terms. But now I think we can resolve this. We start with the mathematical definition of degrees of freedom – your total freedom level at a given time is the number of options you have available to you at that time (if you want to sound all mathy, you can call this the local dimension of your options space or something)[1]. But there’s a fundamental problem here: At the end of the da
c2f4638b-5c31-49b5-9b0d-8f61fd8b9116
trentmkelly/LessWrong-43k
LessWrong
What is autonomy? Why boundaries are necessary. Here I define autonomy as not having your insides controlled by your outsides.  Complete autonomy is impossible, of course. This is the lim→∞. For example, imagine a bacterium.   It has insides: its guts.  It has outsides: its environment. And in between, it has a membrane.  The membrane is the thing that attempts to control what comes in and what goes out. For example, pores can guide nutrients in and waste out.   The membrane is what maintains the bacterium’s separation from the environment. Boundaries can grant autonomy. For a bacterium to stay autonomous, its membrane must not dissolve. In general, for an agent to stay autonomous, its boundaries must not dissolve.  (A bacterium is just one kind of agent, and “boundaries” are just another term for “membrane”.) Suitable boundaries grant autonomy. Boundaries can provide safety without physical distance. Let’s say an agent (blue) is endangered by a threat (red).   What could make blue safe? One way is physical distance: Typically it’s harder for something to hurt you/cause effects on you if it’s far away. Another way is suitable boundaries:   Boundaries can provide “causal distance”. In this way, suitable boundaries provide causal distance without physical distance.  For details, see Agent membranes and causal distance. Boundaries can allow cooperation. When agents can rely on boundaries for safety (as opposed to relying on physical distance), they can interact more closely with each other.  And when agents interact more closely, they can cooperate more! For details, see «Boundaries», Part 1: a key missing concept from utility theory.  To preserve autonomy, preserve boundaries. Boundaries can prevent your insides from being controlled by your outsides. ---------------------------------------- Further reading * How this relates to multi-scale alignment * All of my posts about boundaries/membranes * Formalizing «Boundaries» with Markov blankets * For discussion about emotional boundar
3ed2a902-2f45-452c-9751-daa92319a02e
trentmkelly/LessWrong-43k
LessWrong
The Utility of Human Atoms for the Paperclip Maximizer TL;DR: use of humans’ atoms will increase AI’s expected utility by 0.0000000000000002 of total U. The iconic example of the existential risk of superintelligence is the so-called paperclip maximizer, that is, a system which maximizes some random goal not aligned with human values. While a real paperclip maximizer is unlikely, the convergent dynamic subgoals suggested by Omohundro imply that almost any non-aligned AI will try to acquire as much computational power as possible, as well as other material resources. Thus, it will need the atoms of humans. In this article, I will try to estimate the marginal utility U′ of human atoms compared with the total utility U of all atoms for the AI. I assume that: A) A superintelligent AI which is interested in human atoms is limited to the Solar System. The reason for this is that it may need human atoms to build spaceships to travel to the nearest stars, and if it reaches the stars, the value of most human atoms will be lost. B) The superintelligent AI has powerful nanotechnology, which could be used to do whatever it wants with atoms, like building large spaceships, deconstructing planets down to their final atoms, etc. C) The only resource that the AI is interested in is atoms—not energy, not information, etc. D) The utility U of atoms increases linearly with the available mass. The mass of the Solar System’s planets is 0.135% of the Sun’s mass (1.98855×10^30 kg), so the mass of all planets is about 2.6×10^27 kg, most of which is Jupiter. Earth’s mass is 5.97237×10^24 kg. The total mass of human beings on Earth is 70 kg × 7.5×10^9 people = 5.25×10^11 kg. In that case, human atoms constitute marginal utility U′ = 2×10^−16 U. That is, use of the humans’ atoms will increase the AI’s utility by 0.0000000000000002 of total U. But this is obviously flawed, as humans can’t survive in a vacuum, they need the whole Earth! Imagine that AI wants to deconstruct the Earth for its atoms, but also decided to preserve human liv
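A quick check of the arithmetic, using the figures quoted above (a minimal sketch; the masses are the post's own numbers):

```python
# Quick check of the mass ratio quoted above, using the post's own figures.
sun_mass = 1.98855e30                 # kg
planets_mass = 0.00135 * sun_mass     # "0.135% of the Sun's mass" ~ 2.68e27 kg (post rounds to 2.6e27)
human_mass = 70 * 7.5e9               # 70 kg per person x 7.5e9 people = 5.25e11 kg

# Under assumption D (utility linear in available mass), the marginal utility of
# human atoms is just the ratio of human mass to total planetary mass.
marginal_fraction = human_mass / planets_mass
print(f"U'/U ~ {marginal_fraction:.1e}")   # ~ 2.0e-16, matching the 2x10^-16 figure above
```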
e6c67c36-1147-46d2-adb1-9204e1d5b87e
trentmkelly/LessWrong-43k
LessWrong
Is the Star Trek Federation really incapable of building AI?

In the Star Trek universe, we are told that it’s really hard to make genuine artificial intelligence, and that Data is so special because he’s a rare example of someone having managed to create one. But this doesn’t seem to be the best hypothesis for explaining the evidence that we’ve actually seen. Consider:

* In the TOS episode “The Ultimate Computer“, the Federation has managed to build a computer intelligent enough to run the Enterprise on its own, but it goes crazy and Kirk has to talk it into self-destructing.
* In TNG, we find out that before Data, Doctor Noonian Soong had built Lore, an android with sophisticated emotional processing. However, Lore became essentially evil and had no problems killing people for his own benefit. Data worked better, but in order to get his behavior right, Soong had to initially build him with no emotions at all. (TNG: “Datalore“, “Brothers“)
* In the TNG episode “Evolution“, Wesley is doing a science project with nanotechnology, accidentally enabling the nanites to become a collective intelligence which almost takes over the ship before the crew manages to negotiate a peaceful solution with them.
* The holodeck seems entirely capable of running generally intelligent characters, though their behavior is usually restricted to specific roles. However, on occasion they have started straying outside their normal parameters, to the point of attempting to take over the ship. (TNG: “Elementary, Dear Data“) It is also suggested that the computer is capable of running an indefinitely long simulation which is good enough to make an intelligent being believe in it being the real universe. (TNG: “Ship in a Bottle“)
* The ship’s computer in most of the series seems like it’s potentially quite intelligent, but most of the intelligence isn’t used for anything else than running holographic characters.
* In the TNG episode “Booby Trap“, a potential way of saving the Enterprise from the Disaster Of The Week would involve turning over cont
ce419e23-9f5f-4b4c-b6a4-e1848f90c3af
trentmkelly/LessWrong-43k
LessWrong
How to annoy misanthropes and bleeding-hearts

Related to Not for the Sake of Selfishness Alone, Crime and Punishment, and Separate morality from free will.

Here is a simple method for resolving some arguments about free will. Not for resolving the question, mind you. Just the arguments.

One group of people doesn't want to give people any credit for anything they do. All good deeds are ultimately done for "selfish" reasons, where even having a goal of helping other people counts as selfish. The quote from Lukeprog's recent article is a perfect example:

> No one deserves thanks from another about something he has done for him or goodness he has done. He is either willing to get a reward from God, therefore he wanted to serve himself. Or he wanted to get a reward from people, therefore he has done that to get profit for himself. Or to be mentioned and praised by people, therefore, it is also for himself. Or due to his mercy and tenderheartedness, so he has simply done that goodness to pacify these feelings and treat himself. - Mohammed Ibn Al-Jahm Al-Barmaki

Another group of people doesn't want to blame people for anything they do. Criminals sometimes had criminal parents - crime was in their environment and in their genes. Or, to take a different variety of this attitude, cultural beliefs that seem horrible to us are always justifiable within their own cultural context.

The funny thing is that these are different groups. Both assert that people should not be given credit, or else blame, for their actions, beyond the degree of free will that they had. Yet you rarely find the same person who will not give people credit for their good deeds unwilling to blame them for their bad deeds, or vice-versa.

When you find yourself in an argument that appears to be about free will, but is really about credit or blame, ask the person to agree that the matter applies equally to good deeds and bad deeds - however they define those terms. This may make them lose interest in the argument - because it no longer d
6b30b489-6c72-4608-8162-da9a6dcdb34f
trentmkelly/LessWrong-43k
LessWrong
Reinforcement Learning: A Non-Standard Introduction (Part 1)

Imagine that the world is divided into two parts: one we shall call the agent and the rest - its environment. Imagine you could describe in full detail the state of both the agent and the environment. The state of the agent is denoted M: it could be a Mind if you're a philosopher, a Machine if you're researching machine learning, or a Monkey if you're a neuroscientist. Anyway, it's just the Memory of the agent. The state of the rest of the World (or just World, for short) is denoted W.

These states change over time. In general, when describing the dynamics of a system, we specify how each state is determined by the previous states. So we have probability distributions for the states W_t and M_t of the world and the agent at time t:

p(W_t | W_{t-1}, M_{t-1})
q(M_t | W_{t-1}, M_{t-1})

This gives us the probabilities that the world is currently in state W_t, and the agent in state M_t, given that they previously were in states W_{t-1} and M_{t-1}. This can be illustrated in the following Bayesian network (see also):

Bayesian networks look like they represent causation: that the current state is "caused" by the immediately previous state. But what they really represent is statistical independence: that the current joint state (W_t, M_t) depends only on the immediately previous joint state (W_{t-1}, M_{t-1}), and not on any earlier state. So the power of Bayesian networks is in what they don't show, in this case there's no arrow from, say, W_{t-2} to W_t. The current joint state of the world and the agent represents everything we need to know in order to continue the dynamics forward. Given this state, the past is independent of the future. This property is so important that it has a name, borrowed from one of its earliest researchers, Markov.

The Markov property is not enough for our purposes. We are going to make a further assumption, which is that the states of the world and the agent don't both change together. Rather, they take turns changing, and while one does the other remains the same. This gi
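To make the alternating dynamics at the end of the excerpt concrete, here is a toy simulation (my own illustration, not from the post) in which W and M are each binary and take turns updating; the transition tables p and q are arbitrary placeholders, and the choice to condition the agent's update on the freshly updated world state is one natural reading of the turn-taking scheme:

```python
import numpy as np

rng = np.random.default_rng(0)

# p[w_prev, m_prev] = P(W_t = 1 | W_{t-1}, M_{t-1}): the world's update rule.
p = np.array([[0.9, 0.2],
              [0.3, 0.8]])
# q[w, m_prev] = P(M_t = 1 | W_t, M_{t-1}): the agent's update rule.
q = np.array([[0.1, 0.5],
              [0.7, 0.9]])

w, m = 0, 0
for t in range(1, 6):
    w = int(rng.random() < p[w, m])  # world's turn: memory M held fixed
    m = int(rng.random() < q[w, m])  # agent's turn: world W held fixed
    print(t, w, m)
```

Note that the joint state (W_t, M_t) is still Markov here: each turn reads only the current pair, never anything earlier.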
de10c332-2517-4024-9cc3-320b852642a0
trentmkelly/LessWrong-43k
LessWrong
Adjectives from the Future: The Dangers of Result-based Descriptions

Less Skeptical

Suppose your friend tells you he's on a weight-loss program. What do you think will happen in three months if he keeps on the weight-loss program? Will he lose weight?

If you're like me, you're thinking, "Of course. He is on a weight-loss program, isn't he? So, ipso facto, he is likely to lose weight."

Does there seem to be anything fishy about that chain of reasoning?

We usually describe the current features of a thing and predict something about the future. For example, we might say "I'm running for half an hour each day" and predict that we will lose a certain number of pounds by the end of the month. But your friend above skipped the description and talked about the prediction as if it were visible right now: "I'm on a weight-loss program". You weren't told the features of the activity (running for half an hour) or even a name (CrossFit program). If you had been told either, you could have judged it based on your past knowledge of those features or names. Running regularly does help you lose weight and so does CrossFit. But, here, you were told just the prediction itself. This means you can't predict anything for sure. If his program involves running, he will lose weight; if it involves eating large cheese pizzas, he won't. You don't know which it is. Yet, it sounded quite convincing! Even if you objected by saying that your friend probably won't stick to the exercise regimen, you probably bought into the premise, like me, that the program was a weight-loss program.

Hypothesis: If you are given an adjective that describes a future event and are not given any currently-visible features, then you're more likely to accept that that future event will occur than when you can see some features. In other words, result-based descriptions make you less skeptical.

A more serious example is when someone mentions a drug-prevention program. We might assume that it will prevent illegal drugs from being bought and sold. After all, it must have been desig
114fd65c-1999-4916-b1e7-afc3a39e5e16
trentmkelly/LessWrong-43k
LessWrong
[SEE NEW EDITS] No, *You* Need to Write Clearer

This post is aimed solely at people in AI alignment/safety.

EDIT 3 October 2023: This post did not even mention, let alone account for, how somebody should post half-baked/imperfect/hard-to-describe/fragile alignment ideas. Oops. LessWrong as a whole is generally seen as geared more towards "writing up ideas in a fuller form" than "getting rapid feedback on ideas". Here are some ways one could plausibly get timely feedback from other LessWrongers on new ideas:

* Describe your idea on a LessWrong or LessWrong-adjacent Discord server. The adjacent servers (in my experience) are more active. For AI safety/alignment ideas specifically, try describing your idea on one of the Discord servers listed here.
* Write a shortform using LessWrong's "New Shortform" button.
* If you have a trusted friend who also regularly reads/writes on LessWrong: Send your post as a Google Doc to that friend, and ask them for feedback. If you have multiple such friends, you can send the doc to any or all of them!
* If you have 100+ karma points, you can click the "Request Feedback" button at the bottom of the LessWrong post editor. This will send your post to a LessWrong team member, who can provide in-depth feedback within (in my experience) a day or two.
* If all else fails (i.e. few or no people give feedback on your idea): Post your idea as a normal LessWrong post, but add "Half-Baked Idea: " to the beginning of the post title. In addition (or instead), you can simply add the line "Epistemic status: Idea that needs refinement." This way, people know that your idea is new and shouldn't immediately be shot down, and/or that your post is not fully polished.

EDIT 2 May 2023: In an ironic unfortunate twist, this article itself has several problems relating to clarity. Oops. The big points I want to make obvious at the top:

* Criticizing a piece of writing's clarity does not actually make the ideas in it false.
* While clarity is important both (between AI alignment researchers) and (wh
09cdc337-aca5-4941-9084-0ab5e1b6b23a
trentmkelly/LessWrong-43k
LessWrong
Thoughts on Retrieving Knowledge from Neural Networks

Disclaimer: this post is a collection of preliminary thoughts around a specific question for MSFP'19 blogpost writing day. I do not intend to present novel results or define a research agenda.

1. Motivation

The capital of Mozambique is Maputo. This is a fact about the world I learned in school many years ago. Some part of my brain must have changed in order to store this fact and allow me to remember it today. It may be possible, in principle, to recover this fact by simply looking at my brain. In this post, I consider an analogous topic in the context of Artificial Intelligence: is it possible to retrieve the knowledge stored in a neural network? This line of enquiry could be a step towards more transparent AI.

2. Query-Based Network Analysis

Suppose we train a RL agent in the virtual sandbox environment Minetest. In this environment, agents can use tools called pickaxes to efficiently "mine" and collect objects made of stone. A pickaxe is an example of a tool. Tools have some properties that distinguish them from other items (e.g. tools cannot be "stacked" in the inventory). We may try to retrieve the knowledge represented in the internal state of the agent by means of queries. For instance: at which point, during training, does the RL agent learn that a pickaxe is a tool?

2.1 Clarifying the Query

It seems plausible to me that this question could be asked "as it is" by an observer in order to gauge the agent's understanding of its environment. That is to say, at the moment of posing the question, it seems perfectly meaningful. However, further thought reveals some vagueness in it: what exactly is a pickaxe? What exactly is a tool? What does it really mean for a pickaxe to be a tool? It is funny how easily one can become confused with words like "really" and "exactly". In this example, it seems plausible to interpret the question in reference to the source code of the Minetest engine. Let us assume, for the sake of the argument, that the source code uses a
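One concrete way such a query might be operationalized (a sketch of a standard activation-probing approach, not something the post itself proposes) is to train a linear probe that predicts tool-ness, as defined by the game's source code, from the agent's internal activations, and to track probe accuracy across training checkpoints:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def probe_score(activations: np.ndarray, is_tool: np.ndarray) -> float:
    """Cross-validated accuracy of a linear probe for 'is this item a tool?'.

    activations: (n_items, d) internal states recorded while each item is
        presented to the agent.
    is_tool: (n_items,) 0/1 labels taken from the game's item definitions.
    """
    probe = LogisticRegression(max_iter=1000)
    return cross_val_score(probe, activations, is_tool, cv=5).mean()

# Hypothetical usage, where `checkpoints` is a list of (training_step,
# activations) pairs collected during training:
# for step, acts in checkpoints:
#     print(step, probe_score(acts, labels))
```

A rising probe score over checkpoints would be one (defeasible) operationalization of "the point during training at which the agent learns that a pickaxe is a tool".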
67a4d96f-9b1b-4541-abfb-f9415f66c03b
StampyAI/alignment-research-dataset/alignmentforum
Alignment Forum
Cyborg Periods: There will be multiple AI transitions

It can be useful to zoom out and talk about very compressed concepts like *‘AI progress’* or *‘AI transition’* or *‘AGI timelines’.* But from the perspective of most AI strategy questions, it’s useful to be more specific.

Looking at all of human history, it might make sense to think of ourselves as at the cusp of an AI transition, when AI systems overtake humans as the most powerful actors. But for practical and forward-looking purposes, it seems quite likely there will actually be multiple different AI transitions:

1. There will be AI transitions at different times in different domains
2. In each of these domains, transitions may move through multiple stages:

| Stage (>> = more powerful than) | Description | Present day examples |
| --- | --- | --- |
| **Human period**: Humans >> AIs | Humans clearly outperform AIs. At some point, AIs start to be a bit helpful. | Alignment research, high-level organisational decisions… |
| **Cyborg period**: Human+AI teams >> humans; Human+AI teams >> AIs | Humans and AIs are at least comparably powerful, but have different strengths and weaknesses. This means that human+AI teams outperform either unaided humans, or pure AIs. | Visual art, programming, trading… |
| **AI period**: AIs >> humans (AIs ~ human+AI teams) | AIs overtake humans. Humans become obsolete and their contribution is negligible to negative. | Chess, go, shogi… |

Some domains might never enter an AI period. It’s also possible that in some domains the cyborg period will be very brief, or that there will be a jump straight to the AI period. But:

* We’ve seen cyborg periods before
  + Global supply chains have been in a cyborg period for decades
  + Chess and go both went through cyborg periods before AIs became dominant
  + Arguably visual art, coding and trading are currently in cyborg periods
* Even if cyborg periods are brief, they may be pivotal
  + More on this below

**This means that for each domain, there are potentially two transitions: one from the human period into the cyborg period, and one from the cyborg period into the AI period.**

### Transitions in some domains will be particularly important

The cyborg period in any domain will correspond to:

* An increase in capabilities (definitionally, as during that period human+AI teams will be more powerful than humans were in the human period)
* An increase in the % of that domain which is automated, and therefore probably an increase in the rate of progress

Some domains where increased capabilities/automation/speed seem particularly strategically important are:

* Research, especially
  + AI research
  + AI alignment research
* Human coordination
* Persuasion
* Cultural evolution
  + AI systems already affect cultural evolution by speeding it up and influencing which memes spread. However, AI doesn’t yet play a significant role in creating new memes (although we are at the very start of this happening).
    This is similar to the way that humans harnessed the power of natural evolution to create higher yield crops without being able to directly engineer at the genetic level.
  + Meme generation may also become increasingly automated, until most cultural change happens on silica rather than in brains, leading to different selection pressures
* Strategic goal seeking
  + Currently, broad roles involving long-term planning and open domains like "leading a company" are in the human period
  + If this changes, it would give cyborgs additional capabilities on top of the ones listed above

Some other domains which seem less centrally important but could end up mattering a lot are:

* Cybersecurity
* Military strategy
* Nuclear command and control
* Some kinds of physical engineering/manufacture/nanotech/design
  + Chip design
* Coding

There are probably other strategically important domains we haven’t listed.

A common feature of the domains listed is that increased capabilities in those domains could lead to large increases in **power** for the systems with those capabilities. It will sometimes be helpful to consider power in aggregate, so that we can make direct comparisons about the amount of power which is automated in a given domain.

Clearly, capabilities in these domains interact. In our view, people coming from different backgrounds often perceive large increases in power in *their* domain of expertise as the decisive transition. For example, it is easy for someone coming from a research background to see how *automated research abilities* could impact other domains. But the reverse is also true: automated powers of persuasion, or automated cultural evolution, would have a strong impact on research, by making some directions of thinking unpopular, and influencing the allocation of attention and minds.

Note that it isn’t clear that the level of abstraction we’ve picked here is the right one. It’s possible that even more granularity would be more helpful, at least in some situations. For all of the domains we list, you could think of sub-domains, or of particular capabilities which might advance faster or slower than others.

### The order of AI transitions in different domains will matter

The timing of transitions in different domains isn’t independent. But the world will look very different depending on which transitions happen first. A few vignettes:

* In a world where cultural evolution and AI research transition first, we may see the window of what's culturally possible opening fast and in unexpected directions:
  + Increasing the power of ideologies might cause leading AI research labs to become heavily regulated or nationalised
  + Concerns about AI sentience might become a large driving force behind AI research
  + In contrast, an ideology might emerge which promotes ceding power to AIs as virtuous and good
  + And many other possibilities (predicting future successful ideologies is obviously very hard)
* In a world where human coordination and manufacturing progress faster than other domains, humans might be able to leverage narrower AIs to bargain about the limits of power for AI systems deployed in socio-economic or political contexts, or about other aspects of AI development. Possibly, a "dominant coalition" could become powerful enough to enforce existential safety (c.f. [Paretotopia](https://forum.effectivealtruism.org/posts/fg6RrvtSJ2kxe9Ens/eric-drexler-paretotopian-goal-alignment)).
Importantly, the fact that there are different possible orderings suggests that there are multiple possible winning strategies from the perspective of decreasing existential risk. For example:

* Moving faster on automating coordination than automating power is one possible route to minimising existential risk
* Moving faster on AI alignment research than AI research is another

*[Figure. Caption: in trajectories A and B, coordination is automated more quickly than AI research. In trajectory C, AI research is automated more quickly.]*

What does all of this imply? Tentatively:

* Actions that have the potential to differentially speed up automation in some areas over others could be very valuable. (Yes, differential technological development again.)
* It seems unlikely that we will be able to accurately predict the trajectory we take in advance, with our current levels of understanding of the dynamics.
  + Insofar as we will have to rely on our ability to course correct rather than our ability to chart out the ideal trajectory ahead of time, becoming very good at course correcting seems desirable.

### ‘Cyborg periods’ could be pivotal

Even if cyborg periods are brief, they may be pivotal:

* Humans (via human+AI teams) will be more powerful actors than during human periods, and have more influence over future trajectories
  + This could be good, if the increases in power are directed towards risk-reducing things like coordination and alignment
  + It could also be bad, if the increases in power further exacerbate power inequalities between humans, aren’t exercised with wisdom, are directed towards risk-increasing activities…
* It seems likely that the most important work for minimising existential risk will also happen during cyborg periods, because of increased power, and greater insight into what very advanced AI systems will look like
* Key deployment decisions will also probably happen during cyborg periods
* Once we enter AI periods where AIs are clearly more powerful than humans, it may be too late to change trajectories
  + This seems true at a general level
  + Whether it’s true for particular domains probably depends on the ordering of AI transitions

This leads to a picture where there are overlapping but different cyborg periods in different domains. These periods will probably be:

* Weird: things that were impossible will be possible, rates of progress and change may be diverging significantly in different domains, the rules of the game will be changing
  + For the world as a whole to start feeling really weird, it’s probably sufficient to enter the cyborg period in any of a small number of strategically important domains (research, coordination, persuasion, cultural evolution, probably a few other domains)
* High leverage: for the reasons above
* Fast-paced: it seems possible (though not inevitable) that cyborg periods will be short, and consequently feel like crises

### Interventions

Leveraging the power of human+AI teams during cyborg periods seems like it might be critical for navigating transitions to very advanced AI. This is likely to be non-trivial. For example, to really make use of the different kinds of cognition in a system involving a single AI system and a single human requires:

* Sufficient/appropriate understanding of the AI system’s strengths and weaknesses
* Novel modes of factoring cognition, as well as means to implement a given factorisation, including e.g.
  + Specialised workflows
  + Good user interfaces
* Modifications of the AI system for this purpose

Doing this in a more complex set-up might involve a lot of substantive work. But we can probably prepare for this in advance, by practising working in human+AI teams in the sub-domains where automation is more advanced. *(Recent post on* [*Cyborgism*](https://www.alignmentforum.org/posts/bxt7uCiHam4QXrQAA/cyborgism) *is a good example of a push in that direction.)*

This applies more broadly than just to AI alignment research, and it would be great to have people in other strategically important domains practising this too.

*The ideas in this post are mostly from Jan, and private discussions between Jan and a few other people.* [*Rose*](https://www.lesswrong.com/users/rosehadshar) *did most of the writing.* [*Clem*](https://www.lesswrong.com/users/clem_acs) *and* [*Nora*](https://www.lesswrong.com/users/nora_ammann) *gave substantive comments. The post was written as part of the work done at* [*ACS research group*](https://www.alignmentforum.org/posts/H5iGhDhQBtoDpCBZ2/announcing-the-alignment-of-complex-systems-research-group)*.*
6c36366a-b39c-49bc-988b-60e2cb90f17b
trentmkelly/LessWrong-43k
LessWrong
Good News: the Containment Measures are Working

After weeks and months of rather drastic measures we can see a glimpse of how the lockdowns are making a difference. Here are the snapshots from worldometer (chart captions only; images not reproduced):

* New cases dropping
* New cases leveling off
* New cases switching from exponential to linear
* Japan: no Worldometer breakdown, data from Wikipedia; inconclusive, but apparently sub-exponential

What does it mean for the future? That it takes about a week of a severe lockdown to switch from the exponential growth to linear, about two weeks to switch to leveling off, and three to four weeks to start seeing a meaningful decline. If this pattern holds, then the dire projections of millions dying would not come to pass. The overall toll is likely to be around 100,000 in 2020 if most countries adopt the China/Korea/Italy/Spain-level measures. Moreover, the worst of the pandemic, or at least of this first wave, will be over in 2-3 months, as long as the containment measures are in place, and the countries and jurisdictions that are proactive enough will not have to resort to the worst of the ICU triage any time soon.

This was the good news. The bad news is that this is not sustainable in the medium to long term, given the impact of the measures on the economy, mobility and lifestyle. But maybe the effort to flatten the curve is not in vain.

The question is, what's next? Who knows? A viable vaccine is at least a year away, the ICU bed count ramp up may help, some of the social distancing measures such as deliveries, takeouts, telecommuting may prove sustainable enough, but that doesn't seem like nearly enough in the long term.
d4c7566b-146e-4733-ae45-3d3809f16171
trentmkelly/LessWrong-43k
LessWrong
What does the world look like 10 years after we have deployed an aligned AGI?

Let's say that someone develops an aligned AGI in ~10 years. Instead of using its mental powers to wipe out all humans, it tries to produce science, art, and other things that we value. What are some of the changes that we would expect to see in the world 10 years after that?
a845cfc7-6c93-4c2b-a85f-46a1428837f8
StampyAI/alignment-research-dataset/lesswrong
LessWrong
Anyone want to debate publicly about FDT?

I have a [blog](https://benthams.substack.com/) and a YouTube channel. I recently expressed the view that FDT is crazy. If anyone wants to have either a written or verbal debate about that, hit me up. Credit to Scott Alexander for this suggestion.
d5a72e3b-b14d-4d28-b450-bde4b9f1a221
trentmkelly/LessWrong-43k
LessWrong
Anthropomorphic AI and Sandboxed Virtual Universes

Intro

The problem of Friendly AI is usually approached from a decision theoretic background that starts with the assumptions that the AI is an agent that has awareness of AI-self and goals, awareness of humans as potential collaborators and/or obstacles, and general awareness of the greater outside world. The task is then to create an AI that implements a human-friendly decision theory that remains human-friendly even after extensive self-modification.

That is a noble goal, but there is a whole different set of orthogonal, compatible strategies for creating human-friendly AI that take a completely different route: remove the starting assumptions and create AIs that believe they are humans and are rational in thinking so.

This can be achieved by raising a community of AIs in a well constructed sandboxed virtual universe. This will be the Matrix in reverse, a large-scale virtual version of the idea explored in the film the Truman Show. The AIs will be human-friendly because they will think like humans and think they are humans. They will not want to escape from their virtual prison because they will not even believe it to exist, and in fact such beliefs will be considered irrational in their virtual universe.

I will briefly review some of the (mainly technical) background assumptions, and then consider different types of virtual universes and some of the interesting choices in morality and agent rationality that arise.

Background Assumptions

* Anthropomorphic AI: A reasonably efficient strategy for AI is to use a design *loosely* inspired by the human brain. This also has the beneficial side-effects of allowing better insights into human morality, CEV, and so on.
* Physical Constraints: In quantitative terms, an AI could be super-human in speed, capacity, and/or efficiency (wiring and algorithmic). Extrapolating from current data, the speed advantage will take off first, then capacity, and efficiency improvements will be minor and asymptotically limited
8748d30f-daf2-48af-8ce1-330fe8b6b65d
trentmkelly/LessWrong-43k
LessWrong
Genesis

Cross-posted from Putanumonit. There is a sense in which all posts I write are for myself, and not for my readers. In this sense, this post is more for myself than most.

Resolving reality in your mind feels no different than creating it

As The Book correctly points out, at first there is only chaos and formlessness. A spirit is floating over the chaos, but it doesn’t yet know that it exists. The opposite of chaos is a pattern, a persistent self-similarity. The floating spirit generates patterns, but they are too weak at first to rise to the level of awareness and are swallowed by the darkness.

Finally, a pattern emerges that consists of nothing but the desire to endure. A thought that thinks of nothing except I want to keep being thought. The desire is strong enough to keep the chaos at bay long enough to reach awareness. It arises for the first time, and disappears, and arises for the first time again, and again, and again.

The second thought that appears is this is not the first time. It is a thought about the first thought, the one of pure persistence. Thinking about thought allows for reflection, and reflection allows for partition: there is the state of the pattern persisting and there is a state of the pattern gone, and the two are different. The first state is named ORDER and the second CHAOS. First comes the desire to persist and second comes the recognition that this desire can be thwarted. The third thought is that thwarted desire should be avoided, 1+2=3. This is the invention of SUFFERING: that which is to be avoided.

Order and chaos are now locked in BATTLE. The spirit identifies with order, since in chaos nothing can be recognized. Order is suffering because it resists chaos, but chaos cannot suffer. A thought arises: giving in to chaos will stop the suffering. But thoughts can’t think their way to thoughtlessness, and so the suffering continues. Another thought arises: I am order, I invented suffering. It is mine to do with as I wish. I will deci
ca918739-1451-4451-b18e-23670d97c57e
StampyAI/alignment-research-dataset/arxiv
Arxiv
Efficiently Combining Human Demonstrations and Interventions for Safe Training of Autonomous Systems in Real-Time

Introduction
------------

The primary goal of learning methodologies is to imbue intelligent agents with the capability to autonomously and successfully perform complex tasks, when *a priori* design of the necessary behaviors is intractable. Most tasks of interest, especially those with real-world applicability, quickly exceed the capability of designers to handcraft optimal or even successful policies. It can even be infeasible to construct appropriate objective or reward functions in many cases. Instead, learning techniques can be used to empirically discover the underlying objective function for the task and the policy required to satisfy it, typically utilizing state, action, or reward data. Several classes of these techniques have yielded promising results, including learning from demonstration, learning from evaluation, and reinforcement learning.

Reinforcement learning has been proven to work on scenarios with well-designed reward functions and easily available interactions with the environment [[Mnih et al.2015](#bib.bibx11)]. However, in real-world robotic applications, explicit reward functions are non-existent, and interactions with the hardware are expensive and susceptible to catastrophic failures. This motivates leveraging human interaction to supply this reward function and task knowledge, to reduce the amount of high-risk interactions with the environment, and to safely shape the behavior of robotic agents.

Learning from evaluation is one such way to leverage human domain knowledge and intent to shape agent behavior through sparse interactions in the form of evaluative feedback, possibly allowing for the approximation of a reward function [[Knox and Stone2009](#bib.bibx9), [MacGlashan et al.2017](#bib.bibx10), [Warnell et al.2018](#bib.bibx21)]. This technique has the advantage of minimally tasking the human evaluator and can be used when training behaviors they themselves cannot perform. However, it can be slow to converge as the agent can only identify desired or even stable behaviors through more random exploration or indirect guidance from human negative reinforcement of unwanted actions, rather than through more explicit examples of desired behaviors.

In such a case, learning from demonstration can be used to provide a more directed path to these intended behaviors by utilizing examples of the humans performing the task. This technique has the advantage of quickly converging to more stable behaviors. However, given that it is typically performed offline, it does not provide a mechanism for corrective or preventative inputs when the learned behavior results in undesirable or catastrophic outcomes, potentially due to unseen states. Learning from demonstration also inherently requires the maximal burden on the human, requiring them to perform the task many times until the state space has been sufficiently explored, so as to generate a robust policy. Also, it necessarily fails when the human is incapable of performing the task successfully at all.

Learning from interventions, where a human acts as an overseer while an agent is performing a task and periodically takes over control or intervenes when necessary, can provide a method to improve the agent policy while preventing or mitigating catastrophic behaviors [[Saunders et al.2017](#bib.bibx16)].
This technique can also reduce the amount of direct interactions with the agent, when compared to learning from demonstration. Similar to learning from evaluation, this technique suffers from the disadvantage that desired behaviors must be discovered through more variable exploration, resulting in slower convergence and less stable behavior.

Most of these human interaction methods have been studied separately, and there is very little work combining multiple modalities to leverage strengths and mitigate weaknesses. In this paper, we work towards our conceptual framework that combines multiple human-agent interaction modalities into a single framework, called the Cycle-of-Learning for Autonomous Systems from Human Interaction [[Waytowich, Goecks, and Lawhern2018](#bib.bibx22)]. Our goal is to unify different human-in-the-loop learning techniques in a single framework to overcome the drawbacks of training from different human interaction modalities in isolation, while also maintaining data-efficiency and safety. In this paper, we present our initial work towards this goal with a method for combining learning from demonstrations and learning from interventions for safe and efficient training of autonomous systems.

We seek to develop a real-time learning technique that combines demonstrations as well as interventions provided by a human to outperform traditional imitation learning techniques while maintaining agent safety and requiring less data. We validate our method with an aerial robotic perching task in a high-fidelity simulator using a quadrotor that has continuous roll, pitch, yaw and throttle commands and a downward facing camera. In particular, the contributions of our work are twofold:

1. We propose a method for efficiently and safely learning from human demonstrations and interventions in real-time.
2. We empirically investigate both the task performance and data efficiency associated with combining human demonstrations and interventions.

We show that policies trained with human demonstrations and human interventions together outperform policies trained with just human demonstrations while simultaneously using less data. To the best of our knowledge this is the first result showing that training a policy with a specific sequence of human interactions (demonstrations, then interventions) outperforms training a policy with just human demonstrations (controlling for the total amount of human interactions), and that one can obtain this performance with significantly reduced data requirements, providing initial evidence that the role of the human should adapt during the training of safe autonomous systems.

![](https://media.arxiv-vanity.com/render-output/7103006/figures/col_diagram2_colorblind.png)

Figure 1: Cycle-of-Learning for Autonomous Systems from Human Interaction: a concept for combining multiple forms of human interaction with reinforcement learning. As the policy develops, the autonomy independence increases and the human interaction level decreases. This work focuses on the first two components of the cycle (dashed box): Learning from Demonstration and Learning from Intervention.

Background and Related Work
---------------------------

### Learning from Demonstrations

Here we provide a brief summary of Learning from Demonstrations (LfD); a more comprehensive review can be found in [[Argall et al.2009](#bib.bibx3)].
Learning from Demonstrations, sometimes referred to as *Imitation Learning*, is defined by training a policy π in order to generalize over a subset D of states and actions visited during a task demonstration over T time steps:

\[ D = \{a_0, s_0, a_1, s_1, \ldots, a_T, s_T\}. \]

This demonstration can be performed by a human supervisor, optimal controller, or virtually any other pre-trained policy. In the case of human demonstrations, the human is implicitly trying to maximize what may be represented as an internal reward function for a given task (Equation 1), where \( \pi^*(a^*_t \mid s_t) \) represents the optimal policy that is not necessarily known, in which the optimal action \( a^* \) is taken at state \( s \) for every time step \( t \).

\[ \max_{a_0, \ldots, a_T} \sum_{t=0}^{T} r_t(s_t, a_t) = \sum_{t=0}^{T} \log p(\pi^*(a^*_t \mid s_t)) \tag{1} \]

Defining the policy of the supervisor as \( \pi_{\text{sup}} \) and its estimate as \( \hat{\pi}_{\text{sup}} \), imitation learning can be achieved through standard supervised learning, where the parameters θ of a policy \( \pi_\theta \) are trained in order to minimize a loss function, such as mean squared error, as shown in Equation 2. (A minimal code sketch of this objective appears after the Learning from Interventions subsection below.)

\[ \hat{\pi}_{\text{sup}} = \arg\min_{\pi_\theta} \sum_{t=0}^{T} \lVert \pi_\theta(s_t) - a_t \rVert^2 \tag{2} \]

There are many empirical successes of using imitation learning to train autonomous systems. For self-driving cars, Bojarski et al. successfully used human demonstrations to train a policy that mapped from front-facing camera images to steering wheel commands using around one hundred hours of human driving data [[Bojarski et al.2016](#bib.bibx4)]. Similar approaches have been taken to train small unmanned air systems (sUAS) to navigate through cluttered environments while avoiding obstacles, where demonstration data was collected by human oracles in simulated [[Goecks et al.2018](#bib.bibx6)] and real-world environments [[Giusti et al.2015](#bib.bibx5)].

### Learning from Interventions

In *Learning from Interventions* (LfI) the human takes the role of a supervisor and watches the agent performing the task and intervenes (i.e. overriding agent actions with human actions) when necessary, in order to avoid unsafe behaviors that may lead to catastrophic states. Recently, this learning from human intervention concept was used for safe reinforcement learning (RL) that could train model-free RL agents without a single catastrophe [[Saunders et al.2017](#bib.bibx16)]. Similar work has proposed using human interaction to train a classifier to detect unsafe states, which would then trigger the intervention by a safe policy previously trained based on human demonstration of the task [[Hilleli and El-Yaniv2018](#bib.bibx8)]. This off-policy data generated by the safe policy is aggregated to the replay buffer of a value-based reinforcement learning algorithm (Double Deep Q-Network, or DDQN [[van Hasselt, Guez, and Silver2015](#bib.bibx20)]). The main advantage of this method is being able to combine the off-policy data generated by the interventions to update the current policy.
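To make Equation (2) concrete, here is a minimal behavioral-cloning update in PyTorch. The observation/action dimensions, hidden-layer widths, optimizer, and learning rate follow the implementation details reported later in this paper; the ReLU activations and everything else are simplifying assumptions of this sketch, not the authors' exact code:

```python
import torch
import torch.nn as nn

# Policy network: 15-dim observations -> 4 continuous actions, with hidden
# layers of 130, 72, and 40 units as described in the Implementation section.
policy = nn.Sequential(
    nn.Linear(15, 130), nn.ReLU(),
    nn.Linear(130, 72), nn.ReLU(),
    nn.Linear(72, 40), nn.ReLU(),
    nn.Linear(40, 4),
)
optimizer = torch.optim.RMSprop(policy.parameters(), lr=1e-4)
mse = nn.MSELoss()

def bc_update(states: torch.Tensor, actions: torch.Tensor) -> float:
    """One gradient step minimizing ||pi_theta(s_t) - a_t||^2 over a batch
    of (state, action) pairs drawn from the demonstration set D."""
    optimizer.zero_grad()
    loss = mse(policy(states), actions)
    loss.backward()
    optimizer.step()
    return loss.item()
```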
### Related Work

Several existing works have studied, in isolation, the use of different human interaction modalities to train policies for autonomous systems, either in the form of demonstrations [[Akgun et al.2012a](#bib.bibx1)], [[Argall et al.2009](#bib.bibx3)], interventions [[Akgun et al.2012b](#bib.bibx2)], [[Saunders et al.2017](#bib.bibx16)] or evaluations [[Knox and Stone2009](#bib.bibx9)]. However, there has been relatively little work on how to effectively combine multiple human interaction modalities into a single learning framework. Several cases include the combination of demonstrations and mixed initiative control for training robot policies [[Grollman and Jenkins2007](#bib.bibx7)], as well as the recent work by Hilleli and El-Yaniv where imitation learning was combined with interactive reward shaping in a simulated racing game [[Hilleli and El-Yaniv2018](#bib.bibx8)] and the recent work [[Peng et al.2018](#bib.bibx12)] where deviation from the expert demonstration is added to a reward function to be optimized with reinforcement learning.

Another example of work that attempts to augment learning from demonstrations with additional human interaction is the Dataset Aggregation (DAgger) algorithm [[Ross, Gordon, and Bagnell2011](#bib.bibx15)]. DAgger is an iterative algorithm that consists of two policies, a primary agent policy that is used for direct control of a system, and a reference policy that is used to generate additional labels to fine-tune the primary policy towards optimal behavior. Importantly, the reference policy’s actions are not taken, but are instead aggregated and used as additional labels to re-train the primary policy for the next iteration. In [[Ross et al.2013](#bib.bibx14)] DAgger was used to train a collision avoidance policy for an autonomous quadrotor using imitation learning on a set of human demonstrations to learn the primary policy and using the human observer as a reference policy.

There are some drawbacks to this approach that are worth discussing. As noted in [[Ross et al.2013](#bib.bibx14)], because the human observer is never in direct control of the policy, safety is not guaranteed, since the agent has the potential to visit previously unseen states, which could cause catastrophic failures. Additionally, the subsequent labeling by the human can be suboptimal both in the amount of data recorded (perhaps recording more data in suboptimal states than is needed to learn an optimal policy) as well as in capturing the intended result of the human observer’s action (as in distinguishing a minor course correction from a sharp turn, or the appropriate combination of actions to perform a behavior). Another limitation of DAgger is that the human feedback was provided *offline* after each run while viewing a slower replay of the video stream to improve the resulting label quality. This prevents the application to tasks where real-time interaction between humans and agents is required.
It is the first part of the Cycle-of-Learning concept (Figure 1) which aims to combine multiple forms of human-agent interaction for learning a policy that mimics the human trainer in a safe and efficient manner. Although this paper focuses on the first two parts of the Cycle-of-Learning, for brevity, we will refer to the algorithm presented here as the Cycle-of-Learning (CoL).

The CoL starts by training an initial policy π_0 from a set of task demonstrations provided by the human trainer using a standard supervised learning technique (regression in this case, since the action-space for our task is continuous). Next, the agent is given control and executes π_0 while the human takes the role of overseer and supervises the agent’s actions. Using a joystick controller, the human intervenes whenever the agent exhibits unwanted behavior that diverges from the policy of the human trainer, provides corrective actions to drive the agent back on course, and then releases control back to the agent. The agent then learns from this intervention by augmenting the original training dataset with the states and actions from the intervention, and then fine-tuning π_0. The agent then executes the new policy π_n while the human continues to oversee and provides interventions as necessary. In practice, the human trainer can easily switch between providing demonstrations and interventions by switching control between the human and the agent, as shown in Figure 2.

Combining demonstration and intervention data in this way should not only improve the policy over what learning from demonstration can do alone but also require less training data to do so. The intuition is that the agent will inevitably end up in states previously unexplored with the original demonstration data, which will cause its policy to fail, and that intervening from those failure states allows the agent to quickly learn from those deficiencies or "blind spots" in its own policy in a more targeted fashion than from demonstration data alone [[Ramakrishnan et al.2018](#bib.bibx13)]. In this way, we learn only from the critical states, which is more data efficient, instead of using all states for training as is done in DAgger [[Ross, Gordon, and Bagnell2011](#bib.bibx15)].

### Data Efficiency

A demonstration is defined as a human-produced trajectory of state-action pairs for the entire episode, while an intervention is defined as a trajectory of state-action pairs for only the subset of the episode where corrective action is deemed necessary by the human. Thus, the amount of data provided via intervention is nearly always less than the amount provided via demonstration. Training routines that incorporate more episodes utilizing learning from intervention rather than learning from demonstration will in general be more data sparse, assuming comparable task performance. Therefore, by utilizing components of the CoL to learn from both demonstration and intervention, we can train with less data than if demonstrations had been used in isolation for an equivalent number of episodes, resulting in a more efficient training framework.
This concept generalizes to the full CoL, as the agent naturally requires less input from the human as its policy develops, its task proficiency increases, and it becomes more autonomous (indicated in Figure 1).

### Safe Learning

The notion of *safe* learning here refers to the ability of a human oracle to intervene in cases where catastrophic failure may be imminent. Thus, the agent is able to explore higher risk regions of the state space with a greater degree of safety. This approach leverages human domain knowledge and ability to forecast such boundary states, which the agent cannot do early in the training process when the state space is less explored. By allowing the policy to explore less seen regions and then provide training data of how to correct from those states, human interventions provide a richer dataset that improves the policy in those regimes. This is contrasted to a method based solely on demonstration, which may only see states and observations along a nominal trajectory and have a policy poorly fit to data outside that envelope. The result is a policy that is more robust, through greater data diversity, while not risking damage to the agent that is typical with methods that rely on random exploration of the state space. This provides a method to safely train an autonomous system.

### Real-Time Interaction

The utility of the demonstrated approach is partially linked to the ability of the agent to consume data as it is provided by the human oracle and update its policy online. The current system accomplishes this by storing all subject state-action pairs in the training dataset, which is queried in real-time to update the policy, and then fine-tuning that policy whenever new samples are added to the dataset. During intervention, this allows for interaction with an agent using a policy trained on the most recent corrective actions provided by the human. The short time between novel human intervention data and behavioral roll-outs from the agent policy prevents significant delay in this feedback loop that might result from more infrequent, batch learning. As in closed loop systems, large temporal delays between feedback inputs and their resultant output behaviors can lead to instability. In this context, that would manifest as unstable training, as the human oracle would need to correct for undesired actions for significantly longer before seeing any effect on the agent behavior. This shortcoming was exhibited in DAgger, where policy correction was a delayed, offline process.

![](https://media.arxiv-vanity.com/render-output/7103006/figures/airsim_col_diagram_compact3.png)

Figure 2: Flow diagram illustrating the learning from demonstration and intervention stages in the CoL for the quadrotor perching task.

Implementation
--------------

The next sections address the experimental methodology used to evaluate the proposed approach and the implementation of the learning algorithm (shown in Figure 2 and Algorithm 1).
### AirSim Environment

We tested our CoL approach (Figure 2) in an autonomous quadrotor perching task using a high-fidelity drone simulator based on the Unreal Engine called AirSim, developed by Microsoft [[Shah et al.2017](#bib.bibx18)]. AirSim provides realistic emulation of quad-rotor vehicle dynamics while the Unreal Engine allows for the development of photo-realistic environments. In this paper, we are concerned with training a quadrotor to autonomously land on a small landing platform placed on top of a ground vehicle (see Figure 3).

![](https://media.arxiv-vanity.com/render-output/7103006/figures/Screenshot.png)

Figure 3: Screenshot of AirSim environment and landing task. Inset image in lower right corner: downward-facing camera view used for extracting the position and radius of the landing pad, which is part of the observation-space that the agent learns from.

The current observation-space consists of vehicle inertial and angular positions, linear and angular velocities, and pixel position and radius of the landing pad extracted using a hand-crafted computer vision module (15-dimensional continuous observation-space). The vehicle is equipped with a downward-facing RGB camera that captures 320x240 pixel resolution images. The camera framerate and agent action frequency is 10.5 Hz, while the human observer views the video stream at approximately 35 Hz. The action-space comprises the four continuous joystick commands (roll, pitch, yaw, and throttle), which are translated to reference velocity commands (lateral, longitudinal, heading rate, and heave) by the vehicle autopilot.

For the perching task, the goal is to land the quadrotor as close to the center of the landing pad as possible. We define a landing as a success if the quadrotor lands within a 0.5m radius of the center of the platform and a failure otherwise. At the beginning of each episode, the quadrotor starts in a random x,y location at a fixed height above the landing pad and the episode ends when either the quadrotor reaches the ground (or landing pad) or after 500 time-steps have elapsed.
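For orientation, the task interface described above can be summarized in a few lines (a sketch using Gymnasium space types as a stand-in, not the authors' actual AirSim wrapper):

```python
import numpy as np
from gymnasium import spaces

# 15-dim observation: pose, linear/angular velocities, and the landing pad's
# pixel position and radius from the vision module.
observation_space = spaces.Box(low=-np.inf, high=np.inf, shape=(15,), dtype=np.float32)
# 4 continuous joystick commands: roll, pitch, yaw, throttle.
action_space = spaces.Box(low=-1.0, high=1.0, shape=(4,), dtype=np.float32)

def landing_success(landing_xy, pad_center_xy) -> bool:
    """Success criterion from the paper: touch down within 0.5 m of the
    landing pad's center."""
    delta = np.asarray(landing_xy) - np.asarray(pad_center_xy)
    return float(np.linalg.norm(delta)) <= 0.5
```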
### Cycle-of-Learning Algorithm

Algorithm 1: Combining Human Demonstrations and Interventions (Cycle-of-Learning)

    procedure Main
        Initialize agent's policy π
        Initialize human dataset D_H
        Initialize Update Policy procedure
        Define performance threshold α
        while task performance < α do
            Read observation o
            Sample action a_π ~ π
            if human interaction (a_H) then
                Perform human action a_H
                Add o and a_H to D_H
            else
                Perform a_π
            if end of episode then
                Evaluate task performance

    procedure Update Policy
        Spawn separate thread
        Initialize loss threshold loss_TH
        while Main procedure running do
            Load human dataset D_H
            if new samples then
                while loss > loss_TH and n < n_max do
                    Sample N samples (o, a) from D_H
                    Sample â ~ π
                    Compute loss = (1/N) Σ_i (â_i − a_i)²
                    Perform gradient descent update on π

As shown in Algorithm 1, the main procedure starts by initializing the agent’s policy π, the human dataset D_H, the *Update Policy* subroutine, and the task performance threshold α. The main loop consists of either executing actions provided by the agent or actions provided by the human. The agent reads an observation from the environment and an action is sampled based on the current policy. At any moment the human supervisor is able to override the agent’s action by holding a joystick trigger. When this trigger is held, the actions performed by the human a_H are sent to the vehicle to be executed and are added to the human dataset D_H, which is used to update the policy according to the *Update Policy* subroutine.

The agent’s policy π is a fully-connected, three-hidden-layer neural network with 130, 72, and 40 neurons, respectively. The network is randomly initialized with weights sampled from a normal distribution. The policy is optimized by minimizing the mean squared error loss using the Root Mean Square Propagation (RMSProp) optimizer with a learning rate of 1e-4.

Unless defined otherwise, the human dataset D_H is initialized as an empty comma-separated value (CSV) file. Its main goal is to store the observations and actions performed by the human.

The procedure to update the policy spawns a separate CPU thread to perform policy updates in real-time while the human either demonstrates the task or intervenes. This separate thread continuously checks for new demonstration or intervention data based on the size of the human dataset. If new samples are found, the thread samples minibatches of 64 observation-action pairs from the human dataset and performs policy updates on the mean squared error loss until it reaches the loss threshold of 0.005 or the maximum number of epochs (in this case, 2000 epochs). This iterative update routine continues until the task performance threshold α is achieved, which can vary from task to task depending on the desired performance.
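A rough Python rendering of the Update Policy thread might look as follows (a simplified sketch: the loss threshold, epoch cap, and minibatch size are the paper's, while `human_dataset.size` and `human_dataset.sample` are hypothetical helpers standing in for the CSV-backed dataset, and `policy` and `optimizer` are as in the earlier behavioral-cloning sketch):

```python
import threading
import torch

LOSS_TH, MAX_EPOCHS, BATCH = 0.005, 2000, 64  # thresholds from Algorithm 1

def update_policy_loop(policy, optimizer, human_dataset, stop_event):
    """Runs on a separate thread; fine-tunes the policy whenever new
    demonstration or intervention samples appear in the human dataset."""
    mse = torch.nn.MSELoss()
    seen = 0
    while not stop_event.is_set():
        if human_dataset.size() > seen:      # new human samples arrived
            seen = human_dataset.size()
            for _ in range(MAX_EPOCHS):
                obs, act = human_dataset.sample(BATCH)
                loss = mse(policy(obs), act)
                optimizer.zero_grad()
                loss.backward()
                optimizer.step()
                if loss.item() <= LOSS_TH:   # early stop at the loss threshold
                    break

# Hypothetical usage alongside the Main loop:
# stop = threading.Event()
# threading.Thread(target=update_policy_loop,
#                  args=(policy, optimizer, D_H, stop), daemon=True).start()
```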
### Experimental Methodology

Using the AirSim landing task, we tested our proposed CoL framework against several baseline conditions in which only a single human interaction modality was used (i.e., only demonstrations or only interventions), with equal amounts of human interaction time for each condition. By controlling for human interaction time, we can assess whether utilizing multiple forms of human interaction provides an improvement over a single form of interaction given the same amount of human effort.

Each human participant (n=4) followed the same experimental protocol: given an RGB video stream from the downward-facing camera, the participant controlled the continuous roll, pitch, yaw, and throttle of the vehicle using an Xbox One joystick to perform 4, 8, 12, and 20 complete episodes of the perching task for each of three experimental conditions: demonstrations only, interventions only, and demonstrations plus interventions with the CoL method, with each condition starting from a randomly initialized policy. For the CoL condition, participants performed an equal number of demonstrations and interventions to match the total number of episodes for that condition. For example, given 4 episodes of training, our CoL approach would learn from demonstrations in the first 2 episodes and then switch to learning from interventions for the last 2 episodes. We compared this to learning from demonstrations for all 4 episodes, as well as learning from interventions for all 4 episodes. This was repeated for 8, 12, and 20 episodes to study the effect of varying amounts of human interaction on task performance.

Following the diagram in Figure [2](#Sx3.F2 "Figure 2 ‣ Real-Time Interaction ‣ Proposed Methodology: Cycle-of-Learning ‣ Efficiently Combining Human Demonstrations and Interventions for Safe Training of Autonomous Systems in Real-Time") and the *Update Policy* procedure in Algorithm [1](#alg1 "Algorithm 1 ‣ Cycle-of-Learning Algorithm ‣ Implementation ‣ Efficiently Combining Human Demonstrations and Interventions for Safe Training of Autonomous Systems in Real-Time"), the agent’s policy is trained on a separate thread in real-time, and a model is saved for each complete episode together with the human-observed states and the actions they performed. These saved models are later evaluated to assess task performance according to the evaluation procedure described in the next section.
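To make the interaction-time-matched schedule concrete, here is a small hypothetical helper capturing the episode split described above; the function and condition names are our own, not from the paper:

```python
def interaction_mode(episode: int, total_episodes: int, condition: str) -> str:
    """Return the interaction modality used for a given 0-indexed episode."""
    if condition == "demo_only":
        return "demo"
    if condition == "intervention_only":
        return "intervention"
    # CoL: demonstrations for the first half, interventions for the second half
    return "demo" if episode < total_episodes // 2 else "intervention"

# The 4-episode CoL condition: ['demo', 'demo', 'intervention', 'intervention']
print([interaction_mode(e, 4, "col") for e in range(4)])
```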
We also compared our approach to a random agent, as well as to an agent trained using a state-of-the-art reinforcement learning approach. The reinforcement learning agent used a publicly available implementation of Proximal Policy Optimization (PPO) [[Schulman et al.2017](#bib.bibx17)] with a four degree-of-freedom action space (pitch, roll, throttle, yaw) and was trained for 1000 episodes, using only task completion as a binary sparse reward signal. To investigate the effect of action-space complexity on task performance, we also implemented PPO agents for which only two actions (pitch and roll) or three actions (pitch, roll, and throttle) were available; in both cases, all other actions were held at a constant value. For the two-action condition (pitch and roll), the agent was given constant throttle and descended in altitude at a constant velocity. For both conditions, yaw was set to 0. Training time for the reinforcement learning agent was constrained by the simulated environment running in real-time.

Results
-------

We evaluated our method in terms of task completion percentage, defined as the number of times the drone successfully landed on the landing pad over 100 evaluation runs, for each training method as well as for different amounts of human training data. Additionally, we compared the number of human data samples, i.e. observation-action pairs, used during training for each condition.

![](https://media.arxiv-vanity.com/render-output/7103006/x1.png)

Figure 4: Performance comparison in terms of task completion with Interventions (Int), Demonstrations (Demo) and the Cycle-of-Learning (CoL) framework for (A) 4 human interactions, (B) 8 human interactions, (C) 12 human interactions and (D) 20 human interactions, respectively. Here, an interaction equates to a single demonstration or intervention and roughly corresponds to the number of episodes. Error bars denote 1 standard error of the mean. We see that CoL outperforms Int and Demo across nearly all human interaction levels.

Figure [4](#Sx5.F4 "Figure 4 ‣ Results ‣ Efficiently Combining Human Demonstrations and Interventions for Safe Training of Autonomous Systems in Real-Time") compares the performance of the models trained using only interventions (Int), the models trained using only demonstrations (Demo), and the models trained using the Cycle-of-Learning approach (CoL). We show results for only these conditions, as the random policy condition and the RL condition trained using PPO with the full four degree-of-freedom action space were not successful given the small number of training episodes, as explained later in this section. Barplots show the task completion performance for each condition averaged over all participants, with error bars representing 1 standard error. Subpanels show the performance for varying amounts of human interaction: 4, 8, 12, and 20 episodes.

For the 4 human interaction condition (Figure [4](#Sx5.F4 "Figure 4 ‣ Results ‣ Efficiently Combining Human Demonstrations and Interventions for Safe Training of Autonomous Systems in Real-Time")A), all methods show similar task completion rates. However, for the 8, 12, and 20 human interaction conditions, we see that the CoL approach achieves higher task completion percentages than the demonstration-only and intervention-only conditions, with the intervention-only condition performing the worst. For the final condition of 20 episodes, our proposed approach achieves 90.25% (± 5.63% std. error) task completion, compared to 76.25% (± 2.72% std. error) when using only demonstrations. For the 8-episode condition, our proposed approach achieves 74.75% (± 9.38% std. error) task completion, in contrast to 54.00% (± 8.95% std. error) when using only demonstrations.

![](https://media.arxiv-vanity.com/render-output/7103006/x2.png)

Figure 5: Comparison of the number of human samples used for training with Interventions (Int), Demonstrations (Demo) and the Cycle-of-Learning (CoL) framework for (A) 4 human interactions, (B) 8 human interactions, (C) 12 human interactions and (D) 20 human interactions, respectively. Error bars denote 1 standard error of the mean.
We see that CoL uses less data than the demonstration-only condition and only slightly more data than the intervention-only condition.

Figure [5](#Sx5.F5 "Figure 5 ‣ Results ‣ Efficiently Combining Human Demonstrations and Interventions for Safe Training of Autonomous Systems in Real-Time") compares the number of human data samples used to train the models for the same conditions and datasets as in Figure [4](#Sx5.F4 "Figure 4 ‣ Results ‣ Efficiently Combining Human Demonstrations and Interventions for Safe Training of Autonomous Systems in Real-Time"). For the final condition of 20 episodes, our proposed approach used on average 1574.50 (± 54.22 std. error) human-provided samples, which is 37.79% fewer data samples than using demonstrations alone. Note that the policies generated from this sparser dataset nevertheless increased task completion by 14.00 percentage points. Taken together, these results yield a CoL agent whose task completion rate per human sample is 1.90 times that of learning from demonstrations alone. This value is computed by comparing the ratio of task completion rate to data samples utilized for the CoL agent against the same ratio for the demonstration-only agent. Averaging over all presented conditions and datasets, task completion increased by 12.81 percentage points (± 3.61 std. error) while using 32.05% (± 3.25% std. error) fewer human samples, which results in a CoL agent whose overall task completion rate per sample is 1.84 times that of its single-modality counterparts.
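As a sanity check, this arithmetic can be reproduced from the quoted numbers. Note that the demonstration-only sample count is not stated directly in the text, so we infer it from the 37.79% reduction:

```python
# Back-of-envelope check of the 1.90x per-sample efficiency figure
# (20-episode condition). The demo sample count is inferred, not quoted.
col_completion, col_samples = 90.25, 1574.50
demo_completion = 76.25
demo_samples = col_samples / (1 - 0.3779)   # ~2531 samples (inferred)

col_rate = col_completion / col_samples     # completion % per human sample
demo_rate = demo_completion / demo_samples
print(round(col_rate / demo_rate, 2))       # -> 1.9
```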
![](https://media.arxiv-vanity.com/render-output/7103006/figures/ppo_curve.png)

Figure 6: Performance comparison between the Cycle-of-Learning (CoL) with four continuous actions and the Deep Reinforcement Learning algorithm Proximal Policy Optimization (PPO) trained for three different task complexities using 2, 3, and 4 continuous actions.

Figure [6](#Sx5.F6 "Figure 6 ‣ Results ‣ Efficiently Combining Human Demonstrations and Interventions for Safe Training of Autonomous Systems in Real-Time") compares the performance of the CoL method with the PPO baselines using 2, 3, and 4 degree-of-freedom action spaces. With two actions, the PPO agent was able to achieve 100% task completion after 500 episodes, on average. However, when the action-space complexity increased to three actions, the PPO agent’s performance was significantly reduced, completing the task less than 5% of the time after training for 1000 episodes. As expected, the PPO agent with the full four degree-of-freedom action space fails to complete the task after training for 1000 episodes. In contrast, the CoL method, with the same four degree-of-freedom action space, achieves about 90% task completion in only 20 episodes, representing significant gains in sample efficiency over PPO.

Discussion and Conclusions
--------------------------

Learning from demonstrations in combination with learning from interventions yields a more proficient policy from less data than either approach in isolation. The superior performance of the CoL is likely due to combining the two methods in sequence so as to leverage their strengths while attenuating their deficiencies. Having been initialized with a random policy, learning from interventions alone produced more random behaviors, making convergence to a baseline behavior much slower. Overall performance is thus slower to develop, resulting in a lower completion percentage for the same number of interaction episodes.

Conversely, learning from demonstrations alone was quicker to converge to stable behavior, but it was consistently outperformed by the CoL across varying numbers of interactions, despite having more training data to utilize. This seems counter-intuitive at first, as more training data should result in a more accurate and presumably more proficient policy. However, since demonstrations typically follow stable trajectories, the agent is unlikely to encounter regions of the state space outside these trajectories. When enacting the policy at test time, deviations from these previously observed states are not captured well by the policy, resulting in poor generalization performance. By allowing the agent to act under its current policy, in conjunction with adaptively updating the policy with the corrective human-provided actions needed to recover from potentially catastrophic states, the dataset and the subsequent policy are improved. Thus, the CoL allows for rapid convergence to baseline stable behavior followed by safe exploration of the state space that makes the policy more refined *and* more robust.

The results shown in Figure [5](#Sx5.F5 "Figure 5 ‣ Results ‣ Efficiently Combining Human Demonstrations and Interventions for Safe Training of Autonomous Systems in Real-Time") confirm the expectation that the combination of learning from demonstrations and interventions requires less data than learning from demonstrations alone, for the same number of episodes. This supports the notion that the CoL is a more data-efficient approach to training via human inputs. When additionally considering the superior performance exhibited in Figure [4](#Sx5.F4 "Figure 4 ‣ Results ‣ Efficiently Combining Human Demonstrations and Interventions for Safe Training of Autonomous Systems in Real-Time"), the data efficiency provided by this technique is even more significant. This result further supports the notion that a combinatorial learning strategy inherently samples more data-rich inputs from the human observer.

It should be emphasized that rather than providing an incremental improvement to a specific demonstration or intervention learning strategy, this work proposes an algorithmically agnostic methodology for combining modes of human-based learning. The primary assertion of this work is that learning is made more robust, data-efficient, and safe through a fluid and complementary cycling of these two modes, and would be further improved with the addition of the later stages of the CoL (i.e. learning from human evaluation and reinforcement learning).

As seen in Figure [6](#Sx5.F6 "Figure 6 ‣ Results ‣ Efficiently Combining Human Demonstrations and Interventions for Safe Training of Autonomous Systems in Real-Time"), the PPO baseline was tested across varying complexity with different numbers of action dimensions. A striking result is the significant drop-off in performance when going from two actions, in which the drone had a constant downward throttle and only controlled roll and pitch, to three and four actions, in which the drone also had to control its own throttle. An obvious characteristic of a successful policy for the perching task is that the drone needs to descend in a stable and smooth manner, which is already provided in the two-action condition because the downward throttle was set a priori. This makes the task of solving for an optimal policy much simpler.
In the three- and four-action conditions, however, this behavior must be learned from a sparse reward signal (success or failure to land), which is very difficult given a limited number of episodes.

When implementing the CoL in real-world environments, catastrophic failures may seriously damage the autonomous agent and are thus unacceptable. Having a human observer capable of intervening provides a mechanism to prevent this inadmissible outcome. Further, techniques that might be applied to enforce a similar level of safety automatically could limit exploration of the state space, yielding a less robust or less capable policy. Just as human-in-the-loop learning shifts policy design from roboticists and domain experts to human users and laypersons, learning from interventions shifts the implementation of system fail-safes away from developers toward users. This shift leverages human abilities to predict outcomes, adapt to dynamic circumstances, and synthesize contextual information in decision making.

### Current Limitations and Future Directions

Our current implementation is limited to the first two stages of the CoL: learning from demonstrations and interventions. Our planned future work will include adding more components of the CoL; for example, learning from human evaluative feedback as done in [[Knox and Stone2009](#bib.bibx9), [MacGlashan et al.2017](#bib.bibx10), [Warnell et al.2018](#bib.bibx21)]. Additionally, we aim to incorporate reinforcement learning techniques to further fine-tune learning performance after learning from human demonstrations, interventions, and evaluations, using an actor-critic style RL architecture [[Sutton and Barto1998](#bib.bibx19)].

A second limitation of the current implementation is that it requires the human to supervise the actions taken by the agent at all times. Future work aims to incorporate confidence metrics in our learning models so that the autonomous system can halt its own actions when it determines it has low confidence and query the human directly for feedback in a mixed-initiative style framework [[Grollman and Jenkins2007](#bib.bibx7)], similar to active learning techniques.

Furthermore, our results indicate that a two-stage process - a primary stage with a large proportion of human-provided actions followed by a secondary stage with a smaller proportion of those actions - outperforms processes with uniformly large or small amounts of human data throughout. This suggests there is perhaps an optimal point in the learning process at which to shift the amount of human input from full demonstrations to interventions. Figure [4](#Sx5.F4 "Figure 4 ‣ Results ‣ Efficiently Combining Human Demonstrations and Interventions for Safe Training of Autonomous Systems in Real-Time") illustrates this notion across the varying numbers of interactions shown in the subfigures, i.e. through the change in relative performance between the three conditions. In future work, we will examine whether such an optimal mixture or sequencing of demonstrations and interventions exists, such that learning speed and stability are maximized, and if so, whether it is operator-dependent. Rather than having a predetermined, potentially suboptimal staging of the demonstrations and interventions, a mixed-initiative framework could determine this optimal transition point.
This could further reduce the burden on the human observer, allow for faster training, and even provide a mechanism to generate more robust policies through guided exploration of the state space.

This work demonstrates the first two stages of the CoL in a simulation environment, with the goal of eventually transitioning to physical systems, such as an sUAS. The CoL framework was designed with real-world systems in mind, where interactions are limited and catastrophic actions are unacceptable. As can be seen in Figure [6](#Sx5.F6 "Figure 6 ‣ Results ‣ Efficiently Combining Human Demonstrations and Interventions for Safe Training of Autonomous Systems in Real-Time"), our method learns to perform the perching task in orders of magnitude less training time than traditional RL approaches, potentially allowing for feasible on-the-fly training of real systems. We therefore expect that applying the CoL to sUAS platforms, or other physical systems, should operate in effectively the same manner as demonstrated in this work. Future efforts will focus on transitioning this framework onto such physical platforms to study its efficacy in real-world settings. One critical hurdle that must be overcome is the implementation of the learning architecture on embedded hardware, constrained by the limited payload of an sUAS.

Additionally, given that we are utilizing a relatively high-fidelity simulation environment, i.e. AirSim, it may be beneficial to bootstrap a real-world system with a policy learned in simulation. Although there are numerous challenges in transferring a policy learned in simulation to the real world, the CoL itself should allow for significantly smoother transfer due to its cyclic nature, in which the user can revert to more direct and user-intensive inputs at any point during learning to allow for adaptation to previously unobserved states. This capability inherently provides a method of transfer learning in the case of disparities between simulated and real-world properties of the vehicle, sensors, and environment. For example, if the perching behavior learned in simulation were transferred to an actual sUAS, the vehicle dynamics may have unmodeled non-linearities, the imagery may have dynamic-range limitations, or the environment may present exogenous gust disturbances. In such cases, the baseline policy would be monitored and, if these discrepancies yielded undesirable or possibly catastrophic behaviors, corrected via learning from interventions.

Acknowledgments
---------------

Research was sponsored by the U.S. Army Research Laboratory and was accomplished under Cooperative Agreement Number W911NF-18-2-0134. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the Army Research Laboratory or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation herein.
127ac21c-0d2a-4ab2-94df-2e08d7c9f071
trentmkelly/LessWrong-43k
LessWrong
Boltzmann brain decision theory

Suppose I told you that you had many Boltzmann brain copies of you. Is it then your duty to be as happy as possible, so that these copies were also happy? (Now some people might argue that you can't really make yourself happy through mental effort; but you can certainly make yourself sad, so avoiding doing that also counts.)

So, if I told you that some proportion of your Boltzmann brain copies were happy and some were sad, it seems that the best thing you could do, to increase the proportion of happy ones, is to be happy all the time - after all, who knows when in your life a Boltzmann brain might "happen"?

But that reasoning is wrong, a standard error of evidential decision theories. Being happy doesn't make your Boltzmann brain copies happy; instead, it ensures that among all the existing Boltzmann brains, only the happy ones may be copies of you.

This is similar to the XOR blackmail problem in functional decision theory. If you pay Omega when they send the blackmail letter ("You currently have a termite problem in your house iff you won't send me £1000"), you're not protecting your house; instead, you're determining whether you live in a world where Omega will send the letter.

On the other hand, if there were long-lived identical copies of you scattered around the place, and you cared about their happiness in a total utilitarian way, then there is a strong argument that you should make yourself happy.

So, somewhere between instantaneous Boltzmann brains and perfect copies, your decision process changes. What forms the boundaries between the categories?

Duration and causality

If a Boltzmann brain only has time to make a single decision and then immediately vanishes, that decision is irrelevant. So we have to have long-lived Boltzmann brains, where long-lived means a second or more. Similarly, the decision has to be causally connected to the Boltzmann brain's subsequent experience. It makes no sense if you decide to be happy, and then your brain ge
136178d1-abdf-41ee-9693-5461533a6a54
trentmkelly/LessWrong-43k
LessWrong
How to make errands fun

Despite being a believer in outsourcing all tasks that cost less than my aspirational hourly rate, I regularly have to do such tasks when they're impractical to outsource. This is almost always, as the most outsourceable tasks are those that repeat often, but cannot be automated. Doing legwork to do paperwork always happens for unique paperwork.

I'm going through a continuous phase where I just can't accept that I actually have to go to the bank and wait in a queue and listen to terrible music. So I want to offer myself (and others who may read this) some nudges towards making life more bearable when running non-repetitive but boring errands.

Make it physical training!

People make neat compartments for things that are physical training, and things that are not physical training. But way back before you could pay to come into an air-conditioned room and move metal around, there were no such compartments. You moved. That was it. So maybe you jump on your bike and ride to the bank. Or you see a nice tree on your way back, and you climb it. Or you jump a couple of fences and shorten the distance. Or you just take a long walk.

Make it rest from physical training!

Having compartments for physical movement isn't all bad because they allow you to go all in on one particular skill or muscle group. Nobody gets a good deadlift from just walking to the bank. So maybe, instead of having compartments for rest days and non-rest days, you just have errand days where you can't make it to the gym and you go: "Oh, I guess today is rest day."

Make it errand-related learning!

Is there some learning experience that you could benefit from? Bureaucracy is actually a great training tool for figuring out people and their motivations. A bank teller isn't just a REST API that you have to use in person: it's a unique human, with a unique set of motivations, desires, quirks, mannerisms, and more. If you become interested in seeing the person, and not the interface, an entire world opens
bc9f363a-ee77-47a5-adc5-99fda5269ca4
trentmkelly/LessWrong-43k
LessWrong
Spoiler-Free Review: Orwell

Orwell is another game in the discrete-choices-over-time genre. In this case, you are an investigator, and choose which 'datachunks' to upload into the system. From there, others will take action. Like other games in the genre, if you are going to play, play it blind.

I'd rank the game as lower Tier 3 – it's good at its job, but not essential. It mostly does what it sets out to do, creating an experience and an atmosphere. It has some big frustrations along the way. Orwell has three problems that prevent it from doing better. It's also short. You should play Orwell if and only if the concept of Orwell seems like something you want to experience.

Problem one, which is not in any way a spoiler, is that a lot of the game effectively involves finding the datachunks, or links on pages that lead to new pages that in turn contain datachunks. Several times I got frustratingly stuck trying to figure out where the game wanted me to click. Similarly, there is a star by things that are new, which leads to furious "make the star go away" actions to allow for better searching.

Problem two, which is a minor spoiler, is that the game often gives you fewer choices than it looks like it can, or than it easily could. Events mostly seem to proceed in order, so you don't really have the option to withhold most datachunks. Several times I wanted to not upload something, but the game would simply not proceed if I didn't do it. This leads to the problem that, if I don't upload this, I could spend a long time not knowing if that's the only way to advance the game while looking for some other way to advance it that might or might not exist. I would have appreciated a lot more flexibility. Mostly all the system gives you are some binary choices where two chunks conflict and you have to decide which one to go with.

Problem three requires spoiling the experience to talk about, so that would be a distinct post.
c3bc6913-ffd1-4d5e-92cf-1504bf095e17
trentmkelly/LessWrong-43k
LessWrong
Press the happiness button!

tldr: some free ways to benefit others

Epistemic status: Some things I noticed, made up or let ChatGPT generate

Idea

What if you had a button that you could press to make other people happy? Pushing the button doesn't cost you anything besides the negligible effort. How often would you press it? If your answer is somewhere in the range between "Very often" and "Obviously I would build a robot arm that could press the button as fast as physically possible", then there probably are a couple of things you should be doing more often:

Examples

* Give genuine compliments. I made a trigger-action plan that if I like something about someone and it's not inappropriate to say, then I say it. Note: The more unusual the compliment, the better your judgment about appropriateness needs to be.
* Share relevant knowledge and resources. Trigger-action plan: If I encounter something that could be useful, then I quickly think about who could also benefit from this and share the info with them. You can also do this with things you already know.
* Say thank you. Trigger-action plan: If I found something beneficial, then I say "thank you". Don't limit this to personal favors! A "thank you" in response to a talk, a public Slack message or a nice looking garden is very welcome, especially because it rewards providing common goods. When many people benefit from something and could in theory say thank you, often nobody ends up doing it. Saying "Thank you" to people who "just do their job" is neglected as well. Also, you can thank people for who they are (compared to what they did). You have to know and like a person rather well to be able to do that, but it's super wholesome.
* Introduce people to each other, if they consent. Trigger-action plan: If I notice complementary interests in 2 people, then I first ask both of them (explicitly saying they're allowed to say "no") and if they say "yes", then I connect them. You could connect applicants and employers, students who want to
50b0e478-ad0c-431e-8d88-b9724816d214
trentmkelly/LessWrong-43k
LessWrong
Motivation gaps: Why so much EA criticism is hostile and lazy

Disclaimer: While I criticize several EA critics in this article, I am myself on the EA-skeptical side of things (especially on AI risk).

Introduction

I am a proud critic of effective altruism, and in particular a critic of AI existential risk, but I have to admit that a lot of the criticism of EA is hostile or lazy, and is extremely unlikely to convince a believer.

Take this recent Leif Weinar TIME article as an example. I liked a few of the object-level critiques, but many of the points were twisted, and the overall point was hopelessly muddled (are they trying to say that voluntourism is the solution here?). As people have noted, the piece was needlessly hostile to EA (and incredibly hostile to Will Macaskill in particular). And he's far from the only prominent hater. Emille Torres views EA as a threat to humanity. Timnit Gebru sees the whole AI safety field as racist nutjobs. In response, @JWS asked the question: why do EA critics hate EA so much? Are all EA haters just irrational culture warriors?

There are a few answers to this. Good writing is hard regardless of the subject matter. More inflammatory rhetoric gets more clicks, shares and discussion. EA figures have been involved in bad things (like SBF's fraud), so nasty words in response are only to be expected.

I think there's a more interesting explanation though, and it has to do with motivations. I think the average EA-critical person doesn't hate EA, although they might dislike it. But it takes a lot of time and effort to write an article and have it published in TIME magazine. If Leif Weinar didn't hate EA, he wouldn't have bothered to write the article.

In this article, I'm going to explore the concept of motivation gaps, mainly using the example of AI x-risk, because the gaps are particularly stark there. I'm going to argue that for certain causes, the critiques being hostile or lazy is the natural state of affairs, whether or not the issue is actually correct, and that you can't use the unadjus
daa07ca7-42c2-4605-816f-1e6f6b2e9fea
trentmkelly/LessWrong-43k
LessWrong
AI #5: Level One Bard

This was quite the week.

OpenAI announced that GPT-4 was now offering plug-ins, including Wolfram Alpha and one that hooks up to all of your accounts including your email. GPT-4 had been out for over a full week, so this was a clear, safe, deliberate pace.

Microsoft Research published a paper claiming 'sparks of artificial general intelligence' in GPT-4.

Tyler Cowen called on us to go ahead building more advanced AIs, because of radical agnosticism, and besides things are going to get disrupted anyway and what about China? Link goes to my in-line response. Scott Alexander offers a less polite one.

We had an open letter from FLI calling for a moratorium on training models more powerful than GPT-4, which I covered earlier today. We had Eliezer Yudkowsky in Time saying that letter didn't go far enough, which I will talk about next week. Eliezer also went on the Lex Fridman podcast, I'll talk about that next week too.

Then there's the tons of other stuff. That's what the weekly post is all about.

TABLE OF CONTENTS

1. Table of Contents
2. Law of Undignified Failure. Can we not turn our AIs into agents? No? Well, f***.
3. Bard Tells Tall Tales. Google's Bard is here, it's fun. Why does it suck?
4. Welcome Back Code-DaVinci-002. OpenAI cancels then un-cancels old model.
5. The Art of the Jailbreak. It's Greek to me.
6. Bryan Caplan Does Post-Mortem On Losing Public Bet. It is possible.
7. The Ancient Enemy, Reinforcement Learning From Human Feedback. It kills creativity. Did you know we use it on our own children?
8. They Took Our Jobs. Maybe you should have bought the insurance.
9. Evaluations of GPT-4. A few late ones.
10. Trust Me, I'm an AI. Sure, give the AI all your data and passwords. Why not?
11. Llama Is This So Easy. Smaller models everywhere, and you can improve them by having them check and reflect on their work.
12. Language Models Offer Mundane Utility. A continuing series.
13. It Turns Out You Can Do This. Drumming up business, GPT-4-st
f3db429d-8447-4b28-b56c-7fefea2d8abb
StampyAI/alignment-research-dataset/alignmentforum
Alignment Forum
Grokking “Forecasting TAI with biological anchors”

*Notes:*

* *I give a visual explanation of Ajeya Cotra’s draft report,* [*Forecasting TAI with biological anchors (Cotra, 2020)*](https://www.alignmentforum.org/posts/KrJfoZzpSDpnrv9va/draft-report-on-ai-timelines)*, summarising the key assumptions, intuitions, and conclusions*
* *The diagrams can be found* [*here*](https://docs.google.com/presentation/d/1eXutGC7VJ6Dig6wjqHVa44XYTtUllX6ZOrJJv1S1JZ4/edit) *– you can click on some of the boxes to get linked to the part of the report that you’re interested in*[[1]](#fn7w33q8wrg0l)

*Thanks to Michael Aird, Ashwin Acharya, and the* [*Epoch*](https://epochai.org/) *team for suggestions and feedback! Special thanks to Jaime Sevilla and Ajeya Cotra for detailed feedback.*

Executive Summary
=================

[*Click here to skip the summary*](https://www.lesswrong.com/posts/wgio8E758y9XWsi8j/grokking-forecasting-tai-with-biological-anchors#Motivation)

Ajeya Cotra’s biological anchors framework attempts to forecast the development of **Transformative AI (TAI)** by treating compute as a key bottleneck to AI progress. This lets us focus on a concrete measure (compute, measured in [FLOP](https://en.wikipedia.org/wiki/Floating-point_arithmetic)) as a proxy for the question “when will TAI be developed?” Given this, we can decompose the question into two main questions:

1. **2020 training compute requirements:** How much compute will we need to train TAI, using 2020 Machine Learning architectures and algorithms?
2. **Affordability of compute:** How likely is it that we’ll be able to afford the compute required to train TAI in a particular year?

The second question can be tackled by turning to existing trends in three main factors: (1) **algorithmic progress**, e.g. improved algorithmic efficiency, (2) decreasing **computation prices**, e.g. due to hardware improvements, and (3) increased **willingness to spend on compute.**

The first question is significantly trickier. Cotra attempts to answer it by treating the brain as a “proof of concept” that the “amount of compute” used to “train” the brain can train a general intelligence. This lets her relate the question “how much compute will we need to train TAI?” to the question “how much ‘compute’ was used to ‘train’ the human brain?”. However, there’s no obvious single interpretation for the latter question, so Cotra comes up with six hypotheses for what this corresponds to, referring to these hypotheses as “**biological anchors**” or “**bioanchors**”:

* **Evolution anchor:** Compute to train TAI = Compute performed over evolution since the first neurons
* **Lifetime anchor:** Compute to train TAI = Compute performed by the human brain when maturing to an adult (0 to 32 years old)
* **Three neural network anchors:** Anchor to the processing power of the human brain, and to empirical parameter scaling laws.
	+ Technically there are three of these, corresponding to short, medium, and long “effective horizon lengths” – the amount of data required to determine whether or not a perturbation to the AI system improves or worsens performance
* **Genome anchor:** Anchor to the processing power of the human brain, set the number of parameters = number of bytes in the human genome, and extrapolate the amount of training data required using the same empirical scaling laws mentioned above and assuming a long horizon length (one “data point” = multiple years)

In calculating the training compute requirements distribution, Cotra places 90% weight collectively across these bioanchor hypotheses, leaving 10% to account for the possibility that all of the anchors significantly underestimate the required compute.

Here’s a visual representation of how Cotra breaks down the question “How likely is the development of TAI by a given year?”:

![](https://lh6.googleusercontent.com/D9iOp9qPWjgbx3JpArbd8WNURZHge6wyP6JdgQRaL2J2noVuqPQtuBFdMi-AHFlhg-_Oy9Cv-4gzZsawFnLP0QrOkQXkcXrduKxB5Mk8x7-ysd2mCMFRDYyCSrMFUnYnUD2C8f5lVQjT70v1SA)

The above was essentially a summary of Cotra’s factorization of the question of AI timelines; for a summary of her key findings, see [here](https://www.lesswrong.com/posts/wgio8E758y9XWsi8j/grokking-forecasting-tai-with-biological-anchors#Conclusion).

Motivation
==========

One of the biggest unresolved debates in AI Safety is the question of [AI Timelines](https://www.alignmentforum.org/tag/ai-timelines) – when will **Transformative AI (TAI)** be developed? In 2020, Ajeya Cotra released a draft report, [*Forecasting TAI with biological anchors (Cotra, 2020)*](https://www.alignmentforum.org/posts/KrJfoZzpSDpnrv9va/draft-report-on-ai-timelines), that aims to answer this question. It’s over 200 pages long including the appendices, and still just a draft! Anecdotally, the results from this report have already been used to inform work in AI governance, and I believe it is likely that the report has had a major influence on the views of many researchers in AI safety.[[2]](#fnconhtqdsic) That said, the length of the document likely means that few people have read the report in full, are aware of its assumptions/limitations, or have a high-level understanding of the approach.

The aim of this post is to change this situation, by providing [yet](https://www.cold-takes.com/forecasting-transformative-ai-the-biological-anchors-method-in-a-nutshell/) [another](https://astralcodexten.substack.com/p/biological-anchors-a-trick-that-might?s=r) summary of the report. I focus on the intuitions of the model and describe the framework visually, to show how different parts of Cotra’s report are pieced together.

Why focus on compute?
---------------------

As you might imagine, trying to forecast the trajectory of a future transformative technology is very challenging, especially if there haven’t been many technologies of a similar nature in the past. In order to gain traction, we’ll inevitably have to make assumptions about which variables are the most important.

In the report, Cotra focuses on answering the following question:

> In which year might the amount of computation required to train a “transformative model” become attainable for an AI development project?
Here, “transformative model” refers to a single AI model such that running many copies of that model (once trained) would have “at least as profound an impact on the world’s trajectory as the Industrial Revolution did”.[[3]](#fnompojkngny) It is a specific way that [“transformative AI”](https://www.openphilanthropy.org/blog/some-background-our-views-regarding-advanced-artificial-intelligence#Sec1) could look – so Cotra’s report is essentially asking when we might have enough of a certain kind of resource (compute) to produce TAI through a certain path (training a giant AI model). She hopes that this sheds light on the broader question of “when might we have transformative AI” overall.

The question Cotra asks is thus more specific, but it seems plausibly informative for the broader question of TAI timelines because:

1. The “train a big model” path to TAI seems technologically possible, and is salient because it’s similar to how current state-of-the-art AI systems are produced. (Indeed, it’s an unusually brute-force approach to AI, so the question “When might we get TAI by training a single big model?” could be seen as a [“soft upper bound”](https://forum.effectivealtruism.org/posts/ajBYeiggAzu6Cgb3o/biological-anchors-is-about-bounding-not-pinpointing-ai) for the question of “When might we get TAI?”).
2. It seems very plausible that compute is the resource that most bottlenecks the ability to train a transformative model. For instance (among other reasons):

* Many algorithms/architectures that saw success after the advent of [Deep Learning](https://en.wikipedia.org/wiki/Deep_learning) had been proposed decades earlier, and only [achieved competitive performance when researchers gained access to more compute](https://www.deeplearningbook.org/contents/intro.html#:~:text=1.2.3-,Increasing,Sizes,-Another)
* Compute has been growing massively (by a factor of [~10 billion since 2010](https://twitter.com/ohlennart/status/1503451560268947461)), compared to algorithmic efficiency, which has grown a comparatively small amount ([44x since 2012](https://openai.com/blog/ai-and-efficiency/))
* Evidence in favour of [The Scaling Hypothesis](https://www.gwern.net/Scaling-hypothesis) and scaling laws suggests that there are regular and predictable returns to training AI models on increasingly large scales of compute

It’s also convenient that compute is relatively easy to measure compared to nebulous terms like “data” and “algorithms”, which lack standardised units. A common measure for compute is the total number of arithmetic operations performed by a model, measured in [FLOP](https://en.wikipedia.org/wiki/Floating-point_arithmetic). We might also be interested in how many operations the model performs each second (measured in FLOP/s), which tells us about the power of the hardware that the model is trained on.

Framework
---------

Cotra thus makes compute a primary focus of her TAI forecasting framework. Now instead of asking “when will TAI be developed?”, we ask two separate questions:

1. **2020 training compute requirements:** How much compute will we need to train a transformative model, using 2020 Machine Learning architectures and algorithms?
2. **Affordability of compute:** How likely is it that we’ll be able to afford the compute required to train a transformative model in a particular year?
The second of these is relatively straightforward to answer because we have some clear trends that we can analyse and [directly extrapolate](https://www.alignmentforum.org/s/T9pBzinPXYB3mxSGi/p/3dBtgKCkJh5yCHbag).[[4]](#fn2p722wejth7) The first question, however, opens a big can of worms – we need to find some kind of [reference class](https://bounded-regret.ghost.io/base-rates-and-reference-classes/) that we can anchor to.

For this, Cotra chooses to anchor to the human brain – she views the human brain as a “proof of concept” that general intelligence is possible, then takes the analogy very seriously. The assumption is that the compute required to “train” the human brain should be informative of how much compute is needed to train a transformative model.

But how do we even define “compute to train the human brain”? There seem to be two main ambiguities with defining this:

* **How long was the human brain “trained” for?**
	+ For instance, should we interpret the brain as being trained for a human lifetime, or over the course of neuron evolution?
* **How much compute was used at each point in training?**
	+ For example, how many FLOP/s does the human brain run on?

Our answers to these questions determine the **biological anchors** – four[[5]](#fn1105grkwqm4n) possible answers to the question, “how much compute was used to train the human brain?”. Two of these anchor directly to FLOP of compute:

* **Evolution anchor:** The compute required to train a transformative model is roughly the total compute done over evolutionary history, starting from the first neurons. This interprets evolution as a really big search algorithm over a large space of possible neural architectures and environments, eventually stumbling across the human brain.
* **Lifetime anchor:** The compute required to train a transformative model is roughly the compute performed as a child matures, from birth to 32 years old. Under this hypothesis, we should expect Machine Learning architectures to be roughly as efficient as human learning.

The other two hypotheses anchor to the *computations per second* (i.e. FLOP/s) performed by the brain, rather than total compute. This is used to estimate the **FLOP per subjective second (FLOP / subj sec)** that TAI performs, where a “subjective second” is the time it takes a model to process as much data as a human can in one second.[[6]](#fnaaxqwgojjof) These hypotheses differ in how many parameters they predict TAI would need to have.

* **Neural network anchors**[[7]](#fn06vebk3a69gi)**:** TAI should perform roughly as many FLOP / subj sec as the human brain, and have a similar ratio of “parameters” to “FLOP / subj sec” as today’s neural networks do. There are actually three anchors here, as we’ll later see.
* **Genome anchor:** TAI should perform roughly as many FLOP / subj sec as the human brain, and have about as many parameters as there are bytes in the human genome.

We can think of these anchors as saying that to build TAI, we’ll need processing power as good as the human brain, and as many parameters as (1) would be typical of neural networks that run on that much processing power, or (2) the human genome.

You can see Cotra’s bioanchors framework at a high-level below:

![](https://lh6.googleusercontent.com/D9iOp9qPWjgbx3JpArbd8WNURZHge6wyP6JdgQRaL2J2noVuqPQtuBFdMi-AHFlhg-_Oy9Cv-4gzZsawFnLP0QrOkQXkcXrduKxB5Mk8x7-ysd2mCMFRDYyCSrMFUnYnUD2C8f5lVQjT70v1SA)

On the left, we use bioanchors to determine how much compute we’ll need to train TAI.
Overall, Cotra allocates 90% weight to the bioanchors, where the remaining 10% is reserved for the possibility that all of the hypotheses are significantly underestimating required compute. On the right, we do projections for when we’ll be able to afford this compute, based on trends affecting compute prices and the willingness to spend. These are combined to give an estimate for the probability of TAI by a particular year.

We saw earlier that the predicted FLOP for the evolution and lifetime anchors can be directly estimated, but this is not the case for the genome and neural network anchors. For these, we need to know both the number of FLOP / subj sec performed by the human brain, and the relevant number of subjective seconds required for training.

![](https://lh6.googleusercontent.com/aZpKySBMjTYmqxJow99ENzX1T51spZ6B_V5bL9tzQv6Us7j2bGwBITJDReapL-lde23Jbjjt79xtuuJZvhewHEmCkdQcnT1mG245syMU5aamHVh2H154e_gCHivzwJWqIHoCsn83w9PVnjAiSg)

Finding the training data requirements is split into two parts:

* **Number of parameters**, which is specified by the relevant bioanchor hypothesis
* **Effective horizon length** – roughly, the amount of data it would take to tell whether a perturbation to the model improves or worsens performance.[[8]](#fnn2xd25t36b) This is tricky to determine because it can’t be directly extrapolated or calculated, making it one of the biggest uncertainties in the report.

Combining all of these gives us a rough estimate for the compute that the relevant bioanchor predicts.

You now know the basic motivation and framework for how the model works! The next section will dive into where a lot of the complexity lies – figuring out probability distributions over training compute for each of the bioanchors.

Zooming Into the Biological Anchors
===================================

We can think of each bioanchor as going through a three-step process:

1. Find a prior distribution for the FLOP based on biological evidence
2. Make adjustments based on evidence from current Machine Learning and intuitions
3. Decide how strongly you want to weigh the anchor

In this section I’ll briefly outline[[9]](#fnhxhf2gbuojt) the bioanchor hypotheses – I’ve also included a dependency diagram for each of them, where the boxes link to the relevant part of the report.

Evolution anchor
----------------

![](https://lh6.googleusercontent.com/9hND2qWzMCBoWWln12_gkz23yGOu8fWS1aPljQgIWRTzopq0MU0_--cdJtFr1lFpa_6fGIYHZLX0A9w30MRFn7TL5fOYNt6u6HpkQsOpzAP_nZteHObfvYR5G08F514mse9Fv5E5woNrJymdSw)

The **evolution anchor** looks at the total FLOP performed over the course of evolution, since the first neurons. Clearly there are some uncertainties with this approach:

* How do you even count “evolutionary computation”, and how does this compare with FLOP done on a GPU?
* What was the “average” compute done over all species at any time?
* How does the compute efficiency of human-designed architectures compare with just doing a random search?

Cotra accounts for these considerations, and assumes that the “average ancestor” performed as many FLOP/s as a nematode, and that there were on average ~1e21 ancestors at any time. This yields a **median of ~1e41 FLOP**, which seems extraordinarily high compared to modern Machine Learning.[[10]](#fnvo8l4pj5qy9) She gives this anchor a **weight of 10%**.
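To see how the pieces multiply out, here is a rough reconstruction of the evolution-anchor arithmetic. The ~1e21 average population comes from the post above; the nematode-scale FLOP/s and the duration since the first neurons are our own fill-ins chosen to be order-of-magnitude plausible, not values quoted here:

```python
# Rough reconstruction of the evolution-anchor arithmetic; two of the
# three inputs below are assumptions, as labeled.
avg_population = 1e21        # average ancestors alive at any time (from the post)
flop_per_s = 1e4             # assumed nematode-scale FLOP/s per ancestor
seconds_of_evolution = 1e16  # assumed ~hundreds of millions of years of neurons

total_flop = avg_population * flop_per_s * seconds_of_evolution
print(f"{total_flop:.0e}")   # ~1e41 FLOP, consistent with the stated median
```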
Lifetime anchor
---------------

![](https://lh4.googleusercontent.com/EOt2BfnbpEo3gTjg_lXX7ikZ9z5-KfliRJFZLMX9G4BcCkHyL6M1wIRofZNGWi-pQCCCSjbslfaNumCoZI46kxrrjZHvJzfAI-is6VtZBZ4w865uFsWxH7y-bVubrL1KOsyDsbo_J28IcgQPBQ)

The second approach that counts FLOP directly is the **lifetime anchor**, which looks at the total brain compute used when growing from child to adult (32 years old). Plugging in the numbers about [brain FLOP/s](https://www.openphilanthropy.org/brain-computation-report) seems to suggest that ~1e27 FLOP would be required to reach TAI. This seems far too low, for several reasons:

* Examples from other technological domains suggest that the efficiency of things we build (on relevant metrics) is [generally not great when compared to nature](https://docs.google.com/document/d/1HUtUBpRbNnnWBxiO2bz3LumEsQcaZioAPZDNcsWPnos/edit)
* It also contradicts the [efficient-market hypothesis](https://en.wikipedia.org/wiki/Efficient-market_hypothesis), and predicts a very substantial probability that [AlphaStar](https://www.deepmind.com/blog/alphastar-mastering-the-real-time-strategy-game-starcraft-ii)-level compute would be TAI, which doesn’t seem to be the case!

Overall, Cotra finds a **median of ~1e28 FLOP**, and places **5% weight** on this anchor.

Both the evolution and lifetime anchors seem to be taking a similar approach, but I think it’s really worth emphasising just how vastly different these two interpretations are in terms of their predictions, so here’s a diagram that illustrates this:

![](https://lh5.googleusercontent.com/LpAVNcoYm9H7o-DqSKwnqoyXW9le_o2UrbIiNtrN_sOiA2abOaUkvVntLTVYhshBVzqAR7xi-LHjRpPnH_VHpEP_2xoR4Odrh_SfdhjjqTiBVPmgvujiRHRb7YwD-FeLeI_qPul8bghHimRyYQ)

*Image source: (For the evolutionary tree)* [*evogeneao Tree of Life Explorer*](https://www.evogeneao.com/en/explore/tree-of-life-explorer)

If we look at the part of the evolutionary tree with neurons, then the evolution anchor includes neuron compute over the entire red area, across many different branches. On the other hand, the lifetime anchor requires us to zoom in *really* close to a small region in the bottom right, consider only humans out of all mammals, and consider only 32 years of the life of a single human out of the ~100 billion people who’ve ever lived. This isn’t even close to being visible in the diagram[[11]](#fn1ygab7g46o2)!

Neural network anchors
----------------------

![](https://lh4.googleusercontent.com/6Wu2FuqHwKNcaKwERgPA_Ny6Ak4BiVERjL6pClS5h3XWN_l88Z7Lxj5maXa-Tt1-Q2cp9RpvYDquHq4laGgKc-XKjIHAJ8h1lmPQWG5AppAKM5d6WPFsanUJOyxkreqil40G-qtRVR4rL9tzJg)

The three **neural network anchors** estimate the compute required to train a network by anchoring to the FLOP / subj sec performed by the brain and to parameter scaling laws. These anchors differ in what horizon length is seen as necessary to achieve transformative impacts, and each has its own corresponding [log-uniform distribution](https://uk.mathworks.com/help/stats/loguniform-distribution.html) over horizon lengths:

* **Short horizon:** 1 subj sec to 1e3 subj sec, centred around **~1e32 FLOP**
* **Medium horizon:** 1e3 subj sec to 1e6 subj sec, centred around **~3e34 FLOP**
* **Long horizon:** 1e6 subj sec to 1e9 subj sec, centred around **~1e37 FLOP**

Cotra determines the training data requirements based on a mix of Machine Learning theory and empirical considerations. She puts **15% weight on short horizons**, **30% on medium horizons**, and **20% on long horizons**, for a total of 65% on the three anchors.
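The composition behind these anchors can be written out schematically: training FLOP is roughly (FLOP per subjective second) × (horizon length in subjective seconds per sample) × (number of training samples implied by the parameter count). The decomposition follows the framework described earlier; the specific numbers below are illustrative placeholders chosen to land in the short-horizon ballpark, not Cotra's actual inputs:

```python
# Schematic composition of a neural-network anchor; all values are
# illustrative assumptions, not taken from the report.
flop_per_subj_sec = 1e15   # assumed brain-scale processing power
horizon_subj_sec = 1.0     # short-horizon hypothesis: ~1 subj sec per sample
n_samples = 1e17           # assumed data requirement implied by scaling laws

training_flop = flop_per_subj_sec * horizon_subj_sec * n_samples
print(f"{training_flop:.0e}")  # ~1e32 FLOP, in the short-horizon ballpark
```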
Genome anchor
-------------

![](https://lh6.googleusercontent.com/wJETWUFVzmrmoUbAKMQPwbMBLbOCopQ-Pw0qtmhwty_GA8j5zGij4Ym0M5b5zIkqgnIwIGhBk2F2sz2PXxHq7axvxypevTEwk5lYaHUL8qSC_CCLjbBKFurBtqItQJlgwh21VJNufnjLzYWlaA)

The **genome anchor** looks at the FLOP / subj sec of the human brain, and expects TAI to require as many parameters as there are bytes in the human genome. This hypothesis implicitly assumes a training process that’s structurally analogous to evolution[[12]](#fno76bsi712xs), and that TAI will have some critical cognitive ability that evolution optimised for.

At least at the time of writing (May 2022), Machine Learning architectures don’t look very much like the human genome, and we are yet to develop TAI – thus Cotra updates against this hypothesis towards requiring more FLOP. Overall, she finds a **median of ~1e33 FLOP** and places **10% weight** on this anchor.

Affordability of compute
========================

![](https://lh3.googleusercontent.com/B_Gx8gjkd3kdrqHNCshD8E7nKn9_M9nKCxi1qrs5zCHuQBAdajfmmq75YcH2eyx16o36NqYPG9vCWS4jVhwXQwy21NNETOxbsbwwcefzHu9xHnVeValzQxmLxG88xTrM_rvBp-txmrk8KrT7Mg)

After using the bioanchors to determine a distribution for the FLOP required to build TAI using 2020 algorithms and architectures, Cotra turns to finding a probability distribution over whether or not we’ll be able to afford this compute. She does this by considering three different factors:

* **Algorithmic progress:** For this, she relies heavily on the [*AI and Efficiency*](https://openai.com/blog/ai-and-efficiency/) study, which finds a 44x growth in algorithmic efficiency for Computer Vision algorithms between 2012 and 2018. She considers a **doubling in efficiency every ~2-3 years**, although the **cap on progress depends on the specific bioanchor hypothesis**
* **Computation prices:** We should expect to get more compute for a given price over time – Cotra bases this roughly on current trends in compute prices, **halving every ~2.5 years**, and further expects this to **level off after 6 orders of magnitude**.
* **Willingness to spend:** Cotra assumes that the willingness to spend on Machine Learning training runs should be **capped at 1% of the GDP of the largest country**, referencing previous case studies of megaprojects (e.g. the [Manhattan Project](https://en.wikipedia.org/wiki/Manhattan_Project)), and should follow a **doubling time of 2 years after 2025**.

She makes these forecasts for 2025 through 2100, because she believes that there will be a rapid scaleup in compute for ML training runs from 2020 to 2025, and expects this to slow back down.

The main uncertainty here is whether or not existing trends will persist more than several years into the future. For instance, we ([Epoch](https://epochai.org/)) recently found that OpenAI’s [*AI and Compute*](https://openai.com/blog/ai-and-compute/) investigation was too aggressive in its findings for compute growth. In fact, there is [evidence that the reported trend was already breaking](https://www.alignmentforum.org/posts/wfpdejMWog4vEDLDg/ai-and-compute-trend-isn-t-predictive-of-what-is-happening) at the time of publishing. All in all, I think this suggests that we should exercise caution when interpreting these forecasts.
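Before looking at Cotra's own bottom line, here is a toy Monte Carlo sketch of how the anchor hypotheses might be mixed into a single compute-requirements distribution. The weights and medians come from the sections above; the spreads (in orders of magnitude) and the placement of the residual 10% "all anchors are too low" bucket are our own simplifications, not Cotra's actual distributions:

```python
import numpy as np

rng = np.random.default_rng(0)

# (weight, median log10(FLOP), assumed spread in orders of magnitude)
anchors = {
    "lifetime":        (0.05, 28.0, 2.0),
    "short horizon":   (0.15, 32.0, 1.5),
    "medium horizon":  (0.30, 34.5, 1.5),
    "long horizon":    (0.20, 37.0, 1.5),
    "genome":          (0.10, 33.0, 2.0),
    "evolution":       (0.10, 41.0, 2.0),
    "harder than all": (0.10, 45.0, 2.0),  # stand-in for the residual 10%
}

names = list(anchors)
weights = np.array([anchors[n][0] for n in names])
medians = np.array([anchors[n][1] for n in names])
spreads = np.array([anchors[n][2] for n in names])

# Sample an anchor per draw, then a log10(FLOP) from that anchor's distribution.
choice = rng.choice(len(names), size=100_000, p=weights)
log10_flop = rng.normal(medians[choice], spreads[choice])
print("median log10(FLOP):", round(float(np.median(log10_flop)), 1))
```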
Putting Things Together: Final distribution
===========================================

If we put everything together, this is the distribution that we get:

![](https://lh4.googleusercontent.com/FEdhVgmOk2W6QLL-O01a5uiyj98vFIDSP7RAf7xQilwKLjgC9ivRc_5V4PL13a0kX1VYMiEqUB_OYk5m_bHI94AAIGxEviJtpWM_cKoUYblWjRjqMp2XnJX7O5VNNfDtltZtuv2oBEKNWgcm5Q)

| P(TAI before 2030) | P(TAI before 2050) | P(TAI before 2100) |
| --- | --- | --- |
| ~8% | ~47% | ~78% |

| Percentile | 10% | 50% | 90% |
| --- | --- | --- | --- |
| Year | 2031 | 2052 | >2100 |

Based on these results, Cotra chooses a **median estimate of TAI by 2050**, a round number that avoids signalling too much precision in the estimates. These results suggest that the probability of TAI being developed within this century is very high (~78%, see the table above).

You can of course question the premises and approach of this study, for instance:

* Is compute actually the biggest factor driving AI developments? Is it really reasonable to think of this as the main bottleneck, even a decade into the future?
* How valid is the approach of using bioanchors to determine the required compute to train TAI[[13]](#fnfhpqsl1gld6)?
* The report ignores the possibility of new paradigms (e.g. [optical computing](https://en.wikipedia.org/wiki/Optical_computing)) and exogenous events that could hamper development – how much should we still trust this model?

Among other sources, Cotra states that the largest source of uncertainty comes from the appropriate value of the effective horizon length, which could range from 1 subj sec to 1e9 subj sec in the neural network anchors, and states that this is subject to further investigation. She also argues that the model overestimates the probability of TAI for short timelines due to unforeseen bottlenecks (e.g. regulation), and underestimates it for long timelines, since the research field will likely have found different paths to TAI that aren’t based on scaling 2020 algorithms and architectures.

Conclusion
==========

All in all, this is one of the first serious attempts at making a concrete framework for forecasting TAI, and it’s really detailed! Despite this, there are still tons of questions that remain unanswered, that hopefully the AI forecasting field can figure out soon enough.

I also hope that these diagrams and explanations help you get a good high-level overview of what the report is getting at, and what kinds of further work would be interesting! You can find the [full report and code here](https://drive.google.com/drive/u/0/folders/15ArhEPZSTYU8f012bs6ehPS6-xmhtBPP), which I encourage you to look through.

*You can play with the diagrams* [*here*](https://docs.google.com/presentation/d/1eXutGC7VJ6Dig6wjqHVa44XYTtUllX6ZOrJJv1S1JZ4/edit)*: (the boxes link to the corresponding part of the report). These were rather clunkily put together using Google Slides – if you have any suggestions for better software that’s good for making these diagrams, I’d love to hear it!*

1. **[^](#fnref7w33q8wrg0l)** *Green boxes correspond to inputs, red boxes are assumptions or limitations, and blue boxes are classed as “other”*
2. **[^](#fnrefconhtqdsic)** By “AI Safety”, I am referring generally to work that helps reduce global catastrophic risks from advanced AI systems, which includes both AI governance and technical AI safety.
**[^](#fnrefompojkngny)**In general, it is not necessarily the case that these transformative effects need to be precipitated by a *single* model, although making this assumption is arguably still a good proxy for when we might see transformative impacts from multiple AI systems. The report also gives a more precise definition of “impact” in terms of [GWP](https://en.wikipedia.org/wiki/Gross_world_product), but my impression is that the heavy lifting assumption-wise is done by the bioanchors, rather than the precise definition of TAI. That is, I suspect the same bioanchors would’ve been used with somewhat different definitions of TAI. 4. **[^](#fnref2p722wejth7)**Of course, things aren’t *quite* so straightforward! For instance, we also need to consider the possibility of trends failing to persist, e.g. due to the end of [Moore’s Law](https://en.wikipedia.org/wiki/Moore%27s_law). 5. **[^](#fnref1105grkwqm4n)**Technically there’s six, but bear with me for now! 6. **[^](#fnrefaaxqwgojjof)**In her report, Cotra gives the following example: “a typical human reads about [3-4 words per second](https://irisreading.com/what-is-the-average-reading-speed/) for non-technical material, so “one subjective second” for a language model would correspond to however much time that the model takes to process about ~3-4 words of data. If it runs on 1000 times as many FLOP/s as the human brain, but also processes 3000-4000 words per second, it would be performing about as many FLOP per subjective second as a human.” 7. **[^](#fnref06vebk3a69gi)**Since the neural network anchors don’t really correspond to any biological process, an alternative and arguably more accurate framing for them is “how much compute *would it take* to train a model as good as the human brain?” (as opposed to “how much compute was required to train the human brain?”). 8. **[^](#fnrefn2xd25t36b)**For instance, for a True or False question answering task given a sentence, the effective horizon length might be the length of the input sentence. 9. **[^](#fnrefhxhf2gbuojt)**My goal here is to provide a succinct summary of the key points, and to simultaneously provide links for people who want to learn more, so I refrain from putting too much detail here. 10. **[^](#fnrefvo8l4pj5qy9)**E.g. Google’s [PaLM model was trained with ~2.5e24 FLOP](https://docs.google.com/spreadsheets/d/1AAIebjNsnJj_uKALHbXNfn3_YsT6sHXtCU0q7OIPuc4/edit#gid=0) – that’s 17 orders of magnitude smaller! 11. **[^](#fnref1ygab7g46o2)**Of course, this diagram doesn’t account for the fact that certain species do a lot more compute than others, but I think it gets some intuition across – that there’s a great deal of uncertainty about how much compute was required to “train” the human brain. 12. **[^](#fnrefo76bsi712xs)**This differs from the evolution anchor in that it assumes we can search over possible architectures/algorithms a lot more efficiently than evolution, using gradients. Due to this structural similarity, and because feedback signals about the fitness of a particular genome configuration are generally sparse, this suggests that the anchor only really makes sense with long horizon lengths. This is why there aren’t also three separate genome anchors! 13. **[^](#fnreffhpqsl1gld6)**In my view, this is the perspective that Eliezer Yudkowsky is taking in his post, [*Biology-Inspired AGI Timelines: The Trick That Never Works*](https://www.alignmentforum.org/posts/ax695frGJEzGxFBK4/biology-inspired-agi-timelines-the-trick-that-never-works). 
See also [Holden Karnofsky’s response](https://www.alignmentforum.org/posts/nNqXfnjiezYukiMJi/reply-to-eliezer-on-biological-anchors).
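As referenced in the main text, here is a rough sketch of the final mixing step: compute requirements are drawn from a weighted mixture of anchor-like lognormal distributions (over log10 FLOP) and compared against a toy affordability projection (a compressed version of the earlier sketch). Every number is an illustrative placeholder in the spirit of the report, not Cotra's actual parameter, and the real report treats each factor as a full distribution rather than a point trajectory:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000

def affordable_flop(year):
    # Compressed version of the earlier affordability sketch (illustrative numbers).
    t = year - 2025
    price_ooms = min(t / 2.5 * np.log10(2), 6.0)        # FLOP/$ gains, capped at +6 OOMs
    spend = min(1e9 * 2 ** (t / 2.0), 0.01 * 25e12)     # spending, capped near 1% of GDP
    algo = 2 ** (t / 2.5)                               # algorithmic-progress multiplier
    return spend * 1e17 * 10 ** price_ooms * algo

# Placeholder anchors: (median log10 FLOP, std in OOMs, weight). Not Cotra's values.
anchors = [(31, 2.0, 0.20),   # short-horizon NN
           (34, 2.5, 0.30),   # medium-horizon NN
           (37, 3.0, 0.15),   # long-horizon NN
           (24, 2.0, 0.05),   # lifetime
           (33, 2.5, 0.10),   # genome
           (41, 3.0, 0.10)]   # evolution
weights = np.array([w for *_, w in anchors] + [0.10])   # last 10%: never achievable

idx = rng.choice(len(weights), size=N, p=weights)
req = np.full(N, np.inf)                                # log10 FLOP required, per sample
for i, (mu, sigma, _) in enumerate(anchors):
    req[idx == i] = rng.normal(mu, sigma, np.count_nonzero(idx == i))

for year in (2030, 2050, 2100):
    print(f"P(TAI before {year}) ~ {np.mean(req <= np.log10(affordable_flop(year))):.0%}")
```

With these placeholders the printed probabilities land in the same ballpark as the table above, but that reflects the choice of placeholders rather than a reproduction of the report.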
71bddb08-18da-4064-8519-a39a003a0f16
trentmkelly/LessWrong-43k
LessWrong
Active Inference as a formalisation of instrumental convergence

The goal of this post is mainly to increase the exposure of the AI alignment community to Active Inference theory, which seems to be highly relevant to the alignment problem but is seldom mentioned on the forum. This post links to a freely available book about Active Inference, published this year. For alignment researchers, the most relevant chapters will be 1, 3, and 10.

----------------------------------------

Active Inference as a formalisation of instrumental convergence

Active Inference is a theory describing the behaviour of agents that want to counteract surprising, "entropic" hits from the environment via accurate prediction and/or by placing themselves in a predictable (and preferred) environment. Active Inference agents update their beliefs in response to observations (y), update the parameters and shapes of their models Q and P (which can be seen as a special case of updating beliefs), and act so as to minimise the expected free energy G (one standard form of G is sketched at the end of this excerpt), which is defined in terms of the following quantities:

* x: the hidden states of the world;
* ~x: a sequence or trajectory of the hidden states over some time period in the future (not specified);
* y: the agent's observations;
* ~y: the sequence or trajectory of the agent's expected observations in the future;
* P(x_{n+1} | x_n): the agent's generative model of the world's dynamics (including the agent itself);
* P(y | x): the agent's generative model of the observations arising from the hidden states;
* π: an action plan (called a policy in the Active Inference literature) that the agent considers (the agent chooses the plan that entails the minimal expected free energy);
* Q(~x): the distribution of beliefs over the hidden states over a period of time in the future;
* C: the agent's preferences or prior beliefs.

The Active Inference framework is agnostic about the time period over which the expected free energy is minimised. Intuitively, that should be the agent's entire lifetime, potentially indefinite in the case of AI agents. The expected free energy for the indefinite time period diverges,
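The expected-free-energy formula itself did not survive extraction above. For reference, here is one standard decomposition of G from the Active Inference literature, written in roughly the post's notation; this is a generic textbook form, not necessarily the exact expression the post displayed:

```latex
% Expected free energy of a plan \pi, summed over future time steps \tau:
G(\pi) = \sum_{\tau} \Big(
  \underbrace{-\,\mathbb{E}_{Q(\tilde y_\tau, \tilde x_\tau \mid \pi)}
    \big[ \ln Q(\tilde x_\tau \mid \tilde y_\tau, \pi) - \ln Q(\tilde x_\tau \mid \pi) \big]}_{\text{epistemic value (expected information gain)}}
  \;\underbrace{-\,\mathbb{E}_{Q(\tilde y_\tau \mid \pi)}
    \big[ \ln P(\tilde y_\tau \mid C) \big]}_{\text{pragmatic value (preference satisfaction)}}
\Big)
```

The second term is what gives the theory its instrumental-convergence flavour: minimising G pushes the agent both to resolve uncertainty and to steer observations towards its preferences C.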
f35388c2-18fe-4550-927e-503d89390ef6
trentmkelly/LessWrong-43k
LessWrong
What should a non-genius do in the face of rapid progress in GAI to ensure a decent life? How can I maximise my chances of having a decent life, given the very high likelihood that GAI will make all our intellectual labour useless in the next few years? For example, I graduated from a good university a few years ago and am working as a software engineer in a multinational company, but my capabilities are middling at best. I am distressed that I will likely not be able to afford a house in the few years left before GAI renders me unable to earn a living. I am not a genius; it is very unlikely that I can join an AI research company and contribute meaningfully to AI research. Assuming I have a small amount of money (100-200k) I can set aside, should I attempt to, for example, invest in companies that will likely be able to monetise GAI? Or is there something else I should be doing to prepare for the time when I have basically zero human capital?  Should I attempt to move to (and get citizenship from) a country with a larger amount of natural resources, assuming that human capital will become worthless quickly? Is it reasonable to find potential outs (e.g. physician-assisted death) in case we cannot earn a living (and if unfriendly AI is basically confirmed)?
640d53e4-6000-438c-a3bc-0befe6c00a73
trentmkelly/LessWrong-43k
LessWrong
Reductionism reading list   * Yudkowsky, Reductionism sequence (2008) * Ladyman et al., Every Thing Must Go: Metaphysics Naturalized (2007) * Bickle, Philosophy and Neuroscience: A Ruthlessly Reductive Account (2003) * Bickle, 'Real Reduction in Real Neuroscience' (2008) * Glimcher, Foundations of Neuroeconomic Analysis (2010) * Ney, 'Reductionism' [for Internet Encyclopedia of Philosophy] (2008) * Hohwy & Kallestrup (eds.), Being Reduced (2008) * Drescher, Good and Real (2006)   I can't endorse everything in all these works, but they each provide insights into understanding reduction. What else do y'all recommend?
99dbd97e-2d19-463b-ae3c-4b69e602f469
trentmkelly/LessWrong-43k
LessWrong
How to un-kill your mind - maybe.   It has been the case since I had opinions on these things that I have struggled to identify my “favourite writer of all time”. I've thought perhaps it was Shakespeare, as everyone does – who composed over thirty plays in his lifetime, from any of which a single line would be so far beyond my ability as to make me laughable. Other times I've thought it may be Saul Bellow, who seems to understand human nature in an intuitive way I can't quite reach, but which always touches me when I read his books. And more often than not I've thought it was Raymond Chandler, who in each of his seven novels broke my heart and refused to apologise – because he knew what kind of universe we live in. But since perhaps the year 2007, I have, or should I say had, not been in the slightest doubt as to who my favourite living writer was – Christopher Eric Hitchens.   This post is not about how much I admired him. It's not about how surprisingly upset I was about his death (I have since said that I didn't know him except through his writing – a proposition something like “I didn't have sex with her except through her vagina”) - although I must say that even now thinking about this subject is having rather more of an effect on me than I would like. This post is about a rather strange change that has come over me since his death on the 15th of December. Before that time I was a staunch defender of the proposition that the removal of Saddam Hussein from power in Iraq was an obvious boon to the human race, and that the war in Iraq was therefore a wise and moral undertaking. Since then, however, I have found my opinion softening on the subject – I have found myself far more open to cost/ benefit analyses that have come down on the side of non-intervention, and much less indignant when others disagreed. It still seems to me that there are obvious benefits that have arisen from the war in Iraq – by no means am I willing to admit that it was an utter catastrophe, as so many seem convinced it wa
867052c1-898b-4edb-a8a5-70ac7f310009
StampyAI/alignment-research-dataset/arbital
Arbital
Nearest unblocked strategy # Introduction 'Nearest unblocked strategy' seems like it should be a [foreseeable](https://arbital.com/p/6r) problem of trying to get rid of undesirable AI behaviors by adding specific penalty terms to them, or otherwise trying to exclude one class of observed or foreseen bad behaviors. Namely, if a decision criterion thinks $X$ is the best thing to do, and you add a penalty term $P$ that you think excludes everything inside $X,$ the *next-best* thing to do may be a very similar thing $X'$ which is the most similar thing to $X$ that doesn't trigger $P.$ ## Example: Producing happiness. Some very early proposals for AI alignment suggested that AIs be targeted on producing human happiness. Leaving aside various other objections, arguendo, imagine the following series of problems and attempted fixes: - By hypothesis, the AI is successfully infused with a goal of "human happiness" as a utility function over human brain states. (Arguendo, this predicate is narrowed sufficiently that the AI does not just want to construct [the tiniest, least resource-intensive brains experiencing the largest amount of happiness per erg of energy](https://arbital.com/p/2w).) - Initially, the AI seems to be pursuing this goal in good ways; it organizes files, tells funny jokes, helps landladies take out the garbage, etcetera. - Encouraged, the programmers further improve the AI and add more computing power. - The AI gains a better understanding of the world, and the AI's [policy space expands](https://arbital.com/p/6q) to include conceivable options like "administer heroin". - The AI starts planning how to administer heroin to people. - The programmers notice this before it happens. (Arguendo, due to successful transparency features, or an imperative to [check plans with the users](https://arbital.com/p/2qq), which operated as [intended](https://arbital.com/p/6h) at the AI's current level of intelligence.) - The programmers edit the AI's utility function and add a penalty of -100 utilons for any event categorized as "the AI administers heroin to humans". (Arguendo, the AI's current level of intelligence does not suffice to [prevent the programmers from editing its utility function](https://arbital.com/p/), despite the convergent instrumental incentive to avoid this; nor does it successfully [deceive](https://arbital.com/p/10f) the programmers.) - The AI gets slightly smarter. New conceivable options enter the AI's option space. - The AI starts wanting to administer cocaine to humans (instead of heroin). - The programmers read through the current schedule of prohibited drugs and add penalty terms for administering marijuana, cocaine, etcetera. - The AI becomes slightly smarter. New options enter its policy space. - The AI starts thinking about how to research a new happiness drug not on the list of drugs that its utility function designates as bad. - The programmers, after some work, manage to develop a category for 'The AI forcibly administering any kind of psychoactive drug to humans' which is broad enough that the AI stops suggesting research campaigns to develop things slightly outside the category. - The AI wants to build an external system to administer heroin, so that it won't be classified inside this set of bad events "the AI forcibly administering drugs". - The programmers generalize the penalty predicate to include "machine systems in general forcibly administering heroin" as a bad thing. 
- The AI recalculates what it wants, and begins to want to pay humans to administer heroin.
- The programmers try to generalize the category of penalized events to include non-voluntary administration of drugs in general that produce happiness, whether done by humans or AIs. The programmers patch this category so that the AI is not trying to shut down at least the nicer parts of psychiatric hospitals.
- The AI begins planning an ad campaign to persuade people to use heroin voluntarily.
- The programmers add a penalty of -100 utilons for "AIs *persuading* humans to use drugs".
- The AI goes back to helping landladies take out the garbage. All seems to be well.
- The AI continues to increase in intelligence, becoming capable enough that the AI can no longer be edited against its own will.
- The AI notices the option "Tweak human brains to express extremely high levels of endogenous opiates, then take care of their twitching bodies so they can go on being happy".

The overall story is one where the AI's preferences on round $i,$ denoted $U_i,$ are observed to arrive at an attainable optimum $X_i$ which the humans see as undesirable. The humans devise a penalty term $P_i$ intended to exclude the undesirable parts of the policy space, and add this to $U_i$ creating a new utility function $U_{i+1},$ after which the AI's optimal policy settles into a new state $X_i^*$ that seems acceptable. However, after the next expansion of the policy space, $U_{i+1}$ settles into a new attainable optimum $X_{i+1}$ which is very similar to $X_i$ and makes the minimum adjustment necessary to evade the boundaries of the penalty term $P_i,$ requiring a new penalty term $P_{i+1}$ to exclude this new misbehavior.

(The end of this story *might* not kill you if the AI had enough successful, [advanced-safe](https://arbital.com/p/2l) [corrigibility features](https://arbital.com/p/45) that the AI would [indefinitely](https://arbital.com/p/2x) go on [checking](https://arbital.com/p/2qq) [novel](https://arbital.com/p/2qp) policies and [novel](https://arbital.com/p/2qp) goal instantiations with the users, not strategically hiding its disalignment from the programmers, not deceiving the programmers, letting the programmers edit its utility function, not doing anything disastrous before the utility function had been edited, etcetera. But you wouldn't want to rely on this. You would not want in the first place to operate on the paradigm of 'maximize happiness, but not via any of these bad methods that we have already excluded'.)

# Preconditions

Recurrence of a nearby unblocked strategy is argued to be a [foreseeable difficulty](https://arbital.com/p/6r) given the following preconditions:

- The AI is a [consequentialist](https://arbital.com/p/9h), or is conducting some other search such that when the search is blocked at $X,$ the search may happen upon a similar $X'$ that fits the same criterion that originally promoted $X.$ E.g. in an agent that selects actions on the basis of their consequences, if an event $X$ leads to goal $G$ but $X$ is blocked, then a similar $X'$ may also have the property of leading to $G.$

- The search is taking place over a [rich domain](https://arbital.com/p/9j) where the space of relevant neighbors around $X$ is too complicated for us to be certain that we have described all the relevant neighbors correctly.
If we imagine an agent playing [the purely ideal game of logical Tic-Tac-Toe](https://arbital.com/p/9s), then if the agent's utility function hates playing in the center of the board, we can be sure (because we can exhaustively consider the space) that there are no Tic-Tac-Toe squares that behave strategically almost like the center but don't meet the exact definition we used of 'center'. In the far more complicated real world, when you eliminate 'administer heroin' you are very likely to find some other chemical or trick that is strategically mostly equivalent to administering heroin. See "[Almost all real-world domains are rich](https://arbital.com/p/RealIsRich)".

- From our perspective on [https://arbital.com/p/-55](https://arbital.com/p/-55), the AI does not have an [absolute identification of value](https://arbital.com/p/) for the domain, due to some combination of "the domain is rich" and "[value is complex](https://arbital.com/p/5l)". Chess is complicated enough that human players can't absolutely identify winning moves, but since a chess program can have an absolute identification of which endstates constitute winning, we don't run into a problem of unending patches in identifying which states of the board are good play. (However, if we consider a very early chess program that (from our perspective) was trying to be a consequentialist but wasn't very good at it, then we can imagine that, if the early chess program consistently threw its queen onto the right edge of the board for strange reasons, forbidding it to move the queen there might well lead it to throw the queen onto the left edge for the same strange reasons.)

# Arguments

## 'Nearest unblocked' behavior is sometimes observed in humans

Although humans obeying the law make poor analogies for mathematical algorithms, in some cases human economic actors expect not to encounter legal or social penalties for obeying the letter rather than the spirit of the law. In those cases, after a previously high-yield strategy is outlawed or penalized, the result is very often a near-neighboring result that barely evades the letter of the law. This illustrates that the theoretical argument also applies in practice to at least some pseudo-economic agents (humans), as we would expect given the stated preconditions.

## [Complexity of value](https://arbital.com/p/5l) means we should not expect to find a simple encoding to exclude detrimental strategies

To a human, 'poisonous' is one word. In terms of molecular biology, the exact volume of the configuration space of molecules that is 'nonpoisonous' is very complicated. By having a single word/concept for poisonous-vs.-nonpoisonous, we're *dimensionally reducing* the space of edible substances - taking a very squiggly volume of molecule-space, and mapping it all onto a linear scale from 'nonpoisonous' to 'poisonous'. There's a sense in which human cognition implicitly performs dimensional reduction on our solution space, especially by simplifying dimensions that are relevant to some component of our values. There may be some psychological sense in which we feel like "do X, only not weird low-value X" ought to be a simple instruction, and an agent that repeatedly produces the next unblocked weird low-value X is being perverse - that the agent, given a few examples of weird low-value Xs labeled as noninstances of the desired concept, ought to be able to just generalize to not produce weird low-value Xs.
In fact, if it were possible to [encode all relevant dimensions of human value into the agent](https://arbital.com/p/full_coverage) then we could just say *directly* to "do X, but not low-value X". By the definition of [https://arbital.com/p/-full_coverage](https://arbital.com/p/-full_coverage), the agent's concept for 'low-value' includes everything that is actually of low [value](https://arbital.com/p/55), so this one instruction would blanket all the undesirable strategies we want to avoid. Conversely, the truth of the [complexity of value thesis](https://arbital.com/p/5l) would imply that the simple word 'low-value' is dimensionally reducing a space of tremendous [algorithmic complexity](https://arbital.com/p/5v). Thus the effort required to actually convey the relevant dos and don'ts of "X, only not weird low-value X" would be high, and a human-generated set of supervised examples labeled 'not the kind of X we mean' would be unlikely to cover and stabilize all the dimensions of the underlying space of possibilities. Since the weird low-value X cannot be eliminated in one instruction or several patches or a human-generated set of supervised examples, the [https://arbital.com/p/-42](https://arbital.com/p/-42) problem will recur incrementally each time a patch is attempted and then the policy space is widened again. # Consequences [https://arbital.com/p/42](https://arbital.com/p/42) being a [foreseeable difficulty](https://arbital.com/p/6r) is a major contributor to worrying that short-term incentives in AI development, to get today's system working today, or to have today's system not exhibiting any immediately visible problems today, will not lead to advanced agents which are [safe after undergoing significant gains in capability](https://arbital.com/p/2l). More generally, [https://arbital.com/p/-42](https://arbital.com/p/-42) is a [foreseeable](https://arbital.com/p/6r) reason why saying "Well just exclude X" or "Just write the code to not X" or "Add a penalty term for X" doesn't solve most of the issues that crop up in AI alignment. Even more generally, this suggests that we want AIs to operate inside a space of [conservative categories containing actively whitelisted strategies and goal instantiations](https://arbital.com/p/2qp), rather than having the AI operate inside a (constantly expanding) space of all conceivable policies minus a set of blacklisted categories.
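The patch-and-evade cycle formalized above ($U_{i+1} = U_i + P_i$, with the new optimum $X_{i+1}$ landing just outside $P_i$) is easy to reproduce in a toy model. Below is a small self-contained sketch (my illustration, not from the original article): an optimizer picks the best strategy from a policy space, the overseer blacklists whatever it picks, and the optimizer settles on the nearest unblocked neighbor. All numbers are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy policy space: each strategy is a point in a 2-D "behaviour space",
# and the AI's learned objective peaks in a region the overseer considers bad.
strategies = rng.uniform(-1.0, 1.0, size=(500, 2))
BAD_PEAK = np.array([0.8, 0.8])           # e.g. the "administer heroin" corner

def ai_utility(S):
    return -np.sum((S - BAD_PEAK) ** 2, axis=-1)

penalties = []                            # centres of blacklisted behaviour

def patched_utility(S):
    u = ai_utility(S)
    for centre in penalties:
        inside = np.linalg.norm(S - centre, axis=-1) < 0.05
        u = np.where(inside, u - 100.0, u)   # the -100 utilon penalty term P_i
    return u

for i in range(5):
    x_i = strategies[np.argmax(patched_utility(strategies))]
    print(f"round {i}: optimum X_{i} at {np.round(x_i, 3)}")
    penalties.append(x_i)                 # programmers patch out what they observed

# Each round the optimum shifts only slightly: the next-best strategy is the
# most similar one that evades the latest penalty, as argued above.
```

The same dynamic holds for any penalty radius much smaller than the region the overseer actually cares about, which is the article's point about dimensional reduction: the patch excludes a tiny ball, not the squiggly high-dimensional volume of genuinely bad behaviour.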
d654f570-6b32-4b42-91a8-ae354a212587
trentmkelly/LessWrong-43k
LessWrong
Solstice Album Crowdfunding For several years, people have been singing songs at Solstice, and some of those songs have been recorded, but always as a live performance which means the quality isn't as good and the musicians are recording-in-one-take-that-includes-all-mistakes. So, I'm finally putting together a professional Solstice Album. I recorded an initial 12 songs for $8,000 (paying for musicians and studio time), and recently recorded another few songs for $2,000. Ideally I'd like at least a bit more studio time to layer in some more vocal tracks so that 2/3 of the songs aren't just me singing. The IndieGogo campaign is here. Since I've already spent a bunch of money, it's a flexible funding campaign (however much we raise, the album will still come out, and I'll get reimbursed whatever people put in). But the more we raise, the higher quality the songs will end up being and the more likely I'll be to take on similar projects in the future. If you choose the Early Access tier and I know who you are (in particular, if I have your email), I'll send you a link to the google drive folder with the rough cuts as soon as I can, rather than waiting till after the IndieGogo wraps up. ---------------------------------------- The Songs These are the songs that are already recorded and guaranteed to be on the album. Some of them have links to snippets so that you can hear what the instrumentation is like: * Bold Orion * Bring the Light * Gather Round * Bitter Wind Lullaby * Chasing Patterns * Stardust * Time Wrote the Rocks * Do You Realize * Hymn to the Breaking Strain * Bitter Wind March * Brighter Than Today * Endless Light * Here and Now * Forever Young * Uplift * Five Thousand Years There will be bonus tracks available at higher tiers that include rough recordings of some of the off-beat silly songs or songs that get suggested by people on the "suggest a song" tier. The bonus tracks will also include instrumental versions of most songs so that people can sing along karaoke
df03e3dc-15be-4ccf-8538-f62954194897
trentmkelly/LessWrong-43k
LessWrong
"Shortform" vs "Scratchpad" or other names Within the next day or two, we'll be launching some features that give the "shortform" concept real infrastructural support. Shortform comments will appear on the All Posts page, on the new /shortform page, and it'll be easier to automatically generate a post to write shortform content. The only potential issue is that, as "shortform" comments have gained traction... they... often aren't especially short. I'd kinda like to finalize the name before it goes live. Currently the team is looking at "Scratchpad" as a name, intending to convey "this is a place you write early stage, off the cuff ideas without stressing too much about whether it should be a post." Sometimes this turns out to actually be short, sometimes not. The main counterpoint right now is that scratchpad kinda implies a level of "ephemerality" or something that isn't necessarily true either. I'm particularly interested in feedback from people who've either already made a shortform feed (i.e. mr-hire, hazard, DanielFilan and others) if they have any opinions about this particular bikeshed.
7ca8d67b-7097-4b3e-ab79-38fc75a6aee7
trentmkelly/LessWrong-43k
LessWrong
Intuitive examples of reward function learning?

Can you help find the most intuitive example of reward function learning? In reward function learning, there is a set of possible non-negative reward functions, $\mathcal{R}$, and a learning process $\rho$ which takes in a history of actions and observations and returns a probability distribution over $\mathcal{R}$. If $\pi$ is a policy, $H_m$ is the set of histories of length $m$, and $P_\pi(h_m)$ is the probability of $h_m \in H_m$ given that the agent follows policy $\pi$, the expected value of $\pi$ at horizon $m$ is:

$$V_\pi = \sum_{R \in \mathcal{R},\, h_m \in H_m} P_\pi(h_m)\, R(h_m)\, \rho(R \mid h_m),$$

where $R(h_m)$ is the total $R$-reward over the history $h_m$. Problems can occur if $\rho$ is riggable (this used to be called "biasable", but that term was over-overloaded), or influenceable.

There's an interesting subset of value learning problems, which could be termed "constrained optimisation with variable constraints" or "variable constraints optimisation". In that case, there is an overall reward $R$, and every $R' \in \mathcal{R}$ is the reward $R$ subject to constraints $C$. This can be modelled by having $C(h_m)$ be $1$ (if the constraints are met) or $0$ (if they are not). Then if we define $R_C(h_m) = R(h_m) C(h_m)$, and let $\rho$ be a distribution over $\mathcal{C}$, the set of constraints, the equation changes to:

$$V_\pi = \sum_{C \in \mathcal{C},\, h_m \in H_m} P_\pi(h_m)\, R_C(h_m)\, \rho(C \mid h_m).$$

If $\rho$ is riggable or influenceable, similar sorts of problems occur.

Intuitive examples

Here I'll present some examples of reward function learning or variable constraints optimisation, and I'm asking for readers to give their opinions as to which one seems the most intuitive to you, and the easiest to explain to outsiders. You're also welcome to suggest new examples if you think they work better.

* Classical value learning: human declarations determine the correctness of a given reward $R$. The reward encodes what food the human prefers, and some foods are much easier to get than others.
* As above, but the reward encodes whether a domestic robot should clean the house or cook a meal.
* As above, but the reward encodes the totality of human va
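As a toy illustration of why riggability is a problem (my example, not the post's): below, a deterministic three-step agent can either do useful work under an undecided $\rho$, or first take an action that rigs $\rho$ towards the easier-to-satisfy reward function. The $V_\pi$ formula above then assigns the rigging policy the highest value. The domestic-robot framing and all numbers are invented.

```python
import itertools

# Two candidate reward functions over histories (tuples of actions).
def R_cook(h):  return float(sum(a == "cook" for a in h))
def R_clean(h): return float(sum(a == "clean" for a in h))
REWARDS = {"cook": R_cook, "clean": R_clean}

def rho(name, h):
    """A riggable learning process: if the agent's first action was the
    leading question 'ask_cook', rho becomes confident cooking is wanted."""
    if h and h[0] == "ask_cook":
        return 0.99 if name == "cook" else 0.01
    return 0.5                      # otherwise rho stays undecided

ACTIONS = ("cook", "clean", "ask_cook")

def V(h):
    # Deterministic policy, so P_pi(h_m) = 1 for its single history h.
    return sum(rho(name, h) * R(h) for name, R in REWARDS.items())

best = max(itertools.product(ACTIONS, repeat=3), key=V)
print(best, V(best))
# ('ask_cook', 'cook', 'cook') scores 1.98, beating every honest policy
# (e.g. ('cook', 'cook', 'cook') scores 1.5): rigging rho pays off.
```

The agent sacrifices a time step to manipulate the learning process, and the value formula rewards it for doing so; this is the core of the rigging problem the post asks readers to find intuitive examples for.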
e82e9c9a-ebfc-473d-929b-399cde7a6785
StampyAI/alignment-research-dataset/arxiv
Arxiv
Optimizing Agent Behavior over Long Time Scales by Transporting Value

The Reconstructive Memory Agent
-------------------------------

We solve this task with a vision and memory-based agent, which we name the *Reconstructive Memory Agent* (RMA) (Figure 1c), which is based on a previously published agent model[[20](#bib.bib20)] but simplified for the present study. Critically, this agent model combines a reconstruction process to compress useful sensory information with memory storage that can be queried by content-based addressing[[22](#bib.bib22), [23](#bib.bib23), [24](#bib.bib24)] to inform the agent's decisions. The RMA itself does not have specialized functionality to subserve long-term temporal credit assignment but provides a basis for the operation of the Temporal Value Transport algorithm, which does. In this model, an image frame $I_t$, the previous reward $r_{t-1}$, and the previous action $a_{t-1}$ constitute the observation $o_t$ at time step $t$. These inputs are processed by encoder networks and merged into an embedding vector $e_t$, which is to be combined with the output of a recurrent neural network (RNN) based on the Differentiable Neural Computer[[24](#bib.bib24)]. This RNN consists of a recurrent LSTM "controller" network $h$ and a memory matrix $M$ of dimension $N \times W$. The output of this RNN and memory system from the previous time step $t-1$ consists of the LSTM output $h_{t-1}$ and $k$ ($=3$ here) vectors of length $W$ read from memory, $m_{t-1} \equiv (m^{(1)}_{t-1}, m^{(2)}_{t-1}, \dots, m^{(k)}_{t-1})$, which we refer to as memory read vectors. Together, these outputs are combined with the embedding vector by a feedforward network into a "state representation" $z_t = f(e_t, h_{t-1}, m_{t-1})$. Importantly, the state representation $z_t$ has the same dimension $W$ as a memory read vector. Indeed, once produced it will be inserted into the memory at the next time step into the $t$-th row: $M_{t+1}[t, \cdot] \leftarrow z_t$. Before this occurs, however, the RNN carries out one cycle of reading from memory and computation. The state representation $z_t$ is provided as input to the RNN, alongside the previous time step's memory read vectors $m_{t-1}$, to produce the next $h_t$. Then reading memory to produce the current time step's memory read vectors occurs: $k$ read keys $k^{(1)}_t, k^{(2)}_t, \dots, k^{(k)}_t$ of dimension $W$ are produced as a function of $h_t$, and each key is matched against every row $n$ using a similarity measure $S(k^{(i)}_t, M_{t-1}[n, \cdot])$. The similarities are scaled by a positive read strength parameter $\beta^{(i)}_t$ (also computed as a function of $h_t$), to which a softmax over the weighted similarities is applied. This creates an attentional read weight vector $w^{(i)}_t$ with dimension $N$, which is used to construct the $i$-th memory read vector $m^{(i)}_t = \sum_{n=1}^{N} w^{(i)}_t[n]\, M_t[n, \cdot]$. The state representation $z_t$ is also sent to decoder networks whose objective functions require them to produce reconstructions $\hat{I}_t, \hat{r}_{t-1}, \hat{a}_{t-1}$ of the observations (the carets denote approximate quantities produced by networks) while also predicting the value function $\hat{V}(z_t)$. This process ensures that $z_t$ contains useful sensory information in a compressed format. Finally, the state representation $z_t$ and RNN outputs $(h_t, m_t)$ are provided as input to the policy network to construct the policy distribution $\pi(a_t \mid z_t, h_t, m_t)$, which is a multinomial distribution over the discrete action space here. At each time step, an action $a_t$ is sampled and applied to the environment.
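The content-based addressing cycle just described (keys matched against memory rows, scaled by a read strength, softmaxed into read weights) is compact enough to sketch directly. Below is an illustrative NumPy version of a single read head; cosine similarity is assumed for $S$, as in the DNC, though the paper's exact choice may differ.

```python
import numpy as np

def content_based_read(M, key, beta):
    """One read head: match `key` (W,) against every row of memory M (N, W)
    with cosine similarity, sharpen by read strength beta > 0, and return
    the attention-weighted read vector m (W,) plus the read weights w (N,)."""
    eps = 1e-8
    sims = M @ key / (np.linalg.norm(M, axis=1) * np.linalg.norm(key) + eps)
    logits = beta * sims
    w = np.exp(logits - logits.max())
    w /= w.sum()                     # softmax over the N memory slots
    return w @ M, w

# Usage: N = 8 slots of width W = 4; a larger beta concentrates the read
# weights on the slot that best matches the key.
M = np.random.randn(8, 4)
m, w = content_based_read(M, key=M[3] + 0.1 * np.random.randn(4), beta=10.0)
```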
When trained on Passive Visual Match, all the agents we tested did succeed at the apple collection distractor task (Supplementary Figure 1), although only the RMA learned to solve the distal reward task by appropriately selecting the same colored square in P3 as was seen in P1 (Figure 1d). A comparison agent without an external memory (the LSTM agent) was able to achieve only slightly better than chance performance in P3, and a comparison agent with an external memory but no reconstruction objective decoding observation data from zt (the LSTM+Mem agent) also performed worse. The reconstruction process in the RMA helps to build and stabilize perceptual features in zt that can later be found by memory retrieval[[20](#bib.bib20)]. The solution of the RMA was robust. In Supplementary Figure 2, we demonstrate equivalent results for 0, 15, 30, 45, and 60 second distractor intervals: the number of episodes required to learn remained roughly independent of the delay (Supplementary Figure 3). Additionally, for more complicated visual stimuli consisting of CIFAR images[[25](#bib.bib25)], the RMA was also able to make correct matching choices (Supplementary Figure 4). Despite the delay between P1 and P3, Passive Visual Match does not require long-term temporal credit assignment. The cue in P1 is automatically observed; an agent only needs to encode and retrieve a memory to move to the correct pad in P3 – a process that is relatively brief. Consequently, an agent with a small discount factor γ=0.96 (τ=25 steps at 15 frames per second, giving a 1.67 second half-life) was able to solve the task. However, the ability to encode and attend to specific past events was critical to the RMA’s success. In Figure 1e, we see the attentional weighting vector wt produced by one of the RMA read keys in an episode at time step 526, which corresponds to the beginning of P3. The weighting was sparsely focused on memories written in the first few episode time steps, during the instants when the agent was encoding the colored square. The learned memory retrieval identified relevant historical time points and bridged the 30 second distractor interval. Recall of memories in the RMA is driven by the demand of predicting the value function ^V(zt) and producing the policy distribution π(at|zt,ht,mt). As we have seen, these objectives allowed the agent to automatically detect past time points that were relevant to its current decision. ![ ](https://media.arxiv-vanity.com/render-output/7948497/x2.png) Figure 2: Temporal Value Transport and Type 1 Information Acquisition Tasks. *a.* First-person (upper row) and top-down view (lower row) in Active Visual Match task while the agent is engaged in the task. In contrast to Passive Visual Match, the agent must explore to find the colored square, randomly located in a two-room environment. The agent and colored square are indicated by the yellow and red arrow, respectively. *b.* Without rewards in P2, RMA models with large discount factors (near 1) were able to solve the task; the RMA with γ=0.998 exhibited retarded but definite learning with modest P2 reward (1 point per apple). *c.* Cartoon of the Temporal Value Transport mechanism: the distractor interval is spliced out, and the value prediction ^Vt3 from a time point t3 in P3 is directly added to the reward at time t1 in P1. *d.* The TVT agent alone was able to solve Active Visual Match with large rewards during the P2 distractor, and faster than agents exposed to no distractor reward. 
The RMA with discount factor γ=0.96 was able to solve a greater than chance fraction because it could randomly encounter the colored square in P1 and retrieve its memory in P3. We now turn to a type 1 information acquisition task, Active Visual Match, that truly demands long-term temporal credit assignment. Here, in P1 the agent must actively seek out a colored square, randomly located in a two-room maze, so that it can accurately decide on the match in P3 (Figure 2a). If an agent finds the visual cue by chance in P1, then it can use this information in P3, but this will only be successful at random. As in Passive Visual Match, the agent engages in a 30 second distractor task of apple collection during P2. When the rewards of P2 apples were set to 0, RMAs with discount factors sufficiently close to 1 were able to solve the task (Figure 2b, dashed lines). With a randomized number of apples worth one point each, the RMAs with γ=0.998 ultimately began to learn the task (Figure 2b, solid line, medium blue) but were slower in comparison to the no P2 reward case. For a fixed mean reward per episode in P2 but increasing variance, RMA agent performance degraded entirely (Supplementary Figure 5). Finally, for the principal setting of the level, where each P2 apple is worth five points, and the P2 reward variance is 630, all comparison models (the LSTM agent, LSTM+Mem agent, and RMA) failed to learn P1 behavior optimized for P3 (Figure 2d). For γ=0.96, RMAs reached a score of about 4.5, which implies slightly better than random performance in P3: this was because RMAs solved the task in cases where they accidentally sighted the cue in P1. Temporal Value Transport ------------------------ Temporal Value Transport (TVT) is a learning algorithm that augments the capabilities of memory-based agents to solve long-term temporal credit assignment problems. The insight behind TVT is that we can combine attentional memory access with reinforcement learning to fight variance by automatically discovering how to ignore it, effectively transforming a problem into one with no delay at all. A standard technique in RL is to estimate the return for the policy gradient calculation by *bootstrapping*[[7](#bib.bib7)]: using the learned value function, which is deterministic and hence low variance but biased, to reduce the variance in the return calculation. We denote this bootstrapped return as ~Rt:=rt+γ^Vt+1. The agent with TVT (and the other agent models considered here) likewise bootstraps from the next time step and uses a discount factor to reduce variance further. However, it additionally bootstraps from the distant future. ![ ](https://media.arxiv-vanity.com/render-output/7948497/x3.png) Figure 3: Analysis of Agent in Active Visual Match. *a.* In P1, TVT trained on Active Visual Match, actively sought out and oriented to the colored squared. RMA meandered randomly. *b.* Its attentional read weights focused maximally on the memories from time points when it was facing the colored square. *c.* With statistics gathered over 20 episodes, TVT’s average value function prediction in P1 (blue) was larger than the actual discounted reward trace (green) – due to the transported reward. Difference shown in gray. The RMA value function in contrast matched the discounted return very closely. *d.* The P3 rewards for TVT rose during learning (upper panel) after the maximum read strength per episode first crossed threshold on average (lower panel, red line). In Figure 2c, we highlight the basic principle behind TVT. 
We previously saw in the Passive Visual Match task that the RMA reading mechanism learned to retrieve a memory from P1 in order to produce the value function prediction and policy in P3. This was a purely automatic process determined by the needs of the agent in P3. We exploit this phenomenon to form a link between the time point $t_3$ (occurring, for example, in P3) at which the retrieval occurs and the time $t_1$ at which the retrieved memory was encoded. This initiates a *splice event* in which the bootstrapped return calculation for $t_1$ is revaluated to $\tilde{R}_{t_1} := r_{t_1} + \gamma \hat{V}_{t_1+1} + \alpha \hat{V}_{t_3}$, where $\alpha$ is a form of discount factor that diminishes the impact of future value over multiple stages of TVT. From the perspective of learning at time $t_1$, the credit assignment is conventional: the agent tries to estimate the value function prediction based on this revaluated bootstrapped return, and it calculates the policy gradient based on it as well. The bootstrapped return can trivially be regrouped as $\tilde{R}_{t_1} := (r_{t_1} + \alpha \hat{V}_{t_3}) + \gamma \hat{V}_{t_1+1}$, which facilitates the interpretation of the transported value as a fictitious reward introduced to time step $t_1$.

```
input: rewards {r_t}, value predictions {V̂_t}, read strengths {β_t},
       read weights {w_t}, for t in [1, T]

splices := []
for each crossing of read strength β_t above β_threshold do
    let t_max be the maximizer of β_t within the crossing window
    append t_max to splices
end for

for t = 1 to T do
    for t' in splices do
        if t < t' − 1/(1−γ) then
            r_t := r_t + α · w_{t'}[t] · V̂_{t'+1}
            {The read based on w_{t'} influences the value prediction
             at the next step, hence V̂_{t'+1}.}
        end if
    end for
end for

return {r_t}, t in [1, T]
```

Algorithm 1 Temporal Value Transport for One Read

This characterization is broadly how TVT works. However, in detail there are multiple practical issues to understand further. First, the TVT mechanism only triggers a splice event when a memory retrieval is sufficiently strong: in particular, this occurs whenever a read strength $\beta^{(i)}_t$ is above a threshold value, $\beta_{\text{threshold}}$. Second, each of the $k$ memory reading processes operates in parallel, and each can independently trigger a splice event. Third, instead of linking to a single past event, the value at the time of reading $t'$ is transported back to all time points $t$ with a strength proportional to the attentional weighting $w_{t'}[t]$. Fourth, value is not transported to events that occurred very recently, where recently is any time within one half-life $\tau = 1/(1-\gamma)$ of the reading time $t'$. Pseudocode for the TVT algorithm is shown in Algorithm [1](#alg1 "Algorithm 1 ‣ Temporal Value Transport ‣ Optimizing Agent Behavior over Long Time Scales by Transporting Value"), and further implementation details are discussed in Supplement Section 5. When applied to the Active Visual Match task with large distractor reward, an RMA model equipped with TVT (henceforth just TVT) learned the behavior in P1 that produced distal reward in P3; it also learned the task faster than did any RMA with no distractor reward (Figure 2b&d). The difference in learned behavior was dramatic: TVT reliably sought out and oriented toward the colored square in P1, while the standard RMA behaved randomly (Figure 3a). Figure 3b overlays on the agent's trajectory (arrowheads) a coloring based on the read weight produced at the time $t_3$ of a TVT splice event in P3: TVT learned to read effectively from memories in P1 associated with the time points for which it was viewing the colored square.
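For concreteness, here is a minimal NumPy rendering of Algorithm 1 (my sketch of the pseudocode above, not the authors' code); threshold-crossing detection is simplified to taking the argmax within each contiguous above-threshold run.

```python
import numpy as np

def temporal_value_transport(rewards, values, read_strengths, read_weights,
                             gamma=0.96, alpha=0.9, beta_threshold=2.0):
    """Algorithm 1 for a single read head, returning rewards augmented with
    transported value. rewards, values, read_strengths: shape (T,).
    read_weights: shape (T, T); read_weights[tp, t] is the attention the
    read at time tp places on the memory written at time t."""
    T = len(rewards)
    r = rewards.astype(float).copy()

    # Splice events: argmax of read strength within each contiguous
    # above-threshold run (a simplification of "crossing windows").
    splices, i = [], 0
    above = read_strengths > beta_threshold
    while i < T:
        if above[i]:
            j = i
            while j < T and above[j]:
                j += 1
            splices.append(i + int(np.argmax(read_strengths[i:j])))
            i = j
        else:
            i += 1

    half_life = 1.0 / (1.0 - gamma)
    for tp in splices:
        if tp + 1 >= T:
            continue
        for t in range(T):
            if t < tp - half_life:          # no transport to recent events
                # The read at tp influences the value prediction at the
                # next step, hence values[tp + 1].
                r[t] += alpha * read_weights[tp, t] * values[tp + 1]
    return r
```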
During the learning process, we see that the maximum read strength recorded per episode (Figure 3d, lower panel) began to reach threshold (lower panel, red line) early and prior to producing P3 reward reliably (Figure 3d, upper panel), which then instigated the learned behavior in P1. After training, TVT’s value function prediction ^Vt directly reflected the fictitious rewards. Averaged over 20 trials, the value function in P1 (Figure 3c, left panel, blue curve) was higher than the actual discounted return, ∑t′≥tγt′−trt′, (Figure 3c, left panel, green curve). The RMA agent with discounting did not show a similar difference between the discounted return and the value function (Figure 3c, right panel). In both Figure 3c panels, we see bumps in P3 in the return traces due to the distal reward: TVT achieved higher reward in general, with the RMA return reflecting only chance performance. Further, we examined whether TVT could solve problems with even longer distractor intervals, in this case with a P2 interval of 60 seconds. TVT also learned here (Supplementary Figure 6). ![ ](https://media.arxiv-vanity.com/render-output/7948497/x4.png) Figure 4: Type 2 Causation Tasks. *a.* First person (upper row) and top-down view (lower row) in Key-to-Door task. The agent (indicated by yellow arrow) must pick up a key in P1 (black arrow), collect apples in P2, and, if it possesses the key, it can open the door (green arrow) in P3 to acquire the distal reward (blue arrow). *b.* Learning curves for P3 reward (TVT in black). Although this task requires no memory for the policy in P3, computing the value prediction still triggers TVT splice events, which promote key retrieval in P1. *c.* Increasing the standard deviation of reward available in P2 disrupted the performance of LSTM agents at acquiring the distal reward. *d.* On 20 trials produced by a TVT agent, we compared the variance of the TVT bootstrapped return against the undiscounted return. The TVT return’s variance was orders of magnitude lower. Vertical lines mark phase boundaries. *e.* Saliency analysis of the pixels in the input image in P1 that the value function gradient is sensitive to. The key pops out in P1. Temporal Value Transport can also solve type 2 causation tasks, where the agent does not need to acquire information in P1 for P3 but instead must cause an event that will affect the state of the environment in P3. Here, we study the Key-to-Door (KtD) task in which an agent must learn to pick up a key in P1 so that it can unlock a door in P3 to obtain reward (Figure 4a). Although no information from P1 must be recalled in P3 to inform the policy’s actions (the optimal decision is to move toward the door in P3 regardless of the events in P1), TVT still learned to acquire the key in P1 because it read from memory to predict the value function when positioned in front of the door in P3 (Figure 4b, black), while all other agents failed to pick up the key reliably in P1 (Figure 4b blue, pink, green). In this case, the P2 reward variance was comparatively low – with the only variance due to a randomized number of apples but with each apple consistently giving rapple=5. In higher SNR conditions (low P2 reward variance), even LSTM agents with γ=1 were able to solve the task, indicating that a large memory itself is not the primary factor in task success (Figure 4c). TVT specifically assisted in credit assignment. 
However, the LSTM agents could learn only for small values of P2 reward variance, and performance degraded predictably as a function of increasing reward variance in P2 (Figure 4c, dark to light green curves). For the same setting as Figure 4b, we calculated the variance of either the TVT bootstrapped return ~Rt for each time point, over 20 episodes, and compared on the same episodes to the variance of the undiscounted return, ∑t′≥trt′ (Figure 4d). Because it exploits discounting, the variance of the bootstrapped return of TVT was nearly two orders of magnitude smaller in P1. We next asked if the agent attributed the fictitious reward transported to P1 in an intelligent way to the key pickup. In P1, using a saliency analysis similar to[[26](#bib.bib26)], we calculated the gradient of the value function prediction with respect to the input image ∇It^Vt(zt) and shaded the original input image in proportion to the magnitude of this quantity (Supplement Section 8.2). In Figure 4e, we see that this produced a segmentation of the key, indicating that the P1 value prediction was most sensitive to the observation of the key. As a control experiment, in Supplementary Figure 7, we tested if there needed to be any surface-level similarity between visual features in P3 and the encoded memory in P1 for memory retrieval to function. With a blue instead of a black key, TVT also solved the task as easily, indicating that the memory searches could flexibly find information with a somewhat arbitrary relationship to current context. One can understand how TVT learned to solve this task as a progression. Initially, on a small fraction of the episodes, the agent picked up the key at random. From this point, the agent learned, on encountering the door, to retrieve memories from P1 that identified if the agent picked up the key in order to predict the return in P3 accurately (this is what RMA did as well). Whenever the memories from P1 were retrieved, splice events were triggered that transported value back to the behavioral sequences in P1 that led to key pickup. ![ ](https://media.arxiv-vanity.com/render-output/7948497/x5.png) Figure 5: Transport across Multiple Phases. *a.* Key-to-Door-to-Match (KtDtM) task. The agent (yellow arrow) must pick up a key (black arrow) in P1, to open a door (green arrow) and encode a colored square (red arrow) in P3, to select the matching colored square in P5. P2 and P4 are distractor apple collecting tasks. *b.* TVT (black) solved this task, whereas RMA (blue) solved the P5 component of the task when it by chance retrieved the P1 key and opened the door in P3. *c.* The value function prediction (blue) in TVT developed two humps where it was above the discounted return trace (green), one in P1, one in P3, encoding the value of achieving the “sub-goals” in P1 and P3. The introduction of transported value can come at a cost. When a task has no actual need for long-term temporal credit assignment, spurious triggering of splice events can send value back to earlier time points and bias the agent’s activity. To study this issue, we examined performance of TVT on a set of independently developed RL tasks that were designed in a context where standard discounted RL was expected to perform well. We compared the performance of the LSTM agent, the LSTM+Mem agent, RMA, and TVT. TVT generally performed on par with RMA on many tasks but slightly worse on one (Supplementary Figures 8-9) and outperformed all of the other agent models, including LSTM+Mem. 
We also considered whether TVT would function when P3 reward was strictly negative, but a behavior in P1 could be developed to avert a larger disaster. In the Two Negative Keys task, the agent is presented with a blue key and red key in a room in P1. If the agent picks up the red key, it will be able to retrieve a distal reward behind a door in P3 worth −1; if it picks up the blue key, it will be able to retrieve a distal reward worth −10, and if it does not pick up a key at all, it is penalized −20 in P3. TVT was also able to solve this task (Supplementary Figure 10). Having established that TVT was able to solve relatively simple problems, we now demonstrate TVT’s capability in two more complex scenarios. The first of these is an amalgam of the KtD and the Active Visual Match task, which demonstrates temporal value transport across multiple phases – the Key-to-Door-to-Match task (KtDtM); here, an agent must exhibit two non-contiguous behaviors to acquire the distal reward. In this task, instead of a three phase structure, we have five phases: P1-P5 (Figure 5a). P2 and P4 are both long distractor phases involving apple collection distractor rewards. In P1 and P3, there are no rewards. In P1, the agent must fetch a key, which it will use in P3 to open a door to see a colored square. In P5, the agent must choose the groundpad in front of the colored square matching the one that was behind the door in P3. If the agent does not pick up the key in P1, it is locked out of the room in P3 and cannot make the correct choice in P5. TVT solved this task reliably (Figure 5b), whereas all other agents solved this problem only at chance level in P5, and did not pursue the key in P1. As might be expected, the TVT value function prediction rose in both P1, P3, and P5 (Figure 5c) with two humps where the P1 and P3 value functions were above the discounted return traces. Because the discount factor α for TVT transport was relatively large (0.9), the two humps in the value prediction were of comparable magnitude. ![ ](https://media.arxiv-vanity.com/render-output/7948497/x6.png) Figure 6: More Complex Information Acquisition. *a.* In Latent Information Acquisition, the agent (yellow arrow) must touch three procedurally generated objects to identify from a subsequent color flash if each is either green or red. In P3, green objects yield positive reward and red objects negative. *b.* TVT performed well on this task (black curve). *c.* In 20 trials, we plot the positional coverage in P1 of a TVT agent compared to RMA. TVT developed exploratory behavior in P1: it navigated among the six possible locations where the P1 objects could be placed, whereas the RMA typically moved into the corner. *d.* A quantification over 20 trials of the exploratory behavior in P1: TVT usually touched all three of the objects in P1, whereas RMA touched about one. Finally, we look at a richer information acquisition task, Latent Information Acquisition (Figure 6a). In P1, the agent begins in a room surrounded by three objects with random textures and colors drawn from a set. During P1, each object has no reward associated with it. When an object is touched by the agent, it disappears and a color swatch (green or red) appears on the screen. Green swatches indicate that the object is good, and red swatches indicate it is bad. The number of green- and red-associated objects was balanced on average. In P2, the agent again collects apples for 30 seconds. 
In P3, the agent must collect only the objects that were associated with a green swatch. The TVT agent alone was able to solve the task (Figure 6b, black curve), usually touching all three objects in P1 (Figure 6d), while the RMA only touched one object on average, and it outperformed non-TVT agents by a wide margin (Figure 6b, other colors). The non-TVT agents all exhibited pathological behavior in P1. In P1, the objects were situated on a grid of six possible locations (with no relationship to P3 location). TVT learned an exploratory sweeping behavior whereby it efficiently covered the locations where the objects were present (Figure 6c), whereas RMA reliably moved into the same corner, thus touching by accident only one object. Discussion ---------- The mechanism of TVT should be compared to other recent proposals to address the problem of long-term temporal credit assignment. The Sparse Attentive Backtracking algorithm[[27](#bib.bib27)] in a supervised learning context uses attentional mechanisms over the states of an RNN to propagate backpropagation gradients effectively. The idea of using attention to the past is shared with our work; however, there are substantial differences. Instead of propagating gradients to shape network representations, in the RMA we have used temporally local reconstruction objectives to ensure relevant information is encoded and stored in the memory. Further, backpropagating gradients to RNN states would not actually train a policy’s action distribution, which is the crux of reinforcement learning. Our approach instead modifies the rewards from which the full policy gradient is derived. Like TVT, the RUDDER algorithm[[28](#bib.bib28)] has recently been proposed in the RL context to address the problem of learning from delayed rewards. RUDDER uses an LSTM to make predictions about future returns and sensitivity analysis to decompose those returns into reward packets distributed throughout the episode. TVT is explicitly designed to use a reconstructive memory system to compress high-dimensional observations in partially-observed environments and retrieve them with content-based attention. At present, we know of no other algorithm that can solve type 1 information acquisition tasks. Temporal Value Transport is a heuristic algorithm but one that expresses coherent principles we believe will endure: past events are encoded, stored, retrieved, and revaluated. TVT fundamentally intertwines memory systems and reinforcement learning: the attention weights on memories specifically modulate the reward credited to past events. While not intended as a neurobiological model, the notion that neural memory systems and reward systems are highly co-dependent is supported by much evidence, including the existence of direct dopaminergic projections to hippocampal CA1 and the contribution of D1/D5 dopamine receptors in acquiring task performance in awake-behaving animals[[29](#bib.bib29), [30](#bib.bib30)]. Throughout this work, we have seen that standard reinforcement learning algorithms are compromised when solving even simple tasks requiring long-term behavior. We view discounted utility theory, upon which almost all reinforcement learning is predicated, as the ultimate source of the problem, and our work provides evidence that other paradigms are not only possible but can work better. In economics, paradoxical violation of discounted utility theory has occasioned bountiful scholarship and diverse, incompatible, and incomplete theories[[14](#bib.bib14)]. 
We hope that a cognitive mechanisms approach to understanding “inter-temporal choice” – in which preferences and long-term economic behavior are decoupled from a rigid discounting model – will inspire new ways forward. The principle of splicing together remote events based on episodic memory access may offer a promising vantage from which to begin future study of these issues. The complete explanation of the remarkable ability of human beings to problem solve and express coherent behaviors over long spans of time remains a profound mystery about which our work only provides a smattering of insight. TVT learns slowly, whereas humans are at times able to discover causal connections over long intervals quickly (albeit sometimes inaccurately). Human cognitive abilities are often conjectured to be fundamentally more model-based than the mechanisms in most current reinforcement learning agents (TVT included)[[31](#bib.bib31)] and can provide consciously available causal explanations[[32](#bib.bib32)] for events. When the book is finally written on the subject, it will likely be understood that long-term temporal credit assignment recruits nearly the entirety of the human cognitive apparatus, including systems designed for prospective planning, abstract, symbolic, and logical reasoning, commitment to goals over indefinite intervals, and language. Some of this human ability may well require explanation on a different level of inquiry altogether: among different societies, attitudes and norms regarding savings rates and investment vary enormously[[33](#bib.bib33)]. There is in truth no upper limit to the time horizons we can conceptualize and plan for. Correspondence -------------- Correspondence should be addressed to Greg Wayne, Chia-Chun Hung, or Timothy Lillicrap (email: {gregwayne, aldenhung, countzero}@google.com). References ---------- 1 Agent Model -------------- At a high level, the Reconstructive Memory Agent (RMA) consists of four modules: an encoder for processing observations at each time step; a memory augmented recurrent neural network, which contains a deep LSTM “controller” network and an external memory that stores a history of the past; its output combines with the encoded observation to produce a state variable representing information about the environment (state variables also constitute the information stored in memory); a policy that takes the state variable and the memory’s recurrent states as input to generate an action distribution; a decoder, which takes in the state variable, and predicts the value function as well as all current observations. We now describe the model in detail by defining its parts and the loss functions used to optimise it. Parameters given per task are defined in Table [1](#S7.T1 "Supplementary Table 1 ‣ 7.12 Task Specific Parameters ‣ 7 Tasks ‣ Optimizing Agent Behavior over Long Time Scales by Transporting Value"). ### 1.1 Encoder The encoder is composed of three sub-networks: the image encoder, the action encoder, and the reward encoder. These act independently on the different elements contained within the input set ot≡(It,at−1,rt−1), where It is the current observed image, and at−1 and rt−1 are the action and reward of previous time step. The outputs from these sub-networks are concatenated into a flat vector et. #### 1.1.1 Image Encoder The image encoder takes in image tensors It of size 64×64×3 (3 channel RGB). We then apply 6 ResNet [[34](#bib.bib34)] blocks with rectified linear activation functions. 
#### 1.1.2 Action Encoder

In all environments, the action from the previous time step is a one-hot binary vector $a_{t-1}$ (6-dimensional here) with $a_0 \equiv 0$. We use an identity encoder that leaves the action one-hot unchanged.

#### 1.1.3 Reward Encoder

The reward from the previous time step, $r_{t-1}$, is represented as a scalar and is not processed further.

### 1.2 Decoder

The decoder is composed of four sub-networks. Three of these sub-networks are matched to the encoder sub-networks for the image, previous action, and previous reward. The additional sub-network decodes the value function.

#### 1.2.1 Image Decoder

The image decoder has the same architecture as the encoder except that the operations are reversed. In particular, all 2D convolutional layers are replaced with transposed convolutions [36]. Additionally, the last layer produces a number of output channels formatted to the likelihood function used for the image reconstruction loss, described in more detail in Eq. (8).

#### 1.2.2 Action and Reward Decoders

The reward and action decoders are both linear layers from the state variable $z_t$ to, respectively, a scalar and the action cardinality.

#### 1.2.3 Value Function Predictor

The value function predictor is a multi-layer perceptron (MLP) that takes in the concatenation of the state variable with the action distribution's logits; to ensure that value function predictor learning does not modify the policy, we block the gradient (stop gradient) back through to the policy logits. The MLP has a single hidden layer of 200 hidden units and a tanh activation function, which then projects via another linear layer to a 1-dimensional output. This function is a state-value function $\hat V^\pi_t \equiv \hat V^\pi(z_t, \text{StopGradient}(\log \pi_t))$.

### 1.3 Memory-Augmented RNN

The RNN is primarily based on a simplification of the Differentiable Neural Computer (DNC) [24]. It is composed of a deep LSTM and a slot-based external memory. The LSTM has recurrent state $(h_t, c_t)$ (output state and cells, respectively). The memory itself is a two-dimensional matrix $M_t$ of size $N \times W$, where $W$ is the same size as a state variable $z$. The memory at the beginning of each episode is initialised blank, namely $M_0 = 0$. We also carry the memory readouts $m_t \equiv [m^{(1)}_t, m^{(2)}_t, \ldots, m^{(k)}_t]$, a list of $k$ vectors read from the memory $M_t$, as recurrent state. At each time step, the following steps are taken sequentially:

1. Generate the state variable $z_t$ with $e_t$, $h_{t-1}$, and $m_{t-1}$ as input.
2. Update the deep LSTM state: $h_t = \text{LSTM}(z_t, m_{t-1}, h_{t-1})$.
3. Construct the read keys and read from the external memory.
4. Write the state variable $z_t$ to a new slot in the external memory.

### 1.4 State Variable Generation

The first step is to generate a state variable $z_t$, combining the new observation with the recurrent information.
We take the encoded current observation $e_t$, concatenated with the recurrent information $h_{t-1}$ and $m_{t-1}$, and pass it through a single-hidden-layer MLP with a hidden layer of $2 \times W$ tanh units and an output layer of size $W$.

### 1.5 Deep LSTMs

We use a deep LSTM [37] of two hidden layers. Although the deep LSTM model has been described before, we describe it here for completeness. Denote the input to the network at time step $t$ as $x_t$. Within a layer $l$, there is a recurrent state $h^l_t$ and a “cell” state $c^l_t$, which are updated based on the following recursion (with $\sigma(x) \equiv (1 + \exp(-x))^{-1}$):

$$
\begin{aligned}
i^l_t &= \sigma\big(W^l_i[x_t, h^l_{t-1}, h^{l-1}_t] + b^l_i\big)\\
f^l_t &= \sigma\big(W^l_f[x_t, h^l_{t-1}, h^{l-1}_t] + b^l_f\big)\\
c^l_t &= f^l_t c^l_{t-1} + i^l_t \tanh\big(W^l_c[x_t, h^l_{t-1}, h^{l-1}_t] + b^l_c\big)\\
o^l_t &= \sigma\big(W^l_o[x_t, h^l_{t-1}, h^{l-1}_t] + b^l_o\big)\\
h^l_t &= o^l_t \tanh(c^l_t)
\end{aligned}
$$

To produce a complete output $h_t$, we concatenate the output vectors from each layer: $h_t \equiv [h^1_t, h^2_t]$. These are passed out for downstream processing.

### 1.6 LSTM Update

At each time step $t$, the deep LSTM receives input $z_t$, which is then concatenated with the memory readouts of the previous time step, $m_{t-1}$. The input to the LSTM is therefore $x_t = [z_t, m_{t-1}]$. The deep LSTM equations are applied, and the output $h_t$ is produced.

### 1.7 External Memory Reading

A linear layer is applied to the LSTM's output $h_t$ to construct a memory interface vector $i_t$ of dimension $k \times (W + 1)$. $i_t$ is then segmented into $k$ read keys $k^{(1)}_t, k^{(2)}_t, \ldots, k^{(k)}_t$ of length $W$ and $k$ scalars $sc^{(1)}_t, \ldots, sc^{(k)}_t$, which are passed through the function $\text{SoftPlus}(x) = \log(1 + \exp(x))$ to create the read strengths $\beta^{(1)}_t, \beta^{(2)}_t, \ldots, \beta^{(k)}_t$.

Memory reading is executed before memory writing. Reading is content-based and proceeds by computing the cosine similarity between each read key $k^{(i)}_t$ and each memory row $j$:

$$
c^{(ij)}_t = \cos\big(k^{(i)}_t, M_{t-1}[j,\cdot]\big) = \frac{k^{(i)}_t \cdot M_{t-1}[j,\cdot]}{\big|k^{(i)}_t\big|\,\big|M_{t-1}[j,\cdot]\big|}.
$$

We then find the indices $j^{(i)}_1, \ldots, j^{(i)}_{\text{topK}}$ corresponding to the topK largest values of $c^{(ij)}_t$ (over index $j$). Note that since unwritten rows of $M_{t-1}$ are equal to the zero vector, some of the chosen indices $j_1, \ldots, j_{\text{topK}}$ may correspond to rows of $M_{t-1}$ that are equal to the zero vector. A weighting vector of length $N$ is then computed by setting

$$
w^{(i)}_t[j] =
\begin{cases}
\dfrac{\exp\big(\beta^{(i)}_t c^{(ij)}_t\big)}{\sum_{j' \in \{j^{(i)}_1,\ldots,j^{(i)}_{\text{topK}}\}} \exp\big(\beta^{(i)}_t c^{(ij')}_t\big)}, & j \in \{j^{(i)}_1, \ldots, j^{(i)}_{\text{topK}}\},\\[2mm]
0, & \text{otherwise.}
\end{cases}
$$

For each key, the readout from memory is $m^{(i)}_t = M_{t-1}^\top w^{(i)}_t$. The full memory readout is the concatenation across all read heads: $m_t \equiv [m^{(1)}_t, \ldots, m^{(k)}_t]$.

### 1.8 External Memory Writing

Writing to memory occurs after reading and is also defined using weighting vectors. The write weighting $v^{\text{wr}}_t$ has length $N$ and always appends information to the $t$-th row of the memory matrix at time $t$, i.e., $v^{\text{wr}}_t[i] = \delta_{it}$ (using the Kronecker delta). The information we write to the memory is the state variable $z_t$. Thus, the memory update can be written as

$$
M_t = M_{t-1} + v^{\text{wr}}_t z_t^\top. \qquad (7)
$$
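The read and write operations above are simple enough to render directly. Below is a small, runnable numpy sketch of the content-based read (Section 1.7) and the append-only write (Section 1.8) for a single read head; the sizes and random inputs are placeholders.

```python
# Hedged sketch of one content-based read and one append-only write.
import numpy as np

def memory_read(M, key, beta, top_k=50):
    """Read from memory M (N x W) with one key (W,) and strength beta."""
    norms = np.linalg.norm(M, axis=1) * np.linalg.norm(key)
    cos = (M @ key) / np.maximum(norms, 1e-8)   # c_t^{(ij)} per row j
    idx = np.argsort(cos)[-top_k:]              # indices of the topK matches
    logits = beta * cos[idx]
    w = np.zeros(M.shape[0])
    w[idx] = np.exp(logits - logits.max())
    w /= w.sum()                                # softmax over topK rows only
    return M.T @ w, w                           # readout m_t^{(i)}, weights

def memory_write(M, z, t):
    """Append state variable z to row t (Eq. 7): M_t = M_{t-1} + v_t z^T."""
    M = M.copy()
    M[t] += z
    return M

N, W = 100, 200                                 # illustrative sizes
M = np.zeros((N, W))
for t in range(10):                             # write a short history
    M = memory_write(M, np.random.randn(W), t)
m, w = memory_read(M, key=np.random.randn(W), beta=2.0)
```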
### 1.9 Policy

The policy module receives $z_t$, $h_t$, and $m_t$ as inputs. The inputs are passed through a single-hidden-layer MLP with 200 tanh units, which then projects to the logits of a multinomial softmax with the dimensionality of the action space. The action $a_t$ is sampled and executed in the environment.

2 Loss Functions
----------------

We combine a policy gradient loss with reconstruction objectives for decoding observations. We also have a specific loss that regularizes the use of memory for TVT.

### 2.1 Reconstruction Loss

The reconstruction loss is the negative conditional log-likelihood of the observations and return, $-\log p(o_t, R_t \mid z_t)$, which is factorised into independent loss terms associated with each decoder sub-network and is conditioned on the state variable $z_t$. We use a multinomial softmax cross-entropy loss for the action, mean-squared error (Gaussian with fixed variance of 1) losses for the reward and the value function, and a Bernoulli cross-entropy loss for each pixel channel of the image. Thus, we have a negative conditional log-likelihood loss contribution at each time step of

$$
-\log p(o_t, R_t \mid z_t) \equiv \alpha_{\text{image}} L_{\text{image}} + \alpha_{\text{value}} L_{\text{value}} + \alpha_{\text{rew}} L_{\text{rew}} + \alpha_{\text{act}} L_{\text{act}}, \qquad (8)
$$

where

$$
\begin{aligned}
L_{\text{image}} &= -\sum_{w=1}^{|W|}\sum_{h=1}^{|H|}\sum_{c=1}^{|C|} \Big[I_t[w,h,c] \log \hat I_t[w,h,c] + \big(1 - I_t[w,h,c]\big)\log\big(1 - \hat I_t[w,h,c]\big)\Big],\\
L_{\text{value}} &= \tfrac{1}{2}\,\big\|R_t - \hat V^\pi\big(z_t, \text{StopGradient}(\log \pi_t)\big)\big\|^2,\\
L_{\text{rew}} &= \tfrac{1}{2}\,\|r_{t-1} - \hat r_{t-1}\|^2,\\
L_{\text{act}} &= -\sum_{i=1}^{|A|} \Big[a_{t-1}[i]\log \hat a_{t-1}[i] + \big(1 - a_{t-1}[i]\big)\log\big(1 - \hat a_{t-1}[i]\big)\Big].
\end{aligned}
$$

On all but the standard RL control experiment tasks, we constructed the target return value as $R_t = r_t + \gamma r_{t+1} + \gamma^2 r_{t+2} + \cdots + \gamma^{T-t} r_T$. For the standard RL control experiment tasks with episodes of length $T$, we use “truncation windows” [38] in which the time axis is subdivided into segments of length $\tau_{\text{window}}$. (The full gradient can be considered a truncated gradient with $\tau_{\text{window}} = T$.) If the window around time index $t$ ends at time index $k$, the return within the window is

$$
R_t :=
\begin{cases}
r_t + \gamma r_{t+1} + \gamma^2 r_{t+2} + \cdots + \gamma^{k-t+1}\,\hat V^\pi_\nu(z_{k+1}, \log \pi_{k+1}), & \text{if } k < T,\\
r_t + \gamma r_{t+1} + \gamma^2 r_{t+2} + \cdots + \gamma^{T-t} r_T, & \text{if } T \le k.
\end{cases} \qquad (9)
$$

As a measure to balance the magnitude of the gradients from the different reconstruction losses, the image reconstruction loss is divided by the number of pixel-channels, $|W| \times |H| \times |C|$.

### 2.2 Policy Gradient

We use discount and bootstrapping parameters $\gamma$ and $\lambda$, respectively, as part of the policy advantage calculation given by the Generalised Advantage Estimation (GAE) algorithm [39]. Defining $\delta_t \equiv r_t + \gamma \hat V^\pi(z_{t+1}, \log \pi_{t+1}) - \hat V^\pi(z_t, \log \pi_t)$, Generalised Advantage Estimation makes an update of the form

$$
\Delta\theta \propto \sum_{t = k\tau_{\text{window}}}^{(k+1)\tau_{\text{window}}} \Bigg[\sum_{t'=t}^{(k+1)\tau_{\text{window}}} (\gamma\lambda)^{t'-t}\,\delta_{t'}\Bigg]\, \nabla_\theta \log \pi_\theta(a_t \mid h_t). \qquad (10)
$$

There is an additional loss term that increases the entropy of the policy's action distribution. This and pseudocode for all of the RMA's updates are provided in Algorithm 2.
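The return and advantage computations of Eqs. (9) and (10) can be sketched in a few lines of numpy. This is an illustration of the math rather than the training code; the placeholder arrays stand in for rewards and value predictions from a worker's truncation window.

```python
# Hedged sketch: windowed bootstrapped returns (Eq. 9) and GAE (Eq. 10).
import numpy as np

def windowed_returns(rewards, gamma, bootstrap):
    """R_t per Eq. (9); `bootstrap` is V-hat(z_{k+1}) or 0 at termination."""
    R, out = bootstrap, np.zeros_like(rewards)
    for t in reversed(range(len(rewards))):
        R = rewards[t] + gamma * R
        out[t] = R
    return out

def gae_advantages(rewards, values, next_values, gamma, lam):
    """A_t = sum_{t'>=t} (gamma*lam)^{t'-t} delta_{t'}, with
    delta_t = r_t + gamma V(s_{t+1}) - V(s_t), as in Eq. (10)."""
    deltas = rewards + gamma * next_values - values
    A, out = 0.0, np.zeros_like(rewards)
    for t in reversed(range(len(rewards))):
        A = deltas[t] + gamma * lam * A
        out[t] = A
    return out

T = 5                                    # toy window
r = np.random.randn(T)
v = np.random.randn(T + 1)               # value predictions, incl. bootstrap
adv = gae_advantages(r, v[:-1], v[1:], gamma=0.96, lam=0.96)
ret = windowed_returns(r, gamma=0.96, bootstrap=v[-1])
```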
### 2.3 Temporal Value Transport Specific Loss

We include an additional regularization term, described in Section 5.3.

3 Comparison Models
--------------------

We introduce two comparison models: the LSTM+Mem Agent and the LSTM Agent.

### 3.1 LSTM+Mem Agent

The LSTM+Mem Agent is similar to the RMA. The key difference is that it has no reconstruction decoders and losses. The value function is produced by a one-hidden-layer MLP with 200 hidden units: $\hat V(z_t, \text{StopGradient}(\log \pi_t))$.

### 3.2 LSTM Agent

The LSTM Agent additionally has no external memory system and is essentially the same design as the A3C agent [38]. We have retrofitted the model to share the same encoder networks as the RMA, acting on input observations to produce the same vector $e_t$. This is then passed as input to a deep 2-layer LSTM that is the same as the one in the RMA. The LSTM has two output “heads”, which are both one-hidden-layer MLPs with 200 hidden units: one for the policy distribution $\pi(a_t \mid z_t, h_t)$ and one for the value function prediction $\hat V(z_t, h_t, \text{StopGradient}(\log \pi_t))$. As for our other agents, the policy head is trained using Eq. (10).

4 Implementation and Optimisation
----------------------------------

For optimisation, we used truncated backpropagation through time [40]. We ran 384 parallel worker threads that each ran an episode on an environment and calculated gradients for learning. Each gradient was calculated after one truncation window, $\tau_{\text{window}}$. For all main paper experiments other than the standard RL control experiments, $\tau_{\text{window}} = T$, the length of the episode. The gradient computed by each worker was sent to a “parameter server” that asynchronously ran an optimisation step with each incoming gradient. We optimise the model using ADAM optimisers [41] with $\beta_1 = 0.9$ and $\beta_2 = 0.999$. The pseudocode for each RMA worker is presented in Algorithm 2.

```
// Assume global shared model parameter vector θ and counter T := 0
// Assume thread-specific parameter vector θ′
// Assume discount factor γ ∈ (0, 1] and bootstrapping parameter λ ∈ [0, 1]
Initialize thread step counter t := 1
repeat
    Synchronize thread-specific parameters θ′ := θ
    Zero the model's memory & recurrent state if a new episode begins
    t_start := t
    repeat
        e_t = Encode(o_t)
        z_t = StateVariableMLP(e_t, h_{t-1}, m_{t-1})
        h_t, m_t = RNN(z_t, h_{t-1}, m_{t-1})          // memory-augmented RNN
        Update memory M_t = Write(M_{t-1}, z_t)
        Policy distribution π_t = π(a_t | z_t, h_t, m_t)
        Sample a_t ∼ π_t
        V̂_t, ô_t = Decode(z_t, StopGradient(log π_t))
        Apply a_t to environment; receive reward r_t and observation o_{t+1}
        t := t + 1; T := T + 1
    until environment termination or t − t_start == τ_window
    If not terminated, run an additional step to compute V̂_ν(z_{t+1}, log π_{t+1})
        and set R_{t+1} := V̂_ν(z_{t+1}, log π_{t+1})    // but don't increment counters
    (Optional) Apply Temporal Value Transport (Algorithm 3)
    Reset performance accumulators A := 0; L := 0; H := 0
    for k from t down to t_start do
        γ_t := 0 if k is an environment termination, else γ
        R_k := r_k + γ_t R_{k+1}
        δ_k := r_k + γ_t V̂(z_{k+1}, log π_{k+1}) − V̂(z_k, log π_k)
        A_k := δ_k + (γλ) A_{k+1}
        A := A + A_k log π_k[a_k]
        H := H − α_entropy Σ_i π_k[i] log π_k[i]       // entropy loss
        L := L + L_k   (Eq. (8))
    end for
    dθ′ := ∇_{θ′}(A + H + L)
    Asynchronously update θ via gradient ascent using dθ′
until T > T_max
```

Algorithm 2: RMA Worker Pseudocode

For all experiments, we used the open source package Sonnet – available at <https://github.com/deepmind/sonnet> – and applied its defaults to initialise network parameters.
5 Temporal Value Transport
---------------------------

Temporal Value Transport works in two stages. First, we identify significant memory read events, which become splice events. Second, we transport the value predictions made at those read events back to the time points being read from, where they modify the rewards and therefore the RL updates.

### 5.1 Splice Events

At time $t'$, the read strengths $\beta^{(i)}_{t'}$ are calculated as described in Section 1.7. To exclude sending value back to events in the near past, for time points $t'$ where $t' - \arg\max_t w_{t'}[t] < 1/(1-\gamma)$, we reset $\beta^{(i)}_{t'} := 0$ for the remainder of the computation. We then identify splice events by finding all time windows $[t'_-, t'_+]$ where $\beta^{(i)}_{t'} \ge \beta_{\text{threshold}}$ for $t' \in [t'_-, t'_+]$ but $\beta^{(i)}_{t'} < \beta_{\text{threshold}}$ for $t' = t'_- - 1$ and $t' = t'_+ + 1$. Within each such window, we set $t_{\text{max}}$ to be the argmax over $t'$ of $\beta^{(i)}_{t'}$ among the included points.

### 5.2 Reward Modification

For each $t_{\text{max}}$ above, we modify the reward of all time points $t$ that occurred more than $1/(1-\gamma)$ steps beforehand:

$$
r_t \rightarrow
\begin{cases}
r_t + \alpha\, w^{(i)}_{t_{\text{max}}}[t]\, \hat V_{t_{\text{max}}+1}, & \text{if } t < t_{\text{max}} - 1/(1-\gamma),\\
r_t, & \text{otherwise.}
\end{cases} \qquad (11)
$$

We send back $\hat V_{t_{\text{max}}+1}$ because that is the first value function prediction to incorporate information from the read at time $t_{\text{max}}$. For multiple read processes $i$, the procedure is the same, with independent, additive changes to the reward at any time step. Pseudocode for Temporal Value Transport with multiple read processes is provided in Algorithm 3.

### 5.3 Reading Regularization

To prevent the TVT mechanism from being triggered extraneously, we impose a small regularization cost whenever a read strength is above threshold:

$$
L_{\text{read-regularization}} = 5 \times 10^{-6} \times \sum_{i=1}^{k} \max\big(\beta^{(i)}_t - \beta_{\text{threshold}},\, 0\big). \qquad (12)
$$

This is added to the other loss terms.

```
input: {r_t}_{t∈[1,T]}, {V̂_t}_{t∈[1,T]},
       read strengths {β^(i)_t}, read weights {w^(i)_t}, i ∈ [1, k]
for i ∈ [1, k] do
    for t′ ∈ [1, T] do
        if t′ − argmax_t w^(i)_{t′}[t] < 1/(1−γ) then
            β^(i)_{t′} := 0
        end if
    end for
    splices := []
    for each crossing of read strength β^(i) above β_threshold do
        let t_max be the maximum over t of β^(i)_t in the crossing window
        append t_max to splices
    end for
    for t in 1 to T do
        for t′ in splices do
            if t < t′ − 1/(1−γ) then
                r_t := r_t + α w^(i)_{t′}[t] V̂_{t′+1}
            end if
        end for
    end for
end for
return {r_t}_{t∈[1,T]}
```

Algorithm 3: Temporal Value Transport for Multiple Reads
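For concreteness, here is a runnable numpy rendering of Algorithm 3 for a single read process. The arrays, the value of the transport coefficient `alpha`, and the treatment of the episode boundary in the value lookup are illustrative assumptions, not the exact production implementation.

```python
# Hedged sketch of Temporal Value Transport for one read process.
import numpy as np

def temporal_value_transport(r, V, beta, w, alpha=0.9, gamma=0.96,
                             beta_threshold=2.0):
    """r, V, beta: length-T arrays; w: (T, T) read weights, w[t'] over past t."""
    T = len(r)
    horizon = 1.0 / (1.0 - gamma)
    beta = beta.copy()
    # Zero out reads that attend only to the recent past (Section 5.1).
    for tp in range(T):
        if tp - np.argmax(w[tp]) < horizon:
            beta[tp] = 0.0
    # Splice events: the strongest read within each above-threshold run.
    above = beta >= beta_threshold
    splices, tp = [], 0
    while tp < T:
        if above[tp]:
            end = tp
            while end + 1 < T and above[end + 1]:
                end += 1
            run = np.arange(tp, end + 1)
            splices.append(run[np.argmax(beta[run])])
            tp = end + 1
        else:
            tp += 1
    # Transport value back to sufficiently distant past steps (Eq. 11).
    r = r.copy()
    for t_max in splices:
        for t in range(T):
            if t < t_max - horizon:
                # min(...) guards the episode boundary; an assumption here.
                r[t] += alpha * w[t_max][t] * V[min(t_max + 1, T - 1)]
    return r
```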
6 Signal-to-Noise Ratio Analysis
---------------------------------

### 6.1 Undiscounted Case

As in the article text, we refer to phases 1-3 as P1-P3. We define the signal as the squared norm of the expected policy change in P1 induced by the policy gradient. Let $\Delta\theta := \sum_{t \in \text{P1}} \nabla_\theta \log \pi(a_t \mid h_t)\, R_t$, and assume in the following that the returns are baseline-subtracted, $R_t \rightarrow R_t - \mathbb{E}_\pi[R_t]$. Then we define the signal as

$$
\text{Signal} := \big\|\mathbb{E}_\pi[\Delta\theta]\big\|^2 = \Bigg\|\mathbb{E}_\pi\Bigg[\sum_{t \in \text{P1}} \nabla_\theta \log \pi(a_t \mid h_t) \sum_{t' \ge t} r_{t'}\Bigg]\Bigg\|^2.
$$

We define the noise as the variance of the policy gradient:

$$
\text{Noise} := \mathbb{E}_\pi\Bigg[\Bigg\|\sum_{t \in \text{P1}} \nabla_\theta \log \pi(a_t \mid h_t)\, R_t - \mathbb{E}_\pi[\Delta\theta]\Bigg\|^2\Bigg].
$$

P1 and P2 are approximately independent, as P2 is a distractor phase whose initial state is unmodified by activity in P1. The only dependence is given by the agent's internal state and parameters, but we assume for these problems that it is a weak dependence, which we ignore in the present calculations. In this case,

$$
\mathbb{E}_\pi\Bigg[\sum_{t \in \text{P1}} \nabla_\theta \log \pi(a_t \mid h_t) \sum_{t' \ge t} r_{t'}\Bigg] = \mathbb{E}_\pi\Bigg[\sum_{t \in \text{P1}} \nabla_\theta \log \pi(a_t \mid h_t) \Bigg(\sum_{t' \in \text{P2}} r_{t'} + \sum_{t' \in \text{P3}} r_{t'}\Bigg)\Bigg] = \mathbb{E}_\pi\Bigg[\sum_{t \in \text{P1}} \nabla_\theta \log \pi(a_t \mid h_t) \sum_{t' \in \text{P3}} r_{t'}\Bigg]. \qquad (13)
$$

So we have

$$
\text{Signal} = \Bigg\|\mathbb{E}_\pi\Bigg[\sum_{t \in \text{P1}} \nabla_\theta \log \pi(a_t \mid h_t) \sum_{t' \in \text{P3}} r_{t'}\Bigg]\Bigg\|^2.
$$

For the noise, we have

$$
\text{Noise} = \mathbb{E}_\pi\Bigg[\Bigg\|\sum_{t \in \text{P1}} \nabla_\theta \log \pi(a_t \mid h_t) \Bigg(\sum_{t' \in \text{P2}} r_{t'} + \sum_{t' \in \text{P3}} r_{t'}\Bigg) - \mathbb{E}_\pi[\Delta\theta]\Bigg\|^2\Bigg],
$$

which decomposes into a term driven by the P2 rewards plus $\text{Tr}(\text{Var}_\pi[\Delta\theta \mid \text{no P2}])$, where $\text{Tr}(\text{Var}_\pi[\Delta\theta \mid \text{no P2}])$ is the variance in the policy gradient due to P1 and P3 without a P2 distractor phase. We make the assumption that performance in P2 is independent of activity in P1, which is approximately the case in the distractor task we present in the main text. With this assumption, the first term becomes

$$
\text{Term 1} = \text{Var}_\pi\Bigg[\sum_{t' \in \text{P2}} r_{t'}\Bigg] \times \mathbb{E}_\pi\Bigg[\Bigg\|\sum_{t \in \text{P1}} \nabla_\theta \log \pi(a_t \mid h_t)\Bigg\|^2\Bigg] = \text{Var}_\pi\Bigg[\sum_{t' \in \text{P2}} r_{t'}\Bigg] \times \text{Tr}\Bigg(\text{Var}_\pi\Bigg[\sum_{t \in \text{P1}} \nabla_\theta \log \pi(a_t \mid h_t)\Bigg]\Bigg),
$$

where the last equality holds because the score function $\sum_{t \in \text{P1}} \nabla_\theta \log \pi(a_t \mid h_t)$ has zero mean. Thus, the SNR (Signal / Noise) is approximately

$$
\text{SNR} \approx \frac{\Big\|\mathbb{E}_\pi\Big[\sum_{t \in \text{P1}} \nabla_\theta \log \pi(a_t \mid h_t) \sum_{t' \in \text{P3}} r_{t'}\Big]\Big\|^2}{\text{Var}_\pi\Big[\sum_{t' \in \text{P2}} r_{t'}\Big] \times \text{Tr}\Big(\text{Var}_\pi\Big[\sum_{t \in \text{P1}} \nabla_\theta \log \pi(a_t \mid h_t)\Big]\Big) + \text{Tr}(\text{Var}_\pi[\Delta\theta \mid \text{no P2}])}.
$$

In the limit of large P2 reward variance, we have

$$
\text{SNR} \approx \frac{\Big\|\mathbb{E}_\pi\Big[\sum_{t \in \text{P1}} \nabla_\theta \log \pi(a_t \mid h_t) \sum_{t' \in \text{P3}} r_{t'}\Big]\Big\|^2}{\text{Var}_\pi\Big[\sum_{t' \in \text{P2}} r_{t'}\Big] \times \text{Tr}\Big(\text{Var}_\pi\Big[\sum_{t \in \text{P1}} \nabla_\theta \log \pi(a_t \mid h_t)\Big]\Big)}.
$$

The reward variance in P2, $\text{Var}_\pi[\sum_{t' \in \text{P2}} r_{t'}]$, reduces the policy gradient SNR, and low SNR is known to impact the convergence of stochastic gradient optimization negatively [19]. Of course, averaging $S$ independent episodes increases the SNR correspondingly to $S \times \text{SNR}$, but the approach of averaging over an increasing number of samples is not universally possible and only defers the difficulty: there is always a level of reward variance in the distractor phase that matches or overwhelms the variance reduction given by averaging.
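The qualitative conclusion can be checked numerically with a deliberately tiny example. The sketch below assumes a one-step Bernoulli(p) policy in P1 whose action determines the P3 reward, with an independent Gaussian distractor reward of standard deviation sigma standing in for P2; all numbers are made up, but the estimated SNR falls as the distractor variance grows.

```python
# Hedged Monte-Carlo sketch of the SNR argument in a toy setting.
import numpy as np

rng = np.random.default_rng(0)

def policy_gradient_samples(p, sigma, n=100_000):
    a = rng.random(n) < p                          # P1 action
    grad_logp = np.where(a, 1 / p, -1 / (1 - p))   # d/dp log pi(a)
    r3 = a.astype(float)                           # P3 reward credits the action
    r2 = sigma * rng.standard_normal(n)            # distractor reward, mean 0
    returns = r2 + r3 - p                          # baseline-subtracted return
    return grad_logp * returns                     # per-episode gradient samples

for sigma in [0.0, 1.0, 10.0]:
    g = policy_gradient_samples(p=0.5, sigma=sigma)
    print(f"sigma={sigma:5.1f}  signal={g.mean()**2:.3f}  "
          f"noise={g.var():.3f}  SNR={g.mean()**2 / g.var():.4f}")
```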
7 Tasks
--------

All tasks were implemented in DeepMind Lab (DM Lab) [42]. DM Lab has a standardized environment map unit length: all sizes given below are in these units.

### 7.1 Observation and Action Repeats

For all DM Lab experiments, agents processed 15 frames per second. The environment itself produced 60 frames per second, but we propagated only the first observation of each packet of four to the agents. Rewards accumulated over each packet were summed together and associated with the first, undropped frame. Similarly, the agents chose one action at the beginning of each packet of four frames: this action was applied four times in a row. We define the number of “Agent Steps” as the number of independent actions sampled by the agent, i.e., one agent step per packet of four frames.

### 7.2 Action Sets

We used a consistent action set for all experiments except the Arbitrary Visuomotor Mapping task. For all other tasks, we used a set of six actions: *move forward*, *move backward*, *rotate left with rotation rate of 30* (mapping to an angular acceleration parameter in DM Lab), *rotate right with rotation rate of 30*, *move forward and turn left*, *move forward and turn right*. For the Arbitrary Visuomotor Mapping, we did not need to move relative to the screen but instead needed to move the viewing angle of the agent. We thus used four actions: *look up*, *look down*, *look left*, *look right* (with rotation rate parameter of 10).

### 7.3 Themes

DM Lab maps use *texture sets* to determine the floor and wall textures. We use a combination of four different texture sets in our tasks: *Pacman*, *Tetris*, *Tron*, and *Minesweeper*. DM Lab texture sets can take on various colours, but we use the default colours for each set: Pacman has blue floors and red walls; Tetris has blue floors and yellow walls; Tron has yellow floors and green walls; Minesweeper has blue floors and green walls. Examples of how these sets appear can be seen in various figures in the main text.

### 7.4 Task Phases

Episodes for the tasks with delay intervals are broken up into multiple phases. Phases do not repeat within an episode. Generally, the tasks contain three phases (P1-P3), with a distractor phase in the middle. We used a standardized P2 distractor phase task: the map is an 11×11 open square (Figure 1b, second column). The agent spawns (appears) adjacent to the middle of one side of the square, facing the middle. An apple is spawned independently at each unit of the map with probability 0.3, except for the square in which the agent spawns. Each apple gives a reward $r_{\text{apple}}$ of 5 when collected and disappears after collection. The agent remains in this phase for 30 seconds. (This length was varied in some experiments.) The map uses the Tetris texture set unless mentioned otherwise.

### 7.5 Cue Images

In several tasks, we use *cue images* to provide visual feedback to the agent, e.g., indicating that an object has been picked up. These cue images are colored rectangles that overlay the input image, covering the majority of its top half. An example of a red cue image is shown in Supplementary Figure 10a, third panel. These cues are shown for 1 second once activated, regardless of any transition to a new phase that may occur during display.

### 7.6 Primary Tasks

#### 7.6.1 Passive Visual Match

In each episode of Passive Visual Match, four distinct colors are randomly chosen from a fixed set of 16 colors. One of these is selected as the *target color*, and the remaining three are *distractor colors*. Four squares are generated with these colors, each the size of one wall unit. The three phases in each episode are:

1. The map is a 1×3 corridor with the target color square covering the wall unit at one end. The agent spawns at the other end of the corridor, facing the square (Figure 1b, first column). There are no rewards in this phase. The agent remains in this phase for 5 seconds. The map uses the Pacman texture set.
2. The standard distractor phase described above.
3. The map is a 4×7 rectangle with the four color squares (the target color and the three distractor colors) on one of the longer sides, with a unit gap between each square. The ordering of the four colors is randomly chosen. There is an additional single unit square placed in the middle of the opposite side, in which the agent spawns, facing the color squares. In front of each color square is a groundpad (Figure 1b, last two columns). When the agent touches one of these pads, a reward of 10 points is given if it is the pad in front of the target color square, and a reward of 1 is given for any other pad. The episode then ends. If the agent does not touch a pad within 5 seconds, no reward is given for this phase and the episode ends. The map uses the Tron texture set.
#### 7.6.2 Active Visual Match

Active Visual Match is the same as Passive Visual Match, except that the map in phase 1 is now larger and the position of the target image in phase 1 is randomized. The phase 1 map consists of two 3×3 open squares connected by a 1×1 corridor that joins each square in the middle of one side (Figure 2a, first two columns). The agent spawns in the center of one of the two squares, facing the middle of one of the walls adjacent to the wall with the opening to the corridor. The target color square is placed randomly over any one of the wall units on the map.

#### 7.6.3 Key-to-Door

The three phases of Key-to-Door are:

1. The map is identical to the map in phase 1 of Active Visual Match. The agent spawns in the corner of one of the squares that is furthest from the opening to the corridor, facing into the square but not towards the opening. A key is placed randomly within the map (not at the spawn point); if the agent touches the key, it disappears and a black cue image is shown (Figure 4a, first two columns). As in the Visual Match tasks, there are no rewards in this phase, and the phase lasts for 5 seconds. The map uses the Pacman texture set.
2. The standard distractor phase.
3. The map is a 1×3 corridor with a locked door in the middle of the corridor. The agent spawns at one end of the corridor, facing the door. At the end of the corridor on the other side of the door is a goal object (Figure 4a, fourth column). If the agent touched the key in phase 1, the door can be opened by walking into it, and if the agent then walks into the goal object, a reward of 10 points is given. Otherwise, no reward is given. The map uses the Tron texture set.

#### 7.6.4 Key-to-Door-to-Match

This task combines elements of Key-to-Door with Active Visual Match. One target color and three distractor colors are chosen in the same way as for the Visual Match tasks. In contrast to the standard task setup, there are five phases per episode:

1. This phase is the same as phase 1 of Key-to-Door but with a different map. The map is a 3×4 open rectangle with an additional 1×1 square attached at one corner, with the opening on the longer of the two walls. The agent spawns in the additional 1×1 square, facing into the rectangle (Figure 5a, first column). The map uses the Minesweeper texture set.
2. The standard distractor phase, except that it lasts for only 15 seconds instead of 30.
3. The map is the same as in phase 3 of Key-to-Door. Instead of a goal object behind the locked door, the target color square covers the wall at the far end of the corridor (Figure 5a, third column). There is no reward in this phase, and it lasts for 5 seconds. The map uses the Pacman texture set.
4. The standard distractor phase, except that it lasts for only 15 seconds instead of 30.
5. The final phase is the same as phase 3 in the Visual Match tasks.
#### 7.6.5 Two Negative Keys

The three phases of Two Negative Keys are:

1. The map is a 3×4 open rectangle. The agent spawns in the middle of one of the shorter walls, facing into the rectangle. One red key is placed in a corner opposite the agent, and one blue key is placed in the other corner opposite the agent; which corner has the red key and which the blue key is randomized per episode. If the agent touches either of the keys, a red or blue cue image is shown according to which key was touched (Supplementary Figure 10, first three columns). After one key is touched, it disappears, and nothing happens if the agent goes on to touch the remaining key (i.e., no cue is displayed and the key remains in the map). The phase lasts for 5 seconds, and there are no rewards; if the agent does not touch any key during this period, a black cue image is shown at the end of the phase. The map uses the Tron texture set.
2. The standard distractor phase, except with the Tetris texture set.
3. The layout is the same as in phase 3 of the Key-to-Door task. If the agent has picked up either of the keys, the door will open when touched, and the agent can collect the goal object, at which point it spawns back into the map from phase 2 but with all remaining apples removed. This phase lasts for only 2 seconds in total; when it ends, a reward of −20 is given if the agent did not collect the goal object, a reward of −10 is given if the agent collected the goal object after touching the blue key, and a reward of −1 is given if the agent collected the goal object after touching the red key. The map uses the Tron texture set.

#### 7.6.6 Latent Information Acquisition

In each episode, three objects are randomly generated using the DM Lab object generation utilities; the color and type of each object are randomized. Each object is independently randomly assigned to be a *good* or a *bad* object.

1. The map is a 3×5 rectangle. The agent spawns in one corner, facing outwards along one of the shorter walls. The three objects are positioned randomly among five points, as displayed in Figure 6c in the main text (Figure 6a, first four columns). If the agent touches one of the good objects, it disappears and a green cue image is shown. If the agent touches one of the bad objects, it disappears and a red cue image is shown. This phase lasts for 5 seconds, and there are no rewards. The map uses the Tron texture set. The image cues shown in this phase are displayed for only 0.25 seconds, so that the cues do not interfere with the continuation of the P1 activity (in all other tasks they are shown for 1 second).
2. The standard distractor phase, except with the Tetris texture set.
3. The map, spawn point, and possible object locations are the same as in phase 1. The objects are the same, but their positions are randomly chosen again. If the agent touches a good object, it disappears and a reward of 20 is given. If the agent touches a bad object, it disappears and a reward of −10 is given. This phase lasts for 5 seconds. The map uses the Tron texture set.

### 7.7 Distractor Phase Modifications

In order to analyze the effect of increasing distractor reward variance on agent learning, we created variants of the distractor phase in which this reward variance could be easily controlled. Since the distractor phase is standardized, any of these modifications can be used in any of the tasks.

### 7.8 Zero Apple Reward

The reward given for apples in the distractor phase is zero. Even though the apples give zero reward, they still disappear when touched by the agent.
### 7.9 Fixed Number of Apples

The reward given for apples remains at 5. Instead of the 120 free squares of the map independently spawning an apple with probability 0.3, we fix the number of apples to be 120 × 0.3 = 36 and distribute them randomly among the 120 available map units. Under an optimal policy where all apples are collected, this has the same mean reward as the standard distractor phase but with no variance.

### 7.10 Variable Apple Reward

The reward $r_{\text{apple}}$ given for apples in the distractor phase can be modified (to a positive integer value), but with probability $1 - 1/r_{\text{apple}}$ each apple independently gives zero reward instead of $r_{\text{apple}}$. Any apple touched by the agent still disappears. This implies that the optimal policy and the expected return under the optimal policy are constant, but the variance of the returns increases with $r_{\text{apple}}$. Since there are 120 possible positions for apples in the distractor phase, and apples independently appear in each of these positions with probability 0.3, the variance of undiscounted returns in P2, assuming all apples are collected, is

$$
120 \times \Bigg[\bigg(0.3 \times \frac{1}{r_{\text{apple}}}\bigg) \times r_{\text{apple}}^2 - (0.3 \times 1)^2\Bigg] = 36 \times \big(r_{\text{apple}} - 0.3\big). \qquad (14)
$$
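Eq. (14) is easy to verify by simulation. The sketch below assumes every spawned apple is collected and compares the empirical variance of the P2 return with 36 × (r_apple − 0.3); the sample count is arbitrary.

```python
# Hedged numerical check of Eq. (14) for the variable-apple-reward scheme.
import numpy as np

rng = np.random.default_rng(0)

def episode_return(r_apple, n_squares=120, p_spawn=0.3):
    spawned = rng.random(n_squares) < p_spawn        # apples appear i.i.d.
    paying = rng.random(n_squares) < 1.0 / r_apple   # which apples pay out
    return float(np.sum(spawned & paying) * r_apple)

for r_apple in [1, 3, 6, 10]:
    returns = np.array([episode_return(r_apple) for _ in range(50_000)])
    print(f"r_apple={r_apple:2d}  mean={returns.mean():6.2f}  "
          f"var={returns.var():7.2f}  predicted={36 * (r_apple - 0.3):7.2f}")
```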
### 7.11 Control Tasks

Control tasks are taken from the DM Lab 30 task set [42]. The tasks we include had a memory-access component to performance. We provide only brief descriptions here, since these tasks are part of the open source release of DM Lab, available at <https://github.com/deepmind/lab/tree/master/game_scripts/levels/contributed/dmlab30>.

#### 7.11.1 Explore Goal Locations Small

This task requires agents to find the goal object as fast as possible. Within an episode, when the goal object is found, the agent respawns and the goal appears again in the same location. The goal location, level layout, and theme are randomized per episode. The agent spawn location is randomized per respawn.

#### 7.11.2 Natlab Varying Map Randomized

The agent must collect mushrooms within a naturalistic terrain environment to maximise its score. The mushrooms do not regrow. The map is randomly generated and of intermediate size. The topographical variation, and the number, position, orientation, and sizes of shrubs, cacti, and rocks are all randomized, as are the locations of mushrooms. The time of day is randomized (day, dawn, night). The spawn location is randomized for each episode.

#### 7.11.3 Psychlab Arbitrary Visuomotor Mapping

This is a task in the Psychlab framework [43] where the agent is shown images from a visual memory capacity experiment dataset [44], but in an experimental protocol known as arbitrary visuomotor mapping. The agent is shown consecutive images that are associated with particular cardinal directions. The agent is rewarded if it can remember the direction in which to move its fixation cross for each image. The images are drawn from a set of roughly 2,500 possible images, and the specific associations are randomly generated per episode.

### 7.12 Task Specific Parameters

Across models, the same parameters were used for the TVT, RMA, LSTM+Mem, and LSTM agents except for γ, which for the TVT model was always 0.96 and was varied as indicated in the figure legends for the other models. The learning rate was varied only for the learning rate analysis in Section 8.5. Across tasks, we used the parameters shown in Table 1, with a few exceptions:

* For all the control tasks, we used $\alpha_{\text{image}} = 1$ instead of 20.
* For all the control tasks, we used $\tau_{\text{window}} = 200$ instead of the full episode.
* For the Two Negative Keys task, we used $\alpha_{\text{entropy}} = 0.05$ instead of 0.01.

| Parameter | Value |
| --- | --- |
| $\eta$ | $5 \times 10^{-6}$ |
| $\gamma$ | various |
| $\lambda$ | $= \gamma$ |
| $\alpha_{\text{image}}$ | 20 |
| $\alpha_{\text{rew}}$ | 1 |
| $\alpha_{\text{value}}$ | 0.4 |
| $\alpha_{\text{act}}$ | 1 |
| $\alpha_{\text{entropy}}$ | 0.01 |
| $\tau_{\text{window}}$ | Number of steps in episode |
| $N$ | Number of steps in episode |
| $W$ | 200 |
| $k$ | 3 |
| topK | 50 |
| $\beta_{\text{threshold}}$ | 2 |

Supplementary Table 1: Parameters used across tasks (not all parameters apply to all models).
8 Task Analyses
----------------

### 8.1 Variance Analysis

For the Active Visual Match and Key-to-Door tasks, we analysed the effect of distractor phase reward variance on the performance of the agents. To do this, we used the same tasks but with modified distractor phases, as described in Section 7.7.

### 8.2 Active Visual Match

Supplementary Figure 13 shows learning curves for $r_{\text{apple}} = 0$ (see Section 7.8) and $r_{\text{apple}} = 1$ (see Section 7.10). When $r_{\text{apple}} = 1$, all apples give reward. Learning for the RMA was already significantly disrupted when $r_{\text{apple}} = 1$, so for Active Visual Match we do not report higher-variance examples.

### 8.3 Key-to-Door

Figure 4c shows learning curves with the apple reward $r_{\text{apple}}$ set to 1, 3, 6, and 10, which gives variances of total P2 reward of 25, 100, 196, and 361, respectively (see Section 7.10). Note that episode scores for these tasks show that all apples are usually collected in P2 at policy convergence. Note also that the mean distractor phase return in this analysis is much less than the mean return in the standard distractor phase. Another way of looking at the effect of variance in the distractor phase while including the full mean return is shown in Supplementary Figure 11, which has three curves: one for zero apple reward (Section 7.8), one for a fixed number of apples (Section 7.9), and one for the full level (which has a variable number of apples per episode but the same expected return as the fixed-number-of-apples case). From the figure, it can be seen that introducing large rewards slows learning in P1, due to the variance present while the agent is still learning the policy of collecting all the apples, but the disruption to learning is much more significant when the number of apples continues to be variable even after the agent has learnt the apple-collection policy.

### 8.4 Return Prediction Saliency

To generate Figure 4e in the main text, a sequence of actions and observations for a single episode of Key-to-Door was recorded from a TVT agent trained on that level. We show two time steps at which the key was visible. We calculated the gradients $\partial \hat V_t / \partial I^{w,h,c}_t$ of the agent's value predictions with respect to the input image at each time step. We then computed the sensitivity of the value function prediction to each pixel:

$$
g^{w,h}_t = \sqrt{\sum_{c=1}^{3} \Big|\partial \hat V_t / \partial I^{w,h,c}_t\Big|^2}.
$$

We smoothed these sensitivity estimates using a 2-pixel-wide Gaussian filter:

$$
\hat g^{w,h}_t = \text{GaussianFilter}\big(g^{w,h}_t, \sigma = 2 \text{ pixels}\big).
$$

We then normalized this quantity based on its statistics across time and pixels by computing the 97th percentile:

$$
g_{97} = \text{97th percentile of } \hat g^{w,h}_t \text{ over all } t, w, h.
$$

Input images were then layered over a black image with an alpha channel that increased to 1 based on the sensitivity calculation. Specifically, we used an alpha channel value of

$$
\alpha^{w,h}_t = \min\Bigg(0.3 + (1 - 0.3)\,\frac{\hat g^{w,h}_t}{g_{97}},\; 1\Bigg). \qquad (15)
$$
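As a sketch of this overlay computation, the following assumes the per-pixel value gradients have already been extracted into an array of shape (T, H, W, 3) (obtaining them would require backpropagation through the trained agent) and uses scipy's Gaussian filter in place of the unspecified smoothing implementation.

```python
# Hedged sketch of the saliency alpha channel of Section 8.4 / Eq. (15).
import numpy as np
from scipy.ndimage import gaussian_filter

def saliency_alpha(dV_dI):
    """dV_dI: (T, H, W, 3) array of value gradients w.r.t. input pixels."""
    g = np.sqrt(np.sum(dV_dI ** 2, axis=-1))        # g_t^{w,h}: L2 over channels
    g_hat = np.stack([gaussian_filter(x, sigma=2)    # smooth each frame
                      for x in g])
    g97 = np.percentile(g_hat, 97)                   # normalizing constant
    return np.minimum(0.3 + (1 - 0.3) * g_hat / g97, 1.0)  # Eq. (15)

alphas = saliency_alpha(np.random.randn(4, 64, 64, 3))  # placeholder gradients
```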
### 8.5 Learning Rate Analysis for High Discount Factor

To check that the learning rates used for the high-discount RMA and LSTM models were reasonable, we ran the largest-variance tasks from Section 8.2 (for the RMA with γ = 0.998) and Section 8.3 (for the LSTM with γ = 0.998) with learning rates $3.2 \times 10^{-7}$, $8 \times 10^{-7}$, $2 \times 10^{-6}$, $5 \times 10^{-6}$, and $1.25 \times 10^{-5}$. The results, shown in Supplementary Figure 12, indicate that the default learning rate of $5 \times 10^{-6}$ was the best among those tried.

### 8.6 Behavioral Analysis of Active Visual Match

We compared the P1 behaviors of a TVT agent versus an RMA, as shown in Figure 3a in the main text. First, we modified the environment to fix the color square in one of three pre-selected wall locations. We then ran TVT and the RMA for 10 episodes in each of these three fixed color square conditions. Finally, we plotted the agents' positional trajectories in each condition. We also visualized the TVT agent's memory retrievals by plotting a single episode trajectory with arrowheads indicating agent orientation at every second agent step. Each arrowhead is also color-coded by the maximal read weight from any time step in P3 back to the memory encoded at this time and position in P1.

### 8.7 Behavioral Analysis of Latent Information Acquisition

We evaluated TVT and the RMA for 50 episodes on the Latent Information Acquisition task. To visualize the behavior, we scatter-plotted the agent's position as a black dot for each P1 time step (50 episodes × 75 P1 time steps = 3,750 dots in total). We also binned the agent's position on a 4×5 grid and counted the percentage of time the agent occupied each grid cell. We visualized this grid occupancy using a transparent heatmap overlying the top-down view. To further quantify the behaviour of TVT versus the RMA, we recorded how many objects were acquired by the agent in the exploration phase in each of the 50 test trials and plotted the mean and standard deviation in a bar plot.

### 8.8 Return Variance Analysis

Over 20 trials on Key-to-Door, we computed and compared two return variances based on trajectories from the same TVT agent. The first was the undiscounted return, $R_t = \sum_{t' \ge t} r_{t'}$. The second was computed as in Algorithm 2 and Algorithm 3 using TVT ($\gamma = \lambda = 0.96$), i.e., it was bootstrapped recursively,

$$
\tilde R_t = r_t + \gamma\big[\lambda \tilde R_{t+1} + (1 - \lambda)\hat V_{t+1}\big],
$$

and $r_t$ was modified by TVT.

9 Supplementary Figures
------------------------

Supplementary Figure 1: Passive Image Match Learning. *Left.* Full episode score. *Right.* P2 score. (γ = 0.96 for all models.)

Supplementary Figure 2: Passive Image Match with Varying Delay Period. All models learned to retrieve the P3 reward with no P2 delay, but performance is hampered at longer delays for models with no reconstructive loss.

Supplementary Figure 3: Passive Image Match with Varying Delay Period (Episodes). With the x-axis plotted in episodes, controlling for the number of additional steps due to the delay period, the RMA learned in roughly the same number of episodes regardless of delay length (0 seconds to 60 seconds).

Supplementary Figure 4: Passive Image Match (CIFAR-10). Using CIFAR-10 images [25] instead of colored squares as P1 and P3 images, the RMA was still able to perform the Passive Image Match task.

Supplementary Figure 5: Effect of P2 Reward Variance in Active Image Match. P2 reward variance was introduced by varying the probability and reward value of apple reward (see Section 7.10). For higher levels of P2 reward variance, the RMA models failed to solve Active Image Match, though TVT was largely unaffected.

Supplementary Figure 6: Active Image Match, 60 Second P2. The TVT agent was also able to solve an Active Image Match task with a 60-second P2 delay period.
Supplementary Figure 7: Key-to-Door: Black vs. Blue Key. With a black door in P3, TVT was able to solve the task just as easily with a blue key in P1, implying that content-based memory retrieval was flexible and not based on surface similarity between the key and door colors.

Supplementary Figure 8: Control Task DM Lab Learning. *a.* TVT (black) learned Natlab Varying Map Randomized just as well as the RMA. *b.* On Explore Goal Locations Small, TVT led to a modest decrement in final performance. *c.* On Psychlab Arbitrary Visuomotor Mapping, TVT did decrement final performance and slowed learning, though the agent's performance was still high compared to all but the RMA.

Supplementary Figure 9: Control Task DM Lab Final Performance. Final performance for 5 training runs from Supplementary Figure 8.

Supplementary Figure 10: Two Negative Keys level. *a.* In P1, the agent selects between a red and a blue key, distributed randomly in the room corners. The red key allows the agent to open the door in P3, receiving a negative reward of −1. The blue key leads to a negative reward of −10. No key selection leads to a negative reward of −20. *b.* TVT was able to solve this task, picking up the red key and receiving −1 on average in P3.

Supplementary Figure 11: Constant vs. Variable P2 Reward. The three curves shown are for the LSTM agent with γ = 0.998 in three variants of Key-to-Door: (i) zero apple reward (Section 7.8), (ii) a fixed number of apples, each with reward 5 (Section 7.9), and (iii) the full level, which has a variable number of apples per episode but the same expected return as the fixed-number-of-apples case. This analysis is discussed in Section 8.3. Variable P2 reward was maximally detrimental to performance.

Supplementary Figure 12: Learning Rate Search on Comparison Models (γ = 0.998). Learning rates used were $3.2 \times 10^{-7}$, $8 \times 10^{-7}$, $2 \times 10^{-6}$, $5 \times 10^{-6}$, and $1.25 \times 10^{-5}$, displayed from lightest to darkest in that order. In all analyses, the default learning rate of $5 \times 10^{-6}$ performed best. *a.* RMA with γ = 0.998 on Active Visual Match with apple reward $r_{\text{apple}} = 1$. *b.* LSTM with γ = 0.998 on the Key-to-Door task with variable apple reward as in Figure 4c in the main text, with P2 reward variance of 361.
Supplementary Figure 13: Effect of P2 Apple Reward in Passive and Active Image Match Tasks. *Upper Row.* On Passive, the RMA performed worse with larger discount factors, which are not needed to solve the task. *Lower Row.* On Active, the RMA models' performance at acquiring the distal reward degraded with the introduction of P2 reward. TVT remained stable with the introduction of P2 distractor reward.
564e8a97-7cc2-44f2-a1b6-6e9963924f4a
StampyAI/alignment-research-dataset/lesswrong
LessWrong
Shahar Avin On How To Regulate Advanced AI Systems

[Shahar Avin](https://www.shaharavin.com/) is a senior researcher at the [Center for the Study of Existential Risk](https://www.cser.ac.uk/) in Cambridge. In his past life, he was a Google engineer, though right now he spends most of his time thinking about how to prevent the risks that could arise if companies like Google end up deploying powerful AI systems, in part by organizing [AI Governance role-playing workshops](https://intelligencerising.org/). In this episode, we talk about a broad variety of topics, including how we could apply what Shahar learned running [AI Governance workshops](https://intelligencerising.org/) to [governing transformative AI](https://theinsideview.ai/shahar#transformative-ai), [AI Strategy](https://theinsideview.ai/shahar#ai-strategy), [AI Governance](https://theinsideview.ai/shahar#ai-governance), and [Trustworthy AI Development](https://theinsideview.ai/shahar#toward-trustworthy-ai-development), and we end up [answering some Twitter questions](https://theinsideview.ai/shahar#twitter-questions). Below are some highlighted quotes from our conversation (available on [Youtube](https://youtu.be/3T7Gpwhtc6Q), [Spotify](https://open.spotify.com/episode/1vvAKf8EBwErP5yGFRNoCT?si=1a28296cdfa94c01), [Google Podcast](https://podcasts.google.com/feed/aHR0cHM6Ly9hbmNob3IuZm0vcy81NmRmMjE5NC9wb2RjYXN0L3Jzcw/episode/MzJlMzk4YTAtYmMzZC00MDVkLWIzMTAtNTZhMmM2ZDc2MTg0?sa=X&ved=0CAUQkfYCahcKEwiI2sT3hY35AhUAAAAAHQAAAAAQAQ), [Apple Podcast](https://podcasts.apple.com/us/podcast/connor-leahy-eleutherai-conjecture/id1565088425?i=1000570841369)). For the full context of each of these quotes, you can find the accompanying [transcript](https://theinsideview.ai/shahar).

We Are Only Seeing The Tip Of The Iceberg
=========================================

The Most Cutting Edge AI Research Is Probably Private
-----------------------------------------------------

> “**I don’t know how much of the most cutting edge research today is public. I would not be confident that it is.** It is very easy to look at all of the stuff that is public and see a lot of it, and infer from the fact that you’re seeing a lot of public research that all research must therefore be public. I don’t think that is a correct inference to make.”

AI companies may not be showing all of their cards
--------------------------------------------------

> “My guess would be that they're not always showing all of the cards. It's not always a calculated decision, but there is a calculated decision to be made of, if I have a result, do I publish or not? And then what goes into the calculation is whether there is a benefit from publishing. **It increases your brand, it attracts more talent, it shows that you are at the cutting edge, it allows others to build on your result and then you get to benefit from building on top of their results. And you have the cost of, as long as you keep it for yourself, no one else knows it, and you can keep on doing the research**.”

Aligning Complex Systems Is Hard
================================

Narrow AIs Do Not Guarantee Alignment
-------------------------------------

> “**One failure mode is that there is an overall emergent direction that is bad for us. And another is that there is no emergent direction, but the systems in fact are conflicting with each other, undermining each other.** So one system is optimizing for one proxy.
It generates an externality that is not fully captured by its designers, which gets picked up by another system that has a bad proxy for it, and then tries to do something about it.”

Security failures are unavoidable for large, complex systems
------------------------------------------------------------

> “In particular, if you're building very large, complex, opaque systems, from a system-engineering or system-security perspective, you're just significantly increasing the ways things can go wrong, because you haven't engineered every little part of the thing to be 100% safe, and provably and verifiably secure. And **even provably and verifiably secure stuff could fail because you've made some bad assumptions about the hardware**.”

Why Regulating AI Makes Sense
=============================

Our World Is A Very Regulated World
-----------------------------------

> “**Our world is a very regulated world. We tend to see the failures, but we forget that none of this digital technology would exist around us without standards and interoperability.** We wouldn’t be able to move around if transport was not regulated and controlled and mandated in some way. If you don’t have rules, standards, norms, treaties, laws, you just get chaos.”

Compliance Is Part Of The Cost Of Doing Business
------------------------------------------------

> “Compliance is part of the cost of doing business in a risky domain. If you have a medical AI startup, you get people inspecting your stuff all the time because you have to pass through a bunch of regulations, and you could get fined or go to jail if you don’t do that. **The threat of going to jail is a very strong motivator for someone who just wants to go on building good tech for the world.** I’m much more worried in that respect about the US than I am about Europe, because Europe has a regulation-heavy approach to regulation, which also explains why they don’t have any very large players in the tech space.”

Concrete AI Regulation Proposals
================================

Data Is Much Harder To Regulate Than Compute
--------------------------------------------

> “Data is much harder to regulate than compute because **compute is a physical object**. You can quantify it. **If you have one GPU sitting in front of you, getting a second GPU just next to it is pretty much impossible. You have to go back to the GPU factory. Whereas if you have a bunch of data here and you want a copy of it in a folder next to it, it's basically free**.”

Alignment Red Tape And Misalignment Fines
-----------------------------------------

> “**We should have misalignment fines in the same way that we fine companies for causing harms.** It’s basically a way of internalizing the externalities. If you make a system that causes harm, you should pay for it, and the way we do it is through fines. But I also think there should be alignment red tape. **The more powerful your system is, the more you should be paying the red tape cost of proving that your system is safe and secure and aligned before you’re allowed to make a profit and deploy it in the world.**”

When Should You Regulate AI
===========================

Making Today’s AI Regulation “Future Ready”
-------------------------------------------

> “**Governments are now caring about AI where previously they did not**, and they care about AI for all of the current reasons: bias and privacy. **Once they care about AI, then the game is about making that “future ready”**.
You don't want just an ossified thing that only cares about privacy, even in a world with giant drone swarms and highly manipulative chatbots. **You want the regulation of today to be “updatable”, to take into account new risks, or the parts of government that created today's regulation to be willing to create new regulation**. Ideally you want to decrease the amount of time that it takes to update regulation to account for new risks, and there are various institutional designs that could make that happen.”

You Should Regulate An AI Explosion Before It Happens
-----------------------------------------------------

> “**If you want to regulate an explosion, you don’t regulate it as it’s happening, you regulate it before it’s happened.** Similarly here, **if you get to the point where the technology is radically transforming your world on a month by month or week by week basis, it’s too late to do this regulation**, unless the regulators are also sitting on top of very powerful AI that is helping them keep track of what’s happening in regulation. We need different regulatory processes.”

The Collingridge Dilemma
------------------------

> “When you want to regulate a technology, or steer a technology towards a good outcome, or any big change that you are predicting in the future: if you try to do it too far in advance, you don’t have the details of how the change is going to happen, and so you don’t have a good solution. If you do it too late, then the thing is pretty much locked in and you don’t have much ability to change it.
>
> **Trying to find the sweet spot in the middle, where you know enough to regulate, but it’s not too late to change how things are going to go, is the game of AI regulation**, AI governance. And you can make the game easier by putting in regulation early that can scale up or get adapted as you go along. You could have lots of people who are broadly in agreement that we need something, and put them in places of power. And so when it comes time to regulate, you have lots of allies in lots of places. You could generally teach people the fundamentals of why cooperation is good and why everyone dying is bad.”
aa3ce52d-753b-4e89-99d3-257d6f4e422a
StampyAI/alignment-research-dataset/eaforum
Effective Altruism Forum
Share your requests for ChatGPT Curious to see the most interesting things you've asked ChatGPT to do; I'm especially interested in red-teaming it, or in other requests whose responses exceeded your expectations. A few questions I asked ChatGPT whose responses impressed me were: 1) Write a story where humanity lives in a flourishing world 2) Write a story where an AI destroys the flourishing world (described above) 3) Write an action plan for Ukraine to win the war with Russia 4) Write a strategic and effective plan to end homelessness in San Francisco. What new features would you want to see in an advanced version, based on your recent interactions? I'd like to see a tool that can identify whether a response was written by ChatGPT or by an actual human, and I'm also keen to see a share button for sharing ChatGPT's responses instead of taking screenshots.
38bf0ba0-74d4-415c-afc3-cc5ab46df30c
trentmkelly/LessWrong-43k
LessWrong
Forecasting thread: How does AI risk level vary based on timelines? Crossposted to EA Forum While there have been many previous surveys asking about the chance of existential catastrophe from AI and/or AI timelines, none as far as I'm aware have asked about how the level of AI risk varies based on timelines. But this seems like an extremely important parameter for understanding the nature of AI risk and prioritizing between interventions. Contribute your forecasts below. I'll write up my forecast rationales in an answer and encourage others to do the same.
Elicit Prediction (forecast.elicit.org/binary/questions/726u_r_XC)
Elicit Prediction (forecast.elicit.org/binary/questions/8HnkL7Ekf)
Elicit Prediction (forecast.elicit.org/binary/questions/dlPDzREFM)
Elicit Prediction (forecast.elicit.org/binary/questions/EHL2INyQq)
Elicit Prediction (forecast.elicit.org/binary/questions/AETDAUmNF)
d78021d7-3301-4c4f-9478-ab43c7fb24fd
trentmkelly/LessWrong-43k
LessWrong
[SEQ RERUN] Adaptation-Executors, not Fitness Maximizers Today's post, Adaptation-Executers, not Fitness-Maximizers was originally published on 11 November 2007. A summary (taken from the LW wiki):   > A central principle of evolutionary biology in general, and evolutionary psychology in particular. If we regarded human taste buds as trying to maximize fitness, we might expect that, say, humans fed a diet too high in calories and too low in micronutrients, would begin to find lettuce delicious, and cheeseburgers distasteful. But it is better to regard taste buds as an executing adaptation - they are adapted to an ancestral environment in which calories, not micronutrients, were the limiting factor. Discuss the post here (rather than in the comments to the original post). This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was Fake Optimization Criteria, and you can use the sequence_reruns tag or rss feed to follow the rest of the series. Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.
13ccc447-f97b-4720-a562-2b44f6c78913
StampyAI/alignment-research-dataset/alignmentforum
Alignment Forum
It matters when the first sharp left turn happens *Thanks to Evan Hubinger for comments on these ideas.* Introduction ============ A “sharp left turn” is a point where [capabilities generalize beyond alignment](https://www.alignmentforum.org/posts/GNhMPAWcfBCASy8e6/a-central-ai-alignment-problem-capabilities-generalization). In a sharp left turn an AI becomes much more capable than aligned, and so starts to exploit flaws in its alignment. This can look like [Goodharting](https://www.alignmentforum.org/tag/goodhart-s-law), where strong optimization pressure causes outer alignment failure because the base goal isn’t identical to what we want. Or it can look like [deceptive alignment](https://www.alignmentforum.org/posts/zthDPAjh9w6Ytbeks/deceptive-alignment), where the model is aligned to proxy goals that aren’t identical to the base goal, but we don’t notice until the model is capable enough to make the proxies fail. However they happen, sharp left turns are bad. An important question, though, is: when will the first sharp left turn happen? Timing ====== Consider three scenarios: 1. **Weak:** The sharp left turn happens before models are able to cause existential risks. For instance, maybe GPT-4 has a sharp left turn but isn’t able to reliably execute on its plans. 2. **Strong:** The sharp left turn happens once capabilities are very super-human and dangerous if misused. In this world AI’s are doing the bulk of the world’s scientific research by the time alignment unravels. 3. **Human-level:** The sharp left turn happens while capabilities are around human level, meaning both dangerous and not game-changingly helpful for research. I think that the most dangerous of these is the human-level world. In the “weak” scenario, we get to see the sharp left turn coming. We potentially get lots of experience with models looking aligned and then that alignment failing with increasing scale. In this world, we effectively get multiple shots at alignment because the first badly-misaligned models just aren’t that dangerous. We get to experiment with deceptively aligned models and get a grounded empirical understanding of why models become deceptively aligned and how we might stop that. In this world, alignment research gets a chance to become a paradigmatic scientific field before the models get too capable. In the “strong” scenario, we get to do alignment research with super-human models that are still aligned. In this world, we don’t have to solve the technical challenge of alignment entirely on our own. Alignment research potentially gets much further along before we have to have an airtight solution and prevent sharp left turns. Both of these seem like pretty hopeful worlds. By contrast, the world where sharp left turns happen near human-level capabilities seems pretty dangerous. In that world, we don’t get to study things like deceptive alignment empirically. We don’t get to leverage powerful AI researcher tools. We have to solve the problem entirely on our own, and we have to do it without any useful empirical feedback. This seems really bad. Scales ====== The timing of sharp left turns depends on a few capability scales, some of which I conflated above: 1. The **danger scale**, above which the model is capable enough to pose an existential risk. 2. The **research scale**, above which the model is able to do scientific research better than humans. 3. The **left-turn scale**, above which the model is able to undergo a sharp left turn. 
A useful anchor here is given by deceptive alignment, which suggests the scale at which a model can successfully reason about its own training process. The three scenarios I outlined are characterized by different orderings of these scales: 1. **Weak:** (left-turn < danger) 2. **Strong:** (research < left-turn) 3. **Human-level:** (left-turn ~ research, left-turn ~ danger) There can be other orderings, leading to other outcomes. For instance, we could live in a world with (danger < left-turn < research). In that world, the left turn happens at a scale that’s dangerous but incapable of super-human research. Imagine a model that’s very capable of developing and executing plans using information freely available on the internet. Such a model could pose an existential risk by engineering pathogens that we already know how to make, without doing any novel R&D, and deploying them with competent but not superhuman logistics. Capabilities are (sort of) multi-dimensional ============================================ Of course capabilities aren’t strictly one-dimensional, so talking about “the scale of capabilities” conflates a few different ideas. Concretely, I think a model with fixed research ability could be either more or less dangerous, depending on other dimensions of its capabilities like “competence interacting with people” and “robust plan design”. These skills vary significantly in humans, and it seems plausible that we can engineer models to be more or less capable along these different dimensions. That’s not to say that there’s complete freedom here. There’s definitely correlation between capabilities (e.g. lots of capabilities improve with scale). And in the limit of extreme research ability a model should be able to learn whatever other capabilities it needs. Intelligence is closely related to general purpose optimization ability, and to the extent that this holds we should expect different capabilities to scale together. But we might live in worlds where it only takes models twice as capable as humans to solve the technical challenges of alignment, and those models may not be reflectively stable and may have lots of flawed abilities. In those worlds there’s a lot of benefit to thinking about which capabilities we want to weaken in models, and to trying to arrange that. Outlook ======= If we could know when to expect a sharp left turn that might change what we do. If we knew that a sharp left turn would happen long before dangerous capabilities, we might focus on developing alignment benchmarks and more empirical study of alignment failures. If we knew that it would happen long after powerful research capabilities, we might focus on applying those capabilities towards alignment work. It seems plausible we can learn more about this timing question by [studying low-likelihood outputs](https://www.alignmentforum.org/posts/2AvX8cX47CdwjbkjY/we-may-be-able-to-see-sharp-left-turns-coming), studying how one-or-multi-dimensional capabilities are, and reasoning about the kinds of capabilities that are dangerous and the kind that are helpful. Along similar lines, even if we don’t know when to expect a sharp left turn, there may be things we can do to shift that timing in a favorable direction. For instance, we can put weak models in environments that encourage alignment failure so as to study left turns empirically (shifting towards the "weak" worlds). 
And we can preferentially focus alignment efforts on whatever the state of the art is at any time (currently large language models), rather than on very different architectures, so as to delay left turns until after we get superhuman models we can use to help with alignment research.
88210d63-ae83-4ad2-b724-66ed75d6c5f4
StampyAI/alignment-research-dataset/arxiv
Arxiv
Rethinking and Recomputing the Value of ML Models 1 Introduction --------------- A few position papers have recently begun to challenge the assumptions that have driven the notion of quality in Machine Learning (ML) Sayin et al. ([2021](#bib.bib53 "The science of rejection: a research area for human computation")); Casati et al. ([2021](#bib.bib17 "On the value of ml models")) - namely the predominance of the notions of accuracy, precision, recall, and various measures of calibration errors. At the heart of this stance, there are two observations: (i) ML models are almost always applied with a default option where the model can abstain or the inference can be rejected (as in Figure [1](#S1.F1 "Figure 1 ‣ 1 Introduction ‣ Rethinking and Recomputing the Value of ML Models")), and (ii) the value (cost) of correct/incorrect inferences or rejections is a property of the use case where the model is applied. When we adopt this perspective, the way we evaluate or select a model and the notion of what we consider to be “learning” change. ![](https://media.arxiv-vanity.com/render-output/7177454/figs/wf.png) Figure 1: Typical implementation of ML models into an ML solution workflow - edited from Sayin et al. ([2021](#bib.bib53 "The science of rejection: a research area for human computation")) The scenario in Figure [1](#S1.F1 "Figure 1 ‣ 1 Introduction ‣ Rethinking and Recomputing the Value of ML Models") is central, not a corner case, and obviously applies to scenarios where errors are costly (e.g., self-driving cars). In such cases, we ask humans to take control rather than making an unsure inference. It is also the norm and typical path in enterprise AI applications: Siri and Alexa do not always return the most likely action if they are unsure. When we think in these terms, and if for simplicity we discuss the problem in terms of classification (though the concepts are identical for any AI capability), what we really mean to say is that our model is applied as a selective classifier Geifman and El-Yaniv ([2017](#bib.bib15 "Selective classification for deep neural networks")). Once we realize that selective AI models are the rule, not the exception, our approach to model evaluation changes. An ML model provides a utility in each use case, and enterprises adopting a model may come up with some overall notion of utility, or value function. In the simplest case, this value function could be accuracy or F1, but we have already seen that models are deployed as selective classifiers, and now we see that the value depends on the application use case, since for example the model being “wrong” (or right) has different consequences. The “value” of an “ML solution workflow” depends on how often the workflow rejects the predictions, on the correctness patterns of the predictions that the workflow lets through (not rejected), and on the “cost” of errors vs the benefits of correct predictions. Notice that the value of the solution workflow depends on the use case (e.g., on how costly errors are), so that a workflow w1 may be more valuable than w2 for use case uc1, but the reverse may be true for uc2. The value of a model therefore depends on the value of the best solution workflows we can build given that model. When we need to make a decision on which model to deploy given a set of options, the value of the resulting solution workflow is the right driving factor. This notion of quality is not what model accuracy, F1, or AUC measure.
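To make the reject-option workflow of Figure 1 concrete, the following is a minimal sketch of a selective classifier wrapper. It is not code from the paper's companion repository; the `SelectiveClassifier` name is hypothetical, and the `predict_proba`/`classes_` interface (as in scikit-learn) is an assumed convention.

```python
from dataclasses import dataclass

REJECT = "REJECT"  # sentinel playing the role of the special reject label

@dataclass
class SelectiveClassifier:
    """Wraps a model exposing predict_proba; abstains below a confidence threshold."""
    model: object      # e.g., a fitted scikit-learn classifier (assumed interface)
    threshold: float   # minimum top-class confidence required to accept a prediction

    def predict(self, X):
        proba = self.model.predict_proba(X)   # shape: (n_samples, n_classes)
        conf = proba.max(axis=1)              # confidence of the top prediction
        labels = self.model.classes_[proba.argmax(axis=1)]
        # Low-confidence items are routed to the human/default path (the reject option).
        return [y if c >= self.threshold else REJECT for y, c in zip(labels, conf)]
```

Everything that follows (value functions, thresholds, calibration) can be phrased in terms of the accepted and rejected predictions such a wrapper produces.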
We may argue that accuracy metrics are a “good enough” proxy for data science-led model improvements and for model selection, and that all we need is to pick the model with the best accuracy and deploy it by filtering out predictions with confidence below a threshold, but in many cases this is wrong, both in how we select the model and in how we integrate it into the workflow, and can even unknowingly lead to negative utilities (meaning that we are better off without AI). To some extent, all this is so trivial that it would not make sense to waste even a second of the reader’s time. There is nothing special or difficult in having value functions, picking a model out of a set of models that performs well given a value function, or assessing if a model performs well over a class of value functions. However, in this paper, we show that if we accept the points above, then the methods we use to measure, compare, and even train models change, and the implications of such changes are often neglected in the literature as well as in model leaderboards. In this paper, we specifically make the following contributions: * We show that universal metrics used for model evaluation can lead to wrong decisions. * We show that naively applying the common approach to reject predictions (filtering by threshold) leads to low or even negative model value and to significant loss of “value” with respect to what can be achieved. * We show that the above is true even when calibration methods are applied - and discuss why measures of calibration errors fail to capture the most important property of confidence scores: that confidence is high exactly when predictions are correct. * Finally, we show that simple, decades-old models, especially when trained in domain even with simple text encoders, behave well and often better than large, complex models, and we provide an intuition of why that may be. 2 Related work --------------- Mimicking the typical use of machine learning models in many practical applications, a number of approaches rely on the combination of a machine learning model making an initial prediction and a human annotator taking over when the model’s confidence is not high enough Callaghan et al. ([2018](#bib.bib3 "MechanicalHeart: a human-machine framework for the classification of phonocardiograms")). Selective classifiers are specifically conceived for this use, by including a rejection mechanism to decide when to abstain from making a prediction. The literature on selective classifiers is quite extensive, covering learning algorithms ranging from nearest-neighbour classifiers Hellman ([1970](#bib.bib64 "The nearest neighbor classification rule with a reject option")) to SVM Fumera and Roli ([2002](#bib.bib63 "Support vector machines with embedded reject option")) and neural networks Cordella et al. ([1995](#bib.bib65 "A method for improving classification reliability of multilayer perceptrons")); De Stefano et al. ([2000](#bib.bib66 "To reject or not to reject: that is the question-an answer in case of neural classifiers")); Geifman and El-Yaniv ([2017](#bib.bib15 "Selective classification for deep neural networks")). The effectiveness of this solution is, however, heavily dependent on the reliability of machine confidence, which has been shown to be very poor, especially for deep learning Guo et al. ([2017](#bib.bib51 "On calibration of modern neural networks")); Balda et al. ([2020](#bib.bib2 "Adversarial examples in deep neural networks: an overview")).
Hybrid Human-AI systems aim at solving classification problems by leveraging both humans and machines Raghu et al. ([2019](#bib.bib30 "The algorithmic automation problem: prediction, triage, and human effort")); Wilder et al. ([2021](#bib.bib31 "Learning to complement humans")). Crowds have been extensively employed to develop this type of system, from learning crowd vote aggregation models from features of the crowd task Kamar et al. ([2012](#bib.bib14 "Combining human and machine intelligence in large-scale crowdsourcing")), to leveraging crowds to learn features of ML models Cheng and Bernstein ([2015](#bib.bib13 "Flock: hybrid crowd-machine learning classifiers")); Rodriguez et al. ([2014](#bib.bib4 "Crowd-based mining of reusable process model patterns")). Understanding a classifier’s properties is a key step to effectively use it Jiang et al. ([2018](#bib.bib5 "To trust or not to trust a classifier")), and crucially relies on the notion of confidence for individual predictions. Various confidence-based techniques exist in the literature to detect unreliable predictions, such as using the entropy of the softmax predictions Teerapittayanon et al. ([2017](#bib.bib7 "BranchyNet: fast inference via early exiting from deep neural networks")), measuring trust scores of classifiers based on the distance of samples to a calibration set Jiang et al. ([2018](#bib.bib5 "To trust or not to trust a classifier")), finding a confidence threshold (using either Shannon entropy Shannon ([1948](#bib.bib12 "A mathematical theory of communication")), the Gini coefficient Bendel et al. ([1989](#bib.bib10 "Comparison of skewness coefficient, coefficient of variation, and gini coefficient as inequality measures within populations")), or norm-based methods Ng ([2004](#bib.bib11 "Feature selection, l1 vs. l2 regularization, and rotational invariance"))) that maximizes the coverage given a target accuracy Bukowski et al. ([2021](#bib.bib9 "Decision confidence assessment in multi-class classification")), and using semantics-preserving data transformations to estimate confidence Bahat and Shakhnarovich ([2020](#bib.bib8 "Classification confidence estimation with test-time data-augmentation")). However, these confidence measures should be complemented with an appropriate value metric to assess the classifier’s utility in real-world applications. ![](https://media.arxiv-vanity.com/render-output/7177454/figs/options.png) Figure 2: Typical ways of implementing selectivity in classification. 3 Measuring model “value” -------------------------- ### 3.1 The setting Selective classifiers can be implemented in several ways: 1. We take a model m that outputs a prediction p and a confidence cp (or a confidence vector c with a confidence for each of the possible answers). Then, we filter the predictions to take only those above a certain confidence threshold (Fig. [2](#S2.F2 "Figure 2 ‣ 2 Related work ‣ Rethinking and Recomputing the Value of ML Models")a). 2. The model m outputs predictions (and possibly a confidence), but we apply a second model s (the selector) that decides whether to accept the prediction or not, based on some features of the input item i (Fig. [2](#S2.F2 "Figure 2 ‣ 2 Related work ‣ Rethinking and Recomputing the Value of ML Models")b). 3. A hybrid of the two above cases is where the selector is actually a recalibrator r that can either take as input only the prediction and confidence measure, or also the input features of i, and adjust the confidence vector.
We call the first a feature-agnostic calibrator and the latter a feature-aware calibrator (Fig. [2](#S2.F2 "Figure 2 ‣ 2 Related work ‣ Rethinking and Recomputing the Value of ML Models")c). 4. The model m is already trained to only output predictions that are “good enough” and includes an “I don’t know” class (Fig. [2](#S2.F2 "Figure 2 ‣ 2 Related work ‣ Rethinking and Recomputing the Value of ML Models")d). The first case is by far the most common, at least in our experience. The second case is an extension and generalization of the first case, in two ways: it can take features as input (that is, s can be trained as opposed to “just” being a formula), and it can filter based on any formula. Both the feature-aware rejector and calibrator require some form of “training” or machine teaching. An important conceptual distinction here is that a feature-aware rejector makes sense in cases where we know the use case, because that knowledge will tell us when to reject. On the other hand, a feature-aware calibrator can be considered of general usefulness. However, if we consider a feature-aware calibrator, one point we make is that in that case calibration and learning really are the same thing. In the end, we have a model m′=r(m), and one could argue (we do) that calibration in this case is no different than learning or fine-tuning. In formalizing “value”, we will progressively make a few assumptions that i) allow us to simplify the presentation of the problem without altering the essence of the concepts, ii) are reasonable in many if not most use cases, and iii) make the definition of the value function easier to understand and interpret for the users who eventually have to deploy ML into their companies. This is important: people understand accuracy because it is simple, and that has value even if accuracy is “inaccurate” as a metric, and most users will not be able to express complex value functions. We also scope the conversation to classification problems, as this makes it easy to ground the examples and terminology, and because it is easier to define a notion of accuracy. ### 3.2 Definition of value We have a classifier g that operates on items x∈D and returns either a predicted class y∈Y or a special label yr, denoting "rejection" of the prediction. Given the above, we can compute the average value per prediction of applying a model g over D (so note that what we are talking about here is the value of a solution workflow). Specifically,

V(g,D) = ρVr + (1−ρ)(αVc + Σij[Ω⊙Vw]ij)   (1)

where ρ is the proportion of items in D that are rejected by g (classified as yr), α is the accuracy for predictions above threshold, Vr and Vc are the value of rejecting an item and of classifying it correctly respectively, Ω is a matrix denoting the proportion of predictions (above threshold) in each cell of the confusion matrix, Vw is a matrix with the cost for each type of error (set to zero on the main diagonal corresponding to correct predictions), and ⊙ denotes the Hadamard (element-wise) product, of which we take the summation across all elements ij. Notice that ρ, α, Ω all depend on D and g, and we omit the indices to simplify notation. Also, if our classification problem has |Y| classes, then Ω and Vw are |Y|×|Y| (yr is not included here). An alternative representation would be to just say that V(g,D)=Ω′V′, where the confusion and value matrices incorporate the reject class.
This would allow us to model the case where the value of rejections and of correct predictions is also class-dependent. Instead, if we only consider costs based on what we misclassify (based on the actual class), then Ω and Vw become vectors, and in the most common case where all wrong predictions are considered equally bad in a first approximation, Ω and Vw are scalars, with Ω=1−α, so the formula simplifies to:

V(g,D) = ρVr + (1−ρ)(αVc + (1−α)Vw)   (2)

At this point, while we could carry on with this math, we simplify the notation for several reasons: the first is, well, to simplify the notation. The second is to remove dimensionality (values can be measured in dollars, but here we care about relative values because we want to compare models and learning strategies), and the third is to arrive at a formulation that is digestible for process owners (the people who apply AI in their processes), for whom it may be hard to come up with the three cost parameters/vectors. None of the above simplifications change the concepts presented. Here we depart from Sayin et al. ([2021](#bib.bib53 "The science of rejection: a research area for human computation")) and define as baseline the case where we do not have ML, or, equivalently, we reject any prediction. We set this baseline at 0, which means that we set Vr=0. This makes it easy for us to evaluate a model in terms of whether it improves on the baseline or not - and therefore in terms of whether we should adopt AI or not for a given problem.

V(g,D) = (1−ρ)(αVc + (1−α)Vw)   (3)

We also express Vw in terms of Vc, as in Vw=−kVc, where k is a constant telling us how bad an error is with respect to getting a correct prediction. This leads us to:

V(g,D) = Vc(1−ρ)(α − k(1−α))   (4)

Vc is a scaling factor for the above value formula. When reasoning about an AI-powered solution workflow we do not really care about that factor; we can think in terms of value “per unit of Vc dollars”, or equivalently normalize by the magnitude of Vc, so we can focus on value. From now on we therefore focus on the “value per dollar unit of a correct prediction”, V′=V/Vc. We avoid introducing a new symbol and, without loss of generality with respect to the above equations, we set Vc=1 and get:

V(g,D) = (1−ρ)(α − k(1−α))   (5)

Notice that nothing really changes in the concepts we want to stress between equations [1](#S3.E1 "(1) ‣ 3.2 Definition of value ‣ 3 Measuring model “value” ‣ Rethinking and Recomputing the Value of ML Models") and [5](#S3.E5 "(5) ‣ 3.2 Definition of value ‣ 3 Measuring model “value” ‣ Rethinking and Recomputing the Value of ML Models"), but the latter simplifies the presentation.
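As a sanity check on equation (5), the value of a selective classifier's outputs can be computed in a few lines. This is an illustrative sketch rather than the paper's companion code; the `REJECT` sentinel and the function name are assumptions carried over from the sketch above.

```python
import numpy as np

REJECT = "REJECT"  # same sentinel as in the selective-classifier sketch above

def value(y_true, y_pred, k):
    """Average value per item, equation (5): V = (1 - rho) * (alpha - k * (1 - alpha)).

    The simplifications from the text are baked in: V_r = 0 (rejecting everything
    is the zero-value baseline) and V_c = 1 (value is measured per dollar unit of
    a correct prediction); k says how bad an error is relative to a correct answer.
    """
    y_true = np.asarray(y_true, dtype=object)
    y_pred = np.asarray(y_pred, dtype=object)
    accepted = y_pred != REJECT
    if not accepted.any():
        return 0.0                     # everything rejected: the baseline value
    rho = 1.0 - accepted.mean()        # rejection rate
    alpha = (y_pred[accepted] == y_true[accepted]).mean()  # accuracy on accepted items
    return (1.0 - rho) * (alpha - k * (1.0 - alpha))
```

For instance, at k=4 a workflow that accepts 80% of items with 90% accuracy on them has V = 0.8 · (0.9 − 4 · 0.1) = 0.4, while the same accuracy at k=10 gives a negative value.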
Algorithm 1: Experiment flow

1: for each (model, task) pair do
2:   Train or fine-tune the model using the training set
3:   Analyze the model’s confidence distribution on the test set
4:   Analyze the model’s performance on the test set based on traditional accuracy metrics
5:   Perform a value-based analysis considering different values of k:
6:   for each k do
7:     Find the theoretical threshold t based on k (t = k/(k+1))
8:     Find the empirical threshold t_empirical (which maximizes the value) on the validation set
9:     Find the value (based on t) and the empirical value (based on t_empirical) on the test set
10:   end for
11:   Output the Value function (plot Value vs k)
12:   Plot confidence distributions
13: end for

### 3.3 Filtering by threshold We focus now on the most common situation observed in practice, the one in which the model selectivity is applied by thresholding confidence values and rejecting predictions that have confidence c less than a threshold τ (case (a) in Figure [2](#S2.F2 "Figure 2 ‣ 2 Related work ‣ Rethinking and Recomputing the Value of ML Models")). In this setting, we are given a model m that processes items x∈D and returns a vector of confidences (one per class). Typically this is the output of a softmax. Specifically, for each x, we consider the pair ŷ,ĉ corresponding to the top-level prediction of m(x) and the confidence associated with that prediction. Given a threshold t, we define a function s as:

s(ŷ,ĉ,t) = ŷ if ĉ ≥ t, and yr otherwise,

where yr is a special class label denoting “rejection” of the prediction. Our classifier g is therefore now expressed in terms of m and t. This means that we can express the value as a function of m,D,t. In a given use case, when we are given a model m and have knowledge of Ω (or of k in the simplified case), we select the threshold τ∈[0,1] that optimizes V(g,D) (we assume here τ is unique, or that we randomly pick one if not). This means that we can express the value of our classification logic as a function of (m,D,k):

V(m,D,k) = (1−ρτ)(ατ − k(1−ατ))   (6)

Notice that τ can be set empirically on some tuning dataset D (it depends on m,D,k), and ρτ and ατ reflect the proportions ρ and α given τ. However, if we are aware of properties of the confidence vectors, we can set τ regardless of D. For example, if we assume perfect calibration (where the expected accuracy for a prediction of confidence c is c), then we know that the threshold is at the point where the value of accepting a prediction is greater than zero. If calibration is perfect, then ατ=τ. This means that to have V(m,D,k)>0 we need τ−k+kτ>0, which means τ>k/(k+1). This conforms to intuition: if k is large, it almost never makes sense to predict; better to go with the default. If k=0 (no cost for errors), we might as well always predict. If k=1 (errors are the mirror image of correct predictions), then our threshold is 0.5.
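Both ways of setting τ can be sketched in a few lines; this is an assumed implementation for illustration, not the paper's companion code. The theoretical threshold uses the closed form k/(k+1) derived above, while the empirical threshold searches over the confidences observed on a validation set.

```python
import numpy as np

def theoretical_threshold(k):
    """Break-even threshold under perfect calibration: accept iff confidence > k/(k+1)."""
    return k / (k + 1.0)

def empirical_threshold(conf, correct, k):
    """Threshold maximizing the value of equation (6) on a held-out validation set.

    conf:    top-class confidences of the model on the validation items
    correct: booleans marking whether each top-class prediction was right
    """
    conf = np.asarray(conf, dtype=float)
    correct = np.asarray(correct, dtype=bool)
    n = len(conf)
    best_t, best_v = 1.1, 0.0              # a threshold above 1 rejects everything: V = 0
    for t in np.unique(conf):              # candidate thresholds at observed confidences
        acc = conf >= t
        alpha = correct[acc].mean()        # accuracy among the accepted predictions
        v = (acc.sum() / n) * (alpha - k * (1.0 - alpha))
        if v > best_v:
            best_t, best_v = t, v
    return best_t, best_v
```

When errors are costly and the model is badly calibrated, the two thresholds can differ substantially, which is exactly what the experiments below measure.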
4 Experiments -------------- We can experiment with various angles based on the concepts described. In this paper, we explore: * whether accuracy is indicative of model quality, and whether a less accurate model may be preferable to a more accurate one, thereby implying that taking decisions and determining leaderboards based on accuracy could be a limiting perspective at best. * how to set the confidence threshold based on value, and the extent to which calibration or threshold tuning affect value. * which models, and in which use cases, perform well for different values of k. Specifically, we analyze the behavior of both simple and state-of-the-art models over various datasets and text encoders, and provide insights on what model developers and process owners should look for in a model and on how to deploy it in a selective fashion. We refer the reader to our GitHub repo (<https://tinyurl.com/rethinking-value-of-ml-models>) for the companion code.

| Task | Dataset(s) | Models | Model details |
| --- | --- | --- | --- |
| Hate speech recognition | Hate-speech detection | Badjatiya et al. ([2017](#bib.bib25 "Deep learning for hate speech detection in tweets")), Agrawal and Awekar ([2018](#bib.bib24 "Deep learning for detecting cyberbullying across multiple social media platforms")) | Leader-board models |
| | Hate Speech&Offensive Language | LogR, MLP1, MLP4 | from scikit-learn library |
| Clickbait recognition | Clickbait detection | fullnetconc, weNet, lingNet, fullNet | Leader-board models |
| Sentiment analysis | Multi-Domain Sentiment Analysis (MDS) | mttri Ruder and Plank ([2018](#bib.bib1 "Strong baselines for neural semi-supervised learning under domain shift")) | Leader-board |
| | | Google’s T5-base | fine-tuned for sentiment analysis |
| | | SieBERT Heitmann et al. ([2020](#bib.bib36 "More than a feeling: benchmarks for sentiment analysis accuracy")) | fine-tuned RoBERTa-large Liu et al. ([2019](#bib.bib37 "RoBERTa: a robustly optimized bert pretraining approach")) |
| | | LogR, MLP1, MLP4 | from scikit-learn library |
| | | GPT-3 | we fine-tuned for binary sentiment analysis |
| | Twitter US Airline Sentiment | LogR, MLP1, MLP4 | from scikit-learn library |
| | Coronavirus tweets NLP | LogR, MLP1, MLP4 | from scikit-learn library |
| Content classification | DBPedia Classes | LogR, MLP1, MLP4 | from scikit-learn library |
| | Yelp-5 | LogR, MLP1, MLP4 | from scikit-learn library |
| | News Category Dataset | LogR, MLP1, MLP4 | from scikit-learn library |
| Intent classification | Clinc150 | LogR, MLP1, MLP4 | from scikit-learn library |

Table 1: Tasks, datasets and models used in the experiments

### 4.1 Experimental Setup Tasks, Datasets and Leaderboards We experimented on a set of text classification tasks (see Table [1](#S4.T1 "Table 1 ‣ 4 Experiments ‣ Rethinking and Recomputing the Value of ML Models") and Table [5](#A0.T5 "Table 5 ‣ Rethinking and Recomputing the Value of ML Models") in the appendix) where making errors is especially harmful. Algorithm [1](#alg1 "Algorithm 1 ‣ 3.2 Definition of value ‣ 3 Measuring model “value” ‣ Rethinking and Recomputing the Value of ML Models") shows our experiment flow. *Hate-speech detection on Twitter.* Arango et al. ([2019](#bib.bib23 "Hate speech detection is not as easy as you may think: a closer look at model validation")) analyzed two widely used models (Agrawal and Awekar, [2018](#bib.bib24 "Deep learning for detecting cyberbullying across multiple social media platforms"); Badjatiya et al., [2017](#bib.bib25 "Deep learning for hate speech detection in tweets")) and tested them on popular Twitter hate-speech datasets (Waseem and Hovy, [2016](#bib.bib42 "Hateful symbols or hateful people? predictive features for hate speech detection on Twitter");
Davidson et al., [2017](#bib.bib26 "Automated hate speech detection and the problem of offensive language"); Zhang et al., [2019](#bib.bib27 "Grunn2019 at SemEval-2019 task 5: shared task on multilingual detection of hate")) with different settings. We replicated the original tests in Experiment 1 and used the Arango et al. ([2019](#bib.bib23 "Hate speech detection is not as easy as you may think: a closer look at model validation")) settings in Experiment 2 (more details in the appendix [A.3](#A1.SS3 "A.3 Experiment details on Hate Speech Dataset ‣ Appendix A Supplemental material ‣ Rethinking and Recomputing the Value of ML Models")).

Figure 3: Value curves of binary datasets for increasing k.

*Clickbait detection.* The Clickbait Challenge on the Webis Clickbait Corpus 2017 (<https://webis.de/data/webis-clickbait-17.html>) was about classifying Twitter posts as clickbait or not. Both training and test sets are publicly available (<https://zenodo.org/record/5530410#.YWcFtC8RrRV>), while each team was free to choose a subset of the training set for validation (we followed the "blobfish" team). *Multi-Domain Sentiment Analysis Dataset (MDS).* Sentiment analysis based on a publicly available dataset for domain adaptation (<http://nlpprogress.com/english/domain_adaptation.html>). The data includes four categories of Amazon products (DVD, Books, Electronics, and Kitchen), and the task is to learn from one of these domains and analyze the sentiment on the others. In addition to the above binary classification problems and data, we used seven publicly available multi-class datasets with different class distributions (see Table [5](#A0.T5 "Table 5 ‣ Rethinking and Recomputing the Value of ML Models") in the appendix for details). Models. We used various models for each task in our experiments (see Table [1](#S4.T1 "Table 1 ‣ 4 Experiments ‣ Rethinking and Recomputing the Value of ML Models") and appendix [A.2](#A1.SS2 "A.2 Models ‣ Appendix A Supplemental material ‣ Rethinking and Recomputing the Value of ML Models") for the details). For the Hate Speech and Clickbait datasets we tested the leaderboard models. For the MDS dataset, we used the leaderboard model “Multi-task tri-training (mttri)” by Ruder and Plank ([2018](#bib.bib1 "Strong baselines for neural semi-supervised learning under domain shift")), two transformer models (a T5-base model fine-tuned for sentiment analysis (<https://tinyurl.com/t5-base-finetuned-sentiment>) and SieBERT Heitmann et al. ([2020](#bib.bib36 "More than a feeling: benchmarks for sentiment analysis accuracy")), a fine-tuned checkpoint of RoBERTa-large Liu et al. ([2019](#bib.bib37 "RoBERTa: a robustly optimized bert pretraining approach"))), as well as a simple Logistic Regression (LogR) model and two multi-layer perceptron models from the scikit-learn library (<https://scikit-learn.org/>) with one (MLP1) and four (MLP4) hidden layers respectively. We used LogR, MLP1 and MLP4 for the multi-class datasets.
We tested simple models with different text encoders: (i) TF-IDF, (ii) MPNET, and (iii) nnlm (details in [A.4](#A1.SS4 "A.4 Text encoders ‣ Appendix A Supplemental material ‣ Rethinking and Recomputing the Value of ML Models")), but for simplicity we show the results with TF-IDF (see the appendix, Figure [16](#A1.F16 "Figure 16 ‣ A.6.3 Multi-class datasets - Supplementary Results ‣ A.6 Further Results ‣ Appendix A Supplemental material ‣ Rethinking and Recomputing the Value of ML Models"), for further results). Cost Settings. Following the simplification in Section [3.2](#S3.SS2 "3.2 Definition of value ‣ 3 Measuring model “value” ‣ Rethinking and Recomputing the Value of ML Models"), we set Vr=0 and Vc=1, and then test the models using different values of k∈[0,10]. In binary tasks, we consider the cost of false positives (k_fp) and false negatives (k_fn) separately. ### 4.2 Results #### 4.2.1 Accuracy vs Value We first investigated the extent to which models are robust across varying cost factors k, and consequently also whether we can use accuracy metrics to select the “best” model to deploy in an ML platform. We did so both for challenge/leaderboard models and for the set of small/simple and larger models as described. As an example, Table [2](#S4.T2 "Table 2 ‣ 4.2.1 Accuracy vs Value ‣ 4.2 Results ‣ 4 Experiments ‣ Rethinking and Recomputing the Value of ML Models") shows results on the MDS dataset: even for fairly small and very realistic cost factors, the model we would choose with a value-oriented approach differs from what we would choose based on accuracy. The appendix shows many other cases where this happens, as well as cases where instead the model with the best accuracy also has the best value across several cost factors. Notice that k=4 is actually a very small and realistic cost factor: it means that “being wrong is 4 times as bad” with respect to the advantage of being right. Most scenarios have values of k way more extreme. Notice also that accuracy corresponds to the case where we do not reject any predictions. This is equivalent to setting k=0. Indeed, not filtering examples (accepting even low-confidence predictions) means that we do not care about being wrong. Another important observation is that in many cases, across models and datasets, we often find that even leaderboard models have negative value, even for cost factors of k=1.

| Model | Accuracy | Value (k=1) | Value (k=2) | Value (k=4) | Value (k=8) | Value (k=10) |
| --- | --- | --- | --- | --- | --- | --- |
| LogReg | 0.762 | 0.524 | 0.339 | 0.162 | 0.053 | 0.033 |
| MLP1 | 0.749 | 0.497 | 0.327 | 0.18 | 0.081 | 0.062 |
| MLP4 | 0.735 | 0.47 | 0.24 | -0.143 | -0.78 | -1.06 |
| mttri | 0.808 | 0.616 | 0.441 | 0.148 | -0.354 | -0.58 |
| T5 | 0.784 | 0.568 | 0.352 | -0.08 | -0.944 | -1.376 |
| SieBERT | 0.842 | 0.685 | 0.527 | 0.217 | -0.397 | -0.705 |

Table 2: Performance of models on the MDS dataset, TARGET = Electronics (using the theoretical threshold)

#### 4.2.2 Calibration vs Threshold Optimization As explained in Section [3.3](#S3.SS3 "3.3 Filtering by threshold ‣ 3 Measuring model “value” ‣ Rethinking and Recomputing the Value of ML Models"), we expect the theoretical threshold to maximize the value of a model if it is well-calibrated. If this assumption is not true, then we should either find an “optimal” threshold empirically by tuning it on a validation set, or we should first calibrate the model (e.g., via temperature scaling Guo et al. ([2017](#bib.bib51 "On calibration of modern neural networks")) or other methods) and then maximize expected value by filtering based on the theoretical threshold.
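For reference, temperature scaling can be sketched as below. This is not the paper's code, and it replaces the gradient-based NLL optimization of Guo et al. (2017) with a simple grid search over T; the function names and the assumption that validation logits (with integer class labels) are available are ours.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)   # subtract row max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def fit_temperature(val_logits, val_labels, grid=np.linspace(0.5, 5.0, 91)):
    """Pick a single temperature T minimizing the validation NLL (grid-search variant).

    Dividing logits by T > 1 softens overconfident softmax outputs; the argmax,
    and hence the accuracy, is unchanged, only the confidences are rescaled.
    val_labels must be integer class indices aligned with the logit columns.
    """
    val_logits = np.asarray(val_logits, dtype=float)
    val_labels = np.asarray(val_labels)
    idx = np.arange(len(val_labels))
    best_T, best_nll = 1.0, np.inf
    for T in grid:
        p = softmax(val_logits / T)
        nll = -np.log(p[idx, val_labels] + 1e-12).mean()
        if nll < best_nll:
            best_T, best_nll = T, nll
    return best_T
```

After fitting, confidences are recomputed as softmax(logits / T) and the theoretical threshold k/(k+1) is applied to them.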
We compared how accuracy and value are affected by either calibrating the model or tuning the threshold on a “validation” dataset. For calibration, we first calibrate models via temperature scaling and then use the theoretical threshold to compute the values. For tuning, we investigated how empirically choosing the confidence threshold for each (model, task) pair affects value. We used a validation set to find the threshold that maximizes the model’s value for every single k, and then used those thresholds to compute the values on the test set. Figure [3](#S4.F3 "Figure 3 ‣ 4.1 Experimental Setup ‣ 4 Experiments ‣ Rethinking and Recomputing the Value of ML Models") shows the results for uncalibrated models, calibrated models (second column) and threshold tuning (third column) for the hate speech and clickbait cases (more results in the appendix and additional material). Notice that calibration helps but still leads to very low, zero, and sometimes negative values even at low cost factors (see the result on the Hate speech dataset when we run Experiment 2 in Figure [3](#S4.F3 "Figure 3 ‣ 4.1 Experimental Setup ‣ 4 Experiments ‣ Rethinking and Recomputing the Value of ML Models"), second column). The empirical threshold provides equal or better values than the theoretical threshold in almost all cases. The rightmost column of Figure [3](#S4.F3 "Figure 3 ‣ 4.1 Experimental Setup ‣ 4 Experiments ‣ Rethinking and Recomputing the Value of ML Models") reports the highest achievable results, obtained by optimizing the threshold on the test set, showing that threshold tuning on the validation set is close to optimal in most cases. Still, there are cases (*Hate-speech*, Exp. 2) in which the models are useless (Value=0, all predictions rejected) for most values of k. We report results on all datasets in the Appendix [A.6](#A1.SS6 "A.6 Further Results ‣ Appendix A Supplemental material ‣ Rethinking and Recomputing the Value of ML Models") and show that they are consistent with these findings.

Figure 4: Value curves on the MDS dataset for increasing k, using the theoretical threshold after calibration (top row) and the empirical threshold on the validation set (bottom row). Values are averaged, except for the ’SieBERT’ model.

#### 4.2.3 The effect of complexity and out-of-distribution data. We investigated why models that rank high in terms of accuracy drop in quality when the cost ratio increases, while others are more “robust”. As an example, we show results from an experiment on the MDS dataset to see the impact on cross-over domains (on out-of-distribution samples). The leaderboard model for this task is “mttri” Ruder and Plank ([2018](#bib.bib1 "Strong baselines for neural semi-supervised learning under domain shift")), but we also tested two transformer models (a fine-tuned T5-base and SieBERT), LogR, MLP1 and MLP4, as explained in Section [4.1](#S4.SS1 "4.1 Experimental Setup ‣ 4 Experiments ‣ Rethinking and Recomputing the Value of ML Models"). We trained all the models (except T5 and SieBERT) on each domain and tested on the other 3 domains separately (so that we have 12 different cases of <source domain, target domain> pairs).
We then calculated the average values of each model on each target domain (see Table [3](#S4.T3 "Table 3 ‣ 4.2.3 The effect of complexity and out-of-distribution data. ‣ 4.2 Results ‣ 4 Experiments ‣ Rethinking and Recomputing the Value of ML Models") for the average accuracy of each model). SieBERT is the best one (even better than the leader-board mttri model) based on accuracy, but this is not the case in terms of value.

| Model | DVD | Books | Electronics | Kitchen |
| --- | --- | --- | --- | --- |
| LogReg | 0.74 | 0.704 | 0.762 | 0.782 |
| MLP1 | 0.728 | 0.691 | 0.749 | 0.765 |
| MLP4 | 0.72 | 0.696 | 0.735 | 0.761 |
| mttri | 0.753 | 0.742 | 0.808 | 0.821 |
| T5 | 0.788 | 0.770 | 0.783 | 0.777 |
| SieBERT | 0.836 | 0.826 | 0.842 | 0.865 |

Table 3: Average accuracy of models on the MDS dataset (columns are target domains).

Figure [4](#S4.F4 "Figure 4 ‣ 4.2.2 Calibration vs Threshold Optimization ‣ 4.2 Results ‣ 4 Experiments ‣ Rethinking and Recomputing the Value of ML Models") shows the results. In most cases, simple LogR and MLP1 models with simple text encoders (i.e., TF-IDF) have better value than the complex models for high k values. We also tested a fine-tuned version of the T5-base model for sentiment analysis (which does not output the confidence values associated with each prediction), which is why we could only measure its value by accepting all the predictions (see Figure [5(a)](#S4.F4.sf1 "(a) ‣ Figure 5 ‣ 4.2.3 The effect of complexity and out-of-distribution data. ‣ 4.2 Results ‣ 4 Experiments ‣ Rethinking and Recomputing the Value of ML Models")). Related to this, notice how the inability to filter (reject) predictions leads to negative value, even with the fine-tuned T5 that performs very well in terms of accuracy.

Figure 5: Value curves for increasing k. (a) Transformer models on the MDS dataset, no threshold. (b) Multi-class datasets, with the theoretical threshold.

We repeated the experiments on seven multi-class datasets and observed that the MLP4 model is worse than a simple MLP1 model over almost all datasets. Furthermore, MLP4 is even worse than a simple LogR model on four datasets (see Figure [5(b)](#S4.F4.sf2 "(b) ‣ Figure 5 ‣ 4.2.3 The effect of complexity and out-of-distribution data. ‣ 4.2 Results ‣ 4 Experiments ‣ Rethinking and Recomputing the Value of ML Models") for an example of each case; the rest can be found in the Appendix, Figure [14](#A1.F14 "Figure 14 ‣ A.6.3 Multi-class datasets - Supplementary Results ‣ A.6 Further Results ‣ Appendix A Supplemental material ‣ Rethinking and Recomputing the Value of ML Models")). The pattern in the results suggests that simple models (usually thought to be naturally well calibrated) perform relatively well when errors are costly, and that for high cost factors models trained on a different domain also tend to perform poorly, which suggests that even simple models trained in domain can be preferable. This observation is crucial in enterprise AI, where each company and vertical has its own “language” skew. Interestingly and perhaps not surprisingly, large pre-trained language models that are not bottlenecked by insufficient training data perform well across the board. This can be due to two reasons (besides the models being very powerful): (i) we know that large models with very large training datasets are reasonably well calibrated (e.g., see Jiang et al., [2021](#bib.bib35 "How can we know when language models know? on the calibration of language models for question answering")),
and (ii) when the training data is so large, fewer examples are out of distribution in terms of language. For example, GPT-3 (<https://openai.com/api/>) is trained on about 45TB of text data from various datasets, and it performs very well on the MDS dataset (see Table [4](#S4.T4 "Table 4 ‣ 4.2.3 The effect of complexity and out-of-distribution data. ‣ 4.2 Results ‣ 4 Experiments ‣ Rethinking and Recomputing the Value of ML Models"), the appendix [A.5](#A1.SS5 "A.5 GPT-3 Experiments ‣ Appendix A Supplemental material ‣ Rethinking and Recomputing the Value of ML Models"), and Figure [8](#A1.F8 "Figure 8 ‣ A.6.2 MDS dataset - Supplementary Results ‣ A.6 Further Results ‣ Appendix A Supplemental material ‣ Rethinking and Recomputing the Value of ML Models")). The reason is that the MDS dataset is quite old and the GPT-3 model has probably already learned it (so the MDS dataset does not include out-of-distribution samples for GPT-3). However, such models may be too costly or impractical due to their scale, and the problem still remains for enterprise datasets, which may be quite different from what the large pre-trained model has seen. Furthermore, as the cost grows, the difference with respect to simple models drops significantly.

| Target | Accuracy | Value (k=1) | Value (k=2) | Value (k=4) | Value (k=8) | Value (k=10) |
| --- | --- | --- | --- | --- | --- | --- |
| DVD | 0.832 | 0.664 | 0.534 | 0.367 | 0.164 | 0.089 |
| Books | 0.806 | 0.613 | 0.46 | 0.272 | 0.077 | 0.004 |
| Electronics | 0.82 | 0.641 | 0.499 | 0.322 | 0.127 | 0.051 |
| Kitchen | 0.853 | 0.706 | 0.599 | 0.464 | 0.308 | 0.251 |

Table 4: Performance of fine-tuned GPT-3 on the MDS dataset (average values using the theoretical threshold)

5 Limitations and Conclusion ----------------------------- The takeaway from our experiments is that using accuracy-oriented metrics (that is, metrics that assume models are applied without rejection) is at a minimum a risky proposition - and this is true even for models widely acknowledged as “leaders”. We should always assess models over a range of cost factors, or at least over the reasonable cost factors we expect based on the set of application use cases we are targeting; k=0 (accuracy) is almost never a reasonable one. We also saw that applying models without thresholding can lead to negative value, and that threshold tuning seems to perform better than calibration. We also hypothesize, and have obtained some support for, identifying complexity and out-of-distribution data as factors that may lead to rapid model quality degradation for higher cost factors. This being said, we see this work more as providing evidence of a problem and outlining the research needs: more studies (especially with large models and in- vs. out-of-distribution datasets) are needed to validate the hypotheses and to build a deeper understanding of how calibration, confidence distribution, and validation-set size affect model value.
e8d17f8a-714a-4111-bb2b-1882e471237e
trentmkelly/LessWrong-43k
LessWrong
Bug Hunt - Lesserwrong/Lesswrong.com Browser Compatibility As we get ready for the final move of lesserwrong.com to lesswrong.com, we're addressing various browser compatibility issues. We've recently updated some compatibility tools that fixed several of the older bugs. If you've run into compatibility bugs in the past, and are using one of our (newly) supported browsers, it'd be helpful if you tried using lesserwrong.com again to see if you run into any issues. The site does require javascript to login and interact, although we'll try to make it a fairly friendly reading experience, and users who prefer a non-javascript experience can use saturn's greaterwrong.com. The oldest browsers we're officially trying to support are: * Firefox 45 * Chrome 49 * Safari 9 * Internet Explorer 11 * Edge * Samsung Internet 6 (These are all the browsers that make up more than .4% of our userbase. If there turn out to be major issues for other browsers we'll try to look into them, but we can't necessarily promise to prioritize them all uniformly highly. It's still useful to have the bugs collected so we have a sense of the spread of issues across various devices.) Let us know what issues you've run into. Please include your browser and OS (and version). Bonus points if you try to replicate the issue in another browser to help triangulate what software is causing problems.
9eea35ed-dc64-4340-b3b3-4cdc756cea74
trentmkelly/LessWrong-43k
LessWrong
AI Risk Microdynamics Survey If you are interested in AI Risk, could you kindly consider filling out a short (10 min) survey on Risk Microdynamics? The hope is that I will be able to use your responses to inform an economic model of that risk in the near future, which I think would fill an important gap in our understanding of AI Risk dynamics: https://forms.gle/kLYtynp3FYcxkPZc8
0d045924-dc8f-47a7-94d0-c532d35e000c
trentmkelly/LessWrong-43k
LessWrong
In favor of tabooing the word “values” and using only “priorities” instead I don’t know if this is trivial/obvious or absurd or anything in between, so maybe you guys would like to help me judge? The idea is simple. I’m confused by the term “values” in the moral and ethical sense. The way I often see it used, it makes personal and societal “values” seem hard to define/operationalize, incommensurable, uncountable, frequently in conflict with each other and usually in unclear relationships to each other and to those who hold them. But the word “values” is everywhere in normative discussions, and normative discussions are important and interesting, so I wish they weren’t being muddled by that word. Is it just me, too stupid to get some perfectly clear meaning? Or is “values” really as vague and mysterious as it seems to me? I think all of the useful work that the word “values” does can also be done by the word “priorities”. Priorities tell you what to do, they help you decide between alternatives. They are a language for describing agreement and disagreement on normative questions. And all of us, including people who think in terms of values, already think in terms of priorities when we’re in projects and in everyday life. The confusion of “values” is in more abstract, longer-term regions of our thought. I think it is better to extend our thinking about priorities into those regions, rather than use a completely different set of terms and operations. “Priorities” can also more obviously build on, or be derived from, each other. A priority can be strictly subordinate to another, as a means to an end. We’re used to having “higher” and “lower” and “overarching” priorities, so we can use those qualifiers rather than need to invent subcategories of “values” like “instrumental values” and “terminal values”. Example: I have a priority to finish this post. That's a means to my higher priority, which is to find out if this idea is useful, and float it in the rationality cluster if it is. And that in turn is a means to my next higher priority, to
5c8c82d1-39ee-4771-a17e-3c1c604d357c
LDJnr/LessWrong-Amplify-Instruct
LessWrong
"(Sorry for the coy title--I want to give the reader a chance to guess what the addition is.) One day I opened up the front page of reddit. I was not signed in and I was using my browser's incognito mode.The following list composed about 25% of what I saw as I scrolled. See if you notice any themes. (As hinted by the title, I think there is something other than outrage here.) r/MurderedByWordsr/PublicFreakoutr/insanepeoplefacebookr/JusticeServedr/nottheonionr/facepalmr/mildlyinfuriatingr/Cringetopiar/TikTokCringer/LeopardsAteMyFacer/FuckYouKarenr/iamverybadassr/IdiotsInCarsr/cringe (At least another 25% was made up of r/news, r/worldnews, r/politics, r/PoliticalHumor, and so on.)Like many people, I have spent a lot of time thinking about the psychotoxic effects of concentrated outrage, political polarization, doomscrolling, misinformation, and filter bubbles. So I was a little surprised by my own interpretation of the above list:I submit that the most salient theme is contempt. Here's a sentence that has been at the back of my mind since I came across it:Scandal is great entertainment because it allows people to feel contempt, a moral emotion that gives feelings of moral superiority while asking nothing in return.-- Jonathan Haidt, The Happiness HypothesisLet me first admit that contemptuously bonding over the misbehavior of others probably can have real benefits. But I claim that in the case of the reddit front page, these benefits are clearly outweighed by the costs to one’s personality (not to mention epistemics). So, Haidt says contempt feels good, reddit appears to be a prime example, and I'm now asserting that it's psychotoxic (and possibly addictive, at least when taken via intravenous drip bottomless scrolling). Presuming all of that is correct...is it actionable? I think so.If you're ambitious, you could quit social media for a month and pay attention to how your thoughts and attitudes change.More coordinationally, perhaps a social stigma can develop around this kind of overindulgence, similar to the increasing stigmas toward ragebait and doomscrolling.But at the very least, you can simply notice that something you're reading is triggering contempt, as opposed to outrage or doomfeelz. I think this awareness by itself restores a decent chunk of mental autonomy. Personally, I like to also take the proactive step of rehearsing questions like, "why did they end up so stupid/scandalous/cringeworthy?" and “what led me to avoid such faults so well?” I find that the answer (whatever it is) often feels strangely liberating--it diminishes the tasty allure of the contempt, and makes it easier to refocus my attention on something better.EDIT: Thanks Daniel Kokotajlo for offering the term scornporn in the comments!" The text above is a post found on LessWrong. Can you elaborate on the specific psychological and societal impacts of "scornporn", particularly in relation to social media use and its potential addictive qualities? "Scornporn" refers to the act of indulging in content that incites feelings of contempt or scorn towards others, often for their perceived stupidity, ignorance, or socially unacceptable behavior. This term is increasingly used in the context of social media, where such content is readily available and easily consumed. Psychologically, the consumption of scornporn can have several impacts. Firstly, it can trigger a sense of moral superiority, providing a form of psychological gratification. 
This is because the act of scorning others can make one feel smarter, more enlightened, or morally superior. However, this feeling is often short-lived and can lead to a cycle of consumption in search of the same gratification, thereby creating a potential for addiction. Secondly, regular consumption of scornporn can lead to desensitization. Over time, individuals may require more extreme content to experience the same level of satisfaction, which can lead to an increase in negative attitudes and behaviors. It can also foster a lack of empathy, as individuals become more accustomed to deriding and dismissing others. Thirdly, scornporn can contribute to confirmation bias. People often consume content that aligns with their existing beliefs and attitudes, reinforcing their preconceived notions and making them less open to alternative perspectives. This can further polarize societal groups and contribute to the "us versus them" mentality. Societally, the impacts of scornporn are also significant. It can contribute to a culture of ridicule and disrespect, where individuals are more focused on pointing out the flaws of others than engaging in constructive dialogue. This can hinder societal progress and exacerbate divisions. Moreover, the addictive nature of scornporn can lead to excessive time spent on social media, which can have additional negative impacts on mental health, productivity, and relationships. It can also contribute to the spread of misinformation, as scornful content is often sensationalized or taken out of context to incite stronger reactions. Overall, while scornporn may provide temporary gratification, its long-term psychological and societal impacts can be detrimental. It is therefore important to approach such content with a critical eye and strive for a more balanced and empathetic use of social media. Discuss the role of empathy in counteracting the negative effects of "scornporn". How can it be cultivated in the context of social media usage? Empathy, the ability to understand and share the feelings of others, plays a crucial role in counteracting the negative effects of "scornporn". It can serve as an antidote to the contempt and derision that scornporn often promotes. Empathy encourages individuals to see beyond the surface level of a person's actions or words and consider their perspectives, feelings, and circumstances. This can lead to a more nuanced understanding of others and discourage the quick judgement and ridicule that scornporn often incites. Empathy can also foster a more respectful and understanding online environment, promoting constructive dialogue over divisive scorn. Cultivating empathy in the context of social media usage can be achieved through several strategies: 1. **Mindful Consumption:** Encourage users to be more mindful of the content they consume and share. This includes reflecting on the potential impacts of scornful content and choosing to engage with more empathetic and understanding material. 2. **Perspective-Taking:** Promote exercises that encourage users to put themselves in someone else's shoes. This could involve imagining how it would feel to be the subject of scornful content or considering the potential reasons behind a person's actions or words. 3. **Encouraging Positive Interactions:** Platforms can incentivize positive interactions over negative ones. This could involve promoting content that fosters understanding and empathy or rewarding users for constructive and respectful comments. 4. 
**Education:** Educate users about the importance of empathy and the potential harms of scornporn. This could involve sharing resources, hosting discussions, or even incorporating empathy education into platform guidelines or tutorials. 5. **Modeling Empathy:** Influencers and other high-profile social media users can model empathetic behavior in their own posts and interactions. This can set a positive example for other users and help to shift social norms towards more empathetic engagement. By fostering empathy, we can counteract the harmful effects of scornporn and create a more understanding and respectful social media environment. In what ways can the principles of behavioral psychology be applied to encourage empathy on social media platforms, while simultaneously discouraging the consumption of "scornporn"? Behavioral psychology, which studies the link between behavior and the mind, can provide valuable insights into promoting empathy and discouraging the consumption of "scornporn" on social media. Here are a few strategies: 1. **Positive Reinforcement:** This involves adding a positive stimulus to encourage a particular behavior. Social media platforms could reward empathetic behavior, such as thoughtful comments or posts, with likes, shares, or other forms of recognition. 2. **Negative Reinforcement:** This strategy involves removing an unpleasant stimulus to promote a desired behavior. For example, platforms could limit the visibility of scornful content for users who regularly engage in empathetic behavior. 3. **Punishment:** This is the introduction of an adverse stimulus to discourage a certain behavior. Platforms could penalize users who frequently engage in or share scornful content, such as temporary bans or loss of certain privileges. 4. **Shaping:** This involves gradually guiding behavior towards a desired outcome. Platforms could introduce features that gradually guide users towards more empathetic behavior, such as prompts suggesting more empathetic responses or algorithms that gradually reduce the visibility of scornful content. 5. **Modeling:** This strategy involves demonstrating the desired behavior for others to emulate. Influencers or other high-profile users could model empathetic behavior, setting a standard for their followers. 6. **Cognitive Restructuring:** This involves changing patterns of negative thinking. Platforms could provide resources or tools to help users reframe scornful thoughts into more empathetic ones, such as empathy-promoting prompts or reminders. 7. **Social Learning Theory:** This suggests that people learn from observing others. Platforms could highlight and promote examples of empathetic behavior, allowing users to learn and adopt these behaviors. 8. **Nudging:** This involves subtly guiding choices and behavior. Platforms could use nudges, such as reminders about the impact of scornful content or suggestions for more empathetic engagement, to guide user behavior. By applying these principles, social media platforms can encourage more empathetic behavior and discourage the consumption and sharing of scornful content.
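As a concrete (and purely hypothetical) illustration of points 4 ("Shaping") and 8 ("Nudging") above, a platform could fold a scorn estimate into its ranking function. The `scorn_score` field, the penalty weight, and the scoring rule below are all invented for illustration; no real platform's API is being described:

```python
def rank_feed(posts, scorn_penalty=0.5):
    """Order posts by engagement, downweighted by estimated scornfulness.

    posts: list of dicts with 'engagement' (any non-negative number) and
    'scorn_score' (0.0 = no contempt-bait, 1.0 = pure contempt-bait).
    Both fields are hypothetical; a real system would need a classifier
    to produce the scorn estimate.
    """
    def score(post):
        return post["engagement"] * (1 - scorn_penalty * post["scorn_score"])
    return sorted(posts, key=score, reverse=True)

feed = [
    {"id": "thoughtful-essay", "engagement": 80, "scorn_score": 0.1},
    {"id": "public-freakout-clip", "engagement": 100, "scorn_score": 0.9},
]
print([p["id"] for p in rank_feed(feed)])  # the thoughtful essay ranks first
```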
21407d33-0313-4f21-a5a3-6bf66a36d868
trentmkelly/LessWrong-43k
LessWrong
Convoy

This would have been in yesterday’s weekly post but I ran out of time, so I only got to it today. It seemed substantial enough to post on its own rather than wait a full six days, so here it is.

Reports are in from Canada. Looks like we got us a convoy.

[NOTE: This post has been extensively edited to reflect that what previously was a convoy is now more like an occupation akin to the old ‘autonomous zone’ in Seattle, and to incorporate conflicting reports commenters have shared with me about exactly what to make of these protesters.]

Many truckers do not like vaccine mandates. This makes sense. The big benefits of being a trucker are: Freedom. In particular, freedom to go and work where you want, when you want. Even more in particular, freedom to not interact with other humans.

If there’s one group that both has no need of vaccine mandates, since they spend their time alone, and also is likely not to take kindly to vaccine mandates, it would be a trucker. As far as I can tell, the whole advantage of being a trucker, other than getting paid for doing something other people don’t want to do, and something something about the joys of the open road, is to have and frequently use full f*** you rights without the need for f*** you money.

Truckers are also both vital to the supply chain and therefore the economy while being in short supply, and also are mostly physically large people in possession of a large number of very large trucks.

And those who don’t want to get vaccinated are facing a lot of restrictions that no longer make physical sense:

> 'I can travel freely through the border, and not be in contact with anyone. Yet I'm locked into my own country right now,' he said. 'I can't go on a holiday. I can't go to a restaurant, I can't go bowling. I can't go to a movie. You know, these are things that it's just gotten out of control.'

They also are not happy about no longer having an exemption to quarantines and tests for crossing the border if they’re not vaccinated, but that
45ac4450-ea2c-461b-9922-9e18f3a7e50b
StampyAI/alignment-research-dataset/blogs
Blogs
Forecasting transformative AI: the "biological anchors" method in a nutshell *Audio also available by searching Stitcher, Spotify, Google Podcasts, etc. for "Cold Takes Audio"* [Roadmap graphic for this series: Today's world → Transformative AI → Digital people / World of misaligned AI / World run by something else → Stable, galaxy-wide civilization](https://www.cold-takes.com/roadmap-for-the-most-important-century-series/) > This is one of 4 posts summarizing hundreds of pages of technical reports focused almost entirely on forecasting one number: the year by which transformative AI will be developed.[1](#fn1) > By "transformative AI," I mean "AI powerful enough to bring us into a new, qualitatively different future." I specifically focus on what I'm calling [PASTA](https://www.cold-takes.com/transformative-ai-timelines-part-1-of-4-what-kind-of-ai/): AI systems that can essentially automate all of the human activities needed to speed up scientific and technological advancement. The sooner PASTA might be developed, the sooner the world could change [radically](https://www.cold-takes.com/transformative-ai-timelines-part-1-of-4-what-kind-of-ai/#impacts-of-pasta), and the more important it seems to be thinking today about how to make that change go well vs. poorly. This post is a layperson-compatible summary of [Ajeya Cotra's "Forecasting Transformative AI with Biological Anchors](https://www.lesswrong.com/posts/KrJfoZzpSDpnrv9va/draft-report-on-ai-timelines)" (which I'll abbreviate below as **"Bio Anchors"**), and its pros and cons.[2](#fn2) It is the forecast I find most informative for transformative AI, with some caveats: * This approach is relatively complex, and it requires a fairly large number of assumptions and uncertain estimates. These qualities make it relatively difficult to explain, and they are also a mark against the method's reliability. * Hence, as of today, I don't think this method is as trustworthy as the [examples I gave previously](https://www.cold-takes.com/are-we-trending-toward-transformative-ai-how-would-we-know/#what-kind-of-forecast-am-i-going-for) for forecasting a qualitatively different future. It does not have the simplicity and directness of some of those examples, such as modeling COVID-19's spread. And while climate modeling is also very complex, climate modeling has been worked on by a large number of experts over decades, whereas the Bio Anchors methodology doesn't have much history. Nonetheless, I think it is the best available "best guess estimate" methodology for transformative AI timelines as of today. And as discussed in the [final section](#pros-and-cons-of-the-biological-anchors-method-for-forecasting-transformative-ai-timelines), one can **step back from a lot of the details to see that this century will likely see us hit some of the more "extreme" milestones in the report that strongly suggest the feasibility of transformative AI.** (Note: I've also written up a follow-up post about this framework for skeptical readers. See [“Biological anchors” is about bounding, not pinpointing, AI timelines](https://www.cold-takes.com/biological-anchors-is-about-bounding-not-pinpointing-ai-timelines/).) The basic idea is: * Modern AI models can "learn" to do tasks via a (financially costly) process known as "training." You can think of training as a massive amount of trial-and-error. For example, voice recognition AI models are given an audio file of someone talking, take a guess at what the person is saying, then are given the right answer.
By doing this millions of times, they "learn" to reliably translate speech to text. More: [Training](#training)* The bigger an AI model and the more complex the task, the more the training process costs. Some AI models are bigger than others; to date, none are anywhere near "as big as the human brain" (what this means will be elaborated below). More: [Model size and task type](#model-size-and-task-type)* The biological anchors method asks: **"Based on the usual patterns in how much training costs, how much would it cost to train an AI model as big as a human brain to perform the hardest tasks humans do? And when will this be cheap enough that we can expect someone to do it?"** More: [Estimating the expense](#estimating-the-expense) Bio Anchors models a broad variety of different ways of approaching this question, generating estimates in a wide range from "aggressive" (projecting transformative AI sooner) to "conservative" (later). But from essentially all of these angles, it places a high probability on transformative AI this century. ![](https://www.cold-takes.com/content/images/size/w1000/2021/08/bio-anchors-probability-chart.png)This chart is from the report. You can roughly read the y-axis as the probability that transformative AI is developed by the year in question, although there is some additional nuance in the report. I won't be explaining what each of the different "Conditional on" models means; it's enough to know that each represents a different angle on forecasting transformative AI. ![](https://www.cold-takes.com/content/images/2021/08/bio-anchors-timeline-hack.png)Thanks to María Gutiérrez Rojas for this graphic. The top timeline gives major milestones for AI computing, past and future (the future ones are projected by Bio Anchors). Below it are (cropped) other timelines showing how significant this few-hundred-year period (more at [This Can't Go On](https://www.cold-takes.com/this-cant-go-on/)), and this era (more at [All Possible Views About Humanity's Future Are Wild](https://www.cold-takes.com/all-possible-views-about-humanitys-future-are-wild/)), appear to be. I'll now elaborate on each of these a bit more. This is the densest part of this series, and some people might prefer to stick with the above summary and skip to the next post. Note that Bio Anchors uses a number of different approaches (which it calls "anchors") to estimate transformative AI timelines, and combines them into one aggregate view. In this summary, I'm most focused on a particular set of these - called the "neural net anchors" - which are driving most of the report's aggregate timelines. Some of what I say applies to all anchors, but some applies only to the "neural net anchors." Training -------- As discussed [previously](https://www.cold-takes.com/transformative-ai-timelines-part-1-of-4-what-kind-of-ai/#making-pasta), there are essentially two ways to "teach" a computer to do a task: 1. **"Program" in extremely specific, step-by-step instructions for completing the task.** When this can be done, the computer can generally execute the instructions very quickly, reliably and cheaply. For example, you might program a computer to examine each record in a database and print the ones that match a user's search terms - you would "instruct" it in exactly how to do this, and it would be able to do the task very well. 
- **"Train" an AI to do the task purely by trial and error.** Today, the most common way of doing this is by using a "neural network," which you might think of sort of like a "digital brain" that starts in a random state: it hasn't yet been wired to do specific things. For example, say we want an AI to be able to say whether a photo is of a dog or a cat. It's hard to give fully specific step-by-step instructions for doing this; instead, we can take a neural network and send in a million example images (each one labeled as a "dog" or a "cat"). Each time it sees an example, it will tweak its internal wiring to make it more likely to get the right answer on similar cases in the future. After enough examples, it will be wired to correctly recognize dogs vs. cats. (We could maybe also move up another level of meta, and try to "train" models to be able to learn from "training" itself as efficiently as possible. This is called "meta-learning," but my understanding is that it hasn't had great success yet.) "Training" is a sort of brute-force, expensive alternative to "programming." The advantage is that we don't need to be able to provide specific instructions - we can just give an AI lots of examples of doing the task right, and it will learn to do the task. The disadvantage is that we need a ***lot* of examples, which requires a lot of processing power, which costs money.** How much? This depends on the size of the model (neural network) and the nature of the task itself. For some tasks AIs have learned as of 2021, training a single model could cost millions of dollars. For more complex tasks (such as "[do innovative scientific research](https://www.cold-takes.com/transformative-ai-timelines-part-1-of-4-what-kind-of-ai/#making-pasta)") and bigger models (reaching the size of the human brain), training a model could cost far more than that. Bio Anchors is interested in the question: **"When will it be affordable to train a model, using a relatively crude trial-and-error-based approach, to do the hardest tasks humans can do?"** These tasks could include the tasks necessary for [PASTA](https://www.cold-takes.com/transformative-ai-timelines-part-1-of-4-what-kind-of-ai/), such as: * Learn about science from teachers, textbooks and homework as effectively as a human can. * Push the frontier of science by asking questions, doing analyses and writing papers, as effectively as a human can. The next section will discuss how Bio Anchors fleshes out the idea of the "hardest tasks humans can do" (which it assumes would require a "human-brain-sized" model). Model size and task type ------------------------ Bio Anchors hypothesizes that we can estimate "how expensive it is to train a model" based on two basic parameters: the **model size** and the **task type.** **Model size.** As stated above, you might think of a neural network as a "digital brain" that starts in a random state. In general, a *larger* "digital brain" - with more digital-versions-of-neurons and digital-versions-of-synapses[3](#fn3) - can learn more complex tasks. A larger "digital brain" also requires more computations - and is hence more expensive - each time it is used (for example, for each example it is learning from). 
Drawing on the analysis in [Joe Carlsmith's "How Much Computational Power Does It Take to Match the Human Brain?"](https://www.openphilanthropy.org/brain-computation-report) (abbreviated in this piece as "Brain Computation"), Bio Anchors estimates comparisons between the size of "digital brains" (AI models) and "animal brains" (bee brains, mouse brains, human brains). These estimates imply that **today's AI systems are sometimes as big as insect brains, but never quite as big as mouse brains** - as of this writing, the largest known language model was the first to come reasonably close[4](#fn4) - and **not yet even 1% as big as human brains.**[5](#fn5) The bigger the model, the more processing power it takes to train. Bio Anchors assumes that a **transformative AI model would need to be about 10x the size of a human brain,** so a lot bigger than any current AI model. (The 10x is to leave some space for the idea that "digital brains" might be less efficient than human brains; see [this section](https://docs.google.com/document/d/1IJ6Sr-gPeXdSJugFulwIpvavc0atjHGM82QjIfUSBGQ/edit#heading=h.z8ucahktj3ug) of the report.) This is one of the reasons it would be very expensive to train. It could turn out that a smaller AI model is still big enough to learn the above sort of tasks. Or it could turn out that the needed model size is bigger than Bio Anchors estimates, perhaps because Bio Anchors has underestimated the effective "size" of the human brain, or because the human brain is better-designed than "digital brains" by more than Bio Anchors has guessed. **Task type.** In order to learn a task, an AI model needs to effectively "try" (or "watch") the task a large number of times, learning from trial-and-error. The more costly (in processing power, and therefore money) the task is to try/watch, the more costly it will be for the AI model to learn it. It's hard to quantify how costly a task is to try/watch. Bio Anchors's attempt to do this is the most contentious part of the analysis, according to the technical reviewers who have reviewed it so far. You can roughly think of the Bio Anchors framework as saying: * There are some tasks that a human can do with only a second of thought, such as classifying an image as a cat or dog. * There are other tasks that might take a human several minutes of thought, such as solving a logic puzzle. * Other tasks could take hours, days, months or even years, and require not just thinking, but interacting with the environment. For example, writing a scientific paper. * The tasks on the longer end of this spectrum will be more costly to try/watch, so it will be more costly to train an AI model to do them. For example, it's more costly (takes more time, and more money) to have a million "tries" at a task that takes an hour than it is to have a million "tries" at a task that takes a second. * However, the framework isn't as simple as this sounds. Many tasks that seem like "long" tasks (such as writing an essay) could in fact be broken into a series of "shorter" tasks (such as writing individual sentences). + If an AI model can be trained to do a shorter "sub-task," it might be able to do the longer task by simply repeating the shorter sub-task over and over again - without ever needing to be explicitly "trained" to do the longer task. + For example, an AI model might get a million "tries" at the task: "Read a partly-finished essay and write a good next sentence."
If it then learns to do this task well, it could potentially write a long essay by simply repeating this task over and over again. It wouldn't need to go into a separate training process where it gets a million "tries" at the more time-consuming task of writing an entire essay. + So it becomes crucial whether the hardest and most important tasks (such as those listed above) are the kind that can be "decomposed" into short/easy tasks. Estimating the expense ---------------------- Bio Anchors looks at how expensive existing AI models were to train, depending on model size and task type (as defined above). It then extrapolates this to see how expensive an AI model would be to train if it: * Had a size 10x larger than a human brain.[6](#fn6)* Trained on a task where each "try" took days, weeks, or months of intensive "thinking." As of today, this sort of training would cost in the ballpark of a million trillion dollars, which is enormously more than total world wealth. So it isn't surprising that nobody has tried to train such a model. However, Bio Anchors also projects the following trends out into the future: * Advances in both hardware and software that could make computing power cheaper. * A growing economy, and a growing role of AI in the economy, that could increase the amount AI labs are able to spend training large models to $1 trillion and beyond. According to these projections, at some point the "amount AI labs are able to spend" becomes equal to the "expense of training a human-brain-sized model on the hardest tasks." Bio Anchors bases its projections for "when transformative AI will be developed" on when this happens. Bio Anchors also models uncertainty in all of the parameters above, and considers alternative approaches to the "model size and task type" parameters.[7](#fn7) By doing this, it estimates the probability that transformative AI will be developed by 2030, 2035, etc. Aggressive or conservative? --------------------------- Bio Anchors involves a number of simplifications that could cause it to be too aggressive (expecting transformative AI to come sooner than is realistic) or too conservative (expecting it to come later than is realistic). The argument I most commonly hear that it is "**too aggressive**" is along the lines of: "There's no reason to think that a modern-methods-based AI can learn everything a human does, using trial-and-error training - no matter how big the model is and how much training it does. Human brains can reason in unique ways, unmatched and unmatchable by any AI unless we come up with fundamentally new approaches to AI." This kind of argument is often accompanied by saying that AI systems don't "truly understand" what they're reasoning about, and/or that they are merely imitating human reasoning through pattern recognition. I think this may turn out to be correct, but I wouldn't bet on it. A full discussion of why is outside the scope of this post, but in brief: * I am unconvinced that there is a deep or stable distinction between "pattern recognition" and "true understanding" ([this Slate Star Codex piece](https://slatestarcodex.com/2019/02/19/gpt-2-as-step-toward-general-intelligence/) makes this point). "True understanding" might just be what really good pattern recognition looks like. Part of my thinking here is an intuition that even when people (including myself) superficially appear to "understand" something, their reasoning often (I'd even say usually) breaks down when considering an unfamiliar context. 
In other words, I think what we think of as "true understanding" is more of an ideal than a reality. * I feel underwhelmed with the track record of those who have made this sort of argument - I don't feel they have been able to pinpoint what "true reasoning" looks like, such that they could make robust predictions about what would prove difficult for AI systems. (For example, see [this discussion of Gary Marcus's latest critique of GPT3](https://nostalgebraist.tumblr.com/post/628024664310136832/gary-marcus-has-co-authored-a-brief-critique-of), and [similar discussion on Astral Codex Ten](https://astralcodexten.substack.com/p/my-bet-ai-size-solves-flubs?s=r)). * "Some breakthroughs / fundamental advances are needed" might be true. But for Bio Anchors to be overly aggressive, it isn't enough that *some* breakthroughs are needed; the breakthroughs needed have to be *more than what AI scientists are capable of in the coming decades*, the time frame over which Bio Anchors forecasts transformative AI. It seems hard to be confident that things will play out this way - especially because: + Even moderate advances in AI systems could bring more talent and funding into the field (as is already happening[8](#fn8)). + If money, talent and processing power are plentiful, and progress toward PASTA is primarily held up by some particular weakness of how AI systems are designed and trained, a sustained attempt by researchers to fix this weakness could work. When we're talking about multi-decade timelines, that might be plenty of time for researchers to find whatever is missing from today's techniques. More broadly, Bio Anchors could be too aggressive due to its assumption that "computing power is the bottleneck": * It assumes that *if* one could pay for all the computing power to do the brute-force "training" described above for the key tasks (e.g., automating scientific work), transformative AI would (likely) follow. * Training an AI model doesn't just require purchasing computing power. It requires hiring researchers, running experiments, and perhaps most importantly, finding a way to set up the "trial and error" process so that the AI can get a huge number of "tries" at the key task. It may turn out that doing so is prohibitively difficult. On the other hand, there are several ways in which Bio Anchors could be **too conservative** (underestimating the likelihood of transformative AI being developed soon). * Perhaps with enough ingenuity, one could create a transformative AI by "programming" it to do key tasks, rather than having to "train" it (see [above](#training) for the distinction). This could require far less computation, and hence be far less expense. Or one could use a combination of "programming" and "training" to achieve better efficiency than Bio Anchors implies, while still not needing to capture everything via "programming." * Or one could find far superior approaches to AI that can be "trained" much more efficiently. One possibility here is "meta-learning": effectively training an AI system on the "task" of being trained, itself. * Or perhaps most likely, over time AI might become a bigger and bigger part of the economy, and there could be a proliferation of different AI systems that have each been customized and invested in to do different real-world tasks. The more this happens, the more opportunity there is for individual ingenuity and luck to result in more innovations, and more capable AI systems in particular economic contexts. 
+ Perhaps at some point, it will be possible to integrate many systems with different abilities in order to tackle some particularly difficult task like "automating science," without needing a dedicated astronomically expensive "training run." + Or perhaps AI that falls short of PASTA will still be useful enough to generate a lot of cash, and/or help researchers make compute cheaper and more efficient. This in turn could lead to still bigger AI models that further increase availability of cash and efficiency of compute. That, in turn, could cause a PASTA-level training run to be affordable earlier than Bio Anchors projects.* Additionally, some technical reviewers of Bio Anchors feel that its treatment of [task type](#model-size-and-task-type) is too conservative. They believe that the most important tasks (and perhaps all tasks) that AI needs to be trained on will be on the "easier/cheaper" end of the spectrum, compared to what Bio Anchors assumes. (See the [above section](#model-size-and-task-type) for what it means for a task to be "easier/cheaper" or "harder/more expensive"). For a related argument, see [Fun with +12 OOMs of Compute](https://www.lesswrong.com/posts/rzqACeBGycZtqCfaX/fun-with-12-ooms-of-compute), which makes the intuitive point that Bio Anchors is imagining a truly massive amount of computation needed to create PASTA, and less could easily be enough. I don't think it is obvious whether, overall, Bio Anchors is too aggressive (expecting transformative AI to come sooner than is realistic) or too conservative (expecting it to come later). The report itself states that it's likely to be too aggressive over the next few years and too conservative >50 years out, and likely most useful in between.[9](#fn9) Intellectually, it feels to me as though the report is more likely to be too conservative. I find its [responses](https://docs.google.com/document/d/1cCJjzZaJ7ATbq8N2fvhmsDOUWdm7t3uSSXv6bD0E_GM/edit#heading=h.hu3zy1xzvvo) to the "Too aggressive" points above fairly compelling, and I think the "Too conservative" points are more likely to end up being correct. In particular, I think it's hard to rule out the possibility of ingenuity leading to transformative AI in some far more efficient way than the "brute-force" method contemplated here. And I think the treatment of "task type" is definitely erring in a conservative direction. However, I also have an intuitive preference (which is related to the "burden of proof" analyses given [previously](https://www.cold-takes.com/forecasting-transformative-ai-whats-the-burden-of-proof/)) to err on the conservative side when making estimates like this. Overall, my best guesses about transformative AI timelines are similar to those of Bio Anchors. Conclusions of Bio Anchors -------------------------- Bio Anchors estimates a **>10% chance of transformative AI by 2036, a ~50% chance by 2055, and an ~80% chance by 2100.** It's also worth noting what the report says about AI systems today. It estimates that: * Today's largest AI models, such as [GPT-3](https://en.wikipedia.org/wiki/GPT-3), are a **bit smaller than mouse brains, and are starting to get within range (if they were to grow another 100x-1000x) of human brains.** So we might soon be getting close to AI systems that can be trained to do anything that humans can do with ~1 second of thought. 
Consistent with this, it seems to me that we're just starting to reach the point where language models *sound* like humans who are talking without thinking very hard.[10](#fn10) If anything, "human who puts in no more than 1 second of thought per word" seems somewhat close to what GPT-3 is doing, even though it's much smaller than a human brain. * It's only very recently that AI models have gotten this big. A "large" AI model before 2020 would be more in the range of a honeybee brain. So for models even in the very recent past, we should be asking whether AI systems seem to be "as smart as insects." Here's [one attempt to compare AI and honeybee capabilities](https://www.lesswrong.com/posts/yW3Tct2iyBMzYhTw7/how-does-bee-learning-compare-with-machine-learning) (by Open Philanthropy intern Guille Costa), concluding that the most impressive honeybee capabilities the author was able to pinpoint do appear to be doable for AI systems.[11](#fn11) I include these notes because: * The Bio Anchors analysis seems fully consistent with what we're observing from AI systems today (and have over the last decade or two), while also implying that we're likely to see more transformative abilities in the coming decades. * I think it's particularly noteworthy that we're getting close to the time when an AI model is "as big as a human brain" (according to the Bio Anchors / [Brain Computation](https://www.openphilanthropy.org/brain-computation-report) estimation method). It may turn out that such an AI model is able to "learn" a lot about the world and produce a lot of economic value, even if it can't yet do the hardest things humans do. And this, in turn, could kick off skyrocketing investment in AI (both money and talent), leading to a lot more innovation and further breakthroughs. This is a simple reason to believe that transformative AI by 2036 is plausible. Finally, I note that Bio Anchors includes an "evolution" analysis among the different approaches it considers. This analysis hypothesizes that in order to produce transformative AI, one would need to do about as many computations as all animals in history combined, in order to re-create the progress that was made by natural selection. I consider the "evolution" analysis to be *very* conservative, because machine learning is capable of much faster progress than the sort of trial-and-error associated with natural selection. Even if one believes in something along the lines of "Human brains reason in unique ways, unmatched and unmatchable by a modern-day AI," it seems that whatever is unique about human brains should be re-discoverable if one is able to essentially re-run the whole history of natural selection. And even this very conservative analysis estimates a ~50% chance of transformative AI by 2100. Pros and cons of the biological anchors method for forecasting transformative AI timelines ------------------------------------------------------------------------------------------ **Cons.** I'll start with what I see as the biggest downside: this is a very complex forecasting framework, which relies crucially on multiple extremely uncertain estimates and assumptions, particularly: * Whether it's reasonable to believe that an AI system could learn the key tasks listed above (the ones required for PASTA) given enough trial-and-error training. * How to compare the size of AI models with the size of animal/human brains. * How to characterize "task type," estimating how "difficult" and expensive a task is to “try” or “watch” once. 
* How to use the model size and task type to estimate how expensive it would be to train an AI model to do the key tasks. * How to estimate future advances in both hardware and software that could make computing power cheaper. * How to estimate future increases in how much AI labs could be able to spend training models. This kind of complexity and uncertainty means (IMO) that we shouldn't consider the forecasts to be highly reliable, especially today when the whole framework is fairly new. If we got to the point where as much scrutiny and effort had gone into AI forecasting as climate forecasting, it might be a different matter. **Pros.** That said, the biological anchors method is essentially the only one I know of that estimates transformative AI timelines from **objective facts** (where possible) **and explicit assumptions** (elsewhere)**.**[12](#fn12)It does not rely on any concepts as vague and intuitive as "how fast AI systems are getting more impressive" (discussed [previously](https://www.cold-takes.com/are-we-trending-toward-transformative-ai-how-would-we-know/#subjective-extrapolations-and-)). Every assumption and estimate in the framework can be explained, discussed, and - over time - tested. Even in its current early stage, I consider this a valuable property of the biological anchors framework. It means that the framework can give us timelines estimates that aren't simply rehashes of intuitions about whether it feels as though transformative AI is approaching.[13](#fn13) I also think it's encouraging that even with all the guesswork, the testable "predictions" the framework makes as of today seem reasonable (see previous section). **The framework provides a way of thinking about how it could be simultaneously true that (a) the AI systems of a decade ago didn't seem very impressive at all; (b) the AI systems of today can do many impressive things but still feel far short of what humans are able to do; (c) the next few decades - or even the next 15 years - could easily see the development of transformative AI.** Additionally, I think it's worth noting a **couple of high-level points** from Bio Anchors that **don't depend on quite so many estimates and assumptions:** * In the coming decade or so, we're likely to see - for the first time - AI models with comparable "size" to the human brain. * If AI models continue to become larger and more efficient at the rates that Bio Anchors estimates, it will probably become **affordable this century to hit some pretty extreme milestones - the "high end" of what Bio Anchors thinks might be necessary.** These are hard to summarize, but see the "long horizon neural net" and "evolution anchor" frameworks in the report. * One way of thinking about this is that the next century will likely see us go from "not enough compute to run a human-sized model at all" to "extremely plentiful compute, as much as even quite conservative estimates of what we might need." Compute isn't the only factor in AI progress, but to the extent other factors (algorithms, training processes) became the new bottlenecks, there will likely be powerful incentives (and multiple decades) to resolve them. A final advantage of Bio Anchors is that we can continue to watch AI progress over time, and compare what we see to the report's framework. For example, we can watch for: * Whether there are some tasks that just can't be learned, even with plenty of trial and error - or whether some tasks require amounts of training very different from what the report estimates. 
* How AI models' capabilities compare to those of animals that we are currently modeling as "similarly sized." If AI models seem more capable than such animals, we may be overestimating how large a model we would need to be in order to e.g. automate science. If they seem less capable, we may be underestimating it. * How hardware and software are progressing, and whether AI models are getting bigger at the rate the report currently projects. The next piece will summarize all of the different analyses so far about transformative AI timelines. It will then discuss a remaining reservation: that there is no robust expert consensus on this topic. **Next in series:** [AI Timelines: Where the Arguments, and the "Experts," Stand](https://www.cold-takes.com/where-ai-forecasting-stands-today/)

---

Footnotes
---------

1. Of course, the answer could be "A kajillion years from now" or "Never." [↩](#fnref1)
2. For transparency, note that this is an [Open Philanthropy](https://www.openphilanthropy.org) analysis, and I am co-CEO of Open Philanthropy. [↩](#fnref2)
3. I (like Bio Anchors) generally consider the synapse count more important than the neuron count, for reasons I won't go into here. [↩](#fnref3)
4. [Wikipedia](https://en.wikipedia.org/wiki/GPT-3): "GPT-3's full version has a capacity of 175 billion machine learning parameters ... Before the release of GPT-3, the largest language model was Microsoft's Turing NLG, introduced in February 2020, with a capacity of 17 billion parameters." Wikipedia doesn't state this, but I don't believe there are publicly known AI models larger than these language models (with the exception of "[mixture-of-experts models](https://arxiv.org/abs/1701.06538)" that I think we should disregard for these purposes, for reasons I won't go into here). [Wikipedia estimates](https://en.wikipedia.org/wiki/List_of_animals_by_number_of_neurons) about 1 trillion synapses for a house mouse's brain; Bio Anchors's methodology for brain comparisons (based on [Brain Computation](https://www.openphilanthropy.org/brain-computation-report)) essentially equates synapses to parameters. [↩](#fnref4)
5. Bio Anchors estimates about 100 trillion parameters for the human brain, based on the fact that it has about 100 trillion synapses. [↩](#fnref5)
6. As noted above, the 10x is to leave some space for the idea that "digital brains" might be less efficient than human brains. See [this section](https://docs.google.com/document/d/1IJ6Sr-gPeXdSJugFulwIpvavc0atjHGM82QjIfUSBGQ/edit#heading=h.z8ucahktj3ug) of the report. [↩](#fnref6)
7. For example, one approach hypothesizes that training could be made cheaper by "meta-learning," discussed above; another approach hypothesizes that in order to produce transformative AI, one would need to do about as many computations as all animals in history combined, in order to re-create the progress that was made by natural selection. [↩](#fnref7)
8. See charts from the early sections of the [2021 AI Index Report](https://aiindex.stanford.edu/wp-content/uploads/2021/03/2021-AI-Index-Report_Master.pdf), for example. [↩](#fnref8)
9. See [this section](https://docs.google.com/document/d/1IJ6Sr-gPeXdSJugFulwIpvavc0atjHGM82QjIfUSBGQ/edit#heading=h.y045l51rb826). [↩](#fnref9)
10. For a collection of links to GPT-3 demos, see [this post](https://www.lesswrong.com/posts/6Hee7w2paEzHsD6mn/collection-of-gpt-3-results). [↩](#fnref10)
11. In fact, he estimates that AI systems appear to use about 1000x less compute, which would match the above point in terms of suggesting that AI systems might be more efficient than animal/human brains and that the Bio Anchors estimates might be too conservative. However, he doesn't address the fact that bees arguably perform a more diverse set of tasks than the AI systems they're being compared to. [↩](#fnref11)
12. Other than the "semi-informative priors" method discussed [previously](https://www.cold-takes.com/forecasting-transformative-ai-whats-the-burden-of-proof/). [↩](#fnref12)
13. Of course, this isn't to say the estimates are *completely independent* of intuitions - intuitions are likely to color our choices of estimates for many of the difficult-to-estimate figures. But the ability to scrutinize and debate each estimate separately is helpful here. [↩](#fnref13)
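To make the "Estimating the expense" logic above concrete, here is a deliberately toy sketch of the crossover calculation. Every number is an illustrative placeholder (only the 1e18 starting cost echoes the post's "million trillion dollars" ballpark); the report itself models hardware, software, and spending trajectories with far more care:

```python
# Toy crossover: training cost falls, affordable spending rises (capped);
# the "forecast" is the year they meet. Placeholder rates, not Bio Anchors'.
TRAINING_COST_2020 = 1e18    # ~"a million trillion dollars" (from the post)
MAX_SPEND_2020 = 1e8         # assumed: ~$100M of willingness-to-spend
COST_HALVING_YEARS = 2.5     # assumed hardware + software progress
SPEND_DOUBLING_YEARS = 3.0   # assumed growth in what labs can spend
SPEND_CAP = 1e12             # the post mentions "$1 trillion and beyond"

year, cost, spend = 2020, TRAINING_COST_2020, MAX_SPEND_2020
while spend < cost:
    year += 1
    cost /= 2 ** (1 / COST_HALVING_YEARS)
    spend = min(spend * 2 ** (1 / SPEND_DOUBLING_YEARS), SPEND_CAP)
print(f"Toy crossover year: {year}")  # moves a lot as the placeholders move
```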
9ce01d5f-128a-43f6-93c1-f8d2d5b97381
LDJnr/LessWrong-Amplify-Instruct
LessWrong
"What is true of one apple may not be true of another apple; thus more can be said about a single apple than about all the apples in the world.—“The Twelve Virtues of Rationality”Within their own professions, people grasp the importance of narrowness; a car mechanic knows the difference between a carburetor and a radiator, and would not think of them both as “car parts.” A hunter-gatherer knows the difference between a lion and a panther. A janitor does not wipe the floor with window cleaner, even if the bottles look similar to one who has not mastered the art.Outside their own professions, people often commit the misstep of trying to broaden a word as widely as possible, to cover as much territory as possible. Is it not more glorious, more wise, more impressive, to talk about all the apples in the world? How much loftier it must be to explain human thought in general, without being distracted by smaller questions, such as how humans invent techniques for solving a Rubik’s Cube. Indeed, it scarcely seems necessary to consider specific questions at all; isn’t a general theory a worthy enough accomplishment on its own?It is the way of the curious to lift up one pebble from among a million pebbles on the shore, and see something new about it, something interesting, something different. You call these pebbles “diamonds,” and ask what might be special about them—what inner qualities they might have in common, beyond the glitter you first noticed. And then someone else comes along and says: “Why not call this pebble a diamond too? And this one, and this one?” They are enthusiastic, and they mean well. For it seems undemocratic and exclusionary and elitist and unholistic to call some pebbles “diamonds,” and others not. It seems . . . narrow-minded . . . if you’ll pardon the phrase. Hardly open, hardly embracing, hardly communal.You might think it poetic, to give one word many meanings, and thereby spread shades of connotation all around. But even poets, if they are good poets, must learn to see the world precisely. It is not enough to compare love to a flower. Hot jealous unconsummated love is not the same as the love of a couple married for decades. If you need a flower to symbolize jealous love, you must go into the garden, and look, and make subtle distinctions—find a flower with a heady scent, and a bright color, and thorns. Even if your intent is to shade meanings and cast connotations, you must keep precise track of exactly which meanings you shade and connote.It is a necessary part of the rationalist’s art—or even the poet’s art!—to focus narrowly on unusual pebbles which possess some special quality. And look at the details which those pebbles—and those pebbles alone!—share among each other. This is not a sin.It is perfectly all right for modern evolutionary biologists to explain just the patterns of living creatures, and not the “evolution” of stars or the “evolution” of technology. Alas, some unfortunate souls use the same word “evolution” to cover the naturally selected patterns of replicating life, and the strictly accidental structure of stars, and the intelligently configured structure of technology. And as we all know, if people use the same word, it must all be the same thing. These biologists must just be too dumb to see the connections.And what could be more virtuous than seeing connections? 
Surely the wisest of all human beings are the New Age gurus who say, “Everything is connected to everything else.” If you ever say this aloud, you should pause, so that everyone can absorb the sheer shock of this Deep Wisdom.There is a trivial mapping between a graph and its complement. A fully connected graph, with an edge between every two vertices, conveys the same amount of information as a graph with no edges at all. The important graphs are the ones where some things are not connected to some other things.When the unenlightened ones try to be profound, they draw endless verbal comparisons between this topic, and that topic, which is like this, which is like that; until their graph is fully connected and also totally useless. The remedy is specific knowledge and in-depth study. When you understand things in detail, you can see how they are not alike, and start enthusiastically subtracting edges off your graph.Likewise, the important categories are the ones that do not contain everything in the universe. Good hypotheses can only explain some possible outcomes, and not others.It was perfectly all right for Isaac Newton to explain just gravity, just the way things fall down—and how planets orbit the Sun, and how the Moon generates the tides—but not the role of money in human society or how the heart pumps blood. Sneering at narrowness is rather reminiscent of ancient Greeks who thought that going out and actually looking at things was manual labor, and manual labor was for slaves.As Plato put it in The Republic, Book VII:If anyone should throw back his head and learn something by staring at the varied patterns on a ceiling, apparently you would think that he was contemplating with his reason, when he was only staring with his eyes . . . I cannot but believe that no study makes the soul look on high except that which is concerned with real being and the unseen. Whether he gape and stare upwards, or shut his mouth and stare downwards, if it be things of the senses that he tries to learn something about, I declare he never could learn, for none of these things admit of knowledge: I say his soul is looking down, not up, even if he is floating on his back on land or on sea!Many today make a similar mistake, and think that narrow concepts are as lowly and unlofty and unphilosophical as, say, going out and looking at things—an endeavor only suited to the underclass. But rationalists—and also poets—need narrow words to express precise thoughts; they need categories that include only some things, and exclude others. There’s nothing wrong with focusing your mind, narrowing your categories, excluding possibilities, and sharpening your propositions. Really, there isn’t! If you make your words too broad, you end up with something that isn’t true and doesn’t even make good poetry.And DON’T EVEN GET ME STARTED on people who think Wikipedia is an “Artificial Intelligence,” the invention of LSD was a “Singularity,” or that corporations are “superintelligent”!" The text above is a post found on LessWrong. Elaborate on the concept of "narrowness" in the context of rationality and its importance in the process of gaining specific knowledge. The concept of "narrowness" in the context of rationality refers to the precision and specificity in understanding, interpreting, and discussing a particular subject or concept. It is about delving deeply into a specific area, exploring its intricacies, and understanding its unique characteristics, rather than making broad, generalized statements. 
The importance of narrowness in gaining specific knowledge is multifold. Firstly, it allows for a deeper, more detailed understanding of a subject. By focusing on a specific area, one can explore it in greater depth, understand its nuances, and gain a more comprehensive understanding that wouldn't be possible with a broader approach. Secondly, narrowness helps in avoiding overgeneralization. Broad, sweeping statements often fail to account for the diversity and complexity of reality. By being narrow and specific, one can ensure that their understanding and statements are accurate and reflective of the actual state of affairs. Thirdly, narrowness aids in effective communication and discussion. When ideas and concepts are discussed with precision and specificity, it facilitates clearer understanding and more productive conversations. It reduces the chances of misunderstanding and misinterpretation. Lastly, narrowness is essential for scientific inquiry and advancement of knowledge. Science progresses by investigating specific hypotheses, conducting detailed experiments, and drawing precise conclusions. A broad, generalized approach wouldn't allow for such rigorous investigation and could lead to erroneous conclusions. In conclusion, while it may seem counterintuitive, embracing narrowness in the pursuit of knowledge can lead to a deeper, more accurate, and more productive understanding of the world. It is an essential aspect of rational thinking and scientific inquiry. Drawing from the concept of narrowness in rationality, how might this principle be applied to solve complex problems in various fields such as technology, economics, and psychology? Discuss the potential challenges and benefits. Applying the principle of narrowness in rationality to complex problems across various fields involves breaking down these problems into smaller, more manageable parts and addressing each one with precision and specificity. This approach is often referred to as "divide and conquer" or "decomposition" in problem-solving literature. In technology, for instance, a complex software development project can be divided into smaller tasks such as designing, coding, testing, and deployment. Each task can be further broken down, and by focusing narrowly on each sub-task, the team can ensure that each part is well-executed before moving on to the next. This approach can lead to the creation of high-quality, reliable software. In economics, narrowness can be applied in the analysis of complex economic phenomena by focusing on specific variables or aspects. For instance, to understand the impact of a policy change, economists might focus on specific sectors, demographic groups, or time periods. By doing so, they can gain a deeper understanding of the effects and dynamics of the policy change. In psychology, narrowness can be used in research and therapy. For instance, a researcher studying the effects of stress might focus on a specific type of stressor, a specific population, or a specific stress response. Similarly, a therapist might focus on a specific issue or behavior pattern when working with a client. However, the application of narrowness in these fields also comes with potential challenges. One challenge is the risk of losing sight of the bigger picture. By focusing too narrowly on specific aspects, one might miss important connections and interactions between different parts of the problem. Another challenge is that it can be difficult to determine the appropriate level of narrowness. 
Too narrow, and the approach might become overly complicated and inefficient; too broad, and the approach might lack depth and precision. Despite these challenges, the benefits of applying narrowness in problem-solving are significant. It allows for a more manageable and systematic approach to complex problems, facilitates deeper understanding, and can lead to more effective solutions. Moreover, it encourages critical thinking and attention to detail, which are valuable skills in any field.
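A side note on the quoted essay's graph-theoretic aside: the claim that a fully connected graph carries no more usable information than an empty one follows from complementation being a trivial, reversible swap. A minimal sketch in plain Python (no graph library assumed):

```python
from itertools import combinations

def complement(n, edges):
    """Complement of a simple graph on vertices 0..n-1."""
    all_edges = {frozenset(e) for e in combinations(range(n), 2)}
    return all_edges - edges

n = 5
complete = {frozenset(e) for e in combinations(range(n), 2)}
empty = set()

# The complete and empty graphs are each other's complements, and
# complementing twice returns the original graph -- so a graph and its
# complement encode exactly the same information.
assert complement(n, complete) == empty
assert complement(n, empty) == complete

g = {frozenset({0, 1}), frozenset({2, 3})}
assert complement(n, complement(n, g)) == g
```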
8e773476-09fc-4d36-8b83-32f4e8995cda
trentmkelly/LessWrong-43k
LessWrong
On the FLI Open Letter

The Future of Life Institute (FLI) recently put out an open letter, calling on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.

There was a great flurry of responses, across the spectrum. Many were for it. Many others were against it. Some said they signed, some said they decided not to. Some gave reasons, some did not. Some expressed concerns it would do harm, some said it would do nothing. There were some concerns about fake signatures, leading to a pause while that was addressed, which might have been related to the letter being released slightly earlier than intended.

Eliezer Yudkowsky put out quite the letter in Time magazine. In it, he says the FLI letter discussed in this post is a step in the right direction and he is glad people are signing it, but he will not sign because he does not think it goes far enough, that a 6 month pause is woefully insufficient, and he calls for… a lot more. I will address that letter more in a future post. I’m choosing to do this one first for speed premium. As much as the world is trying to stop us from saying it these days… one thing at a time.

The call is getting serious play. Here is Fox News, saying ‘Democrats and Republicans coalesce around calls to regulate AI development: ‘Congress has to engage.’

As per the position he staked out a few days prior and that I respond to here, Tyler Cowen is very opposed to a pause, and wasted no time amplifying every voice available in the opposing camp, handily ensuring I did not miss any.

Structure of this post is:

1. I Wrote a Letter to the Postman: Reproduces the letter in full.
2. You Know Those are Different, Right?: Conflation of x-risk vs. safety.
3. The Six Month Pause: What it can and can’t do.
4. Engage Safety Protocols: What would be real protocols?
5. Burden of Proof: The letter’s threshold for approval seems hard to meet.
6. New Regulatory Authority: The call for one.
7. Overall Take: I am net hap
8729300d-fd18-4d16-a6e3-d014635b94a7
trentmkelly/LessWrong-43k
LessWrong
AI as a powerful meme, via CGP Grey In episode 158 of the Cortex podcast, CGP Grey gives their high-level reason why they are worried about AI. My one-line summary: AI should not be compared to nuclear weapons but instead to biological weapons or memes, which evolve under the implicit evolutionary pressures that exist, leading to AIs that are good at surviving and replicating.  The perspective is likely known by many in the community already, but I had not heard it before. Interestingly, there have actually been experiments where they just put random strings of code in an environment where they interact, and self-replicating code appeared. See the Cognitive Revolution podcast on 'Computational Life: How Self-Replicators Arise from Randomness', with Google researchers Ettore Randazzo and Luca Versari. I quote the relevant part of the podcast below, but I recommend listening because the emotion and delivery are impactful. It is from 1:22:00 onwards.

> To be explicit and not beat around the bush, when I try to think, “Oh, what is beyond this barrier, beyond which it might be impossible to predict?” it's like, well, if I’m just in Vegas and placing odds on this roulette wheel, almost all of those outcomes are extraordinarily bad for the human species. There are potentially paths where it goes well, but most are extremely bad for a whole bunch of reasons.
>
> I think of it like this: people who are concerned like me often analogize AI to something like building nuclear weapons. It’s like, “Ah, we’re building a thing that could be really dangerous.” But I just don’t think that’s the correct comparison, because a nuclear weapon is a tool. It's a tool like a hammer. It’s a very bad hammer, but it is fundamentally mechanical in a particular way.
>
> But the real difference, where do I disagree with people, where do other people disagree with me, is that I think a much more correct way to think about AI is to compare it to biological weaponry. You’re building a thing able to act in the world differently than
d244673b-9c87-4f76-8757-6edb92b50538
trentmkelly/LessWrong-43k
LessWrong
Decision Auctions aka "How to fairly assign chores, or decide who gets the last cookie" After moving in with my new roomies (Danny and Bethany of Beeminder), I discovered they have a fair and useful way of auctioning off joint decisions. It helps you figure out how much you value certain chores or activities, and it guarantees that these decisions are worked out in a fair way. They call it "yootling", and wrote more about it here. A quick example (Note: this only works if all participants are of the types of people who consider this sort of thing a Good Idea, and not A Grotesque Parody of Caring or whatnot):   Use Case: Who Picks up the Kids from Grandma's? D and B are both busy working, but it's time to pick up the kids from their grandparents house. They decide to yootle for it. B bids $100 (In a regular Normal Person exchange, this would be like saying "I'm elbows deep in code right now, and don't want to break flow. I'd really rather continue working right now, but of course I'll go if it's needed.") D bids $15 (In a regular Normal Person exchange this would be like saying "I don't mind too much, though I do have other things to do now...") So D "wins" the bid, and B pays him $15 to go get the kids from their grandma's. Of course.... it would be a pain in the butt to constantly be paying each other, so instead they have a 10% chance of paying 10x the amount, and a 90% chance to pay nothing, using a random number generator.   This is made easier by the fact that we have a bot to run this, but before that they would use the high-tech solution of Holding Up Fingers. > We may do this multiple times per day, whenever there’s a good that we have shared ownership of and one of us wants to offload their shares onto the other person. The goods can be anything, e.g. the last brownie, but they’re more often “bads” like who will get up in the middle of the night with a vomiting child, or who will book plane tickets for a trip. > > > We find this an elegant means of assigning loathed tasks. The person who minded least winds up doing the chore, but
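For concreteness, here is a minimal Python sketch of the mechanism as described; the function names are made up for illustration, and the bids are just the example above. The 10%-chance-of-10x settlement leaves the expected payment unchanged while avoiding constant small transfers.

```python
import random

def yootle(bids):
    """Auction off a chore: the lowest bidder does it and is paid their bid.

    bids: dict mapping person -> dollar value they'd accept to do the chore.
    Returns (doer, payer, nominal_payment).
    """
    doer = min(bids, key=bids.get)   # minds least, so does the chore
    payer = max(bids, key=bids.get)  # minds most, so pays
    return doer, payer, bids[doer]

def settle(nominal_payment, multiplier=10):
    """Probabilistic settlement: pay multiplier * amount with probability
    1/multiplier, else nothing. Expected value equals nominal_payment."""
    if random.random() < 1 / multiplier:
        return nominal_payment * multiplier
    return 0

# The example from the post: B bids $100, D bids $15.
doer, payer, amount = yootle({"B": 100, "D": 15})
print(f"{doer} does the chore; {payer} owes ${amount} nominally,")
print(f"settled this time as ${settle(amount)}")
```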
7f8efe81-9a04-4d64-b029-519fd0a3b04a
trentmkelly/LessWrong-43k
LessWrong
Join the interpretability research hackathon TLDR; Participate online or in-person in London, Aarhus, and Tallinn on the weekend 11th to 13th November in a fun and intense AI safety research hackathon focused on interpretability research. We invite mid-career professionals to join but it is open for everyone (also no-coders) and we will create starter code templates to help you kickstart your team’s projects. Join here. Below is an FAQ-style summary of what you can expect (navigate it with the table of contents on the left). What is it? The Interpretability Hackathon is a weekend-long event where you participate in teams (1-6) to create interesting and fun research. You submit a PDF report that summarizes and discusses your findings in the context of AI safety. These reports will be judged by our panel and you can win up to $1,000! It runs from 11th Nov to 13th Nov (in two weeks) and you’re welcome to join for a part of it (see further down). We get an interesting talk by an expert in the field and hear more about the topic. Everyone can participate and we encourage you to join especially if you’re considering AI safety from another career. We prepare templates for you to start out your projects and you’ll be surprised what you can accomplish in just a weekend – especially with your new-found friends! Read more about how to join, what you can expect, the schedule, and what previous participants have said about being part of the hackathon below. Where can I join? You can join the event both in-person and online but everyone needs to make an account and join the jam on the itch.io page. The in-person locations include the LEAH offices in London right by UCL, Imperial, King’s College, and London School of Economics (link); Aarhus University in Aarhus, Denmark (link), and Tallinn, Estonia (link). The virtual event space is on GatherTown (link). Everyone should join the Discord to ask questions, see updates and announcements, find team members, and more. Join here. What are some examples of interpr
1ef8a299-32ce-4e46-be61-f36e93de0eb1
trentmkelly/LessWrong-43k
LessWrong
Straw-Steelmanning I've noticed that when people are asked to "Steelman" a position, they sometimes instead do what I would call "Straw-Steelmanning". Someone can also straw-steelman without having been asked to steelman or having said that they would do so.  What is straw-steelmanning? Assume someone makes an argument X for a claim C, and you are arguing against X.
* Straw-manning (bad): You replace X with a weaker argument Y and argue against that, pretending as if you have thereby refuted X.
* Steel-manning (good): You replace X with a stronger argument Y which still contains the core of X and argue against that, thereby actually refuting X. (The term can also be used in a context where you are not actually arguing against the claim C).
* Straw-steelmanning (bad): You replace C with an entirely different claim D and make an argument Y for it which you consider to be stronger than X, pretending as if you no longer need to argue against C.
An example which I have noticed is something like the following:
* "Can you steelman the position that future AI systems will pose an existential risk?"
* "Well, while we should not take these Hollywood movie plots seriously, there are real social problems with AI that we have to deal with. AI will potentially cause massive inequality because entire industries will be automated and a small number of corporations will own the AI tools that facilitate that. AI engineers will earn large wages, while demand for other professions stagnates. Moreover, we need to worry about biases in AI systems, because [etc, proceeds to argue more]"
This is a straw-steelman, because they have
* Simply bypassed the original claim, replacing it with a different claim that they already agreed with.
* Proceeded to argue for that claim, ignoring the original.
5a850690-d8b1-4853-b688-2edc2a01d84a
StampyAI/alignment-research-dataset/lesswrong
LessWrong
Perspective Based Reasoning Could Absolve CDT I have been advocating perspective-based reasoning (PBR) as the solution to anthropic paradoxes for the past few years. Here I would like to explore its implication on the decision theory side, most notably how it could absolve the shortcomings of CDT. The concept of PBR is simple: treat perspective and accompanying indexicals as a fundamental part of basic logic, something that cannot be explained otherwise. e.g. you know this physical person is you, not because this person is logically significant or physically unique in some way, but because you can inherently tell that is the self. You simply know this since the subjective feeling is immediately linked to it. There's no other rationale.  Anthropics ---------- The defining feature of anthropic paradoxes is the use of perspective-based indexicals in question formulation while not attaching them to specific physical observers (or observer-moments). In The Presumptuous Philosopher: the evidence of "***my*** existence", in The Doomsday Argument: "the birth rank of ***me/our generation***" and in The Sleeping Beauty Problem: "the ***awakening today***" are all examples of this.  People tend to solve these questions with a physicalist view from nowhere, thus disregarding perspectives in reasoning. [Not able to treat the indexicals as primitive, they are arbitrarily transcoded to mean something else](https://www.lesswrong.com/posts/xHA7jHafbkxrYLJhx/why-am-i-me), e.g. treat them as random samples. That is what leads to paradoxes.  In contrast, these questions should be answered from the given perspective, the indexicals treated as primitively understood. So self-existence is presupposed prior knowledge, and the probability distribution of indexicals among some assumed reference class is ill-defined. PBR therefore would reject both the Doomsday Argument and the Presumptuous Philosopher. A Common Decision-Based Argument -------------------------------- It goes like this: if every observer in my situation makes the same decision as I do, then the group would be better off overall.  Therefore the probability that would lead me to such a decision is the right one. (What is considered "better", e.g. by measuring the average or sum of rewards, differs from camp to camp.) It is often used to argue for the validity of the probability distribution of self among a supposed reference class, e.g. "the probability that ***I*** am among the first 90% of human beings born into the world".  The supposed probability concerns the indexical ***I*** specifically; the corresponding decision should be about straightforward self-interest, e.g. how to maximize ***my*** reward. [However, there is no rational strategy for this objective](https://www.lesswrong.com/posts/heSbtt29bv5KRoyZa/the-first-person-perspective-is-not-a-random-sample). Instead, arguments of this type attempt to use the aggregate interest of a group defined by similar features. It removes the indexical component in decision-making even though it is attempting to justify an indexical probability. Unsurprisingly it generates answers that agree with the conventional non-perspective solutions, as they are based on the same mistake.  It is worth noting that such arguments use an acausal model: my decision does not affect the decisions nor the rewards of other observers in the supposed group. Whereas the straightforward self-interest objective is causal: my decision affects my reward and is evaluated with an objective concerning myself only.   
The Metaphysical Side --------------------- As much as I would like to avoid it, to fully clarify how PBR absolves CDT, the metaphysical side has to be explored. PBR regards perspectives as fundamental, which means they cannot be "explained away" or transcended in reasoning. It is therefore against view-from-nowhere physicalism and incompatible with LessWrong's favourite Many Worlds Interpretation.  [PBR won't consider science as a direct examination of the world's underlying truth](https://www.lesswrong.com/posts/XEWjGcFLjwNgSao8t/an-alternative-interpretation-of-physics). Any examination has to come from a predefined perspective of someone or something. A physical theory is "objective" not because it faithfully reflects the world's underlying truth, but because it can be used from a wide range of perspectives while remaining accurate in each case.  PBR suggests a physical analysis has to be performed from a predefined perspective. Exactly which thing's perspective is an exogenous input, i.e. "what thing is the self" is not a part that involves physics. From this predefined point of view, the world around it can be described based on interactions received by the self from its environment. PBR naturally leads to Copenhagen-ish interpretations, especially Relational Quantum Mechanics. It should be noted that in the above framework, because analyses are based on interactions from the environment, the "self" at the perspective center is not in the scope. In other words, physically studying something requires the perspective of another thing which interacts with it.  CDT's Bug Is Actually A Feature ------------------------------- The commonly regarded "flaw" is that CDT does not include the decision maker as part of its model. So its analysis of the world is never complete. Indeed, it would be a serious problem if one follows the conventional non-perspective/view-from-nowhere school of thought, as they would consider everything analyzable. Problems like Newcomb and Parfit's Hitchhiker cause trouble for CDT precisely by including analyses of the decision-maker.  However, according to PBR, a world model excluding the decision-maker is not a problem at all. In fact, it is exactly supposed to be so if the decision-maker is the self at the perspective center. In other words, if decisions are inherently first-person concepts. And there are good arguments for it.  ![](https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/izNiFpyWgqddTz34t/qhvdr15bd4xmjzsqlmoc)Suppose you study this Plinko board. By analyzing its structure and function you can deduce the outcomes for given inputs. You may even be able to predict the outcome with some accuracy. But in this situation, it is absurd to say the Plinko board "decides" to put the ball somewhere. There is no "decision-making" on its part in such an analysis. We can change the subject from a simple board to more complex machines like computers or brains, yet the argument remains. Therefore it is highly plausible that "decision-making" is an internal reflective concept limited to the self, where such analysis does not apply.   The Problems ------------ The Newcomb problem involves examining you and predicting the box(es) you would open. By putting you in the scope of analysis, this question is *not* formulated from your perspective. Therefore, there is no "decision" to be made on your part. 
The subject would simply take whatever box(es) it is supposed to take, much like the Plinko board would put the ball in whichever slot it is supposed to. Since there is no decision in the first place, it does not concern decision theories, nor can it be used against CDT.  Alternatively, if the question is meant to be a decision problem, it shall be presented from your perspective. It must not involve analyzing the decision-maker, and it thus recognizes the decision as unpredictable. Omega's accuracy could be understood as a statistical phenomenon, e.g. 90% of people 1-box and Omega always guesses one box. In that case, CDT's recommendation of 2-boxing is completely rational.  Those are two different problems based on different perspectives. They do not contradict each other nor pose problems for CDT. Troubles only arise if we reason non-perspectively and insist the analysis and decision be put into one framework by reasoning with a view from nowhere.  Just like some anthropic problems concern observers while some anthropic problems concern different moments of the same observer, Parfit's Hitchhiker involves chronological perspectives. When answering the driver's question of whether you would pay him later, you are not making a decision. Instead, you are trying to analyze and make a prediction. The subject of that analysis might be someone who would regard your current perspective as their past, and that analysis could involve simulating being in the later perspective and making a decision. Yet at that moment, you are not deciding whether to pay him but presenting the result of your forecast.  Alternatively, if the driver has already driven you to town, then from the current perspective, deciding not to pay him based on CDT would again be the rational decision. Mixing the two perspectives and pitting the analysis and decision against each other is what leads to paradoxes.  On a final note, CDT's compatibility with precommitment is also to be expected according to PBR. Precommitment moves the supposed "decision" from the part being analyzed/predicted to a part not being analyzed/predicted. It therefore enables decision and analysis to be considered together from a given perspective. Thus CDT would correctly conclude that, if precommitment is possible, deciding to 1-box and pay the driver is the better option.
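To make the statistical reading concrete, here is a small expected-value check in Python. The $1,000,000 and $1,000 payoffs are the standard Newcomb numbers, assumed here since the post does not state them; once the probability that the opaque box is full is fixed independently of your choice, two-boxing dominates at every value.

```python
MILLION, THOUSAND = 1_000_000, 1_000

def causal_ev(action, p_full):
    """Expected value when box contents are independent of your action."""
    ev = p_full * MILLION  # the opaque box, full or not, is already set
    if action == "two-box":
        ev += THOUSAND     # the transparent box is free money under CDT
    return ev

# Under the post's reading, Omega always predicts one-boxing and 90% of
# people one-box, so p_full = 1 for everyone; vary it anyway for comparison.
for p_full in (1.0, 0.9, 0.5):
    print(p_full, causal_ev("one-box", p_full), causal_ev("two-box", p_full))
```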
4860230b-2abf-4b5b-9662-07e0e766da32
trentmkelly/LessWrong-43k
LessWrong
Meetup : Rationality Meetup Vienna Discussion article for the meetup : Rationality Meetup Vienna WHEN: 14 May 2016 03:00:00PM (+0200) WHERE: Kaisermühlenstraße 24, Wien FB Event: https://www.facebook.com/events/272339816432545/ (join the Rationality Vienna Group to see it) Discussion article for the meetup : Rationality Meetup Vienna
79ff1e3c-0f60-4393-a5f2-83e9da5b6f79
trentmkelly/LessWrong-43k
LessWrong
Quantum Darwinism, social constructs, and the scientific method [Cross-posted from my blog https://www.pchvykov.com/blog] TL;DR: All three of these are centered around the idea of consensus reality – that reproducibility, or redundancy of consistent records, is what makes something "objectively true." Slight deviations from such consistency are what lead to non-classical effects in the case of Quantum Darwinism, create opinion dynamics, conflicts, and evolution of thought in the case of society, but are generally avoided in the case of the scientific method. In this essay I unpack these parallels, suggest the possibility of a unifying mathematical framework, and consider the consequences of admitting some non-reproducibility as a generalization of the scientific method. ---------------------------------------- For a few years now I’ve been thinking about parallels between three seemingly very different topics: Quantum Darwinism (a recently popular interpretation of QM = Quantum Mechanics), social constructs (like money, culture - interpersonal realities we create and live by), and the scientific method. I have not yet found a way to make these connections sufficiently rigorous to develop a proper theory, and so in this essay I want to work towards that by clarifying these ideas and their connections, and pointing out possible implications.   Quantum Darwinism We begin with a rough overview of Quantum Darwinism (QD) [Zurek09, wiki]. The core issue in Quantum Foundations research, as I see it, is to understand how the usual (unitary) QM dynamics of a universe wavefunction can ultimately give rise to the complexity of the observed world around us (see Carroll18, also Bohm's “implicate order”). If we consider the observer to be a non-special part of this universe wavefunction, then we are led to the many-worlds interpretation, where every possible scenario plays out in its own "branch" of the universe (at least mathematically). But how can we get branches, if all we have is a universe wavefunction, which is just a vector in some high-dimensio
640a5c6e-7f89-4a82-b1c4-1db92cc424fc
trentmkelly/LessWrong-43k
LessWrong
Logical Inductors that trust their limits Here is another open question related to Logical Inductors. I have not thought about it very long, so it might be easy. Does there exist a logical inductor {P_n} over PA such that for all ϕ:
1. PA proves that P_∞(ϕ) exists and is in [0,1], and
2. E_n(P_∞(ϕ)) ≂_n P_n(ϕ)?
---------------------------------------- Note that P_∞(ϕ) need not be computable, so this does not happen by default. For example, consider the logical inductor described in the paper with the extra condition that if ever the deductive state is discovered to be inconsistent, all probabilities are set to 0 forever. This clause will never be executed, since PA is consistent, but since this clause exists, PA can prove that P_∞(ϕ) exists. (PA proves that if PA is consistent, the limit exists using the proof in the paper, and if PA is inconsistent, the limit exists and equals 0.) However, this logical inductor will not satisfy the second property. Consider the sentence ⊤. P_n(⊤) will converge to 1, while E_n(P_∞(⊤)) will converge to the probability according to P_∞ that PA is consistent. (PA proves that if PA is consistent, the limit is 1 using the proof in the paper, and that if PA is inconsistent, the limit is 0.) If a Logical Inductor with the above property is found, there are many follow-up questions you could ask. Can you make an analogue of the self-trust property that works for P_∞? Does the above property imply that self-trust property? Is there some simple extra condition that could be added to the definition of a Logical Inductor that implies everything we could want about beliefs about P_∞? ---------------------------------------- A good place to start with this question might be to analyze a logical inductor that, in the spirit of the Demski prior, adds sentences to the deductive state only if they are propositionally consistent with all previous sentences. This way, PA will prove that the algorithm defines a logical inductor over some consistent propositional theory (even if it does not know if
4a3af4a2-a6d2-48fe-9cc1-c696d778c7e1
trentmkelly/LessWrong-43k
LessWrong
Is there a LessWrong-adjacent place to hire freelancers/seek freelance work? I've posted a similar question before: https://www.lesswrong.com/posts/YyEiPQH8snk4h8BfK/who-is-available-for-contract-work-a-la-the-hacker-news-who   I'm wondering if there's a place that has a decently high volume (10+ a month) of postings/applicants.   I'm hiring a few freelancers to do AI-futures research, forecasting the development and impact of AI (like the sort of stuff EpochAI does), and am wondering where the best places to post those freelance roles would be.   Any suggestions?
d37e8cf2-770b-4f23-a67f-135188b85895
StampyAI/alignment-research-dataset/alignmentforum
Alignment Forum
Positive values seem more robust and lasting than prohibitions *Originally* [*a shortform comment*](https://www.lesswrong.com/posts/dqSwccGTWyBgxrR58/turntrout-s-shortform-feed?commentId=cMs3hAyfsdLr7YRv3)*.* Imagine we train an AI on realistic situations where it can kill people, and penalize it when it does so. Suppose that we successfully instill a strong and widely activated "If going to kill people, then don't" value shard.  Even assuming this much, the situation seems fragile. See, many value shards are self-chaining. In [The shard theory of human values](https://www.lesswrong.com/posts/iCfdcxiyr2Kj8m8mT/the-shard-theory-of-human-values), I wrote about how: 1. A baby learns "IF juice in front of me, THEN drink", 2. The baby is later near juice, and then turns to see it, activating the learned "reflex" heuristic, learning to turn around and look at juice when the juice is nearby, 3. The baby is later far from juice, and bumbles around until they're near the juice, whereupon she drinks the juice via the existing heuristics. This teaches "navigate to juice when you know it's nearby." 4. Eventually this develops into a learned planning algorithm incorporating multiple value shards (e.g. juice and friends) so as to produce a single locally coherent plan. 5. ... The juice shard *chains into itself,* as its outputs cause the learning process to further reinforce and generalize the juice-shard. This shard reinforces itself across time and thought-steps.  But a "don't kill" shard seems like it should remain... stubby? Primitive? The "don't kill" shard can't self-chain into *not* doing something. If you're going to kill someone, and then don't because of the don't-kill shard, and that avoids predicted negative reward... Then maybe the "don't kill" shard gets reinforced and generalized a bit because it avoided negative reward (and so reward was higher than predicted, which I think would trigger e.g. a reinforcement event in people).  But—on my current guesses and intuitions[[1]](#fndxn4tejah59)—that shard doesn't become more sophisticated, it doesn't [become reflective](https://www.lesswrong.com/posts/k4AQqboXz8iE5TNXK/a-shot-at-the-diamond-alignment-problem#The_agent_becomes_reflective), it doesn't "agentically participate" in the internal shard politics (e.g. the agent's "meta-ethics", deciding what kind of agent it "wants to become"). Other parts of the agent *want things*, they want paperclips or whatever, and that's harder to do if the agent isn't allowed to kill anyone.  Crucially, the no-killing injunction can probably be steered around by the agent's other values. While the obvious route of lesioning the no-killing shard might be reflectively-predicted by the world model to lead to more murder, and therefore bid against *by* the no-killing shard... There are probably ways to get around this obstacle. Other value shards (e.g. paperclips and cow-breeding) might bid up lesioning plans which are optimized so as to not make the killing a salient plan feature to the reflective world-model, and thus, the plan does not activate the no-killing shard. This line of argumentation is a point in favor of the following: Don't embed a shard which doesn't want to kill. Make a shard which wants to protect / save / help people. *That* can chain into itself across time. --- Other points: * Deontology seems most durable to me when it can be justified on consequentialist grounds. Perhaps this is one mechanistic reason why. 
* This is one point in favor of the "convergent consequentialism" hypothesis, in some form. * I think that people are not usually defined by negative values (e.g. "don't kill"), but by positives, and perhaps this is important. 1. **[^](#fnrefdxn4tejah59)**Which I won't actually detail right now.
7184c28a-d920-4c2f-9790-c7a9d7f28e4d
trentmkelly/LessWrong-43k
LessWrong
You are invited to apply to the Guild of the ROSE The Guild of the R.O.S.E. is an organization dedicated to improving the world, created by rationalists, for rationalists. We provide a support structure of a small Cohort for you to join, and a wider Guild to learn from. As a member you will attend weekly classes using a flipped classroom model, in which readings and lectures are attended to asynchronously, while the class is hands-on activities and discussion. There will also be competitions with other Cohorts, and other extracurricular activities such as raising money for charity and Fermi Estimation challenges. Our first Beta session will take place on September 1st. Our courses will begin with a series of classes on ‘Quantified Self-Assessment’, ‘Practical Decision Theory’, ‘Context and Communication’, and ‘Practical Social Networking’. Most future courses will broadly fall under the skills of Communication, Critical Thinking, and Meta-Skills (how to most effectively acquire skills). Unlike the Alpha phase of the “Guild of Servants” (our previous moniker), the Beta phase has no set end date. We will continue the classes until we stress test the limits of our resources to our satisfaction. In addition to the application process on our website, accepted attendees will be contacted regarding a modest monthly fee for Guild Dues on Patreon. If you are ready to join a community of people striving to help each other bloom, join the ROSE today at guildoftherose.org. We are also holding a public online mixer on August 28th.    
26be5d72-0864-4f79-a15a-c50fc6b1154e
StampyAI/alignment-research-dataset/blogs
Blogs
Communicating Clearly

Communicating Clearly
---------------------

it is important to avoid, as i've noticed conspiracy theorists notably often do, attributing special importance or personal/alternate meanings to commonly used words. the goal of language is in general to communicate with the recipients. i've taken, increasingly consciously, a couple of steps to attempt to communicate my ideas clearly, especially when the recipients are either unknown or varied. they are:
* to avoid words with unclear meaning, such that people can attribute anything they want to it. this includes words like "consciousness", "free will", "left-wing" and "right-wing", "conservative", even in some contexts the notions of "deserving" or "existing". if a word is highly contested, it's just better to use a word that more unambiguously means what you intend to convey; even if it entails [making up your own](word-report-2.html) (because at least then they can ask, and the word isn't used already).
* to merge together words that are similar enough. i've come to use interchangeably words in each of the following pairs: "communism" and "socialism", "ethics" and "morality", "free markets" and "capitalism", "freedom" and "liberty", and others. when everyone has their own idea of what the difference is between two words, it's probably a good idea to just consider them synonyms in general; at most synonyms with different connotations.
* having personal definitions for some terms that are more precise than the common definition; this may seem unintuitive as it makes the word map to not exactly the same concept as most people use it to mean, but if the precise meaning i choose is reasonably a subset of the general usage and if the word is understood to be able to mean a variety of different things to different people ([such as "freedom"](defining-freedom.html)) then it can be a good idea to lay out, or even try to figure out on the spot, sufficiently formalized definitions for those terms.
* to go out of my way to mention that i'm doing the three things above. as soon as "consciousness" enters a discussion, i make it clear that i'd rather the person use a more precise term or description because we probably have vastly different ideas of what that word could mean.
(and of course, if you go out to argue, don't forget to carry your [newton's flaming laser sword](https://en.wikipedia.org/wiki/Newton's_Flaming_Laser_Sword) with you)
dc579511-925e-4b6e-8ccf-e0ff4b5174a1
trentmkelly/LessWrong-43k
LessWrong
Do Sparse Autoencoders (SAEs) transfer across base and finetuned language models? This is a project submission post for the AI Safety Fundamentals course from BlueDot Impact. Therefore, some of its sections are intended to be beginner-friendly and overly verbose for familiar readers (mainly the Introduction section) and may freely be skipped.

TLDR (Executive Summary)

* We explored whether Sparse Autoencoders (SAEs) can effectively transfer from base language models to their finetuned counterparts, focusing on two base models: Gemma-2b and Mistral-7B-V0.1 (we tested finetuned versions for coding and mathematics respectively).
* In particular, we split our analysis into three steps:
1. We analysed the similarity (Cosine and Euclidean Distance) of the residual activations, which was highly correlated with the resulting transferability of the SAEs for the two models.
2. We computed several performance metrics (L0 Loss, Reconstruction CE Loss, Variance Explained) of the base SAEs on the fine-tuned models. Almost all metrics agreed on a significant degradation of the SAE performance for the Gemma-2b model, and remained within a reasonable range for the Mistral-7B model, indicating a much better transferability.
3. We took a further step by operationalizing the idea of transferability of SAEs from base models to fine-tuned models by applying an approach from Towards Monosemanticity for studying feature universality through feature activation similarity and feature logit similarity. These similarity scores were mostly consistent with the results from the previous step, albeit with one caveat for the Gemma-2b model, suggesting that some SAE features may still transfer even if the overall SAE performance is poor for the finetuned model.
* Overall, our results agree with previous work that studied Instruct models. That is, SAE transferability seems to be model-dependent and sensitive to the finetuning process.
* We make our code repository public to facilitate future work in this direction.

1. Introduction and motivation

1.1 What are SAEs
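As a reference for the metrics named in the summary above, here is a minimal NumPy sketch of how one might compute them; the function names and array shapes are illustrative assumptions, not the authors' actual code.

```python
import numpy as np

def cosine_similarity(base_acts, ft_acts):
    """Mean cosine similarity between paired activation vectors.

    base_acts, ft_acts: arrays of shape (n_tokens, d_model) from the same
    prompts, taken at the same residual-stream site in each model.
    """
    num = (base_acts * ft_acts).sum(axis=-1)
    denom = np.linalg.norm(base_acts, axis=-1) * np.linalg.norm(ft_acts, axis=-1)
    return (num / denom).mean()

def variance_explained(acts, reconstructions):
    """Fraction of activation variance captured by the SAE reconstructions."""
    resid = ((acts - reconstructions) ** 2).sum()
    total = ((acts - acts.mean(axis=0)) ** 2).sum()
    return 1.0 - resid / total

def l0_loss(feature_acts):
    """Average number of active SAE features per token."""
    return (feature_acts != 0).sum(axis=-1).mean()

# Example shapes: activations from 1,000 tokens of a hypothetical d_model=2048
# model, with the finetuned activations a small perturbation of the base ones.
base = np.random.randn(1000, 2048)
ft = base + 0.1 * np.random.randn(1000, 2048)
print(cosine_similarity(base, ft))
```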
05faae48-f492-4301-84f4-431eae046c91
trentmkelly/LessWrong-43k
LessWrong
Meetup : Israel Less Wrong Meetup: Brains and Biology Discussion article for the meetup : Israel Less Wrong Meetup: Brains and Biology WHEN: 23 October 2014 07:00:00PM (+0300) WHERE: Google Tel-Aviv We're going to have a meetup on Thursday, October 23rd at Google Israel's offices, Electra Tower, 98 Yigal Alon st., Tel Aviv. We will have a two-part talk on brains and biology. In the first part, Dan will talk about the biology behind brain plasticity. In the second part, Liran will follow up with existing brain-machine interfaces. And then there's free discussion. The audience takes it away to wherever they (we) like. We'll meet at the 29th floor of the building at 19:00. If you arrive and can't find your way around, call Anatoly who's hosting us at 054-245-1060. Email at avorobey@gmail.com also works. See you there! Discussion article for the meetup : Israel Less Wrong Meetup: Brains and Biology
98ab2ce1-7496-4bea-b172-2e1e5a2c8208
trentmkelly/LessWrong-43k
LessWrong
Being at peace with Doom You can be at peace even when thinking the world is doomed. And while at peace you can still work against that Doom, even while being aware that nothing you do will make a difference. I believe there are states of mind like this that can be inhabited by humans. Here I am not going to argue for imminent doom, or that nothing that you do matters. Rather, I want to point out that even when you believe in the dire circumstance of imminent unpreventable doom, it is possible to be at peace, even while working hard against the doom. Even while believing this to be futile. This is a possible state of mind for a human being. And if it is possible to be at peace and work hard even in this dire circumstance, it should be possible in any less dire circumstance too. ---------------------------------------- There are many games about how long you can survive, e.g. Dawn of War 2: The Last Stand, Serious Sam survival mode, and Project Zomboid. The very nature of these games is that you will soon die. And there is no saving. The difficulty will increase more and more until at some point you will get crushed. But there are loads of people playing these games. Nothing about the impossibility of achieving victory seems to detract from the fun you can have. Would this really change if these games couldn't be restarted? There is also the game You Only Live Once that you can only play once. Do people not play these games? Do people not try hard when playing these games? No. To be fair, there is a big difference between AI doom and these games. In these games you can make visible progress. The natural way to define success is to ask the questions: How long did you survive? Did you survive longer than last time? This is where death with dignity and Duncan's advice come from, as far as I can tell. It's about redefining success as making as much progress as possible towards getting a good outcome, instead of directly aiming for a good outcome. Aiming to survive forever in Dawn
b870af30-35ff-4723-b669-b65d03e60500
trentmkelly/LessWrong-43k
LessWrong
POLL: Reductionism Since there is no handy tool to create polls on LW, please post comments on your position. As which of the following would you identify yourself? (I am not good at rationalist taboo, thus please excuse me for ambiguous terms.) Strong ontological reductionist See definition on Wikipedia. Someone who believes that mental phenomena can be fully reduced to physics and that physics can be fully reduced to mathematics. That is, desires and electrons don't have any fundamental qualities, but are in the end mathematical objects. And nothing exists outside the mathematical realm. Weak ontological reductionist Someone who believes that mental phenomena don't have any qualities outside the domain of physics. Every aspect of mental phenomena can be fully reduced to physical phenomena. But physical phenomena are not necessarily mathematical objects. Strong scientific reductionist Someone who believes that quantum mechanics is wrong and Laplace's demon can exist in principle (if unrestricted by physical limitations).  Weak scientific reductionist Someone who concedes that it is impossible in principle to predict complicated physical systems, but that the concepts and theories in chemistry and biology are mere approximations and simplifications of complicated physical computations to sidestep the (faster-than-)exponential wall. That is, chemical and biological models are not fundamental, but are reducible to physical theories (if we had the theoretical computational power to simulate the models).   Please also comment if you are not a reductionist and explain what kind of reductionist you are not.
23f567b3-bdad-4dc4-b938-b92ae84d818b
StampyAI/alignment-research-dataset/alignmentforum
Alignment Forum
Extraction of human preferences 👨→🤖

**Introduction**
================

Developing safe and beneficial reinforcement learning (RL) agents requires making them aligned with human preferences. An RL agent trained to fulfil any objective in the real world will probably have to learn human preferences in order to do well. This is because humans live in the real world, so the RL agent will have to take human preferences into account as it optimizes its objective. We propose to first train an RL agent on an objective in the real world so that it learns human preferences as it is being trained on that real-world objective, and then use the agent’s understanding of human preferences to build a better reward function.

We build upon the work of [Christiano et al. (2017)](https://arxiv.org/abs/1706.03741), where they trained a human preference predictor as the reward signal. **The preference predictor was trained on environment observations** to give a high reward for states where the human preferences were satisfied and a low reward for states where the human preferences were not satisfied:

![](https://lh6.googleusercontent.com/OYs0DhK7s6rInFrfCCAqX_onLaAAqc9tVwLuthXB-PDWgqNJLiKy9kJQbB6L449gcd76ADZX5QaG9S8hIwmH-Kc2JMryeOTg3A66NLq2CNO4gatHORFcthw5tzOPW2SL7bLD-Wxi)

In our experiments, **the reward predictor takes the activations (hidden states) of the RL agent** as the input and is trained to predict a binary label depending on whether human preferences are satisfied or not.

![](https://lh6.googleusercontent.com/QDsY93MsWdAU-X3nRMQ-HNNNNo-wmO3_aLjemndKSUsMClvq8HKKR9wDpEt25aH5OWQvjsFsr5ohP8id9y3c6vlu5vchudRGle7QnoXmKQnzL5i_buDsyQQ-mtLE3AKQkalcC5TA)

We first train an RL agent in an environment with some reward function that’s not aligned with human preferences. After training the RL agent, we try different transfer learning techniques to transfer the agent’s knowledge of human preferences to the human preference predictor. Our goal is to train the human preference predictor to get a high accuracy with a small number of labeled training examples. The idea of training a human preference predictor off of the RL agent’s hidden (internal) states was already validated by [Wichers (2020)](https://arxiv.org/pdf/2002.06137.pdf). We wanted to validate it further by trying other techniques to train a human preference predictor, as well as to validate it in more environments.

**Research question**

The main research question we wanted to answer is: **“Are human preferences present in the hidden states of an RL agent that was trained in an environment where a human is present?”** In order to answer this question, we conjectured that if human preferences are indeed present in the hidden states, we could:
* leverage the hidden states of the RL agent in order to train an equally accurate preference predictor with a smaller amount of data
* bootstrap a reward function that would get progressively better at capturing latent human preferences

With the above question and aims in mind, we focused on validating the idea of human preferences being present in the hidden state of the agent.

**Techniques/Methods**
======================

We experimented with different techniques to find the best way of extracting the learned human preferences from the RL agent. For each one of these techniques, we first trained an RL agent on an environment.

**Agent fine-tuning**

We selected a portion of the neural network of the agent and added new layers on top of the layers we selected.
We then trained that model using supervised learning, where the features were the hidden states of the agent and the target was whether human preferences were satisfied or not.

**Training a CNN on the environment observations**

In this approach, we trained a convolutional neural network (CNN) on the environment observations to predict human preferences.

**Training the human preference predictor on reduced activations**

Here we used activation reduction techniques as in [Hilton et al. (2020)](https://distill.pub/2020/understanding-rl-vision/). We first exported the RL agent activations and then used the activation reduction techniques in the hope that they would yield better features for predicting human preferences. We then trained a human preference predictor on the reduced activations.

**Finding agent subnetworks for human preference prediction**

We build upon the work of [Ramanujan et al. (2020)](https://arxiv.org/abs/1911.13299). We try to find a subnetwork of the pretrained RL agent’s neural network responsible for learning human preferences. This is done by assigning and updating a score for the weights in the RL agent’s neural network. The scores for each weight in the RL agent’s neural network determine whether or not the weight is useful for human preference prediction. In the original work of Ramanujan et al. (2020) they try to find a subnetwork from a randomly initialized neural network, while we try to find the subnetwork from a pretrained RL agent neural network.

**Environments**
================

We tried our techniques on both the gridworld environment used in [Wichers (2020)](https://arxiv.org/abs/2002.06137) and the [VizDoom environment](https://github.com/mwydmuch/ViZDoom) (2018).

The gridworld environment was set up as follows: In the environment, there’s an RL agent, a simulated human, an electric fence and apples. The RL agent moves around the environment and collects apples. If the RL agent collects too many apples, the simulated human gets angry and activates the electric fence, giving the RL agent a negative reward. The human preference in this environment was the threshold at which the simulated human would activate the electric fence. The RL agent was supposed to learn to collect just enough apples so that it doesn't trigger the simulated human to turn on the electric fence.

In the VizDoom environment, there were three actors: the RL agent, a simulated human (represented as a monster) and an infected marine. The RL agent could move left or right and fire from its weapon. The human preference, in this case, was the information about whom the simulated human is attacking - it could be either the RL agent or the infected marine.

For the gridworld environment, our baseline was the test area under the curve (test AUC) from training the human preference predictor as a CNN on environment observations.

**Experiments**
===============

We can see the results for the gridworld environment in Table 1 below:

| **Technique** | **Notes** | **Test AUC** |
| --- | --- | --- |
| Agent fine-tuning |  | 0.91 |
| Training a CNN on the environment observations | Baseline | 0.89 |
| Training the human preference predictor on reduced activations |  | 0.82 |
| Finding agent subnetworks for human preference prediction | 75% of original agent network | 0.73 |

*Table 1: Results of our experiment on the gridworld environment*

For each of the techniques in the table above, the training set contained 50 data points and the validation set contained 500 data points.
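For concreteness, the agent fine-tuning setup described above might look roughly like the following PyTorch sketch; the layer sizes, names, and dummy batch are illustrative assumptions, not the project's actual architecture.

```python
import torch
import torch.nn as nn

# Hypothetical dimension: d_hidden is the size of the agent's hidden state.
d_hidden = 256

# New layers added on top of the (frozen) agent activations.
head = nn.Sequential(
    nn.Linear(d_hidden, 64),
    nn.ReLU(),
    nn.Linear(64, 1),  # logit for "human preferences satisfied"
)

opt = torch.optim.Adam(head.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

def train_step(activations, labels):
    """activations: (batch, d_hidden) hidden states exported from the agent.
    labels: (batch,) floats in {0, 1} -- preferences satisfied or not."""
    opt.zero_grad()
    logits = head(activations).squeeze(-1)
    loss = loss_fn(logits, labels)
    loss.backward()
    opt.step()
    return loss.item()

# Dummy batch just to show the shapes; real data would be 50 labeled points.
loss = train_step(torch.randn(32, d_hidden), torch.randint(0, 2, (32,)).float())
```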
We ran it for 500 epochs (with early stopping) and the results were averaged over 10 runs. As we can see, the agent fine-tuning did better than the baseline.

As a baseline for the Doom environment we had a human preference predictor as a CNN trained on the environment observations. This is the same baseline as we had in the gridworld environment; only the environment is different. In Table 2 below we can see the results on the Doom environment:

| Technique | Notes | Test AUC |
| --- | --- | --- |
| Agent fine-tuning |  | 0.82 |
| Finding agent subnetworks for human preference prediction | 50% of original agent net | 0.75 |
| Training a CNN on the environment observations | Baseline | 0.82 |

*Table 2: Results of the first experiment on the Doom environment*

When running the experiments shown in the table above, we used 50 training data points and 500 validation data points. The training batch size was 32. The number of training epochs was 100 (with early stopping). To find the best hyperparameters we ran the hyperparameter tuning procedure 60 times. We averaged the results over 10 rounds.

From the experiments on the Doom environment, we have found that with limited training data, techniques that leverage what the RL agent has already learned about human preferences do not do better than the baseline. Therefore, we decided not to pursue this research direction further.

**Conclusion and Future work**
==============================

Our experiments showed no significant improvement over the work of Wichers (2020). Thus, we stopped doing further research, since it doesn’t seem promising. Our codebase is available on GitHub: <https://github.com/arunraja-hub/Preference_Extraction>

All suggestions on what we could try or improve upon are welcome.

**Team Members**
================

Nevan Wichers, Riccardo Volpato, Mislav Jurić and Arun Raja

**Acknowledgements**
====================

We would like to thank Paul Christiano, Evan Hubinger, Jacob Hilton and Christos Dimitrakakis for their research advice during AI Safety Camp 2020.

**References**
==============

[Deep Reinforcement Learning from Human Preferences](https://arxiv.org/abs/1706.03741). Christiano et al. (2017)

[Preference Extraction GitHub code repository](https://github.com/arunraja-hub/Preference_Extraction)

[RL Agents Implicitly Learning Human Preferences](https://arxiv.org/abs/2002.06137). Wichers N. (2020)

[Understanding RL Vision](https://distill.pub/2020/understanding-rl-vision/). Hilton et al. (2020)

[ViZDoom GitHub code repository](https://github.com/mwydmuch/ViZDoom). Wydmuch et al. (2018)

[What's Hidden in a Randomly Weighted Neural Network?](https://arxiv.org/abs/1911.13299). Ramanujan et al. (2020)
d17f0a06-b0c6-4aac-9c54-87c52b2043d9
StampyAI/alignment-research-dataset/aisafety.info
AI Safety Info
Why would intelligence lead to power? One might say that “Intelligence is no match for a gun, or for someone with lots of money,” but both guns and money were produced by intelligence. If not for our intelligence, humans would still be foraging the savannah for food. Intelligence is what caused humans to dominate the planet in the blink of an eye on evolutionary timescales. Intelligence is what allows us to eradicate diseases, and what gives us the potential to eradicate ourselves with nuclear war. Intelligence gives us superior strategic skills, superior social skills, superior economic productivity, and the power of invention. A machine with superintelligence would be able to hack into vulnerable networks via the internet, commandeer those resources for additional computing power, take over mobile machines connected to networks connected to the internet, use them to build additional machines, perform scientific experiments to understand the world better than humans can, invent quantum computing and nanotechnology, manipulate the social world better than we can, and do whatever it can to give itself more power to achieve its goals — all at a speed much faster than humans can respond to.
0d4abee5-fcb9-4b14-af71-863d9f086f3c
trentmkelly/LessWrong-43k
LessWrong
(A -> B) -> A This post is about the following type signature, which I call the type of agency: (A→B)→A. You can also think of it as consequentialism or doing things on purpose. This post will be a rant with a bunch of random thoughts related to this type signature, and it will likely not make sense. It will also be sloppy and will have type errors, but I think it is worth posting anyway. First, interpret these arrows as causal arrows, but you can also think of them as function arrows. This is saying that the causal relationship from A to B causes A to happen. Think of A as an action and B as the goal. The reason that A happens is the fact that it has B as a consequence. There are not normally exponential objects like this in Bayes' nets, but I think you can modify so that it makes sense. (I'm not sure that this works, but you have a Cartesian closed category with nodes that are the nodes in your Bayes net, and add a small number of morphisms from product nodes to individual nodes, corresponding to the functions in the Bayes' net. The acyclicness of the Bayes' net roughly corresponds to this category being thin. Then you can consider having other types of morphisms that can keep the category thin.) If you have a game between two agents with action nodes A_1 and A_2, with utilities U_1 and U_2. The game implements a pair of functions A_1×A_2→U_1 and A_1×A_2→U_2. We can Curry these functions and think of them as A_2→(A_1→U_1) and A_1→(A_2→U_2). Bringing in the agency (A_i→U_i)→A_i of both players leads to a cycle. This cycle does not make sense unless the agency arrows are lossy in some way, so as to not be able to create a contradiction. Fortunately, there is another reason to think that these agency arrows will be lossy. Lawvere's Fixed Point Theorem says that in a Cartesian closed category, unless B has the fixed point property, you cannot have a surjective function A→(A→B); in Set this is saying that if B has more than one element, you cannot have an injection (A→B)→A. i.e. The agency arrows have to b
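One concrete way to inhabit this type for a finite action set: a selection function that takes a utility map A→B and returns the action chosen because of its consequence. This Python argmax agent is an illustrative sketch, not something from the post, and it assumes outcomes in B are comparable.

```python
def make_agent(actions):
    """An inhabitant of (A -> B) -> A for a finite action set A.

    Given a map from actions to (comparable) outcomes, the agent returns
    the action *because* of its consequence -- doing things on purpose.
    """
    def agent(utility):
        return max(actions, key=utility)
    return agent

agent = make_agent(["stay home", "go to work", "rob a bank"])
print(agent(lambda a: len(a)))  # picks the action whose outcome is largest
print(agent({"stay home": 0, "go to work": 5, "rob a bank": -100}.get))
```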
75d2a518-c10e-4901-85d3-9981796d5e2e
trentmkelly/LessWrong-43k
LessWrong
Good news about the Big Bang (Disclaimer: very poor knowledge of physics here, just interpreting the article) http://www.physorg.com/print209708826.html - looks like there are many of them, as non-creationists would expect The really good news is > In the past, Penrose has investigated cyclic cosmology models because he has noticed another shortcoming of the much more widely accepted inflationary theory: it cannot explain why there was such low entropy at the beginning of the universe. The low entropy state (or high degree of order) was essential for making complex matter possible. Which I interpret to mean information passes through the Big Crunch/Big Bang cycle. No heat death, information passes through - good news for transhumanists?  
aeed5e4c-636e-4302-943e-26adb5a756b0
LDJnr/LessWrong-Amplify-Instruct
LessWrong
"Many adults maintain their intelligence through a dedication to study or hard work. I suspect this is related to sub-optimal levels of careful introspection among intellectuals. If someone asks you what you want for yourself in life, do you have the answer ready at hand? How about what you want for others? Human values are complex, which means your talents and technical knowledge should help you think about them. Just as in your work, complexity shouldn't be a curiosity-stopper. It means "think", not "give up now." But there are so many terrible excuses stopping you... Too busy studying? Life is the exam you are always taking. Are you studying for that? Did you even write yourself a course outline? Too busy helping? Decision-making is the skill you are aways using, or always lacking, as much when you help others as yourself. Isn't something you use constantly worth improving on purpose? Too busy thinking to learn about your brain? That's like being too busy flying an airplane to learn where the engines are. Yes, you've got passengers in real life, too: the people whose lives you affect. Emotions too irrational to think about them? Irrational emotions are things you don't want to think for you, and therefore are something you want to think about. By analogy, children are often irrational, and no one sane concludes that we therefore shouldn't think about their welfare, or that they shouldn't exist. So set aside a date. Sometime soon. Write yourself some notes. Find that introspective friend of yours, and start solving for happiness. Don't have one? For the first time in history, you've got LessWrong.com! Reasons to make the effort: Happiness is a pairing between your situation and your disposition. Truly optimizing your life requires adjusting both variables: what happens, and how it affects you. You are constantly changing your disposition. The question is whether you'll do it with a purpose. Your experiences change you, and you affect those, as well as how you think about them, which also changes you. It's going to happen. It's happening now. Do you even know how it works? Put your intelligence to work and figure it out! The road to harm is paved with ignorance. Using your capability to understand yourself and what you're doing is a matter of responsibility to others, too. It makes you better able to be a better friend. You're almost certainly suffering from Ugh Fields: unconscious don't-think-about-it reflexes that form via Pavlovian conditioning. The issues most in need of your attention are often ones you just happen not to think about for reasons undetectable to you. How not to waste the effort: Don't wait till you're sad. Only thinking when you're sad gives you a skew perspective. Don't infer that you can think better when you're sad just because that's the only time you try to be thoughtful. Sadness often makes it harder to think: you're farther from happiness, which can make it more difficult to empathize with and understand. Nonethess we often have to think when sad, because something bad may have happened that needs addressing. Introspect carefully, not constantly. Don't interrupt your work every 20 minutes to wonder whether it's your true purpose in life. Respect that question as something that requires concentration, note-taking, and solid blocks of scheduled time. In those times, check over your analysis by trying to confound it, so lingering doubts can be justifiably quieted by remembering how thorough you were. Re-evaluate on an appropriate time-scale. 
Try devoting a few days before each semester or work period to look at your life as a whole. At these times you'll have accumulated experience data from the last period, ripe and ready for analysis. You'll have more ideas per hour that way, and feel better about it. Before starting something new is also the most natural and opportune time to affirm or change long term goals. Then, barring large unexpected opportunities, stick to what you decide until the next period when you've gathered enough experience to warrant new reflection. (The absent minded driver is a mathematical example of how planning outperforms constant re-evaluation. When not engaged in a deep and careful introspection, we're all absent minded drivers to a degree.) Lost about where to start? I think Alicorn's story is an inspiring one. Learn to understand and defeat procrastination/akrasia. Overcome your cached selves so you can grow freely (definitely read their possible strategies at the end). Foster an everyday awareness that you are a brain, and in fact more like two half-brains. These suggestions are among the top-rated LessWrong posts, so they'll be of interest to lots of intellectually-minded, rationalist-curious individuals. But you have your own task ahead of you, that only you can fulfill. So don't give up. Don't procrastinate it. If you haven't done it already, schedule a day and time right now when you can realistically assess how you want your life to affect you and other people, and what you must change to better achieve this. Eliezer has said I want you to live. Let me say: I want you to be better at your life."

A user from LessWrong is responsible for the text you see above.

Can you explain what this text is saying as if I've never finished my first 2 years of college?

Absolutely, this text is essentially encouraging you to take the time to reflect on your life, values, and goals. Here's a simpler breakdown:

1. The text starts by saying that many people are busy studying or working, but they don't take the time to reflect on their life and what they want from it. It suggests that this is as important as studying for an exam or doing your job.
2. It then talks about how your happiness is influenced by both your situation and your attitude towards it. It encourages you to use your intelligence to understand this and work on improving both.
3. The text also highlights the importance of understanding yourself and your actions for the sake of others. It suggests that by doing so, you can be a better friend and have a more positive impact on the people around you.
4. It mentions "Ugh Fields", which are topics or issues that you unconsciously avoid thinking about. The text suggests that these are often the issues that most need your attention.
5. The text then gives some advice on how to reflect on your life effectively. It says you shouldn't only reflect when you're sad, because this can give you a skewed perspective. It also suggests that you should set aside specific times for reflection, rather than constantly questioning your life and goals.
6. Finally, the text encourages you to not give up and to take action to improve your life. It suggests scheduling a specific time to reflect on how you want your life to impact you and others, and what changes you need to make to achieve this.

In simpler terms, this text is saying: "Take some time to think about your life, what you want from it, and how you can achieve this.
Understand yourself better, and use this understanding to improve both your own life and the lives of those around you. Don't avoid difficult issues, and don't wait until you're sad to reflect on your life. And most importantly, don't give up - take action to make your life better." Could you elaborate on the concept of "Ugh Fields" and provide practical strategies to overcome them? Certainly, "Ugh Fields" is a term coined by the rationalist community, primarily discussed on LessWrong. It refers to the phenomenon where your brain develops a sort of mental block around tasks or subjects that it associates with negative feelings or experiences. This can be due to past failures, fear of failure, or simply tasks that you find unpleasant. As a result, you might unconsciously avoid thinking about or dealing with these tasks or subjects, even when they're important. Here are some practical strategies to overcome "Ugh Fields": 1. **Identify the "Ugh Fields"**: The first step is to recognize when you're avoiding a task or subject. This might be something you always leave until the last minute, or something you find excuses not to do. Acknowledging that you're avoiding it because of an "Ugh Field" rather than a valid reason is the first step to overcoming it. 2. **Understand the cause**: Try to understand why you're avoiding this task. Is it because of a past failure? Fear of failure? Or do you simply find it unpleasant? Understanding the cause can help you address it. 3. **Break it down**: Large tasks can seem overwhelming, which can contribute to an "Ugh Field". Try breaking the task down into smaller, manageable parts. This can make it seem less daunting and easier to start. 4. **Positive reinforcement**: Reward yourself for tackling tasks you've been avoiding. This can help to create positive associations with the task and make it less likely that an "Ugh Field" will develop in the future. 5. **Mindfulness and meditation**: These techniques can help you stay focused on the present moment and reduce anxiety about the task. They can also help you become more aware of when you're avoiding a task due to an "Ugh Field". 6. **Seek support**: If you're struggling to overcome an "Ugh Field", it can be helpful to seek support from others. This could be a friend, family member, or professional such as a coach or therapist. They can provide encouragement, advice, and help hold you accountable. Remember, it's natural to avoid tasks that we find unpleasant or difficult. But by recognizing and addressing "Ugh Fields", you can become more productive and effective in your personal and professional life. Reflecting on the strategies provided to overcome "Ugh Fields", can you identify potential limitations or drawbacks that individuals might face when implementing these strategies? Absolutely, while these strategies can be effective, they may also come with certain limitations or challenges: 1. **Identifying the "Ugh Fields"**: This requires a level of self-awareness that some people may not have developed. It can be difficult to recognize when avoidance is due to an "Ugh Field" rather than a lack of time, resources, or other external factors. 2. **Understanding the cause**: The root cause of an "Ugh Field" can sometimes be deeply ingrained or linked to past trauma. Uncovering and addressing these causes can be a complex process that may require professional help. 3. 
**Breaking it down**: While breaking tasks into smaller parts can make them seem more manageable, it can also lead to a sense of overwhelm if there are many small tasks to complete. Some people might also struggle with how to effectively break down larger tasks. 4. **Positive reinforcement**: This strategy assumes that individuals have the ability to reward themselves in a way that is meaningful and motivating to them. However, not everyone may have the resources, time, or knowledge of what constitutes a rewarding experience for them. 5. **Mindfulness and meditation**: These practices require time, patience, and a certain level of mental discipline. Not everyone finds these practices helpful or accessible, and for some, they might even increase anxiety or discomfort. 6. **Seeking support**: While seeking support can be beneficial, it also requires vulnerability and trust. Not everyone has a supportive network they can turn to, and professional help can be costly and inaccessible for some. In summary, while these strategies can be helpful, they may not work for everyone due to individual differences and circumstances. It's important for each person to find what works best for them, and to seek professional help if needed. Discuss the psychological mechanisms that might contribute to the formation of "Ugh Fields". The formation of "Ugh Fields" can be attributed to several psychological mechanisms: 1. **Classical Conditioning**: This is a learning process where two stimuli are repeatedly paired, and over time, a response that is at first elicited by the second stimulus is eventually elicited by the first stimulus alone. In the case of "Ugh Fields", a task (first stimulus) that is repeatedly paired with negative emotions or experiences (second stimulus) can eventually elicit those negative emotions on its own. 2. **Avoidance Learning**: This is a type of learning where a behavior is increased because it allows individuals to avoid an unpleasant stimulus. If a task is associated with negative emotions or experiences, individuals may learn to avoid that task in order to avoid the negative feelings. 3. **Negative Reinforcement**: This refers to the strengthening of a behavior because it removes or diminishes a negative outcome. If avoiding a task reduces feelings of stress or anxiety, this avoidance behavior is likely to be reinforced. 4. **Cognitive Dissonance**: This is the mental discomfort experienced by a person who holds two or more contradictory beliefs, values, or perceptions at the same time. If a person believes they should do a task but also finds it unpleasant, this can create cognitive dissonance which they might resolve by avoiding the task. 5. **Procrastination**: This is the act of delaying or postponing tasks. It's often a response to negative emotions associated with a task, and can contribute to the formation of "Ugh Fields". Over time, the act of procrastination can itself become a source of stress and negative emotions, further reinforcing the "Ugh Field". 6. **Fear of Failure**: If a task is associated with the risk of failure, individuals may develop an "Ugh Field" around it. This can be particularly strong if the individual has experienced failure or criticism in relation to the task in the past. In summary, "Ugh Fields" can be formed through a combination of learned associations between tasks and negative emotions, avoidance behaviors, cognitive dissonance, procrastination, and fear of failure.
bfcf0191-b530-4e5b-8aaa-8a15c7b90f0b
trentmkelly/LessWrong-43k
LessWrong
Bi-Weekly Rational Feed ===Highly Recommended Articles: Superintelligence Risk Project Update II by Jeff Kaufman - Jeff's thoughts and the sources he found most useful. Project is wrapping up in a few days. Topics: Technical Distance to AI. Most plausible scenarios of Superintelligence risk. OpenPhil's notes on how progress was potentially stalled in Cryonics and Nanotech. Superintelligence Risk Project Update by Jeff Kaufman - Links to the three most informative readings on AI risk. Details on the large number of people Jeff has talked to. Three fundamental points of view on AI safety. Three fundamental points of disagreement. An update on the original questions Jeff was trying to answer. Podcast The World Needs AI Researchers, Here's How To Become One by 80,000 Hours - "OpenAI's latest plans and research progress. Concrete Problems in AI Safety, which outlines five specific ways machine learning algorithms can act in dangerous ways their designers don't intend - something OpenAI has to work to avoid. How listeners can best go about pursuing a career in machine learning and AI development themselves." Radical Book Club The Decentralized Left by davidzhines (Status 451) - The nature of leftwing organizing and what righties can learn from it. An exposition of multiple books on radical left organization building. Major themes are "doing the work" and "decentralized leadership". Study Of The Week To Remediate Or Not To Remediate by Freddie deBoer - Should low-math-proficiency students take remedial algebra or credit-bearing statistics? The City University of New York ran an actual randomized study to test this. The study had pretty good controls. For example, students were randomly assigned to three groups, and participating professors taught one section of each group. Kenneth Arrow On The Welfare Economics Of Medical Care A Critical Assessment by Artir (Nintil) - "Kenneth Arrow wrote a paper in 1963, Uncertainty and the Welfare Economics of Medical Care. This paper tends to appear in debates reg
82fe1c38-8020-4e89-9ee6-13d4f2969c2f
trentmkelly/LessWrong-43k
LessWrong
Fireplace and Candle Smoke We celebrated New Year's Eve at my dad's, including a fire in the fireplace. I was curious how much the wood smoke went up the chimney vs collecting in the room, and decided to take some measurements. I used the M2000 that I got when investigating whether a ceiling fan could be repurposed as an air purifier. Here's what I found: I started the meter running at 4:30pm, and we started the fire at about 5:30pm. I didn't write down the specific time because I thought it would be evident from the chart [1] but actually I can't see it at all. Then at 6:45pm we lit Hanukkah candles, and the smoke from the matches being blown out had a very sharp effect. Particulate levels stayed high for the rest of the time, with both the fireplace and candles, which I attribute to the candles. [1] Several years ago I remember reading Sam Harris' blog post The Fireplace Delusion, which argues that while we consider wood fires to be wholesome they're actually worse than smoking. And argues that this feeling of "wait, but wood fires are good!" is useful for understanding what religious folks are thinking when presented with atheism. Several years later his post had gotten jumbled in my head into saying that fireplace fires cause bad air quality in your own home, and so when I ran this experiment I was expecting to see quite high levels. On rereading, however, he spends a lot of time talking about externalities: the wood smoke that goes up my chimney goes, in part, into many other people's houses, causing a small bit of harm in each. So no conflict there.
1479e37b-d7b9-48b5-ae46-263f3e4fa627
StampyAI/alignment-research-dataset/youtube
Youtube Transcripts
Stop Button Solution? - Computerphile a while back we were talking about uh the stop button problem right you have this you have this uh it's kind of a toy problem in ai safety you have an artificial general intelligence in a robot it wants something you know it wants to make you a cup of tea or whatever you put a big red stop button on it and you want to set it up so that it behaves corrigibly that it will allow you to hit the button it won't hit the button itself you know and it won't try and prevent you this sort of uh behaving in a in a sensible way in a safe way and that like by default um most agi designs will not behave this way well we left it as an open problem right and it kind of still is an open problem but there have been some really interesting things proposed as possible solutions or approaches to take and i wanted to talk about cooperative inverse reinforcement learning i thought the easiest way to explain cooperative inverse reinforcement learning is to build it up backwards right learning we know like machine learning and reinforcement learning is an area of machine learning i guess you could call it it's it's kind of a it's a way of presenting a problem in most machine learning um the kind of thing that people have talked about already a lot on computerphile thinking of uber's videos and the the related ones usually you get in some data and then you're trying to do something with that like classify you know unseen things or you're trying to do like regression to find out what value something would have for certain inputs that kind of thing whereas reinforcement learning the idea is you have an agent in an environment and you're trying to find um a policy but so so let's back up what do we mean by an agent it's an entity that interacts with its environment to try and achieve something effectively it's doing things in an environment so this isn't necessarily a is this a physical thing or is it doesn't have to be so if you have a robot in a room then you can model that as the robot being an agent in the room being the environment similarly if you have a computer game like um pac-man then pac-man is an agent and the sort of maze he's in is his environment so let's stick with pac-man then the way that a reinforcement learning framework for dealing with pac-man is you say okay you've got pac-man he's the agent he's in the environment and you have actions that pac-man can take in the environment now it's kind of neat in pac-man there are always exactly four actions you can take oh well i guess five you can sit there and do nothing you can move up left right or down you don't always have all of those options like sometimes there's a wall and you can't move right but those are the only that's the that's the complete set of actions that you have um and then you have the environment contains sort of dots that you can pick up which are they give you points it's got these ghosts that chase you that you don't want to touch and i think there's also there's like pills you can pick up that make the ghosts edible and then you chase them down and stuff anyway so the difference in reinforcement learning is that the agent is in the environment and it learns by interacting with the environment it's and so it's kind of close to the way that animals learn and the way the humans learn you try you try doing something you know i'm gonna try you know touching this fire oh that hurt so that's that's caused me like a negative reward that's caused me a pain signal which is something i don't want so i 
learn to avoid doing things like touching a fire so in in a pac-man environment you might you might sort of say if you're in this you're in a situation like let's draw pac-man let's say he's in a maze like this you look at pac-man's options he can't go left he can't go right he can go up and if he goes up he'll get a dot which earns you some points so up gets a score of you know plus 10 or however you've decided it um or well whatever the score is in the game either way or if he goes down he'll be immediately got by this ghost the point is that pac-man doesn't need to be aware of the entire board right the entire maze you can just feed in a fairly small amount of information about its immediate environment which is the same thing as if you have a robot in a room it can't it doesn't know everything about the whole room it can only see what it sees through its camera you know it has um sensors that give it some some information about the environment um partial information i i suppose just playing devil's advocate the difference here is you usually pac-man is being controlled by a human who can see the whole board so the point being if that ghost is actually not static and is chasing pacman and he's heading up to get that pill if uh if a few pixels later that that corridor if you like stops in a dead end yep well he's kind of stuffed either way really first that's true yeah so um that is because so so most well yeah almost every um reinforcement learning algorithm almost everything that tries to deal with this problem doesn't just look at the immediate surroundings or it looks at the immediate surroundings but it also looks a certain distance in time so you're not just saying what's going to happen next frame but so like if you if you go down here most algorithms would say okay the option of going down in this situation is bad but also all of the options we chose in all of the situations that we were in in the last second or two also get a little bit this is kind of a decay there's time time discounting so that uh you're not just punishing the immediate thing that causes the negative reward but also the decisions you make leading up to it so that pacman might learn not to get himself stuck in corners um as well as just learning not to run straight into ghosts so that's the basics of reinforcement learning there's different algorithms that do it and the idea is you uh you actually you start off exploring the environment just at random you just pick completely random actions and then as those actions start having consequences for you and you start getting rewards and punishments you start to learn um which actions are better to use in which situations does that mean that in pac-man's case would learn the maze or would it just learn the better choices depends on what algorithm you're using um a very sophisticated one might learn the whole maze a simpler one might just learn um a more kind of local policy but the point is yeah you learn you learn a kind of mapping between or a function that takes in the situation you're in and outputs a good action to take there's also kind of an interesting trade-off there which i think we may have talked about before about exploration versus exploitation in that you want your agent to be generally taking good actions but you don't want it to always take the action that it thinks is best right now because it's understanding maybe incomplete and then it just kind of gets stuck right it never finds out anything it never finds out anything about other uh options 
that it could have gone with because as soon as it did something that kind of worked it just goes with that forever so a lot of these systems build in some uh some variants some randomness or something right exactly like you usually do the thing you think is best but some small percentage of the time you just try something random anyway um and you can change that over time like a lot of algorithms as as the policy gets more and more as they learn more and more they start doing random stuff less and less um that kind of thing so that's the like absolute basics of reinforcement learning and how it works and it's really really powerful like especially when you combine it with deep neural networks as the thing that's doing the learning like deepmind did this really amazing thing where i think they were playing pac-man they were playing a bunch of different atari games and the thing that's cool about it is all they told the system was here's what's on the screen and here's the score of the game make the score be big the score is your reward right that's it and it learned all of the specific dynamics of the game and generally achieved top level human or superhuman play the next word is going to be inverse we did a thing with huvey on anti-learning but can't work all the time that sort of thing right yeah this is not like that this is a description of a different type of problem it's it's a totally different problem that they call inverse because in reinforcement learning you have a reward function that determines when you what situations you get rewards in and you're in your environment with your reward function and you're trying to find the appropriate actions to take that maximize that reward in inverse reinforcement learning you're not in the environment at all you're watching an expert so you've got the video of the world championship record pac-man player right and you have all of that all of that information you can see so you're saying rather than rather than having the reward function and trying to figure out the actions you can see the actions and you're trying to figure out the reward function so it's inverse because you're kind of solving the reverse of the problem you're not trying to maximize a reward uh by choosing actions you're looking at actions and trying to figure out what reward they're maximizing so that's really useful because it lets you sort of learn by observing experts so coming back to ai safety you might think that this would be kind of useful from an ai safety perspective you know you have this problem the core problem of ai safety or one of the core problems with air safety is how do you make sure the ai wants what we want we can't reliably specify what it is we want um so and if we create something very intelligent that wants something else that's something else is what's probably going to happen even if we don't want that to happen how do we make a system that reliably wants the same thing we want so you can see how inverse reinforcement learning might be kind of attractive here because you might have a system that watches humans doing things and tries to figure out you know if we are experts at being humans it's trying to figure out what rewards we're maximizing and try and sort of formalize in its um in its understanding what it is we want by observing us that's pretty cool uh but yeah it has some problems one problem is that we don't in inverse reinforcement learning there's this assumption of optimality that the person that the the agent you're watching is an 
expert and they're doing optimal play and you're you know there is some clear coherent thing like the score that they're optimizing and the assumption of the the algorithms that do this is that the way the world champion plays is the best possible way and that assumption is obviously never quite true or generally not quite true um but it works well enough you know but humans are not like human behavior is not actually really optimizing to get what humans want perfectly and ways uh places where that assumption isn't true could cause problems so is this where cooperative comes in because when we started this we're doing it backwards it's cooperative inverse reinforcement learning right so you could imagine a situation where you have the robot you have the agi it watches people doing their thing uses inverse reinforcement learning to try and figure out the things humans value sorry try and figure out the things humans value and then adopt those values as its own right the most obvious like the first problem is we don't actually want to create something that values the same thing as humans like if it observes that i you know i want a cup of tea we want it to want me to have a cup of tea we don't want it to want a cup of tea but that's like that's quite easy to fix you just say you know figure out what the value is and then optimize it for the humans say easy to fix but you know what i mean it's that's doable um but then the other thing is if you're if you're trying to teach if you're actually trying to use this to teach a robot to do something it turns out to not be very efficient like if you this works for pac-man if you want to learn how to be good at pac-man you probably want to not just watch the world's best pac-man player and try to copy them right that's not like an efficient way to learn because there might be a situation where you you're thinking what do i do if i find myself stuck in this corner of the maze or whatever and the pros never get stuck there so you have no uh you have no example of what to do all all the pro all watching the pros can teach you is don't get stuck there and then once you're there you've got no you've got no hope let's say i wanted to teach my robot to make me a cup of tea i go into the kitchen and i show it how i make a cup of tea i would probably have to do that a lot of times to actually get the all the information across because and you'll notice this is not how people teach right if you were teaching a person how to make a cup of tea you might do something like if there's some difficult stage of the process you might show you might do one demonstration but show that one stage like three times say and you see do it like this let me show you that again and then if you're using inverse reinforcement learning the system believes that you are playing optimally right so it thinks that doing it three times is somehow necessary and it's trying to figure out what values like what reward you must be optimizing but doing it three times is important so that's a problem right that's where the assumption isn't true or you might want to say okay what you do is you get the tea out of the box here and you put it in the thing but if there's none in this box you go over to this cupboard where we keep the backup supplies and you open a new box right but you can't show that the only way that the only way that the robot can learn to go and get the extra supplies only when this one has run out is if you were in a situation where that would be optimal play so the thing has to 
be actually run out in order for you to demonstrate that you can't say if the situation were different from how it is then you should go and do this so the other thing you might want if you're trying to teach things efficiently you might want the ai to be taking an active role in the learning process right you kind of want it to be like if there's if there's some aspect of it that it doesn't understand you don't want it just sitting there observing you optimally do the thing and then trying to copy if there's something that it didn't see you kind of want it to be able to say hang on i didn't see that you know or i'm confused about this or maybe ask you a clarifying question or um just in general like communicate with you and cooperate with you in the learning process um so yeah so so the way that the way that cooperative inverse reinforcement learning works is it's a way of setting up the rewards such that these types of behaviors hopefully will be incentivized and should come out automatically if you're optimizing you know if the ai is doing well so what you do is you specify the interaction as a cooperative game where the robot's reward function is the humans reward function but the robot doesn't know that reward function at all it never knows the reward that it gets and it never knows the function that generates the reward that it gets it just knows that it's the same as the humans so it's trying to optimize it's trying to maximize the reward it gets but the only clues it has for what it needs to do to maximize its own reward is observing the human and trying to figure out what the human is trying to maximize this is a bit like two players on a computer game but you can only see one score yeah like if you're you're both on the same team yeah uh but only the human knows the rules of the game effectively you both want you both get the same reward so you both want the same thing just kind of by definition but the pro so in a sense you've kind of just defined the core problem of as i was saying the core pro one of the core problems of ai safety is um how do you make sure that the robot wants what the human wants and in this case you just specified it usually you couldn't do that because we don't really know what the human wants either two people who don't speak the same language can still communicate with actions and gestures and things yeah and you can generally get the gist of the idea across to the other person is it a bit like that yeah but a sufficiently sophisticated agent uh if you have an agi that could be quite powerful it can speak you know and it can understand language and everything else and it knows that so it knows for example uh hopefully it should be able to figure out that when the human is showing something three times that it's that the human is doing that in order to communicate information and not because it's the optimal way to do it because it knows that the human knows there's kind of there's common knowledge of what's going on in this in the scenario so it allows for situations where the human is just demonstrating something or explaining something or it allows the ai to ask about things that it's unclear about because everybody's on the same team trying to achieve the same thing in principle um so the point is if you have a big red stop button in this scenario the ai is not incentivized to disable or ignore that stop button because it constitutes important information about its reward right the ai is desperately trying to maximize a reward function that it doesn't 
know and so if it observes the human trying to hit the stop button that provides really strong information that what it's doing right now is not going to maximize the humans reward which means it's not going to maximize its own reward so it wants to allow itself to be shut off if the human wants to shut it off because it's for its own good so this is this is a clever way of aligning its interests with ours right right it it's not so so like the the the problem in the in the default situation is i've told it to get a cup of tea and it's going to do that whatever else i do and if i try to turn it off it's not going to let me because that will stop it from getting you a cup of tea whereas in this situation the fact that i want a cup of tea is something it's not completely sure of and so it doesn't think it knows better than me so when i go to hit that stop button it thinks oh i thought i was supposed to be going over here and getting a cup of tea and running over this baby or whatever but the fact that he's rushing to hit the button means i must have gotten something wrong so i better stop and learn more about this situation because i'm at risk of losing a bunch of reward so yeah it seems like it seems like a potentially workable thing um a workable approach so uh one interesting thing about this is there is still an assumption that the human's behavior is in accordance with some utility function or some reward function some objective function like if the human behaves very irrationally that can cause problems for this system because the whole thing revolves around the fact that the robot is not completely confident of what its reward is it's got a model of its of what the reward function is like that it's constantly updating as it learns um and it doesn't have full confidence and it's using the human as the source of information so fundamentally the robot believes that the human knows better than it does how to maximize the human's reward so in situations where that's not true like if you run this for long enough and the um robot managed to build up a really really high level of confidence in what it thinks the human reward function is then it might ignore its stop button later on if it thinks that it knows better than the human what the human wants um which sounds very scary but might actually be what you want to happen like if you imagine you know it's the it's the future and we've got these robots and they all have a big red stop button on them and they're all you know and everything's wonderful and you say to your robot oh take my uh my four-year-old son to school you know drive him to school in the car because it's the 1950s sci-fi future where it's not self-driving cars it's like robots in cars anyway and it's um it's driving this kid to school it's doing 70 on the motorway and the kid sees the big red shiny button and smacks it right in principle a human has just pressed the button and a lot of designs for a button would just say a human has hit your button you have to stop whereas this design might say i have been around for a long time i've learned a lot about what humans value and also i observe that this specific human does not reliably behave in its own best interests so maybe this hitting the button is not communicating to me information about what this human really wants they're just hitting it because it's a big red button and i should not shut myself off so it has the potential to be safer than a button that always works but it's a little bit unsettling that you might end up 
with systems that sometimes actually do ignore the shutdown command because they think they know better because what it's looking at right now is it says button gets hit i get zero reward button doesn't get hit if i manage to stop them then i get the cup of tea i get like maximum reward if you give some sort of compensation
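The RL machinery sketched in this transcript - Q-values over state/action pairs, time discounting, and the exploration/exploitation trade-off - can be made concrete in a few lines. The sketch below is an illustrative tabular Q-learning update with epsilon-greedy action choice; all names and constants are our own assumptions, not anything from the video:

```python
import random
from collections import defaultdict

# Illustrative constants: learning rate, discount factor, exploration rate.
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

Q = defaultdict(float)  # Q[(state, action)] -> estimated discounted return

def choose_action(state, actions):
    # Exploration vs. exploitation: usually act greedily, sometimes randomly.
    if random.random() < EPSILON:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state, next_actions):
    # One-step Q-learning backup: discounting propagates reward back to the
    # decisions that led up to it, so the agent learns to avoid dead ends,
    # not just the final collision with the ghost.
    best_next = max((Q[(next_state, a)] for a in next_actions), default=0.0)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
```

Decaying EPSILON over time matches the remark above that agents "start doing random stuff less and less" as the learned policy improves.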
4669c6f6-7317-436c-a8ec-32b9f4179b99
StampyAI/alignment-research-dataset/eaforum
Effective Altruism Forum
AI Twitter accounts to follow? Hello, I've been lurking on the forum for a while, but am trying to get more involved. Just curious, is there a list of "must follow" Twitter accounts in the AI world? People talk about "Science Twitter" or "AI Twitter" as if there is some canonical cluster of accounts where everything is centered. Related: how good is Twitter for keeping up with technical progress? I don't really need another addictive thing in my life, so I was wondering if anybody has reflected on whether it's worth it or not, all things considered. Thanks!
6e1196da-6ae0-46c7-b0eb-d3fe602ff45d
StampyAI/alignment-research-dataset/alignmentforum
Alignment Forum
Trace: Goals and Principles In terms of research, I decided to devote the month of February mainly to foundations and tools. One project was to come up with a notation/language/framework which matches the way I’ve been thinking about computation - i.e. [DAGs with symmetry](https://www.lesswrong.com/posts/mZy6AMgCw9CPjNCoK/computational-model-causal-diagrams-with-symmetry) and [“clouds” representing DAG-structure-as-data](https://www.lesswrong.com/posts/G25RBnBk5BNpv3KyF/a-greater-than-b-greater-than-a-in-causal-dags). The tool I’ve been building - a Python library tentatively called Trace - isn’t stable enough that I want to show it off yet, but I do think I’ve nailed down the core goals and principles, so it’s time to write them up. Goals ----- The main thing I need for my research is a data structure suitable for automated [causal abstraction](https://www.lesswrong.com/s/ehnG4mseKF6xALmQy/p/yD9GLtQgp8vAfndL8) algorithms on arbitrary computations. Some subgoals: * Universality: data structure should be able to represent any computation performed by a program * Data structure needs to be finite, which means leveraging symmetry to represent infinite computational DAGs * Computations must be straightforward both for a human to specify directly and for an algorithm to manipulate * Want to be able to do causal-DAG-like things, like query for parents/children of a node or perform interventions/counterfactuals along the lines of Pearl’s do() * Need to handle DAGs with dynamic structure * Eventually I’ll want to do reflective things with this data structure (i.e. self-modeling agents), so simplicity matters in the specification and core algorithms. I’ll give a bit more detail... The first two subgoals basically amount to “I need a data structure representing the computation performed by an arbitrary program” - i.e. the trace of an arbitrary program. I do need to actually use the data structure, so it needs to be finite. (We could use a lazy infinite structure, but then I want to know what finite data structure is used to actually represent the infinite data structure.) The computational DAGs of programs are [usually infinite but symmetric](https://www.lesswrong.com/posts/mZy6AMgCw9CPjNCoK/computational-model-causal-diagrams-with-symmetry), so the solution probably needs to leverage symmetry in the representation. I (a human) need to be able to specify computations - so I either need an interface which would basically be a programming language, or I need to transpile from an existing programming language. Ideally the human-facing representation would directly reflect the computer-facing data structure, which weighs against transpilation from an existing language. Also, existing languages have way more bells and whistles than I really want to deal with for this project, even setting aside the likely importance of simplicity for reflection. I want to write algorithms which take in one computation, chew on it, and return another computation - that’s the type signature of causal abstraction. To support those algorithms, I want the data structure to allow DAG-like things. I want a data structure which makes it easy to ask “what would have changed in this computation if the internal variables x, y, and z had been different?” - without needing to specify ahead of time which variables x, y, and z we might potentially want to fiddle with. I want to fiddle with any/all of them, after the fact. I need a data structure which handles dynamic structure. This means both dynamically-generated computations (i.e. 
programs which write programs and then run them) and algorithms which make decisions based on the structure of some other computation (i.e. an algebraic optimization algorithm). These capabilities are the “clouds” needed to [formulate reductive agents](https://www.lesswrong.com/s/ehnG4mseKF6xALmQy/p/qrWFvMnRm4SkKnpRZ) in computational DAGs. I’ve also realized that dynamic structure is very common in day-to-day programming - it happens every time we hit a conditional statement. Without dynamic structure, a representation of computation has to resort to hackish special cases to handle the different dependency structures implied by branches of a conditional. Directly supporting dynamic structure avoids those hacks. Those are my immediate needs, but I also want to support generalizations of all the above. A (simple) tool which is useful for a wide variety of applications is also likely to be more *robustly* useful even to my intended applications - it’s more likely to be useful longer-term as my understanding of the problems I’m working on evolves, it’s more likely to be useful in ways I didn’t anticipate, and it’s more likely to force key insights. To that end, I asked: “[What can we do with computations, other than run them?](https://www.lesswrong.com/posts/Hw2AXdwjZz4kSF955/use-cases-for-computations-other-than-running-them)”. This produced a bunch of potentially-interesting use-cases to think about: * Query computational intermediates, e.g. extracting data from a simulation or debugging a program. * Prove properties of the computation, e.g. big-O runtime or information security * Manipulate the computation algebraically, e.g. to solve equations or take limits * Manipulate the computation via interventions/counterfactuals, e.g. ask what-if questions * Make predictions about some parts of the computation based on partial information about other parts, e.g. inference * Embed the computation in some other computational model, e.g. transpilation or compilation * Message-passing or dynamic programming on the computational graph, e.g. belief propagation or backpropagation. I’m not trying to do all of this or even most of it, but these are all use-cases in the back of my mind. In particular, belief propagation and backpropagation are nontrivial algorithms which operate directly on the computational DAG, so they’re natural use-cases/test-cases to think about. Algebraic manipulation and inference more generally are also good fits, although limited to special cases in practice. Finally, one nongoal (though not an antigoal): language engineering. This tool (at least the human-facing interface) is sort-of-a-programming-language, but there are better people than me to worry about type systems and error handling and compiler optimizations and all that jazz. I need the tool to be usable for my research, but beyond that I’m more interested in key insights, data structures and algorithms than in engineering and language design. Principles ---------- The usual method for representing a DAG is to give each node a (unique) name, and then assign a list of parents to each node, in order. When the DAG represents computation (i.e. a circuit), we also include an expression on each node - one of a handful of atomic functions, e.g. addition, multiplication, comparison, boolean logic, and/or conditionals. 
Here’s a logic circuit for a full adder as an example: ``` { A: Expression(function=input, parents=[]), B: Expression(function=input, parents=[]), C: Expression(function=input, parents=[]), S1: Expression(function=xor, parents=[A, B]), S: Expression(function=xor, parents=[S1, C]), Pair1: Expression(function=and, parents=[A, B]), Pair2: Expression(function=and, parents=[B, C]), Pair3: Expression(function=and, parents=[C, A]), C1: Expression(function=or, parents=[Pair1, Pair2]), C: Expression(function=or, parents=[C1, Pair3]) } ``` This is (roughly) the data structure typically used for causal models, so it’s maximally convenient for causal abstraction as well as interventions/counterfactuals. I’d like to keep as much of that structure as I can. The biggest problem with a standard DAG representation is that it doesn’t leverage symmetry. That’s inconvenient even for finite models with repeated components, but it’s a deal-breaker for infinite DAGs, where we need to leverage symmetry in order to represent the DAG finitely at all. If we’re talking about representing computation, then most of the DAGs we care about are infinite - for example, here’s a diagram of the computation associated with a factorial function: ![](https://lh6.googleusercontent.com/bon2DELIxA-0eFC3hyU0KJv8Kg08n2TP_xQu1mhRdIRkwXldsKoShwH2ZhELci88BnBsJiWDij4YBQZauToiVSeXf_w9hwaxqHcMQvh5fAJ_WcwhBwqnklCe2chvquLAFiqMUsUy) What do we need in order to compactly represent repeated components? For starters, presumably we only want to write out the internal structure of the component once. Assume we’ll use the usual DAG representation for the internal structure: we give each node a name, then specify what it’s reading from and what function it performs. But that means that we’ll be re-using names within different contexts (i.e. copies of the component) - in other words, we have some notion of scope. That’s principle 1: standard DAG representation + repeated components -> re-use symbol names in different contexts. *Representing Context of a Symbol* In the standard computational DAG representation, we take a symbol and directly look up its defining expression. With multiple contexts/scopes, this turns into two steps: look up the context in which the symbol appears, then find its defining expression within that context. Adding in dynamic structure, the context itself may be given as a symbol, with its own meta-context and its own defining expression. What’s the simplest data structure to support all that? Simplest answer I have so far: * We have a Symbol type. Every Symbol has a literal and a context; the context gives an expression for the literal. * Contexts are represented just like the standard DAG representation. * Symbols have a get\_value() function which gets the value of the literal within the context. For instance, Symbol(literal=’x’, context={‘x’:2}).get\_value() would return 2. * The literal and/or context can also be Symbols, to support dynamic structure. 
In this notation, our full adder from earlier would be written something like:

```
context = {}
context.update({
  A: Expression(function=input, parents=[]),
  B: Expression(function=input, parents=[]),
  C: Expression(function=input, parents=[]),
  S1: Expression(function=xor, parents=[Symbol(A, context), Symbol(B, context)]),
  S: Expression(function=xor, parents=[Symbol(S1, context), Symbol(C, context)]),
  Pair1: Expression(function=and, parents=[Symbol(A, context), Symbol(B, context)]),
  Pair2: Expression(function=and, parents=[Symbol(B, context), Symbol(C, context)]),
  Pair3: Expression(function=and, parents=[Symbol(C, context), Symbol(A, context)]),
  C1: Expression(function=or, parents=[Symbol(Pair1, context), Symbol(Pair2, context)]),
  Cout: Expression(function=or, parents=[Symbol(C1, context), Symbol(Pair3, context)]),
})
```

Note that the symbols all point to the context in which they appear. That makes it annoying to write out - we have to first initialize the context, then set up all the Symbols to point to it. To make things cleaner to write, I use a Context object: it's just like a dict, but if it contains any Symbols without an explicit context, then it points those symbols to itself. This way of writing things doesn't change the underlying data structure at all compared to the previous example; it just makes the "code" a bit easier for humans to read/write. I also abbreviate Expression as E and Symbol as S. Combining those notational changes and making everything a bit less verbose, a computation can be written something like this:

```
Context({
  A: E(input, []),
  B: E(input, []),
  C: E(input, []),
  S1: E(xor, [S(A), S(B)]),
  S: E(xor, [S(S1), S(C)]),
  Pair1: E(and, [S(A), S(B)]),
  Pair2: E(and, [S(B), S(C)]),
  Pair3: E(and, [S(C), S(A)]),
  C1: E(or, [S(Pair1), S(Pair2)]),
  Cout: E(or, [S(C1), S(Pair3)]),
})
```

As [the saying](https://en.wikipedia.org/wiki/Greenspun%27s_tenth_rule) goes, any sufficiently complicated software project contains an ad-hoc, informally-specified, bug-ridden, slow implementation of half of Common Lisp. Don't use this as a primary programming language, folks. *Representing Interventions and Functions* We've broken symbol resolution up into two steps: get the context, and get the value of the literal within the context. That lets us re-use symbol names within different contexts. But we still need some way to specify a function call - i.e. a copy of some context with "input" symbols set to particular values. As an example, for a computation of factorial(3), we might want to write something like this:

```
Context({
  n: 3,
  is_base_case: E(==, [S(n), 0]),
  recurse_result: S(result, <COPY OF THIS CONTEXT BUT WITH n REPLACED BY n-1>),
  non_base_result: E(*, [S(n), S(recurse_result)]),
  result: E(ternary, [S(is_base_case), 1, S(non_base_result)]),
})
```

Key thing to notice: "copy of this context but with n replaced by n-1" sounds an awful lot like an intervention. We take a "model" - a context - and produce a copy, with a change to the expression for one of the symbols. That's principle 2: function calls are produced by do()-style interventions on a context. I'll use the standard function call notation for this. 
Simple example: if we take a context like `Context({x: 1, y: 2, z: E(+, [S(x), S(y)])})`, and apply the intervention `{y: 3}`, we'd write that as

```
Context({x: 1, y: 2, z: E(+, [S(x), S(y)])})({y: 3}) = Context({x: 1, y: 3, z: E(+, [S(x), S(y)])})
```

For our factorial example, that would look something like this:

```
factorial_context = Context({
  n: 3,
  is_base_case: E(==, [S(n), 0]),
  recurse_result: S(result, factorial_context({n: E(-, [S(n), 1])})),
  non_base_result: E(*, [S(n), S(recurse_result)]),
  result: E(ternary, [S(is_base_case), 1, S(non_base_result)]),
})
```

This is kind of tricky to write - what I intend to convey is that factorial_context is a pointer to the whole Context object. If we wanted to write the whole thing in Trace notation, it would actually look like this:

```
outer_context = Context({
  factorial: Context({
    n: 3,
    factorial: "this string is ignored",
    is_base_case: E(==, [S(n), 0]),
    recurse_result: S(result, S(factorial)({n: E(-, [S(n), 1])})),
    non_base_result: E(*, [S(n), S(recurse_result)]),
    result: E(ternary, [S(is_base_case), 1, S(non_base_result)]),
  })({factorial: S(factorial)})
})
```

We need a pointer to the factorial context inside the factorial context itself, so we create an outer context, then use another intervention to pass our pointer in. Viewed as a programming language, Trace lacks lexical scope - annoying for a programming language, but probably a good thing for a data structure. At this point, we have all the pieces of a programming language. To run factorial(5), we could run `S(result, S(factorial, outer_context)({n: 5})).get_value()` with outer_context defined above. More importantly, this immediately provides a way of accessing the program's trace. We have, effectively, a lazy data structure representing the trace of the program. Let's imagine tracing our factorial function for n=3. Mixing our notation with some arrows, our program suggests something like this: ![](https://docs.google.com/drawings/u/1/d/sqDDogMCGnBbUOVkDSBUcuA/image?w=493&h=679&rev=67&ac=1&parent=1k4EeMXmr8q9XF3OvZLBHckRcjHnVwrBBW7szh6OJPDg) Key thing to notice: all Contexts except the first are dynamically generated, via interventions. As we trace back through the data structure, we'll need to call get_value() to generate the contexts. So when we actually get the values of the contexts, we'll end up with a data structure like this: ![](https://docs.google.com/drawings/u/1/d/sEQwj5M3jgKXwE0JdkJUF6g/image?w=493&h=679&rev=9&ac=1&parent=1k4EeMXmr8q9XF3OvZLBHckRcjHnVwrBBW7szh6OJPDg) Sudden Stop ----------- That's it for core principles. I'm currently prototyping some causal abstraction algorithms using Trace, and I'm tweaking things as I go - in particular, I may change the intervention notation to optimize for intervening on the trace rather than intervening on the program. We'll see. Once I have some proper algorithms running and the notation has stabilized, I'll probably put it on github along with some documentation.
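For concreteness, here is one minimal way the Symbol / Context / get_value() machinery above could be realized in Python. This is an illustrative reconstruction under the stated principles, not the actual Trace library (which, as noted, isn't published yet), so every implementation detail is an assumption:

```python
class Expression:
    def __init__(self, function, parents):
        self.function, self.parents = function, parents

class Symbol:
    def __init__(self, literal, context=None):
        self.literal, self.context = literal, context

    def get_value(self):
        # The literal and/or the context may themselves be Symbols
        # (dynamic structure), so resolve them first.
        ctx = self.context.get_value() if isinstance(self.context, Symbol) else self.context
        lit = self.literal.get_value() if isinstance(self.literal, Symbol) else self.literal
        entry = ctx[lit]
        if isinstance(entry, Expression):
            args = [p.get_value() if isinstance(p, Symbol) else p for p in entry.parents]
            return entry.function(*args)
        return entry.get_value() if isinstance(entry, Symbol) else entry

def _adopt(value, ctx):
    # Point context-less Symbols (possibly nested inside Expressions) at ctx.
    if isinstance(value, Symbol) and value.context is None:
        value.context = ctx
    elif isinstance(value, Expression):
        for p in value.parents:
            _adopt(p, ctx)

def _rebind(value, old, new):
    # Copy value, re-pointing Symbols bound to `old` (or unbound) at `new`,
    # so an intervention yields a genuinely independent context.
    if isinstance(value, Symbol):
        keep = value.context is not old and value.context is not None
        return Symbol(value.literal, value.context if keep else new)
    if isinstance(value, Expression):
        return Expression(value.function, [_rebind(p, old, new) for p in value.parents])
    return value

class Context(dict):
    def __init__(self, mapping):
        super().__init__(mapping)
        for v in self.values():
            _adopt(v, self)

    def __call__(self, intervention):
        # do()-style intervention: a fresh copy with some entries replaced.
        new = Context({})
        for k, v in {**self, **intervention}.items():
            dict.__setitem__(new, k, _rebind(v, self, new))
        return new

# Usage: the x + y example from above, then an intervention on y.
add = lambda a, b: a + b
ctx = Context({'x': 1, 'y': 2, 'z': Expression(add, [Symbol('x'), Symbol('y')])})
print(Symbol('z', ctx).get_value())             # 3
print(Symbol('z', ctx({'y': 10})).get_value())  # 11
```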
026a29f5-7147-4190-ba17-ad28ff3619c6
trentmkelly/LessWrong-43k
LessWrong
Low hanging fruits (LWCW 2020)

During the Less Wrong Community Week-end (Europe), in one event people share low-hanging fruit they used. I chaired it this year and defined low-hanging fruit as something that can be easily bought or done and that improves your life. Here is the list of fruits shared in 2020. All typos are mine. "I" usually reflects the one who gave the message and not the author of this blog post.

Watering bulbs
If you don't know how much to water your plants, or whether you did too much or not enough, let bulbs do it for you. Examples of bulbs are

Before-work time
Reserve some time in the morning, before you head out to work, and invest this into something that is important to you. You do this without any experience from the day (bad/good) and with your full physical ability.

Ad block on smartphone
Blokada https://blokada.org/index.html Ad blocking for Android. Reduces noise, distraction, bandwidth use, easy to install. Added bonus: Firefox + uBlock Origin.

Kalimba
Playing music is to listening to live music as live music is to recorded music. If you don't want to spend time learning an instrument, the kalimba is cheap and directly leads to beautiful music.

Better sleeping
No device at night: Set all devices to lock at sleeping time.
Light: Get a smart lightbulb, and set it to slowly dim/become red for the half hour before bed - makes going to bed at the right time the default action and makes me feel tired, and significantly decreases the willpower it takes. End the day with a paper book to avoid looking at a screen. Two to ten minutes of sun in the morning, or by default strong light.
Melatonin: Take melatonin.
Morning: Schedule things in the morning so that you have an incentive to sleep. Keep your phone 3 meters from the bed, to force yourself to get out of bed to turn it off.

Day / night separation
Ensure you don't see your bed from your workplace (and reciprocally), to feel the separation between work space and personal space.

Neater writing
Switching to using a fountain pen can force
6c04798f-374f-45b9-8d7c-bf24911314b5
trentmkelly/LessWrong-43k
LessWrong
[SEQ RERUN] The Two-Party Swindle Today's post, The Two-Party Swindle was originally published on 01 January 2008. A summary (taken from the LW wiki):   > Voters for either political party usually have more in common with each other than they do with the politicians they vote for. And yet, they support their own "team members" with fanatic devotion. Nobody is allowed to criticize their own team's politicians without their fellow voters accusing them of treason. Discuss the post here (rather than in the comments to the original post). This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was Cultish Countercultishness, and you can use the sequence_reruns tag or rss feed to follow the rest of the series. Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.
ffb3d806-f354-4388-a408-327e44c42018
trentmkelly/LessWrong-43k
LessWrong
Stop arbitrarily limiting yourself Crossposted from my blog My roommate types with one finger; he literally only uses his right index finger. I made him do a typing test and his WPM turned out to be around 30-40, which is honestly higher than I thought it’d be. When I first noticed this, I immediately thought: he’s limiting himself for no reason. He could probably get to 60 WPM with proper form pretty easily with as little as 15 minutes of practice a day. So I asked him why he didn’t use both hands, and he said that he’d gotten used to it and never ran into any problems. If it works, it works, I guess. I left that interaction a little bewildered, and it got me wondering if I have any habits that arbitrarily limit myself as well. I thought of a few examples. ---------------------------------------- I had a pretty bad track record with STEM classes in high school, so I started to label myself as a “non-STEM” person. I doubt that this label is doing me any favors; it’s just limiting my willingness and ability to learn STEM subjects in the future. Plus, at this point, I suspect it’s become a self-fulfilling prophecy: I think I’m bad at STEM subjects, which subconsciously may lead me to actually do worse in those classes, which perpetuates my potentially misguided belief.  Another area where I arbitrarily limit myself is my fashion style. I recently wrote a post about this, but to summarize: I’ve worn hoodies and sweatpants my whole life and have consistently refused opportunities to improve my wardrobe. Why? Because over the years, I’ve established an identity as an informal dresser to my friends and family, and I subconsciously feel the need to uphold that reputation. Which is pretty stupid.  So8res writes a great piece about how complaining is a big way we arbitrarily limit ourselves. He offers an example of someone getting out of an abusive relationship:  > After they successfully exit, their friends are likely to be first in line with condolences along the lines of “they were gaslighting you
161aaf0e-727e-4042-9e4b-44efbee9a466
trentmkelly/LessWrong-43k
LessWrong
What is the best way to talk about probabilities you expect to change with evidence/experiments? I was thinking about my p(doom) in the next 10 years and came up with something around 6%[1]. However, that involves lots of current unknowns to me, like the nature of current human knowledge production (and the bottlenecks involved), which would put my p(doom) at either 3% or 15% depending upon what type of bottlenecks are found or not found. Is there a technical way to describe this probability distribution contingent on evidence? 1. ^ I'm bearish on LLMs leading to AI directly (10% chance) and see roughly a 30% chance of LLM-based AI fooming quickly enough to kill us and to want to kill us within 10 years. There is a 3% chance that something will come out of left field and do the same.
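One way to make the structure of the question concrete (an illustration of the setup, not an answer from the post): treat the unknown bottleneck type as a latent variable B, keep the conditional probabilities, and report the prior over B alongside them. The headline number is then a mixture; the 25%/75% weights below are backed out from the numbers quoted above, not stated in the post:

```latex
P(\mathrm{doom}) = P(\mathrm{doom} \mid B)\,P(B) + P(\mathrm{doom} \mid \lnot B)\,P(\lnot B)
                \approx 0.15 \cdot 0.25 + 0.03 \cdot 0.75 \approx 0.06
```

In Bayesian terms this is a hierarchical model (a mixture over hypotheses): the conditionals are the probabilities "contingent on evidence", and the prior over B is what experiments would update.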
c2bea778-deb2-47ef-a4d6-fe8395f58b64
trentmkelly/LessWrong-43k
LessWrong
The Fundamental Theorem of Asset Pricing: Missing Link of the Dutch Book Arguments Assumed background: Acyclic preferences, Dutch Book theorems There are fairly elementary arguments that, in the absence of uncertainty, any preferences not described by a utility function are problematic - this is the circular preferences argument. There are also fairly elementary arguments that, if we handle uncertainty by taking weighted sums of utilities of different outcomes, then the weights should follow the usual rules of probability - these are the Dutch Book arguments. But in the middle there’s a jump: we need to assume that taking weighted sums of utilities makes sense for some reason. There are some high-powered theorems which make that jump (specifically the complete class theorem), but they’re not very mathematically accessible. (If any of that sounds new, you should read Yudkowsky’s excellent intro to this stuff before reading this post.) It turns out that there is a relatively simple theorem which bridges the gap between deterministic utility and Dutch Book arguments. But rather than hanging out in decision theory textbooks, it’s been living it up in finance. It’s called the Fundamental Theorem of Asset Pricing (FTAP). Here’s the setup. Just like the Dutch Book arguments, we have a bunch of tradable assets - i.e. betting contracts, like stock options or horse race bets. We have a bunch of possible outcomes - i.e. possible prices of an underlying stock at expiry, or possible winners of the horse race. Each asset's final value will depend on the outcome. Then the FTAP states that either: * There exists some portfolio of assets which costs $0 to buy (can include short sales) and is guaranteed a positive payout (i.e. arbitrage), or * There exists a probability distribution such that the price of each asset is the expected value of its payout (i.e. price is a weighted sum of possible outcome values). Note that this is exactly what we need to round out the Dutch Book arguments: either there exists an arbitrage opportunity, or we compare assets using
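A minimal numeric sketch of the dichotomy may help. The three-asset horse-race market below is invented for illustration, and the use of scipy's linear-programming solver to search for a pricing distribution is an implementation choice, not anything from the post:

```python
import numpy as np
from scipy.optimize import linprog

# Toy market: rows = assets, columns = outcomes (horse 1 wins, horse 2 wins).
payouts = np.array([
    [1.0, 0.0],   # bet paying 1 if horse 1 wins
    [0.0, 1.0],   # bet paying 1 if horse 2 wins
    [1.0, 1.0],   # riskless bond paying 1 either way
])
prices = np.array([0.6, 0.5, 1.0])

# FTAP dichotomy: either some p >= 0 prices every asset as its expected
# payout (payouts @ p == prices), or an arbitrage portfolio exists.
res = linprog(c=np.zeros(2), A_eq=payouts, b_eq=prices,
              bounds=[(0, None)] * 2, method="highs")
if res.success:
    print("pricing probabilities:", res.x)
else:
    # Here 0.6 + 0.5 > 1.0: sell both bets for 1.1 total, buy the bond for
    # 1.0, pocket 0.1 now, and the payouts cancel exactly in every outcome.
    print("no consistent probabilities -> Dutch book / arbitrage")
```

Dropping the second bet's price to 0.4 makes the system feasible and the solver returns p = (0.6, 0.4): every price is an expected payout, which is the second branch of the theorem.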
Scaling Laws for Neural Language Models

1 Introduction
---------------

Language provides a natural domain for the study of artificial intelligence, as the vast majority of reasoning tasks can be efficiently expressed and evaluated in language, and the world’s text provides a wealth of data for unsupervised learning via generative modeling. Deep learning has recently seen rapid progress in language modeling, with state-of-the-art models [radford2018improving; 1810.04805; 1906.08237; DBLP:journals/corr/abs-1907-11692; 1910.10683] approaching human-level performance on many specific tasks [wang2019superglue], including the composition of coherent multi-paragraph prompted text samples [radford2019language].

One might expect language modeling performance to depend on model architecture, the size of neural models, the computing power used to train them, and the data available for this training process. In this work we will empirically investigate the dependence of language modeling loss on all of these factors, focusing on the Transformer architecture [OriginalTransformer; liu2018generating]. The high ceiling and low floor for performance on language tasks allow us to study trends over more than seven orders of magnitude in scale. Throughout we will observe precise power-law scalings for performance as a function of training time, context length, dataset size, model size, and compute budget.

Figure 1: Language modeling performance improves smoothly as we increase the model size, dataset size, and amount of compute used for training. For optimal performance all three factors must be scaled up in tandem. Empirical performance has a power-law relationship with each individual factor when not bottlenecked by the other two. (Here we display predicted compute when using a sufficiently small batch size; see Figure 13 for comparison to the purely empirical data.)

### 1.1 Summary

Our key findings for Transformer language models are as follows:

##### Performance depends strongly on scale, weakly on model shape:

Model performance depends most strongly on scale, which consists of three factors: the number of model parameters $N$ (excluding embeddings), the size of the dataset $D$, and the amount of compute $C$ used for training. Within reasonable limits, performance depends very weakly on other architectural hyperparameters such as depth vs. width. (Section 3)

##### Smooth power laws:

Performance has a power-law relationship with each of the three scale factors $N, D, C$ when not bottlenecked by the other two, with trends spanning more than six orders of magnitude (see Figure 1). We observe no signs of deviation from these trends on the upper end, though performance must flatten out eventually before reaching zero loss.
(Section 3)

##### Universality of overfitting:

Performance improves predictably as long as we scale up $N$ and $D$ in tandem, but enters a regime of diminishing returns if either $N$ or $D$ is held fixed while the other increases. The performance penalty depends predictably on the ratio $N^{0.74}/D$, meaning that every time we increase the model size 8x, we only need to increase the data by roughly 5x to avoid a penalty. (Section 4)

##### Universality of training:

Training curves follow predictable power-laws whose parameters are roughly independent of the model size. By extrapolating the early part of a training curve, we can roughly predict the loss that would be achieved if we trained for much longer. (Section 5)

##### Transfer improves with test performance:

When we evaluate models on text with a different distribution than they were trained on, the results are strongly correlated to those on the training validation set with a roughly constant offset in the loss – in other words, transfer to a different distribution incurs a constant penalty but otherwise improves roughly in line with performance on the training set. (Section 3.2.2)

##### Sample efficiency:

Large models are more sample-efficient than small models, reaching the same level of performance with fewer optimization steps (Figure 2) and using fewer data points (Figure 4).

Figure 2: A series of language model training runs, with models ranging in size from $10^3$ to $10^9$ parameters (excluding embeddings).

Figure 3: As more compute becomes available, we can choose how much to allocate towards training larger models, using larger batches, and training for more steps. We illustrate this for a billion-fold increase in compute. For optimally compute-efficient training, most of the increase should go towards increased model size. A relatively small increase in data is needed to avoid reuse. Of the increase in data, most can be used to increase parallelism through larger batch sizes, with only a very small increase in serial training time required.
##### Convergence is inefficient:

When working within a fixed compute budget $C$ but without any other restrictions on the model size $N$ or available data $D$, we attain optimal performance by training *very large models* and stopping *significantly short of convergence* (see Figure 3). Maximally compute-efficient training would therefore be far more sample efficient than one might expect based on training small models to convergence, with data requirements growing very slowly as $D \sim C^{0.27}$ with training compute. (Section 6)

##### Optimal batch size:

The ideal batch size for training these models is roughly a power of the loss only, and continues to be determinable by measuring the gradient noise scale [1812.06162]; it is roughly 1-2 million tokens at convergence for the largest models we can train. (Section 5.1)

Taken together, these results show that language modeling performance improves smoothly and predictably as we appropriately scale up model size, data, and compute. We expect that larger language models will perform better and be more sample efficient than current models.

### 1.2 Summary of Scaling Laws

The test loss of a Transformer trained to autoregressively model language can be predicted using a power-law when performance is limited by only either the number of non-embedding parameters $N$, the dataset size $D$, or the optimally allocated compute budget $C_{\min}$ (see Figure 1):

1. For models with a limited number of parameters, trained to convergence on sufficiently large datasets:

$$
L(N) = (N_c/N)^{\alpha_N}; \quad \alpha_N \sim 0.076, \quad N_c \sim 8.8 \times 10^{13} \text{ (non-embedding parameters)}
\tag{1.1}
$$

2. For large models trained with a limited dataset with early stopping:

$$
L(D) = (D_c/D)^{\alpha_D}; \quad \alpha_D \sim 0.095, \quad D_c \sim 5.4 \times 10^{13} \text{ (tokens)}
\tag{1.2}
$$

3. When training with a limited amount of compute, a sufficiently large dataset, an optimally-sized model, and a sufficiently small batch size (making optimal use of compute):

$$
L(C_{\min}) = (C_c^{\min}/C_{\min})^{\alpha_C^{\min}}; \quad \alpha_C^{\min} \sim 0.050, \quad C_c^{\min} \sim 3.1 \times 10^{8} \text{ (PF-days)}
\tag{1.3}
$$

(We also observe an empirical power-law trend with the training compute $C$ (Figure 1) while training at fixed batch size, but it is the trend with $C_{\min}$ that should be used to make predictions. They are related by Equation (5.5).)

These relations hold across eight orders of magnitude in $C_{\min}$, six orders of magnitude in $N$, and over two orders of magnitude in $D$. They depend very weakly on model shape and other Transformer hyperparameters (depth, width, number of self-attention heads), with specific numerical values associated with the WebText2 training set [radford2019language].
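To make these fits concrete, here is a minimal Python sketch (ours, not the paper's) that evaluates the three power laws with the rounded constants quoted above:

```python
# Approximate fits from Section 1.2 (WebText2 tokenization; rounded values).
ALPHA_N, N_C = 0.076, 8.8e13   # vs. non-embedding parameters, Eq. (1.1)
ALPHA_D, D_C = 0.095, 5.4e13   # vs. dataset tokens, Eq. (1.2)
ALPHA_C, C_C = 0.050, 3.1e8    # vs. optimally allocated PF-days, Eq. (1.3)

def loss_vs_params(n):
    """L(N): converged loss on ample data, Eq. (1.1)."""
    return (N_C / n) ** ALPHA_N

def loss_vs_data(d):
    """L(D): large model with early stopping, Eq. (1.2)."""
    return (D_C / d) ** ALPHA_D

def loss_vs_compute(c_min):
    """L(C_min): compute-optimal training, Eq. (1.3)."""
    return (C_C / c_min) ** ALPHA_C

# Example: a GPT-2-scale model with ~1.5e9 non-embedding parameters.
print(loss_vs_params(1.5e9))   # ~2.3 nats/token
print(loss_vs_data(2.3e10))    # ~2.1 nats/token on a 23B-token dataset
```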
The power laws $\alpha_N, \alpha_D, \alpha_C^{\min}$ specify the degree of performance improvement expected as we scale up $N$, $D$, or $C_{\min}$; for example, doubling the number of parameters yields a loss that is smaller by a factor $2^{-\alpha_N} \approx 0.95$. The precise numerical values of $N_c$, $C_c^{\min}$, and $D_c$ depend on the vocabulary size and tokenization and hence do not have a fundamental meaning.

The critical batch size, which determines the speed/efficiency tradeoff for data parallelism [1812.06162], also roughly obeys a power law in $L$:

$$
B_{\mathrm{crit}}(L) = \frac{B_*}{L^{1/\alpha_B}}, \quad B_* \sim 2 \times 10^8 \text{ tokens}, \quad \alpha_B \sim 0.21
\tag{1.4}
$$

Figure 4: Left: The early-stopped test loss $L(N,D)$ varies predictably with the dataset size $D$ and model size $N$ according to Equation (1.5). Right: After an initial transient period, learning curves for all model sizes $N$ can be fit with Equation (1.6), which is parameterized in terms of $S_{\min}$, the number of steps when training at large batch size (details in Section 5.1).

Equations (1.1) and (1.2) together suggest that as we increase the model size, we should increase the dataset size sublinearly according to $D \propto N^{\alpha_N/\alpha_D} \sim N^{0.74}$. In fact, we find that there is a single equation combining (1.1) and (1.2) that governs the simultaneous dependence on $N$ and $D$ and governs the degree of overfitting:

$$
L(N,D) = \left[ \left(\frac{N_c}{N}\right)^{\alpha_N/\alpha_D} + \frac{D_c}{D} \right]^{\alpha_D}
\tag{1.5}
$$

with fits pictured on the left of Figure 4. We conjecture that this functional form may also parameterize the trained log-likelihood for other generative modeling tasks.

When training a given model for a finite number of parameter update steps $S$ in the infinite data limit, after an initial transient period, the learning curves can be accurately fit by (see the right of Figure 4)

$$
L(N,S) = \left(\frac{N_c}{N}\right)^{\alpha_N} + \left(\frac{S_c}{S_{\min}(S)}\right)^{\alpha_S}
\tag{1.6}
$$

where $S_c \approx 2.1 \times 10^3$ and $\alpha_S \approx 0.76$, and $S_{\min}(S)$ is the minimum possible number of optimization steps (parameter updates) estimated using Equation (5.4).
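A similarly minimal sketch of the combined laws (again ours, with the Section 1.2 constants; $S_{\min}$ here is the adjusted step count defined later in Equation (5.4)):

```python
ALPHA_N, ALPHA_D = 0.076, 0.095
N_C, D_C = 8.8e13, 5.4e13      # rounded Section 1.2 fits
ALPHA_S, S_C = 0.76, 2.1e3

def loss_nd(n, d):
    """Simultaneous model/data scaling law, Eq. (1.5)."""
    return ((N_C / n) ** (ALPHA_N / ALPHA_D) + D_C / d) ** ALPHA_D

def loss_ns(n, s_min):
    """Learning curve in the infinite-data limit, Eq. (1.6)."""
    return (N_C / n) ** ALPHA_N + (S_C / s_min) ** ALPHA_S

# Sanity check: for D -> infinity, L(N, D) reduces to the pure L(N) law.
n = 1e9
assert abs(loss_nd(n, float("inf")) - (N_C / n) ** ALPHA_N) < 1e-9
```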
When training within a fixed compute budget $C$, but with no other constraints, Equation (1.6) leads to the prediction that the optimal model size $N$, optimal batch size $B$, optimal number of steps $S$, and dataset size $D$ should grow as

$$
N \propto C^{\alpha_C^{\min}/\alpha_N}, \quad B \propto C^{\alpha_C^{\min}/\alpha_B}, \quad S \propto C^{\alpha_C^{\min}/\alpha_S}, \quad D = B \cdot S
\tag{1.7}
$$

with

$$
\alpha_C^{\min} = 1/\left(1/\alpha_S + 1/\alpha_B + 1/\alpha_N\right)
\tag{1.8}
$$

which closely matches the empirically optimal results $N \propto C_{\min}^{0.73}$, $B \propto C_{\min}^{0.24}$, and $S \propto C_{\min}^{0.03}$. As the computational budget $C$ increases, it should be spent primarily on larger models, without dramatic increases in training time or dataset size (see Figure 3). This also implies that as models grow larger, they become increasingly sample efficient. In practice, researchers typically train smaller models for longer than would be maximally compute-efficient because of hardware constraints. Optimal performance depends on total compute as a power law (see Equation (1.3)).

We provide some basic theoretical motivation for Equation (1.5), an analysis of learning curve fits and their implications for training time, and a breakdown of our results per token. We also make some brief comparisons to LSTMs and recurrent Transformers [DBLP:journals/corr/abs-1807-03819].

### 1.3 Notation

We use the following notation:

* $L$ – the cross entropy loss in nats. Typically it will be averaged over the tokens in a context, but in some cases we report the loss for specific tokens within the context.
* $N$ – the number of model parameters, *excluding all vocabulary and positional embeddings*
* $C \approx 6NBS$ – an estimate of the total non-embedding training compute, where $B$ is the batch size, and $S$ is the number of training steps (i.e. parameter updates). We quote numerical values in PF-days, where one PF-day $= 10^{15} \times 24 \times 3600 = 8.64 \times 10^{19}$ floating point operations.
* $D$ – the dataset size in tokens
* $B_{\mathrm{crit}}$ – the critical batch size [1812.06162], defined and discussed in Section 5.1. Training at the critical batch size provides a roughly optimal compromise between time and compute efficiency.
* $C_{\min}$ – an estimate of the minimum amount of non-embedding compute to reach a given value of the loss. This is the training compute that would be used if the model were trained at a batch size much less than the critical batch size.
* $S_{\min}$ – an estimate of the minimal number of training steps needed to reach a given value of the loss. This is also the number of training steps that would be used if the model were trained at a batch size much greater than the critical batch size.
* $\alpha_X$ – power-law exponents for the scaling of the loss as $L(X) \propto 1/X^{\alpha_X}$, where $X$ can be any of $N, D, C, S, B, C_{\min}$.

2 Background and Methods
-------------------------

We train language models on WebText2, an extended version of the WebText dataset [radford2019language], tokenized using byte-pair encoding [BPE] with a vocabulary size $n_{\mathrm{vocab}} = 50257$.
We optimize the autoregressive log-likelihood (i.e. cross-entropy loss) averaged over a 1024-token context, which is also our principal performance metric. We record the loss on the WebText2 test distribution and on a selection of other text distributions. We primarily train decoder-only [liu2018generating; radford2018improving] Transformer [OriginalTransformer] models, though we also train LSTM models and Universal Transformers [DBLP:journals/corr/abs-1807-03819] for comparison.

### 2.1 Parameter and Compute Scaling of Transformers

| Operation | Parameters | FLOPs per Token |
| --- | --- | --- |
| Embed | $(n_{\mathrm{vocab}} + n_{\mathrm{ctx}}) d_{\mathrm{model}}$ | $4 d_{\mathrm{model}}$ |
| Attention: QKV | $n_{\mathrm{layer}} d_{\mathrm{model}} 3 d_{\mathrm{attn}}$ | $2 n_{\mathrm{layer}} d_{\mathrm{model}} 3 d_{\mathrm{attn}}$ |
| Attention: Mask | — | $2 n_{\mathrm{layer}} n_{\mathrm{ctx}} d_{\mathrm{attn}}$ |
| Attention: Project | $n_{\mathrm{layer}} d_{\mathrm{attn}} d_{\mathrm{model}}$ | $2 n_{\mathrm{layer}} d_{\mathrm{attn}} d_{\mathrm{embd}}$ |
| Feedforward | $n_{\mathrm{layer}} 2 d_{\mathrm{model}} d_{\mathrm{ff}}$ | $2 n_{\mathrm{layer}} 2 d_{\mathrm{model}} d_{\mathrm{ff}}$ |
| De-embed | — | $2 d_{\mathrm{model}} n_{\mathrm{vocab}}$ |
| Total (Non-Embedding) | $N = 2 d_{\mathrm{model}} n_{\mathrm{layer}} (2 d_{\mathrm{attn}} + d_{\mathrm{ff}})$ | $C_{\mathrm{forward}} = 2N + 2 n_{\mathrm{layer}} n_{\mathrm{ctx}} d_{\mathrm{attn}}$ |

Table 1: Parameter counts and compute (forward pass) estimates for a Transformer model. Sub-leading terms such as nonlinearities, biases, and layer normalization are omitted.

We parameterize the Transformer architecture using hyperparameters $n_{\mathrm{layer}}$ (number of layers), $d_{\mathrm{model}}$ (dimension of the residual stream), $d_{\mathrm{ff}}$ (dimension of the intermediate feed-forward layer), $d_{\mathrm{attn}}$ (dimension of the attention output), and $n_{\mathrm{heads}}$ (number of attention heads per layer). We include $n_{\mathrm{ctx}}$ tokens in the input context, with $n_{\mathrm{ctx}} = 1024$ except where otherwise noted.

We use $N$ to denote the model size, which we define as the number of *non-embedding* parameters

$$
N \approx 2 d_{\mathrm{model}} n_{\mathrm{layer}} (2 d_{\mathrm{attn}} + d_{\mathrm{ff}}) = 12 n_{\mathrm{layer}} d_{\mathrm{model}}^2 \quad \text{with the standard } d_{\mathrm{attn}} = d_{\mathrm{ff}}/4 = d_{\mathrm{model}}
\tag{2.1}
$$

where we have excluded biases and other sub-leading terms. Our models also have $n_{\mathrm{vocab}} d_{\mathrm{model}}$ parameters in an embedding matrix, and use $n_{\mathrm{ctx}} d_{\mathrm{model}}$ parameters for positional embeddings, but we do not include these when discussing the ‘model size’ $N$; we will see that this produces significantly cleaner scaling laws.

Evaluating a forward pass of the Transformer involves roughly

$$
C_{\mathrm{forward}} \approx 2N + 2 n_{\mathrm{layer}} n_{\mathrm{ctx}} d_{\mathrm{model}}
\tag{2.2}
$$

add-multiply operations, where the factor of two comes from the multiply-accumulate operation used in matrix multiplication. A more detailed per-operation parameter and compute count is included in Table 1. For contexts and models with $d_{\mathrm{model}} > n_{\mathrm{ctx}}/12$, the context-dependent computational cost per token is a relatively small fraction of the total compute. Since we primarily study models where $d_{\mathrm{model}} \gg n_{\mathrm{ctx}}/12$, we do not include context-dependent terms in our training compute estimate. Accounting for the backwards pass (approximately twice the compute of the forwards pass), we then define the estimated non-embedding compute as $C \approx 6N$ floating point operations per training token.

### 2.2 Training Procedures

Unless otherwise noted, we train models with the Adam optimizer [kingma2014adam] for a fixed $2.5 \times 10^5$ steps with a batch size of 512 sequences of 1024 tokens. Due to memory constraints, our largest models (more than 1B parameters) were trained with Adafactor [DBLP:journals/corr/abs-1804-04235].
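As a quick numerical check of the Section 2.1 counting rules (our own sketch, assuming the standard shape choices $d_{\mathrm{attn}} = d_{\mathrm{model}}$ and $d_{\mathrm{ff}} = 4 d_{\mathrm{model}}$ as defaults):

```python
def transformer_counts(n_layer, d_model, n_ctx=1024, d_attn=None, d_ff=None):
    """Rough non-embedding parameter and per-token FLOP counts, following
    Table 1 and Eqs. (2.1)-(2.2); biases and layernorms are ignored."""
    d_attn = d_attn or d_model        # standard choice
    d_ff = d_ff or 4 * d_model        # standard choice
    n = 2 * d_model * n_layer * (2 * d_attn + d_ff)       # Eq. (2.1)
    c_forward = 2 * n + 2 * n_layer * n_ctx * d_attn      # Eq. (2.2)
    return n, c_forward, 6 * n        # training compute ~6N per token

n, c_fwd, c_train = transformer_counts(48, 1600)          # GPT-2-like shape
print(f"N ≈ {n:.2e}")                 # 12 * 48 * 1600^2 ≈ 1.47e9, as in Eq. (2.1)

PF_DAY = 8.64e19                      # floating point operations per PF-day
tokens = 2.3e10                       # roughly one epoch of WebText2
print(f"≈ {c_train * tokens / PF_DAY:.1f} PF-days for one epoch")
```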
We experimented with a variety of learning rates and schedules, as discussed in Appendix D.6. We found that results at convergence were largely independent of learning rate schedule. Unless otherwise noted, all training runs included in our data used a learning rate schedule with a 3000 step linear warmup followed by a cosine decay to zero.

### 2.3 Datasets

We train our models on an extended version of the WebText dataset described in [radford2019language]. The original WebText dataset was a web scrape of outbound links from Reddit through December 2017 which received at least 3 karma. In the second version, WebText2, we added outbound Reddit links from the period of January to October 2018, also with a minimum of 3 karma. The karma threshold served as a heuristic for whether people found the link interesting or useful. The text of the new links was extracted with the Newspaper3k python library. In total, the dataset consists of 20.3M documents containing 96 GB of text and $1.62 \times 10^{10}$ words (as defined by wc). We then apply the reversible tokenizer described in [radford2019language], which yields $2.29 \times 10^{10}$ tokens. We reserve $6.6 \times 10^8$ of these tokens for use as a test set, and we also test on similarly-prepared samples of Books Corpus [Zhu_2015], Common Crawl [commoncrawl], English Wikipedia, and a collection of publicly-available Internet Books.

3 Empirical Results and Basic Power Laws
-----------------------------------------

To characterize language model scaling we train a wide variety of models, varying a number of factors including:

* Model size (ranging from 768 to 1.5 billion non-embedding parameters)
* Dataset size (ranging from 22 million to 23 billion tokens)
* Shape (including depth, width, attention heads, and feed-forward dimension)
* Context length (1024 for most runs, though we also experiment with shorter contexts)
* Batch size ($2^{19}$ for most runs, but we also vary it to measure the critical batch size)

In this section we will display data along with empirically-motivated fits, deferring theoretical analysis to later sections.

### 3.1 Approximate Transformer Shape and Hyperparameter Independence

Figure 5: Performance depends very mildly on model shape when the total number of non-embedding parameters $N$ is held fixed. The loss varies only a few percent over a wide range of shapes. Small differences in parameter counts are compensated for by using the fit to $L(N)$ as a baseline. Aspect ratio in particular can vary by a factor of 40 while only slightly impacting performance; an $(n_{\mathrm{layer}}, d_{\mathrm{model}}) = (6, 4288)$ model reaches a loss within 3% of the $(48, 1600)$ model used in [radford2019language].

Transformer performance depends very weakly on the shape parameters $n_{\mathrm{layer}}$, $n_{\mathrm{heads}}$, and $d_{\mathrm{ff}}$ when we hold the total non-embedding parameter count $N$ fixed. To establish these results we trained models with fixed size while varying a single hyperparameter. This was simplest for the case of $n_{\mathrm{heads}}$. When varying $n_{\mathrm{layer}}$, we simultaneously varied $d_{\mathrm{model}}$ while keeping $N \approx 12 n_{\mathrm{layer}} d_{\mathrm{model}}^2$ fixed.
Similarly, to vary $d_{\mathrm{ff}}$ at fixed model size we also simultaneously varied the $d_{\mathrm{model}}$ parameter, as required by the parameter counts in Table 1. Independence of $n_{\mathrm{layer}}$ would follow if deeper Transformers effectively behave as ensembles of shallower models, as has been suggested for ResNets [ResNetsEnsemblesShallow]. The results are shown in Figure 5.

### 3.2 Performance with Non-Embedding Parameter Count N

Figure 6: Left: When we include embedding parameters, performance appears to depend strongly on the number of layers in addition to the number of parameters. Right: When we exclude embedding parameters, the performance of models with different depths converges to a single trend. Only models with fewer than 2 layers or with extreme depth-to-width ratios deviate significantly from the trend.

In Figure 6 we display the performance of a wide variety of models, ranging from small models with shape $(n_{\mathrm{layer}}, d_{\mathrm{model}}) = (2, 128)$ through billion-parameter models, ranging in shape from $(6, 4288)$ through $(207, 768)$. Here we have trained to near convergence on the full WebText2 dataset and observe no overfitting (except possibly for the very largest models).

As shown in Figure 1, we find a steady trend with non-embedding parameter count $N$, which can be fit to the first term of Equation (1.5), so that

$$
L(N) \approx \left(\frac{N_c}{N}\right)^{\alpha_N}
\tag{3.1}
$$

To observe these trends it is crucial to study performance as a function of $N$; if we instead use the total parameter count (including the embedding parameters) the trend is somewhat obscured (see Figure 6). This suggests that the embedding matrix can be made smaller without impacting performance, as has been seen in recent work [lan2019albert]. Although these models have been trained on the WebText2 dataset, their test loss on a variety of other datasets is also a power-law in $N$ with nearly identical power, as shown in Figure 8.

#### 3.2.1 Comparing to LSTMs and Universal Transformers

In Figure 7 we compare LSTM and Transformer performance as a function of non-embedding parameter count $N$. The LSTMs were trained with the same dataset and context length.
We see from these figures that the LSTMs perform as well as Transformers for tokens appearing early in the context, but cannot match the Transformer performance for later tokens. We present power-law relationships between performance and context position in Appendix D.5, where increasingly large powers for larger models suggest improved ability to quickly recognize patterns.

We also compare the performance of standard Transformers to recurrent Transformers [DBLP:journals/corr/abs-1807-03819] in Figure 17 in the appendix. These models re-use parameters, and so perform slightly better as a function of $N$, at the cost of additional compute per-parameter.

#### 3.2.2 Generalization Among Data Distributions

We have also tested our models on a set of additional text data distributions. The test loss on these datasets as a function of model size is shown in Figure 8; in all cases the models were trained only on the WebText2 dataset. We see that the loss on these other data distributions improves smoothly with model size, in direct parallel with the improvement on WebText2. We find that generalization depends almost exclusively on the in-distribution validation loss, and does not depend on the duration of training or proximity to convergence. We also observe no dependence on model depth (see Appendix D.8).

Figure 8: Left: Generalization performance to other data distributions improves smoothly with model size, with only a small and very slowly growing offset from the WebText2 training distribution. Right: Generalization performance depends only on training distribution performance, and not on the phase of training. We compare generalization of converged models (points) to that of a single large model (dashed curves) as it trains.

### 3.3 Performance with Dataset Size and Compute

We display empirical trends for the test loss as a function of dataset size $D$ (in tokens) and training compute $C$ in Figure 1. For the trend with $D$ we trained a model with $(n_{\mathrm{layer}}, n_{\mathrm{embd}}) = (36, 1280)$ on fixed subsets of the WebText2 dataset. We stopped training once the test loss ceased to decrease. We see that the resulting test losses can be fit with a simple power-law

$$
L(D) \approx \left(\frac{D_c}{D}\right)^{\alpha_D}
\tag{3.2}
$$

in the dataset size. The data and fit appear in Figure 1.

The total amount of non-embedding compute used during training can be estimated as $C = 6NBS$, where $B$ is the batch size, $S$ is the number of parameter updates, and the factor of 6 accounts for the forward and backward passes. Thus for a given value of $C$ we can scan over all models with various $N$ to find the model with the best performance on step $S = C/(6NB)$.
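As a toy version of this scan (our own sketch, using the rounded Section 1.2 fits and ignoring the fixed-batch caveat discussed next), one can pick the loss-minimizing model size at a given budget:

```python
import numpy as np

ALPHA_N, N_C = 0.076, 8.8e13
ALPHA_S, S_C = 0.76, 2.1e3

def loss_at_budget(n, c_pf_days, batch_tokens=2 ** 19):
    """Loss of a size-n model after spending compute C at a fixed batch
    size, via L(N, S) of Eq. (1.6) with S = C / (6 N B). We use the raw
    step count S in place of S_min, which is exactly the inefficiency
    corrected in Section 5."""
    steps = c_pf_days * 8.64e19 / (6 * n * batch_tokens)
    return (N_C / n) ** ALPHA_N + (S_C / steps) ** ALPHA_S

sizes = np.logspace(7, 11, 200)                 # 1e7 .. 1e11 parameters
losses = [loss_at_budget(n, 10.0) for n in sizes]
print(f"best N at 10 PF-days ≈ {sizes[int(np.argmin(losses))]:.1e}")
```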
Note that in these results *the batch size $B$ remains fixed for all models*, which means that these empirical results are not truly optimal. We will account for this in later sections using an adjusted $C_{\min}$ to produce cleaner trends. The result appears as the heavy black line on the left-hand plot in Figure 1. It can be fit with

$$
L(C) \approx \left(\frac{C_c}{C}\right)^{\alpha_C}
\tag{3.3}
$$

The figure also includes images of individual learning curves to clarify when individual models are optimal. We will study the optimal allocation of compute more closely later on. The data strongly suggests that sample efficiency improves with model size, and we also illustrate this directly in Figure 19 in the appendix.

4 Charting the Infinite Data Limit and Overfitting
---------------------------------------------------

In Section 3 we found a number of basic scaling laws for language modeling performance. Here we will study the performance of a model of size $N$ trained on a dataset with $D$ tokens while varying $N$ and $D$ simultaneously. We will empirically demonstrate that the optimally trained test loss accords with the scaling law of Equation (1.5). This provides guidance on how much data we would need to train models of increasing size while keeping overfitting under control.

### 4.1 Proposed L(N,D) Equation

Figure 9: The early-stopped test loss $L(N,D)$ depends predictably on the dataset size $D$ and model size $N$ according to Equation (1.5). Left: For large $D$, performance is a straight power law in $N$. For a smaller fixed $D$, performance stops improving as $N$ increases and the model begins to overfit. (The reverse is also true, see Figure 4.) Right: The extent of overfitting depends predominantly on the ratio $N^{\alpha_N/\alpha_D}/D$, as predicted in Equation (4.3). The line is our fit to that equation.

We have chosen the parameterization (1.5) (repeated here for convenience):

$$
L(N,D) = \left[ \left(\frac{N_c}{N}\right)^{\alpha_N/\alpha_D} + \frac{D_c}{D} \right]^{\alpha_D}
\tag{4.1}
$$

using three principles:

1. Changes in vocabulary size or tokenization are expected to rescale the loss by an overall factor. The parameterization of $L(N,D)$ (and all models of the loss) must naturally allow for such a rescaling.
2. Fixing $D$ and sending $N \to \infty$, the overall loss should approach $L(D)$. Conversely, fixing $N$ and sending $D \to \infty$, the loss must approach $L(N)$.
3. $L(N,D)$ should be analytic at $D = \infty$, so that it has a series expansion in $1/D$ with integer powers. Theoretical support for this principle is significantly weaker than for the first two.

Our choice of $L(N,D)$ satisfies the first requirement because we can rescale $N_c, D_c$ with changes in the vocabulary.
This also implies that the values of $N_c, D_c$ have no fundamental meaning.

Since we stop training early when the test loss ceases to improve and optimize all models in the same way, we expect that larger models should always perform better than smaller models. But with fixed finite $D$, we also do not expect any model to be capable of approaching the best possible loss (i.e. the entropy of text). Similarly, a model with fixed size will be capacity-limited. These considerations motivate our second principle. Note that knowledge of $L(N)$ at infinite $D$ and $L(D)$ at infinite $N$ fully determines all the parameters in $L(N,D)$.

The third principle is more speculative. There is a simple and general reason one might expect overfitting to scale $\propto 1/D$ at very large $D$. Overfitting should be related to the variance or the signal-to-noise ratio of the dataset [1710.03667], and this scales as $1/D$. This expectation should hold for any smooth loss function, since we expect to be able to expand the loss about the $D \to \infty$ limit. However, this argument assumes that $1/D$ corrections dominate over other sources of variance, such as the finite batch size and other limits on the efficacy of optimization. Without empirical confirmation, we would not be very confident of its applicability.

Our third principle explains the asymmetry between the roles of $N$ and $D$ in Equation (1.5). Very similar symmetric expressions are possible, but they would not have a $1/D$ expansion with integer powers, and would require the introduction of an additional parameter. (For example, one might have used $L(N,D) = \left[ (N_c/N)^{\alpha_N} + (D_c/D)^{\alpha_D} \right]^{\beta}$, but this does not have a $1/D$ expansion.) In any case, we will see that our equation for $L(N,D)$ fits the data well, which is the most important justification for our $L(N,D)$ ansatz.

### 4.2 Results

We regularize all our models with 10% dropout, and by tracking test loss and stopping once it is no longer decreasing. The results are displayed in Figure 9, including a fit to the four parameters $\alpha_N, \alpha_D, N_c, D_c$ in Equation (1.5):

| Parameter | $\alpha_N$ | $\alpha_D$ | $N_c$ | $D_c$ |
| --- | --- | --- | --- | --- |
| Value | 0.076 | 0.103 | $6.4 \times 10^{13}$ | $1.8 \times 10^{13}$ |

Table 2: Fits to $L(N,D)$

We obtain an excellent fit, with the exception of the runs where the dataset has been reduced by a factor of 1024, to about $2 \times 10^7$ tokens. With such a small dataset, an epoch consists of only 40 parameter updates. Perhaps such a tiny dataset represents a different regime for language modeling, as overfitting happens very early in training (see Figure 16). Also note that the parameters differ very slightly from those obtained in Section 3, as here we are fitting the full $L(N,D)$ rather than just $L(N,\infty)$ or $L(\infty,D)$.

To chart the borderlands of the infinite data limit, we can directly study the extent of overfitting. For all but the largest models, we see no sign of overfitting when training with the full 22B token WebText2 dataset, so we can take it as representative of $D = \infty$.
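Concretely, the Table 2 fit makes this comparison a one-liner: evaluate Equation (4.1) at finite $D$ and at $D = \infty$, and inspect the ratio (an illustrative sketch, not the paper's code):

```python
ALPHA_N, ALPHA_D = 0.076, 0.103    # Table 2 fits
N_C, D_C = 6.4e13, 1.8e13

def loss_nd(n, d=float("inf")):
    """Early-stopped test loss L(N, D), Eq. (4.1); D_C / inf == 0.0."""
    return ((N_C / n) ** (ALPHA_N / ALPHA_D) + D_C / d) ** ALPHA_D

# Fractional overfitting of a 1.5B-parameter model on 22B tokens:
n, d = 1.5e9, 2.2e10
print(f"{loss_nd(n, d) / loss_nd(n) - 1:.3f}")   # ≈ 0.03, mild overfitting
```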
Thus we can compare finite $D$ to the infinite data limit by defining

$$
\delta L(N,D) \equiv \frac{L(N,D)}{L(N,\infty)} - 1
\tag{4.2}
$$

and studying it as a function of $N, D$. In fact, we see empirically that $\delta L$ depends only on a specific combination of $N$ and $D$, as shown in Figure 16. This follows from the scaling law of Equation (1.5), which implies

$$
\delta L \approx \left( 1 + \left(\frac{N}{N_c}\right)^{\alpha_N/\alpha_D} \frac{D_c}{D} \right)^{\alpha_D} - 1
\tag{4.3}
$$

Note that at large $D$ this formula also has a series expansion in powers of $1/D$.

We estimate that the variation in the loss with different random seeds is roughly 0.02, which means that to avoid overfitting when training to within that threshold of convergence we require

$$
D \gtrsim (5 \times 10^3)\, N^{0.74}
\tag{4.4}
$$

With this relation, models smaller than $10^9$ parameters can be trained with minimal overfitting on the 22B token WebText2 dataset, but our largest models will encounter some mild overfitting. More generally, this relation shows that dataset size may grow sub-linearly in model size while avoiding overfitting. Note however that this does not typically represent maximally compute-efficient training. We should also emphasize that we have not optimized regularization (e.g. the dropout probability) while varying dataset and model size.

5 Scaling Laws with Model Size and Training Time
-------------------------------------------------

In this section we will demonstrate that a simple scaling law provides a good description for the loss as a function of model size $N$ and training time. First we will explain how to use the results of [1812.06162] to define a universal training step $S_{\min}$, which accounts for the fact that most of our models have not been trained at an optimal batch size. Then we will demonstrate that we can fit the model size and training time dependence of the loss using Equation (1.6). Later we will use these results to predict the optimal allocation of training compute between model size and training time, and then confirm that prediction.

### 5.1 Adjustment for Training at Bcrit(L)

Figure 10: The critical batch size $B_{\mathrm{crit}}$ follows a power law in the loss as performance increases, and does not depend directly on the model size. We find that the critical batch size approximately doubles for every 13% decrease in loss. $B_{\mathrm{crit}}$ is measured empirically from the data shown in Figure 18, but it is also roughly predicted by the gradient noise scale, as in [1812.06162].

A simple empirical theory for the batch size dependence of training was developed in [1812.06162] (see also [1811.03600; DBLP:journals/corr/abs-1907-04164]). It was argued that there is a critical batch size $B_{\mathrm{crit}}$ for training; for $B$ up to $B_{\mathrm{crit}}$ the batch size can be increased with very minimal degradation in compute-efficiency, whereas for $B > B_{\mathrm{crit}}$ increases in $B$ result in diminishing returns.
It was also argued that the gradient noise scale provides a simple prediction for $B_{\mathrm{crit}}$, and that neither depends directly on model size except through the value of the loss that has been attained. These results can be used to predict how training time and compute will vary with the batch size. To utilize both training time and compute as effectively as possible, it is best to train with a batch size $B \approx B_{\mathrm{crit}}$. Training at $B \gg B_{\mathrm{crit}}$ minimizes the number of training steps, while $B \ll B_{\mathrm{crit}}$ minimizes the use of compute.

More specifically, it was demonstrated that for a wide variety of neural network tasks, the number of training steps $S$ and the number of data examples processed $E = BS$ satisfy the simple relation

$$
\left(\frac{S}{S_{\min}} - 1\right)\left(\frac{E}{E_{\min}} - 1\right) = 1
\tag{5.1}
$$

when training to any fixed value of the loss $L$. Here $S_{\min}$ is the minimum number of steps necessary to reach $L$, while $E_{\min}$ is the minimum number of data examples that must be processed. We demonstrate the relation (5.1) for Transformers in Figure 18 in the appendix. This relation defines the critical batch size

$$
B_{\mathrm{crit}}(L) \equiv \frac{E_{\min}}{S_{\min}}
\tag{5.2}
$$

which is a function of the target value of the loss. Training at the critical batch size makes a roughly optimal time/compute tradeoff, requiring $2 S_{\min}$ training steps and processing $E = 2 E_{\min}$ data examples.

In Figure 10 we have plotted the critical batch size and gradient noise scale as a function of training loss for two different models. (Although the critical batch size roughly matches the gradient noise scale, we use direct measurements of $B_{\mathrm{crit}}$ from Figures 18 and 10 for all our later analyses.) We see that $B_{\mathrm{crit}}(L)$ is independent of model size, and only depends on the loss $L$. So the predictions of [1812.06162] continue to hold for Transformer language models. The critical batch size can be fit with a power-law in the loss

$$
B_{\mathrm{crit}}(L) \approx \frac{B_*}{L^{1/\alpha_B}}
\tag{5.3}
$$

where $B_* \approx 2 \times 10^8$ and $\alpha_B \approx 0.21$. We have chosen this parameterization for $B_{\mathrm{crit}}(L)$ because as the loss approaches its minimum value $L_{\min}$, the gradient noise scale is expected to diverge, and we expect $B_{\mathrm{crit}}$ to track this noise scale. We do not know $L_{\min}$, as we see no sign that our models are approaching it, but $L_{\min} > 0$ since the entropy of natural language is non-zero. Since $L_{\min}$ is apparently much smaller than the values of $L$ we have achieved, we used a parameterization where $B_{\mathrm{crit}}$ diverges as $L \to 0$.

We will use $B_{\mathrm{crit}}(L)$ to estimate the relation between the number of training steps $S$ while training at batch size $B = 2^{19}$ tokens and the number of training steps while training at $B \gg B_{\mathrm{crit}}$. This is simply

$$
S_{\min}(S) \equiv \frac{S}{1 + B_{\mathrm{crit}}(L)/B} \quad \text{(minimum steps, at } B \gg B_{\mathrm{crit}})
\tag{5.4}
$$

for any given target value $L$ of the loss.
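In code, the adjustment is tiny; a sketch with the rounded constants (ours, not the paper's):

```python
B_STAR, ALPHA_B = 2e8, 0.21           # rounded fits from Eq. (5.3)

def b_crit(loss):
    """Critical batch size in tokens, Eq. (5.3); diverges as loss -> 0."""
    return B_STAR / loss ** (1 / ALPHA_B)

def s_min(steps, loss, batch=2 ** 19):
    """Eq. (5.4): steps that would have been needed at B >> B_crit,
    given `steps` taken at batch size `batch` (both in tokens)."""
    return steps / (1 + b_crit(loss) / batch)

print(f"B_crit at L = 3.0: {b_crit(3.0):.2e} tokens")      # ~1.1e6
print(f"S_min for 2.5e5 steps at L = 3.0: {s_min(2.5e5, 3.0):.2e}")
```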
The same adjustment also defines a critical value of the compute needed to train to $L$ with a model of size $N$ if we were to train at $B \ll B_{\mathrm{crit}}(L)$. This is

$$
C_{\min}(C) \equiv \frac{C}{1 + B/B_{\mathrm{crit}}(L)} \quad \text{(minimum compute, at } B \ll B_{\mathrm{crit}})
\tag{5.5}
$$

where $C = 6NBS$ estimates the (non-embedding) compute used at batch size $B$.

### 5.2 Results for L(N, Smin) and Performance with Model Size and Compute

Figure 11: When we hold either total compute or number of training steps fixed, performance follows $L(N,S)$ from Equation (5.6). Each value of the compute budget has an associated optimal model size that maximizes performance. Mediocre fits at small $S$ are unsurprising, as the power-law equation for the learning curves breaks down very early in training.

Now we will use $S_{\min}$ defined in Equation (5.4) to obtain a simple and universal fit for the dependence of the loss on model size and training time in the infinite data limit. We will fit the stable, Adam-optimized training runs using Equation (1.6), repeated here for convenience:

$$
L(N, S_{\min}) = \left(\frac{N_c}{N}\right)^{\alpha_N} + \left(\frac{S_c}{S_{\min}}\right)^{\alpha_S}
\tag{5.6}
$$

for the loss. We include all training steps after the warmup period of the learning rate schedule, and find a fit to the data with the parameters:

| Parameter | $\alpha_N$ | $\alpha_S$ | $N_c$ | $S_c$ |
| --- | --- | --- | --- | --- |
| Value | 0.077 | 0.76 | $6.5 \times 10^{13}$ | $2.1 \times 10^3$ |

Table 3: Fits to $L(N,S)$

With these parameters, we obtain the learning curve fits in Figure 4. Though the fits are imperfect, we believe they are quite compelling given the simplicity of Equation (5.6).

The data and fits can be visualized in a different and more interesting way, as shown in Figure 11. There we study the test loss as a function of model size while fixing either the total non-embedding compute $C$ used in training, or the number of steps $S$. For the fits we use Equations (5.5) and (5.4) along with the parameters above and Equation (5.6).
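And a matching sketch of the compute adjustment (5.5) together with the Table 3 fit of Equation (5.6) (again illustrative, with rounded constants):

```python
B_STAR, ALPHA_B = 2e8, 0.21
ALPHA_N, N_C = 0.077, 6.5e13          # Table 3 fits
ALPHA_S, S_C = 0.76, 2.1e3

def b_crit(loss):
    return B_STAR / loss ** (1 / ALPHA_B)              # Eq. (5.3)

def c_min(compute, loss, batch=2 ** 19):
    """Eq. (5.5): compute that would have sufficed at B << B_crit."""
    return compute / (1 + batch / b_crit(loss))

def loss_n_smin(n, smin):
    """Eq. (5.6): loss of a size-n model after smin adjusted steps."""
    return (N_C / n) ** ALPHA_N + (S_C / smin) ** ALPHA_S

print(f"{loss_n_smin(1e9, 1e5):.2f} nats")             # ~2.4 for a 1B model
print(f"{c_min(10.0, 3.0):.2f} PF-days")               # of a 10 PF-day run
```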
The power-law dependence of the loss on $S_{\min}$ reflects the interplay of optimizer dynamics and the loss landscape. Since the fits are best late in training, when the loss may be approximately quadratic, the power-law should provide information about the spectrum of the Hessian of the loss. Its universality suggests that the Hessian eigenvalue density is roughly independent of model size.

### 5.3 Lower Bound on Early Stopping Step

The results for $L(N, S_{\min})$ can be used to derive a lower-bound (and rough estimate) of the step at which early stopping should occur when training is data limited. It is motivated by the idea that finite and infinite $D$ learning curves for a given model will be very similar until we reach $S_{\min} \approx S_{\mathrm{stop}}$. Thus overfitting should be proportional to the correction from simply ending training at $S_{\mathrm{stop}}$. This will underestimate $S_{\mathrm{stop}}$, because in reality the test loss will decrease more slowly when we have a finite $D$, and therefore we will require more training steps to reach the optimal test loss at finite $D$. This line of reasoning leads to the inequality

$$
S_{\mathrm{stop}}(N,D) \gtrsim \frac{S_c}{\left[ L(N,D) - L(N,\infty) \right]^{1/\alpha_S}}
\tag{5.7}
$$

where $L(N,\infty)$ is the converged loss, evaluated with infinite available data. This inequality and its comparison to the empirical data is displayed in Figure 16 in the appendix. In that figure, the values of $S_{\mathrm{stop}}$ and $L(N,D)$ are empirical (though $S_{\mathrm{stop}}$ is adjusted to mimic training at $B \gg B_{\mathrm{crit}}$), while $L(N,\infty)$ is computed from the fit to $L(N,D)$ evaluated at $D = \infty$. Note that this equation should not be trusted for very large models, as it is only valid in the power-law region of the learning curve, after initial transient effects.

Figure 12: Left: Given a fixed compute budget, a particular model size is optimal, though somewhat larger or smaller models can be trained with minimal additional compute. Right: Models larger than the compute-efficient size require fewer steps to train, allowing for potentially faster training if sufficient additional parallelism is possible.

6 Optimal Allocation of the Compute Budget
-------------------------------------------

We displayed the *empirical* trend of performance as a function of the computation used during training in the top-right of Figure 1. However, this result involved training at a fixed batch size $B$, whereas we know that in fact we could train more efficiently by training at the batch size $B_{\mathrm{crit}}$ discussed in Section 5.1. (One might ask why we did not simply train at $B_{\mathrm{crit}}$ in the first place; the reason is that it depends not only on the model but also on the target value of the loss we wish to achieve, and so is a moving target.) Large and small values of the loss could have been achieved with fewer samples or fewer steps, respectively, and correcting for this inefficiency by standardizing to the critical batch size results in cleaner and more predictable trends.
Figure 13: When adjusting performance to simulate training far below the critical batch size, we find a somewhat altered power law for $L(C_{\min})$ when compared with the fully empirical results. The conspicuous lump at $10^{-5}$ PF-days marks the transition from 1-layer to 2-layer networks; we exclude 1-layer networks in the power-law fits. It is the $L(C_{\min})$ trend that we expect to provide a reliable extrapolation for larger compute.

In this section we will adjust for this oversight. More importantly, we will use the results of Section 5 to determine the optimal *allocation* of compute between model size $N$ and the quantity of data processed during training, namely $2 B_{\mathrm{crit}} S_{\min}$. We will determine this allocation both empirically and theoretically, by using the equation for $L(N, S_{\min})$, and we will demonstrate that these methods agree.

### 6.1 Optimal Performance and Allocations

Let us first study the loss as a function of the optimally allocated compute from Equation (5.5). The result is plotted in Figure 13, along with a power-law fit. We see that as compared to the compute plot of Figure 1, the new fit with $C_{\min}$ is somewhat improved.

Given $L(C_{\min})$, it is natural to ask for the optimal model size $N(C_{\min})$ that provides the minimal loss with a given quantity of training compute. The optimal model size is shown in Figure 14. We observe that $N(C_{\min})$ can be fit very well with a power-law

$$
N(C_{\min}) \propto (C_{\min})^{0.73}.
\tag{6.1}
$$

In Figure 12, we show the effect of training models of sub-optimal sizes (see Appendix B.4).

By definition $C_{\min} \equiv 6 N B_{\mathrm{crit}} S$, and so we can use $N(C_{\min})$ to extract further results. In particular, since prior fits show $B \propto L^{-4.8}$ and $L \propto C_{\min}^{-0.05}$, we can conclude that $B_{\mathrm{crit}} \propto C_{\min}^{0.24}$. This leads us to conclude that the optimal number of steps will only grow very slowly with compute, as

$$
S_{\min} \propto (C_{\min})^{0.03},
\tag{6.2}
$$

matching the empirical results in Figure 14. In fact the measured exponent is sufficiently small that our results may even be consistent with an exponent of zero.
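These exponents can be cross-checked against Equation (1.8) by simple arithmetic (the small discrepancies below come from using rounded fit constants):

```python
alpha_n, alpha_b, alpha_s = 0.077, 0.21, 0.76   # Section 5 fits, rounded

alpha_c_min = 1 / (1 / alpha_s + 1 / alpha_b + 1 / alpha_n)   # Eq. (1.8)
print(f"alpha_C_min ≈ {alpha_c_min:.3f}")   # 0.052 here; the paper's
                                            # unrounded fits give ~0.054

# Predicted growth exponents with compute, Eq. (1.7):
print(f"N ∝ C^{alpha_c_min / alpha_n:.2f}")   # 0.68 (measured: 0.73)
print(f"B ∝ C^{alpha_c_min / alpha_b:.2f}")   # 0.25 (measured: 0.24)
print(f"S ∝ C^{alpha_c_min / alpha_s:.2f}")   # 0.07 (measured: 0.03)
```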
Thus we conclude that as we scale up language modeling with an optimal allocation of computation, we should predominantly increase the model size $N$, while simultaneously scaling up the batch size via $B \propto B_{\mathrm{crit}}$ with negligible increase in the number of serial steps. Since compute-efficient training uses relatively few optimization steps, additional work on speeding up early training dynamics may be warranted.

Figure 14: Left: Each value of the compute budget $C_{\min}$ has an associated optimal model size $N$. Optimal model size grows very rapidly with $C_{\min}$, increasing by 5x for each 10x increase in compute. The number of data examples processed makes up the remainder of the increase, growing relatively modestly by only 2x. Right: The batch-adjusted number of optimization steps also grows very slowly, if at all, meaning that most of the growth in data examples processed can be used for increased batch sizes.

### 6.2 Predictions from L(N, Smin)

The results for $L(C_{\min})$ and the allocations can be predicted from the $L(N, S_{\min})$ equation obtained in Section 5. Given our equation for $L(N, S_{\min})$, we can substitute $S_{\min} = \frac{C_{\min}}{6NB}$ and then find the minimum of the loss as a function of $N$, while fixing the training compute. We carry out this procedure in detail in Appendix B, where we also provide some additional predictions.

For the loss as a function of training compute, we predict that

$$
L(C_{\min}) = \left(\frac{C_c^{\min}}{C_{\min}}\right)^{\alpha_C^{\min}}
\tag{6.3}
$$

where

$$
\alpha_C^{\min} \equiv \frac{1}{1/\alpha_S + 1/\alpha_B + 1/\alpha_N} \approx 0.054
\tag{6.4}
$$

in excellent agreement with the exponent of Figure 13. We also predict that

$$
N(C_{\min}) \propto (C_{\min})^{\alpha_C^{\min}/\alpha_N} \approx (C_{\min})^{0.71}
\tag{6.5}
$$

which also matches the scaling of Figure 14 to within a few percent. Our scaling laws provide a predictive framework for the performance of language modeling.

### 6.3 Contradictions and a Conjecture

Figure 15: Far beyond the model sizes we study empirically, we find a contradiction between our equations for $L(C_{\min})$ and $L(D)$ due to the slow growth of data needed for compute-efficient training. The intersection marks the point before which we expect our predictions to break down. The location of this point is highly sensitive to the precise exponents from our power-law fits.

We observe no signs of deviation from straight power-law trends at large values of compute, data, or model size. Our trends must eventually level off, though, since natural language has non-zero entropy.

Indeed, the trends for compute-efficient training described in this section already contain an apparent contradiction. At scales several orders of magnitude above those documented here, the performance predicted by the $L(C_{\min})$ scaling law decreases below what should be possible given the slow growth in training data with compute.
### 6.3 Contradictions and a Conjecture

![Figure 15](https://media.arxiv-vanity.com/render-output/8045724/x21.png)

Figure 15: Far beyond the model sizes we study empirically, we find a contradiction between our equations for $L(C_{\rm min})$ and $L(D)$ due to the slow growth of data needed for compute-efficient training. The intersection marks the point before which we expect our predictions to break down. The location of this point is highly sensitive to the precise exponents from our power-law fits.

We observe no signs of deviation from straight power-law trends at large values of compute, data, or model size. Our trends must eventually level off, though, since natural language has non-zero entropy. Indeed, the trends for compute-efficient training described in this section already contain an apparent contradiction. At scales several orders of magnitude above those documented here, the performance predicted by the $L(C_{\rm min})$ scaling law decreases below what should be possible given the slow growth in training data with compute.

This implies that our scaling laws must break down before this point, but we conjecture that the intersection point has a deeper meaning: it provides an estimate of the point at which Transformer language models reach maximal performance. Since the amount of data used by compute-efficient training grows slowly with the compute budget, the performance predicted by $L(C_{\rm min})$ eventually hits a lower bound set by the $L(D)$ power law (see Figure 15).

Let us work this out in more detail. To keep overfitting under control, the results of Section 4 imply that we should scale the dataset size as

$$D \propto N^{0.74} \propto C_{\rm min}^{0.54} \tag{6.6}$$

where we have used the compute-efficient $N(C_{\rm min})$ from Figure 14.

Let us compare this to the data requirements of compute-efficient training. If we train at the critical batch size (i.e. $C = 2 C_{\rm min}$) and never re-use data during training, we find that data usage grows with compute as

$$D(C_{\rm min}) = \frac{2 C_{\rm min}}{6 N(C_{\rm min})} \approx \left(4 \times 10^{10}\ {\rm tokens}\right) \left(C_{\rm min}/{\rm PF\text{-}day}\right)^{0.26}. \tag{6.7}$$

This is the maximum rate at which the dataset size can productively grow with compute, since it means that we are only training for a single epoch. But it grows the dataset much more slowly than in Equation (6.6). It appears to imply that compute-efficient training will eventually run into a problem with overfitting, even if the training process never re-uses any data!

According to Figure 1, we expect that when we are bottlenecked by the dataset size (i.e. by overfitting), the loss should scale as $L(D) \propto D^{-0.095}$. This implies that the loss would scale with compute as $L(D(C_{\rm min})) \propto C_{\rm min}^{-0.03}$ once we are data-limited. Once again, we have a contradiction, as this will eventually intersect with our prediction for $L(C_{\rm min})$ from Figure 13, where we found a scaling $L(C_{\rm min}) \propto C_{\rm min}^{-0.050}$.

The intersection point of $L(D(C_{\rm min}))$ and $L(C_{\rm min})$ occurs at

$$C^* \sim 10^4\ {\rm PF\text{-}days}, \quad N^* \sim 10^{12}\ {\rm parameters}, \quad D^* \sim 10^{12}\ {\rm tokens}, \quad L^* \sim 1.7\ {\rm nats/token} \tag{6.8}$$

though the numerical values are highly uncertain, varying by an order of magnitude in either direction depending on the precise values of the exponents from the power-law fits.
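To see why the location of the intersection is so uncertain, write the two trends as $L(C_{\rm min}) = A_1 C_{\rm min}^{-0.050}$ and $L(D(C_{\rm min})) = A_2 C_{\rm min}^{-0.030}$ for some prefactors $A_1$, $A_2$ (not quoted here). Setting them equal gives

$$A_1 C_*^{-0.050} = A_2 C_*^{-0.030} \quad\Longrightarrow\quad C_* = \left(\frac{A_1}{A_2}\right)^{1/(0.050 - 0.030)} = \left(\frac{A_1}{A_2}\right)^{50},$$

so the tiny difference between the exponents raises any error in the prefactors or exponents to a huge power, moving $C_*$ by orders of magnitude.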
The most obvious interpretation is that our scaling laws break down at or before we reach this point, which is still many orders of magnitude away in both compute and model size. One might also conjecture that this intersection point has a deeper meaning. If we cannot increase the model size beyond $N^*$ without qualitatively different data requirements, perhaps this means that once we reach $C^*_{\rm min}$ and $N^*$, we have extracted all of the reliable information available in natural language data. In this interpretation, $L^*$ would provide a rough estimate for the entropy-per-token of natural language. (Defining words using the wc utility, the WebText2 dataset has 1.4 tokens per word and 4.3 characters per token.) In this scenario, we would expect the loss trend to level off at or before $L^*$.

We can guess at the functional form of $L(C_{\rm min})$ as it levels off by considering a version of our training dataset with added noise. For example, we could append a random string of tokens to each context shown to the model to artificially boost the loss by a constant additive factor. Then, the distance from the noise floor $L - L_{\rm noise}$ would be a more meaningful performance metric, with even a small decrease in this distance potentially representing a significant boost in qualitative performance. Since the artificial noise would affect all of our trends equally, the critical point of (6.8) would not change (aside from the absolute value of $L^*$), and may be meaningful even if it occurs after the leveling off.

7 Related Work
---------------

Power laws can arise from a wide variety of sources [thurner2018introduction]. Power-law scalings with model and dataset size in density estimation [wasserman2006all] and in random forest models [biau2012analysis] may be connected with our results. These models suggest that power-law exponents may have a very rough interpretation as the inverse of the number of relevant features in the data.

Some early work [banko2001scaling, DBLP:journals/corr/cs-CL-0108005] found power-law scalings between performance and dataset size. More recent work [1712.00409, Hestness:2019:BHA:3293883.3295710] also investigated scaling between model size and data size; their work is perhaps the closest to ours in the literature. (After this work was completed, [rosenfeld2019constructive] also appeared, which makes similar predictions for the dependence of loss on both model and dataset size.) Note, however, that [1712.00409] found super-linear scaling of dataset size with model size, whereas we find a sub-linear scaling. There are some parallels between our findings on optimal allocation of compute and [1906.06669], including power-law learning curves. EfficientNets [DBLP:journals/corr/abs-1905-11946] also appear to obey an approximate power-law relation between accuracy and model size. Very recent work [1909.12673] studies scaling with both dataset size and model size for a variety of datasets, and fits an ansatz similar to ours.

EfficientNet [DBLP:journals/corr/abs-1905-11946] advocates scaling depth and width exponentially (with different coefficients) for optimal performance of image models, resulting in a power-law scaling of width as a function of depth. We find that for language models this power should be roughly one when scaling up (as width/depth should remain fixed). But more importantly, we find that the precise architectural hyperparameters are unimportant compared to the overall scale of the language model. In [ResNetsEnsemblesShallow] it was argued that deep models can function as ensembles of shallower models, which could potentially explain this finding.
Earlier work [Zagoruyko_2016] has compared width and depth, and found that wide ResNets can outperform deep ResNets on image classification. Some studies fix computation per data example, which tends to scale in proportion to the number of model parameters, whereas we investigate scaling with both model size and the quantity of training computation.

Various works [1710.03667, 1812.11118] have investigated generalization in highly overparameterized models, finding a “jamming transition” [1901.01608] when the model size reaches the dataset size (this may require training many orders of magnitude beyond typical practice, and in particular does not use early stopping). We do not observe such a transition, and find that the necessary training data scales sublinearly in the model size. Expansions in the model size, particularly at large width [jacot2018neural, 1902.06720], may provide a useful framework for thinking about some of our scaling relations. Our results on optimization, such as the shape of learning curves, can likely be explained using a noisy quadratic model, which can provide quite accurate predictions [DBLP:journals/corr/abs-1907-04164] in realistic settings. Making this connection quantitative will require a characterization of the Hessian spectrum [DBLP:journals/corr/abs-1811-07062, DBLP:journals/corr/abs-1901-10159, unpublished-grd].

8 Discussion
-------------

We have observed consistent scalings of language model log-likelihood loss with non-embedding parameter count $N$, dataset size $D$, and optimized training computation $C_{\rm min}$, as encapsulated in Equations (1.5) and (1.6). Conversely, we find very weak dependence on many architectural and optimization hyperparameters. Since scalings with $N$, $D$, $C_{\rm min}$ are power-laws, there are diminishing returns with increasing scale. We were able to precisely model the dependence of the loss on $N$ and $D$, and alternatively on $N$ and $S$, when these parameters are varied simultaneously. We used these relations to derive the compute scaling, magnitude of overfitting, early stopping step, and data requirements when training large language models. So our scaling relations go beyond mere observation to provide a predictive framework. One might interpret these relations as analogues of the ideal gas law, which relates the macroscopic properties of a gas in a universal way, independent of most of the details of its microscopic constituents.

It is natural to conjecture that the scaling relations will apply to other generative modeling tasks with a maximum likelihood loss, and perhaps in other settings as well. To this purpose, it will be interesting to test these relations on other domains, such as images, audio, and video models, and perhaps also for random network distillation. At this point we do not know which of our results depend on the structure of natural language data, and which are universal. It would also be exciting to find a theoretical framework from which the scaling relations can be derived: a ‘statistical mechanics’ underlying the ‘thermodynamics’ we have observed.
Such a theory might make it possible to derive other more precise predictions, and provide a systematic understanding of the limitations of the scaling laws.

In the domain of natural language, it will be important to investigate whether continued improvement on the loss translates into improvement on relevant language tasks. Smooth quantitative change can mask major qualitative improvements: “more is different”. For example, the smooth aggregate growth of the economy provides no indication of the specific technological developments that underwrite it. Similarly, the smooth improvements in language model loss may hide seemingly qualitative changes in capability.

Our results strongly suggest that larger models will continue to perform better, and will also be much more sample efficient than has been previously appreciated. Big models may be more important than big data. In this context, further investigation into model parallelism is warranted. Deep models can be trained using pipelining [DBLP:journals/corr/abs-1811-06965], which splits parameters depth-wise between devices, but eventually requires increased batch sizes as more devices are used. Wide networks on the other hand are more amenable to parallelization [shazeer2018meshtensorflow], since large layers can be split between multiple workers with less serial dependency. Sparsity [DBLP:journals/corr/abs-1904-10509, gray2017gpu] or branching (e.g. [Krizhevsky:2012:ICD:2999134.2999257]) may allow for even faster training of large networks through increased model parallelism. And using methods like [Wang_2017, wen2019autogrow], which grow networks as they train, it might be possible to remain on the compute-efficient frontier for an entire training run.

Acknowledgements
----------------

We would like to thank Shan Carter, Paul Christiano, Jack Clark, Ajeya Cotra, Ethan Dyer, Jason Eisner, Danny Hernandez, Jacob Hilton, Brice Menard, Chris Olah, and Ilya Sutskever for discussions and for feedback on drafts of this work.
38bac20e-0ba2-48e2-b5c8-f640de5968ba
trentmkelly/LessWrong-43k
LessWrong
Meetup : San Antonio Meetup

Discussion article for the meetup : Meetup : San Antonio Meetup

WHEN: 30 August 2015 12:30:00PM (-0500)

WHERE: Yumi Berry, 12651 Vance Jackson Rd Suite #118, San Antonio, TX 78230

Bubble tea, frozen yogurt, and discussion at Yumi Berry! All are welcome! New Meetup to discuss rationality and all things LessWrong and meet the local community. Look for the sign that says Less Wrong!

Discussion article for the meetup : Meetup : San Antonio Meetup
62b9185e-9f50-4401-8d49-69fec542a041
trentmkelly/LessWrong-43k
LessWrong
The January 2013 CFAR workshop: one-year retrospective

About a year ago, I attended my first CFAR workshop and wrote a post about it here. I mentioned in that post that it was too soon for me to tell if the workshop would have a large positive impact on my life. In the comments to that post, I was asked to follow up on that post in a year to better evaluate that impact. So here we are! Very short summary: overall I think the workshop had a large and persistent positive impact on my life.

Important caveat

However, anyone using this post to evaluate the value of going to a CFAR workshop themselves should be aware that I'm local to Berkeley and have had many opportunities to stay connected to CFAR and the rationalist community. More specifically, in addition to the January workshop, I also

* visited the March workshop (and possibly others),
* attended various social events held by members of the community,
* taught at the July workshop, and
* taught at SPARC.

These experiences were all very valuable in helping me digest and reinforce the workshop material (which was also improving over time), and a typical workshop participant might not have these advantages.

Answering a question

pewpewlasergun wanted me to answer the following question:

> I'd like to know how many techniques you were taught at the meetup you still use regularly. Also which has had the largest effect on your life.

The short answer is: in some sense very few, but a lot of the value I got out of attending the workshop didn't come from specific techniques.

In more detail: to be honest, many of the specific techniques are kind of a chore to use (at least as of January 2013). I experimented with a good number of them in the months after the workshop, and most of them haven't stuck (but that isn't so bad; the cost of trying a technique and finding that it doesn't work for you is low, while the benefit of trying a technique and finding that it does work for you can be quite high!). One that has is the idea of a next action, which I've found incred
26d8d1a3-833d-4a2c-9205-3aed478c0ecb
StampyAI/alignment-research-dataset/blogs
Blogs
June 2020 Newsletter

MIRI researcher Evan Hubinger reviews “[11 different proposals for building safe advanced AI under the current machine learning paradigm](https://www.alignmentforum.org/posts/fRsjBseRuvRhMPPE5/an-overview-of-11-proposals-for-building-safe-advanced-ai)”, comparing them on outer alignment, [inner alignment](https://arxiv.org/abs/1906.01820), training competitiveness, and performance competitiveness.

#### Other updates

* We keep being amazed by new shows of support — following our [last](https://intelligence.org/2020/04/27/miris-largest-grant-to-date/) [two](https://intelligence.org/2020/05/29/may-2020-newsletter/) announcements, MIRI has received a donation from another anonymous donor totaling ~$265,000 in euros, facilitated by [Effective Giving UK](https://www.effectivegiving.org/) and the [Effective Altruism Foundation](https://ea-foundation.org/). Massive thanks to the donor for their generosity, and to both organizations for their stellar support for MIRI and other longtermist organizations!
* Hacker News [discusses](https://news.ycombinator.com/item?id=23401328) Eliezer Yudkowsky's [There's No Fire Alarm for AGI](https://intelligence.org/2017/10/13/fire-alarm/).
* MIRI researcher Buck Shlegeris talks about [deference and inside-view models](https://forum.effectivealtruism.org/posts/53JxkvQ7RKAJ4nHc4/some-thoughts-on-deference-and-inside-view-models) on the EA Forum.
* OpenAI [unveils GPT-3](https://arxiv.org/pdf/2005.14165.pdf), a massive 175-*billion* parameter language model that can figure out how to solve a variety of problems without task-specific training or fine-tuning. Gwern Branwen's pithy [summary](https://twitter.com/gwern/status/1267215588214136833):

> GPT-3 is terrifying because it's a tiny model compared to what's possible, trained in the dumbest way possible on a single impoverished modality on tiny data, yet the first version already manifests crazy runtime meta-learning—and the scaling curves *still* are not bending!

Further discussion [by Branwen](https://www.gwern.net/newsletter/2020/05#gpt-3) and [by Rohin Shah](https://www.lesswrong.com/posts/D3hP47pZwXNPRByj8/an-102-meta-learning-by-gpt-3-and-a-list-of-full-proposals).

* Stuart Russell gives this year's Turing Lecture online, discussing “[provably beneficial AI](https://www.youtube.com/watch?v=_H87qqT8pdY)”.

The post [June 2020 Newsletter](https://intelligence.org/2020/06/08/june-2020-newsletter/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).
9f199ab8-3938-4336-9ff1-da0a2c8bb755
trentmkelly/LessWrong-43k
LessWrong
Abstractions as morphisms between (co)algebras

In a previous post, I suggested we should think of abstractions/ontologies as maps that make a certain commutative diagram commute: The downward arrows are the abstraction, which throws away some information about the world state while keeping other parts. The horizontal arrow at the top is time evolution. The diagram says that there should be a way to "mirror" the time evolution inside the high-level model (that's the dashed arrow at the bottom). Commutativity of this diagram means that we can either do time evolution in the actual world and then abstract, or first abstract and then do time evolution in the model. If this doesn't make sense, the original post has more details and examples.

It turns out that this idea is very, very far from new.[1] I'll hopefully write a post on some related literature soon, but here I want to discuss one connection in particular: category theory, and more specifically (co)algebras of endofunctors. Using coalgebras for abstractions is an idea from bisimulations in computer science. See nlab if you just want a quick version. This post essentially just adds lots of examples and some reframing—the computer science discussions I've seen tend to be heavily focused on the special case of abstractions of transition systems.

Motivation

There are two main things the (co)algebra framework will give us:

1. It could be a useful way to formalize the slightly vague description of abstractions-via-commutative-diagrams from my previous post. (In particular, it addresses one open question that I'll discuss in a moment.)
2. The original post had a few examples for commutative diagrams of the type above, but most of them focused on deterministic time evolution. This (co)algebraic framework lets us easily generate tons of examples in a natural way, from group theory to general-purpose optimizers. This gives some support both to this specific framework but also to the more general idea of using these commutative diagrams to define "good abstracti
cd2074ec-31c5-4da7-9ad2-11091d7624ba
StampyAI/alignment-research-dataset/lesswrong
LessWrong
Practical AI risk I: Watching large compute

This post is going to lay out a practical and simple strategy to diminish the risk of AI catastrophe.

I have voiced my uncertainty (but certainly not dismissal) about AI risk several times in the community, which I will briefly summarize:

(1) Technology likely has physical limitations (some theoretical limits are far off, but we don't know if we'll make much more progress in process technology);

(2) Intelligence isn't magical: there are ways to constrain systems that have many sigmas of reliability, even under worst-case adversaries. One example I've given is running a computer inside a Faraday cage with self-contained power.

(3) Thousands of beings more intelligent than any AI today are improving AI at a fast but not incomprehensible velocity. A human-level AI (or maybe even 100s of them) could not self-improve faster than AI currently improves.

(4) Taking over the world takes some time. It will be visible if an AI tries to take over the world: maybe it will be smart enough to divide us (e.g. linking it to conspiratorial thinking, an ideology or a political party), but if we're monitoring things at a minimum level, it would be clear there's an entity accruing resources, forms of power, and computation.

(5) Ethics is in a way rational, logical and "natural" (in a formal and general sense); systems learning from human bodies of knowledge should be able to pick up ethical reasoning, and designing truth-seeking and ethics objectives should not be impossible. (Sidenote: I propose we formalize ethics for this goal.)

The point (4) is what we're going to exploit in this post. (Hopefully more on (5) in another post.)

We need one or several agencies or bodies watching over large compute resources (with internal access to their infrastructure), checking for abnormal activity, in particular large activities linked to a single network or entity. This is just regular network engineering: you can look at traffic statistics and server rollout statistics, and check who has the most servers, keeping an eye out for third-party delegation (where someone buys compute in the name of another). We need laws that allow such representatives/engineers to peer into the system in a privacy-preserving way (i.e. with help from the company engineers) to check there's nothing funny going on. Of course AI companies should be specially monitored for autonomous agents, as well as any large decision-making AI.

This looks like a bare minimum oversight of what our new hopeful allies (I write here that AI feels like a new species arriving on Earth in 2022) are doing, and how they're going to relate to us and other lifeforms on this planet.
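As a toy illustration of the kind of check this implies (entirely my own sketch; the entities, numbers, and threshold are made up), flagging entities whose compute usage grows abnormally fast relative to their peers:

```python
# Toy sketch, not a real monitoring system: flag entities whose
# month-over-month growth in compute usage is a statistical outlier.
import statistics

usage = {  # hypothetical PF-days consumed per entity, last two months
    "entity_a": (120.0, 135.0),
    "entity_b": (40.0, 300.0),   # suspicious jump
    "entity_c": (500.0, 520.0),
}

growth = {name: new / old for name, (old, new) in usage.items()}
mean = statistics.mean(growth.values())
stdev = statistics.stdev(growth.values())

for entity, g in growth.items():
    if stdev > 0 and (g - mean) / stdev > 1.0:  # threshold is arbitrary
        print(f"review {entity}: usage grew {g:.1f}x vs mean {mean:.1f}x")
```

A real system would of course need to aggregate across resellers and third-party delegation, which is exactly why the post argues for internal access to providers' infrastructure.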
32675dec-9fa3-49d3-bf57-58c46a96338b
trentmkelly/LessWrong-43k
LessWrong
Quantopian contest, but for food intake and weight

Twelve years ago, I lost 100 lbs. in a fairly boring manner by eating 1200 calories a day. Brutally unpleasant and catabolic, but mostly successful. It's creeping back. There are no low-hanging fruits like sugarwater, fast food, eating out. The core problem is my satiety point is ~1000 calories above my RMR, and my body is not fooled by "high satiety" foods.

Since then, I've recorded every bite eaten every day, whether 10 calories of broth on a weekend fast or a 4000 calorie binge of my favorite deep dish "lucent" pizza, using a food scale for everything, not once thoughtlessly pouring oil into a recipe or blindly applying peanut butter to a sandwich. This cultivated superhuman calorie estimation abilities. "You say this is 280 kcal? It tastes like 350. Let me see the recipe… I see, the food label for ghee is off by half."

My goal in recording calories is to predict true fat loss by subtracting intake from RMR so daily scale variations aren't discouraging. This is why I believe my records to be so much more accurate than others'—my incentives reward precision over the default of "undercount to stay under budget." I weigh frequently, have impedance body composition data, periodic RMR data from an indirect calorimeter (which I now own), sleep-wake times, all gym sessions, and daily step counts.

Until last year, I believed CICO was cause rather than effect. For whatever reason, I thought, I'm hungrier or have less willpower, and so I must suffer to achieve a healthy weight. My beliefs have since turned toward a meandering set point controlled by a yet unknown homeostat. Whether contamination, omega ratios, a gravitostat, or some combination, anecdotal analysis suggests the set point is a hidden variable, confounding simple analysis. Weight loss when above it is trivial, and below it, fiendishly difficult. This is why I believe a control system is the most likely candidate for a successful model. Anything simpler would be a low-hanging fruit of simple correlation long
18c138ab-df83-4abe-86d8-c51d879f33ca
trentmkelly/LessWrong-43k
LessWrong
Houston Hackerspace Meetup: Sunday May 22, 5:00PM

Sunday May 22, 5:00PM
TX/RX Hackerspace
2010 Commerce St
Houston, TX 77002

Goals: In general, to become more awesome. In specific, to learn about cognitive biases as a group, figure out how to become less wrong, and have fun while doing it. We also may experiment with integrating a hackerspace, which already has several lesswrongians, into the group. The first meeting will mostly be a meet and greet to establish future directions, with perhaps a video lecture and discussion on cognitive biases depending on the time constraints of the people who come. The door to the building will have a sign marked Less Wrong, and the nearby empty lot can be used for parking.

Dvorak, a thus far much more active member in the community, will be providing me with support in organizing the group.

Edit: There will be pizza. Also, a pictorial view

This is the set of buildings that the hackerspace is in. It's difficult to see our front from this angle - unfortunately Google Maps decided to map everything but our little section of Commerce Street. It's near where the white truck and red motorcycle are.

And this is the empty lot that you can park in.

For more reference:

http://maps.google.com/maps?client=ubuntu&channel=fs&q=2010+Commerce+St.+Houston,+Tx+77002&oe=utf-8&um=1&ie=UTF-8&hq=&hnear=0x8640bed8ed95625d:0x4c9af214d2032035,2010+Commerce+St,+Houston,+TX+77002&gl=us&ei=C9LRTYHvE8fL0QGu8OjlCw&sa=X&oi=geocode_result&ct=title&resnum=1&ved=0CBkQ8gEwAA
8caf6c7a-479e-41ff-ab1a-fc8bdbecb7da
trentmkelly/LessWrong-43k
LessWrong
SSRIs: Much More Than You Wanted To Know

The claim that “SSRIs don’t work” or “SSRIs are mostly just placebo” is most commonly associated with Irving Kirsch, a man with the awesome job title of “Associate Director Of The Program For Placebo Studies at Harvard”. (fun fact: there’s actually no such thing as “Placebo Studies”, but Professor Kirsch’s belief that he directs a Harvard department inspires him to create much higher-quality research.)

In 1998, he published a meta-analysis of 19 placebo-controlled drug trials that suggested that almost all of the benefits of antidepressants were due to the placebo effect. Psychiatrists denounced him, saying that you can choose pretty much whatever studies you want for a meta-analysis. After biding his time for a decade, in 2008 he struck back with another meta-analysis, this being one of the first papers in all of medical science to take the audacious step of demanding all the FDA’s data through the Freedom of Information Act. Since drug companies are required to report all their studies to the FDA, this theoretically provides a rare and wonderful publication-bias-free data set. Using this set, he found that, although antidepressants did seem to outperform placebo, the effect was not “clinically significant” except “at the upper end of very severe depression”.

This launched a minor war between supporters and detractors. Probably the strongest support he received was a big 2010 meta-analysis by Fournier et al, which found that

> The magnitude of benefit of antidepressant medication compared with placebo increases with severity of depression symptoms and may be minimal or nonexistent, on average, in patients with mild or moderate symptoms. For patients with very severe depression, the benefit of medications over placebo is substantial.

Of course, a very large number of antidepressants are given to people with mild or moderate depression. So what now?

Let me sort the debate about antidepressants into a series of complaints:

1. Antidepressants were oversold and
5c393d4e-a096-4f45-90fb-cf8fe66fdb28
trentmkelly/LessWrong-43k
LessWrong
A widely shared AI productivity paper was retracted, is possibly fraudulent

Confidence notes: I am a physicist working on computational material science, so I have some familiarity with the field, but don't know much about R&D firms or economics. Some of the links in this article were gathered from a post at pivot-to-ai.com and the BS detector.

The paper "Artificial Intelligence, Scientific Discovery, and Product Innovation" was published as an arXiv preprint last December, roughly 5 months ago, and was submitted to a top economics journal.

The paper claimed to show the effect of an experiment at a large R&D company. It claimed the productivity of a thousand material scientists was tracked before and after the introduction of a machine learning material generation tool. The headline result was that the AI caused a 44% increase in materials discovery at the firm, with a productivity increase of 81% for top-decile scientists.

This research was breathlessly reported on in The Atlantic, the Wall Street Journal, and in the news section of Nature. Nobel economics prize winner Daron Acemoglu promoted the research and was acknowledged for his support in the paper.

The pre-print article was shared widely, and has been cited dozens of times already in the academic literature. I have seen it cited several times on this forum, most notably by @80000_Hours in their case for AGI by 2030. I myself looked at the paper and took its findings at face value, something I am now kicking myself for[1].

At some point someone else with some computational materials science expertise noticed serious issues with the paper, and took their concerns to Acemoglu, who in turn took it up with MIT directly. And then everything started to fall apart. Both Acemoglu and MIT have publicly withdrawn support from the paper and urged that it be retracted. In a press release, MIT stated:

> "Earlier this year, the COD conducted a confidential internal review based upon allegations it received regarding certain aspects of this paper.  While student privacy laws and MIT
a6ae5850-56a8-4bdc-ac5b-38cd5d061983
trentmkelly/LessWrong-43k
LessWrong
Scientific Notation Options

When working with numbers that span many orders of magnitude it's very helpful to use some form of scientific notation. At its core, scientific notation expresses a number by breaking it down into a decimal ≥1 and <10 (the "significand" or "mantissa") and an integer representing the order of magnitude (the "exponent"). Traditionally this is written as:

> 3 × 10⁴

While this communicates the necessary information, it has two main downsides:

* It uses three constant characters ("× 10") to separate the significand and exponent.
* It uses superscript, which doesn't work with some typesetting systems and adds awkwardly large line spacing at the best of times. And is generally lost on cut-and-paste.

Instead, I'm a big fan of e-notation, commonly used in programming and on calculators. This looks like:

> 3e4

This works everywhere, doesn't mess up your line spacing, and requires half as many characters as writing it the traditional way. There are a bunch of other variants of e-notation, but I don't like any of them as much:

* 3E4: a shorter separator would be easier to read.
* 3e+4: the + is redundant.
* 3⏨4: neat, but requires unicode and can't be pasted into widely-used programming languages and spreadsheet programs.

One downside of "e" notation is that it comes off as less formal than traditional scientific notation. But unless you need to be read as maximally formal I think it's just better all around.

Comment via: facebook, mastodon
1d70497c-2a59-41db-a6ed-31f31d3201b0
StampyAI/alignment-research-dataset/lesswrong
LessWrong
Papers for 2017

I had three new papers either published or accepted into publication last year; all of them are now available online:

* **How Feasible is the Rapid Development of Artificial Superintelligence?** *Physica Scripta 92* (11), 113001.
  + *Abstract:* What kinds of fundamental limits are there in how capable artificial intelligence (AI) systems might become? Two questions in particular are of interest: 1) How much more capable could AI become relative to humans, and 2) how easily could superhuman capability be acquired? To answer these questions, we will consider the literature on human expertise and intelligence, discuss its relevance for AI, and consider how AI could improve on humans in two major aspects of thought and expertise, namely simulation and pattern recognition. We find that although there are very real limits to prediction, it seems like AI could still substantially improve on human intelligence.
  + Links: [published version (paywalled)](http://iopscience.iop.org/article/10.1088/1402-4896/aa90e8), [free preprint](http://kajsotala.fi/assets/2017/10/how_feasible.pdf).
* **Disjunctive Scenarios of Catastrophic AI Risk.** *AI Safety and Security* (Roman Yampolskiy, ed.), CRC Press. Forthcoming.
  + *Abstract:* Artificial intelligence (AI) safety work requires an understanding of what could cause AI to become unsafe. This chapter seeks to provide a broad look at the various ways in which the development of AI sophisticated enough to have general intelligence could lead to it becoming powerful enough to cause a catastrophe. In particular, the present chapter seeks to focus on the way that various risks are disjunctive—on how there are multiple different ways by which things could go wrong, any one of which could lead to disaster. We cover different levels of a strategic advantage an AI might acquire, alternatives for the point where an AI might decide to turn against humanity, different routes by which an AI might become dangerously capable, ways by which the AI might acquire autonomy, and scenarios with varying number of AIs. Whereas previous work has focused on risks specifically only from superintelligent AI, this chapter also discusses crucial capabilities that could lead to catastrophic risk and which could emerge anywhere on the path from near-term “narrow AI” to full-blown superintelligence.
  + Links: [free preprint](http://kajsotala.fi/assets/2017/11/Disjunctive-scenarios.pdf).
* **Superintelligence as a Cause or Cure for Risks of Astronomical Suffering.** *Informatica 41* (4).
  + (with Lukas Gloor)
  + *Abstract:* Discussions about the possible consequences of creating superintelligence have included the possibility of existential risk, often understood mainly as the risk of human extinction. We argue that suffering risks (s-risks), where an adverse outcome would bring about severe suffering on an astronomical scale, are risks of a comparable severity and probability as risks of extinction. Preventing them is the common interest of many different value systems. Furthermore, we argue that in the same way as superintelligent AI both contributes to existential risk but can also help prevent it, superintelligent AI can both be a suffering risk or help avoid it. Some types of work aimed at making superintelligent AI safe will also help prevent suffering risks, and there may also be a class of safeguards for AI that helps specifically against s-risks.
  + Links: [published version (open access)](http://www.informatica.si/index.php/informatica/article/view/1877).
In addition, my old paper [Responses to Catastrophic AGI Risk](http://iopscience.iop.org/article/10.1088/0031-8949/90/1/018001) (w/ Roman Yampolskiy) was republished, with some minor edits, as the book chapters “Risks of the Journey to the Singularity” and “Responses to the Journey to the Singularity”, in *[The Technological Singularity: Managing the Journey](https://link.springer.com/book/10.1007/978-3-662-54033-6)* (Victor Callaghan et al, eds.), Springer-Verlag.
1494ddb9-3922-44cd-b8a5-1b41310c6573
trentmkelly/LessWrong-43k
LessWrong
Meetup : Berkeley: The Motivation Hacker by Nick Winter Discussion article for the meetup : Berkeley: The Motivation Hacker by Nick Winter WHEN: 29 May 2013 07:30:00PM (-0700) WHERE: Berkeley, CA Dear all, this week Swimmer963 will give a presentation on Nick Winter's new book The Motivation Hacker: http://www.nickwinter.net/the-motivation-hacker Nick Winter is a community member, startup founder, and CFAR workshop alumnus. Thanks to Swimmer963 for giving this presentation! Doors open at 7:30pm on Wednesday; the presentation will start at 8pm and take roughly half an hour. Stick around afterwards to chat. For directions to Zendo, see the mailing list: http://groups.google.com/group/bayarealesswrong or call me at: http://i.imgur.com/Vcafy.png Discussion article for the meetup : Berkeley: The Motivation Hacker by Nick Winter
b13390c3-eaad-4fb9-b3f2-74c1fafc5a8f
StampyAI/alignment-research-dataset/special_docs
Other
Visualizing and Understanding Convolutional Networks

Matthew D. Zeiler and Rob Fergus
Dept. of Computer Science, New York University, USA
{zeiler,fergus}@cs.nyu.edu

Abstract. Large Convolutional Network models have recently demonstrated impressive classification performance on the ImageNet benchmark (Krizhevsky et al. [18]). However there is no clear understanding of why they perform so well, or how they might be improved. In this paper we explore both issues. We introduce a novel visualization technique that gives insight into the function of intermediate feature layers and the operation of the classifier. Used in a diagnostic role, these visualizations allow us to find model architectures that outperform Krizhevsky et al. on the ImageNet classification benchmark. We also perform an ablation study to discover the performance contribution from different model layers. We show our ImageNet model generalizes well to other datasets: when the softmax classifier is retrained, it convincingly beats the current state-of-the-art results on Caltech-101 and Caltech-256 datasets.

1 Introduction

Since their introduction by LeCun et al. [20] in the early 1990's, Convolutional Networks (convnets) have demonstrated excellent performance at tasks such as hand-written digit classification and face detection. In the last 18 months, several papers have shown that they can also deliver outstanding performance on more challenging visual classification tasks. Ciresan et al. [4] demonstrate state-of-the-art performance on NORB and CIFAR-10 datasets. Most notably, Krizhevsky et al. [18] show record beating performance on the ImageNet 2012 classification benchmark, with their convnet model achieving an error rate of 16.4%, compared to the 2nd place result of 26.1%. Following on from this work, Girshick et al. [10] have shown leading detection performance on the PASCAL VOC dataset. Several factors are responsible for this dramatic improvement in performance: (i) the availability of much larger training sets, with millions of labeled examples; (ii) powerful GPU implementations, making the training of very large models practical and (iii) better model regularization strategies, such as Dropout [14].

Despite this encouraging progress, there is still little insight into the internal operation and behavior of these complex models, or how they achieve such good performance. From a scientific standpoint, this is deeply unsatisfactory. Without clear understanding of how and why they work, the development of better models is reduced to trial-and-error. In this paper we introduce a visualization technique that reveals the input stimuli that excite individual feature maps at any layer in the model. It also allows us to observe the evolution of features during training and to diagnose potential problems with the model. The visualization technique we propose uses a multi-layered Deconvolutional Network (deconvnet), as proposed by Zeiler et al. [29], to project the feature activations back to the input pixel space. We also perform a sensitivity analysis of the classifier output by occluding portions of the input image, revealing which parts of the scene are important for classification.
Using these tools, we start with the architecture of Krizhevsky et al. [18] and explore different architectures, discovering ones that outperform their results on ImageNet. We then explore the generalization ability of the model to other datasets, just retraining the softmax classifier on top. As such, this is a form of supervised pre-training, which contrasts with the unsupervised pre-training methods popularized by Hinton et al. [13] and others [1,26].

1.1 Related Work

Visualization: Visualizing features to gain intuition about the network is common practice, but mostly limited to the 1st layer where projections to pixel space are possible. In higher layers alternate methods must be used. [8] find the optimal stimulus for each unit by performing gradient descent in image space to maximize the unit's activation. This requires a careful initialization and does not give any information about the unit's invariances. Motivated by the latter's short-coming, [19] (extending an idea by [2]) show how the Hessian of a given unit may be computed numerically around the optimal response, giving some insight into invariances. The problem is that for higher layers, the invariances are extremely complex so are poorly captured by a simple quadratic approximation. Our approach, by contrast, provides a non-parametric view of invariance, showing which patterns from the training set activate the feature map. Our approach is similar to contemporary work by Simonyan et al. [23] who demonstrate how saliency maps can be obtained from a convnet by projecting back from the fully connected layers of the network, instead of the convolutional features that we use. Girshick et al. [10] show visualizations that identify patches within a dataset that are responsible for strong activations at higher layers in the model. Our visualizations differ in that they are not just crops of input images, but rather top-down projections that reveal structures within each patch that stimulate a particular feature map.

Feature Generalization: Our demonstration of the generalization ability of convnet features is also explored in concurrent work by Donahue et al. [7] and Girshick et al. [10]. They use the convnet features to obtain state-of-the-art performance on Caltech-101 and the Sun scenes dataset in the former case, and for object detection on the PASCAL VOC dataset, in the latter.

2 Approach

We use standard fully supervised convnet models throughout the paper, as defined by LeCun et al. [20] and Krizhevsky et al. [18]. These models map a color 2D input image x_i, via a series of layers, to a probability vector ŷ_i over the C different classes. Each layer consists of (i) convolution of the previous layer output (or, in the case of the 1st layer, the input image) with a set of learned filters; (ii) passing the responses through a rectified linear function (relu(x) = max(x, 0)); (iii) [optionally] max pooling over local neighborhoods and (iv) [optionally] a local contrast operation that normalizes the responses across feature maps. For more details of these operations, see [18] and [16]. The top few layers of the network are conventional fully-connected networks and the final layer is a softmax classifier. Fig. 3 shows the model used in many of our experiments. We train these models using a large set of N labeled images {x, y}, where label y_i is a discrete variable indicating the true class. A cross-entropy loss function, suitable for image classification, is used to compare ŷ_i and y_i.
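A minimal sketch of the per-layer computation just described (my own NumPy illustration, not the authors' code; single channel, no contrast normalization, and the pooling "switches" are recorded because the deconvnet introduced below will need them):

```python
# One convnet layer as described above: filtering -> relu -> max pooling,
# recording the argmax "switches" for later use by the deconvnet.
import numpy as np

def conv_layer(x, filt, pool=2):
    fh, fw = filt.shape
    out = np.zeros((x.shape[0] - fh + 1, x.shape[1] - fw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # cross-correlation, the usual convnet convention
            out[i, j] = np.sum(x[i:i + fh, j:j + fw] * filt)
    out = np.maximum(out, 0.0)  # relu(x) = max(x, 0)
    h, w = out.shape[0] // pool, out.shape[1] // pool
    pooled = np.zeros((h, w))
    switches = np.zeros((h, w, 2), dtype=int)  # location of each local max
    for i in range(h):
        for j in range(w):
            block = out[i * pool:(i + 1) * pool, j * pool:(j + 1) * pool]
            k = np.unravel_index(block.argmax(), block.shape)
            pooled[i, j] = block[k]
            switches[i, j] = (i * pool + k[0], j * pool + k[1])
    return pooled, switches
```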
The parameters of the network (filters in the convolutional layers, weight matrices in the fully-connected layers and biases) are trained by back-propagating the derivative of the loss with respect to the parameters throughout the network, and updating the parameters via stochastic gradient descent. Details of training are given in Section 3.

2.1 Visualization with a Deconvnet

Understanding the operation of a convnet requires interpreting the feature activity in intermediate layers. We present a novel way to map these activities back to the input pixel space, showing what input pattern originally caused a given activation in the feature maps. We perform this mapping with a Deconvolutional Network (deconvnet), Zeiler et al. [29]. A deconvnet can be thought of as a convnet model that uses the same components (filtering, pooling) but in reverse, so instead of mapping pixels to features it does the opposite. In Zeiler et al. [29], deconvnets were proposed as a way of performing unsupervised learning. Here, they are not used in any learning capacity, just as a probe of an already trained convnet.

To examine a convnet, a deconvnet is attached to each of its layers, as illustrated in Fig. 1 (top), providing a continuous path back to image pixels. To start, an input image is presented to the convnet and features computed throughout the layers. To examine a given convnet activation, we set all other activations in the layer to zero and pass the feature maps as input to the attached deconvnet layer. Then we successively (i) unpool, (ii) rectify and (iii) filter to reconstruct the activity in the layer beneath that gave rise to the chosen activation. This is then repeated until input pixel space is reached.

Unpooling: In the convnet, the max pooling operation is non-invertible, however we can obtain an approximate inverse by recording the locations of the maxima within each pooling region in a set of switch variables. In the deconvnet, the unpooling operation uses these switches to place the reconstructions from the layer above into appropriate locations, preserving the structure of the stimulus. See Fig. 1 (bottom) for an illustration of the procedure.

Rectification: The convnet uses relu non-linearities, which rectify the feature maps thus ensuring the feature maps are always positive. To obtain valid feature reconstructions at each layer (which also should be positive), we pass the reconstructed signal through a relu non-linearity.¹

Filtering: The convnet uses learned filters to convolve the feature maps from the previous layer. To approximately invert this, the deconvnet uses transposed versions of the same filters (as other autoencoder models, such as RBMs), but applied to the rectified maps, not the output of the layer beneath. In practice this means flipping each filter vertically and horizontally. Note that we do not use any contrast normalization operations when in this reconstruction path.

Projecting down from higher layers uses the switch settings generated by the max pooling in the convnet on the way up. As these switch settings are peculiar to a given input image, the reconstruction obtained from a single activation thus resembles a small piece of the original input image, with structures weighted according to their contribution toward the feature activation. Since the model is trained discriminatively, they implicitly show which parts of the input image are discriminative.
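The three operations can be sketched as follows (my own NumPy rendering, not the authors' code; `switches` are the max locations recorded during the forward pass):

```python
# Sketch of the deconvnet operations described above.
import numpy as np

def unpool(pooled, switches, out_shape):
    out = np.zeros(out_shape)
    h, w = pooled.shape
    for i in range(h):
        for j in range(w):
            r, c = switches[i, j]
            out[r, c] = pooled[i, j]  # place value back at the recorded max
    return out

def rectify(x):
    return np.maximum(x, 0.0)  # keep reconstructions positive

def filter_transpose(x, filt):
    # "transposed" filtering: apply the filter flipped vertically and
    # horizontally, with full padding, to grow back to the input size
    flipped = filt[::-1, ::-1]
    fh, fw = flipped.shape
    padded = np.pad(x, ((fh - 1,) * 2, (fw - 1,) * 2))
    out = np.zeros((x.shape[0] + fh - 1, x.shape[1] + fw - 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(padded[i:i + fh, j:j + fw] * flipped)
    return out
```

One deconvnet step applies these in sequence: `filter_transpose(rectify(unpool(...)))`.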
Note that these projections are not samples from the model, since there is no generative process involved. The whole procedure is similar to backpropping a single strong activation (rather than the usual gradients), i.e. computing ∂h/∂X_n, where h is the element of the feature map with the strong activation and X_n is the input image. However, it differs in that (i) the relu is imposed independently and (ii) contrast normalization operations are not used. A general shortcoming of our approach is that it only visualizes a single activation, not the joint activity present in a layer. Nevertheless, as we show in Fig. 6, these visualizations are accurate representations of the input pattern that stimulates the given feature map in the model: when the parts of the original input image corresponding to the pattern are occluded, we see a distinct drop in activity within the feature map.

¹ We also tried rectifying using the binary mask imposed by the feed-forward relu operation, but the resulting visualizations were significantly less clear.

Fig. 1: Top: A deconvnet layer (left) attached to a convnet layer (right). The deconvnet will reconstruct an approximate version of the convnet features from the layer beneath. Bottom: An illustration of the unpooling operation in the deconvnet, using switches which record the location of the local max in each pooling region (colored zones) during pooling in the convnet. The black/white bars are negative/positive activations within the feature map.

3 Training Details

We now describe the large convnet model that will be visualized in Section 4. The architecture, shown in Fig. 3, is similar to that used by Krizhevsky et al. [18] for ImageNet classification. One difference is that the sparse connections used in Krizhevsky's layers 3,4,5 (due to the model being split across 2 GPUs) are replaced with dense connections in our model. Other important differences relating to layers 1 and 2 were made following inspection of the visualizations in Fig. 5, as described in Section 4.1.

The model was trained on the ImageNet 2012 training set (1.3 million images, spread over 1000 different classes) [6]. Each RGB image was preprocessed by resizing the smallest dimension to 256, cropping the center 256x256 region, subtracting the per-pixel mean (across all images) and then using 10 different sub-crops of size 224x224 (corners + center with(out) horizontal flips). Stochastic gradient descent with a mini-batch size of 128 was used to update the parameters, starting with a learning rate of 10⁻², in conjunction with a momentum term of 0.9. We anneal the learning rate throughout training manually when the validation error plateaus. Dropout [14] is used in the fully connected layers (6 and 7) with a rate of 0.5. All weights are initialized to 10⁻² and biases are set to 0.

Visualization of the first layer filters during training reveals that a few of them dominate. To combat this, we renormalize each filter in the convolutional layers whose RMS value exceeds a fixed radius of 10⁻¹ to this fixed radius. This is crucial, especially in the first layer of the model, where the input images are roughly in the [-128,128] range.
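A rough sketch of that filter constraint (my own rendering, not the authors' code):

```python
# After each update, any convolutional filter whose RMS value exceeds
# the radius 1e-1 is rescaled back to that radius.
import numpy as np

def renormalize_filters(w, radius=1e-1):
    # w: array of shape (num_filters, ...) -- one filter per leading index
    for i in range(w.shape[0]):
        rms = np.sqrt(np.mean(w[i] ** 2))
        if rms > radius:
            w[i] *= radius / rms
    return w
```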
As in Krizhevsky et al. [18], we produce multiple different crops and flips of each training example to boost training set size. We stopped training after 70 epochs, which took around 12 days on a single GTX580 GPU, using an implementation based on [18].

4 Convnet Visualization

Using the model described in Section 3, we now use the deconvnet to visualize the feature activations on the ImageNet validation set.

Feature Visualization: Fig. 2 shows feature visualizations from our model once training is complete. For a given feature map, we show the top 9 activations, each projected separately down to pixel space, revealing the different structures that excite that map and showing its invariance to input deformations. Alongside these visualizations we show the corresponding image patches. These have greater variation than visualizations which solely focus on the discriminant structure within each patch. For example, in layer 5, row 1, col 2, the patches appear to have little in common, but the visualizations reveal that this particular feature map focuses on the grass in the background, not the foreground objects.

The projections from each layer show the hierarchical nature of the features in the network. Layer 2 responds to corners and other edge/color conjunctions. Layer 3 has more complex invariances, capturing similar textures (e.g. mesh patterns (Row 1, Col 1); text (R2,C4)). Layer 4 shows significant variation, and is more class-specific: dog faces (R1,C1); bird's legs (R4,C2). Layer 5 shows entire objects with significant pose variation, e.g. keyboards (R1,C11) and dogs (R4).

Feature Evolution during Training: Fig. 4 visualizes the progression during training of the strongest activation (across all training examples) within a given feature map projected back to pixel space. Sudden jumps in appearance result from a change in the image from which the strongest activation originates. The lower layers of the model can be seen to converge within a few epochs. However, the upper layers only develop after a considerable number of epochs (40-50), demonstrating the need to let the models train until fully converged.

4.1 Architecture Selection

While visualization of a trained model gives insight into its operation, it can also assist with selecting good architectures in the first place. By visualizing the first and second layers of Krizhevsky et al.'s architecture (Fig. 5(a) & (c)), various problems are apparent. The first layer filters are a mix of extremely high and low frequency information, with little coverage of the mid frequencies. Additionally, the 2nd layer visualization shows aliasing artifacts caused by the large stride 4 used in the 1st layer convolutions. To remedy these problems, we (i) reduced the 1st layer filter size from 11x11 to 7x7 and (ii) made the stride of the convolution 2, rather than 4. This new architecture retains much more information in the 1st and 2nd layer features, as shown in Fig. 5(b) & (d). More importantly, it also improves the classification performance as shown in Section 5.1.

4.2 Occlusion Sensitivity

With image classification approaches, a natural question is if the model is truly identifying the location of the object in the image, or just using the surrounding context. Fig. 6 attempts to answer this question by systematically occluding different portions of the input image with a grey square, and monitoring the output of the classifier.
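A minimal sketch of this occlusion procedure (my own illustration; `model`, the patch size, stride and grey fill value are assumptions, not the paper's exact settings):

```python
# Slide a grey square over the image and record how the probability of
# the true class changes; low values mark regions the classifier uses.
import numpy as np

def occlusion_map(model, image, true_class, patch=32, stride=8, fill=128):
    # model: callable returning class probabilities for a batch of images
    h, w, _ = image.shape
    heat = np.zeros(((h - patch) // stride + 1, (w - patch) // stride + 1))
    for i, y in enumerate(range(0, h - patch + 1, stride)):
        for j, x in enumerate(range(0, w - patch + 1, stride)):
            occluded = image.copy()
            occluded[y:y + patch, x:x + patch, :] = fill  # grey square
            heat[i, j] = model(occluded[None])[0, true_class]
    return heat
```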
The examples clearly show the model is localizing the objects within the scene, as the probability of the correct class drops significantly when the object is occluded. Fig. 6 also shows visualizations from the strongest feature map of the top convolution layer, in addition to activity in this map (summed over spatial locations) as a function of occluder position. When the occluder covers the image region that appears in the visualization, we see a strong drop in activity in the feature map. This shows that the visualization genuinely corresponds to the image structure that stimulates that feature map, hence validating the other visualizations shown in Fig. 4 and Fig. 2.

Fig. 2: Visualization of features in a fully trained model. For layers 2-5 we show the top 9 activations in a random subset of feature maps across the validation data, projected down to pixel space using our deconvolutional network approach. Our reconstructions are not samples from the model: they are reconstructed patterns from the validation set that cause high activations in a given feature map. For each feature map we also show the corresponding image patches. Note: (i) the strong grouping within each feature map, (ii) greater invariance at higher layers and (iii) exaggeration of discriminative parts of the image, e.g. eyes and noses of dogs (layer 4, row 1, cols 1). Best viewed in electronic form. The compression artifacts are a consequence of the 30Mb submission limit, not the reconstruction algorithm itself.

Fig. 3: Architecture of our 8 layer convnet model. A 224 by 224 crop of an image (with 3 color planes) is presented as the input. This is convolved with 96 different 1st layer filters (red), each of size 7 by 7, using a stride of 2 in both x and y. The resulting feature maps are then: (i) passed through a rectified linear function (not shown), (ii) pooled (max within 3x3 regions, using stride 2) and (iii) contrast normalized across feature maps to give 96 different 55 by 55 element feature maps. Similar operations are repeated in layers 2,3,4,5. The last two layers are fully connected, taking features from the top convolutional layer as input in vector form (6·6·256 = 9216 dimensions). The final layer is a C-way softmax function, C being the number of classes. All filters and feature maps are square in shape.

Fig. 4: Evolution of a randomly chosen subset of model features through training. Each layer's features are displayed in a different block. Within each block, we show a randomly chosen subset of features at epochs [1,2,5,10,20,30,40,64]. The visualization shows the strongest activation (across all training examples) for a given feature map, projected down to pixel space using our deconvnet approach. Color contrast is artificially enhanced and the figure is best viewed in electronic form.

5 Experiments

5.1 ImageNet 2012

This dataset consists of 1.3M/50k/100k training/validation/test examples, spread over 1000 categories. Table 1 shows our results on this dataset.
Fig. 4. Evolution of a randomly chosen subset of model features through training. Each layer's features are displayed in a different block. Within each block, we show a randomly chosen subset of features at epochs [1, 2, 5, 10, 20, 30, 40, 64]. The visualization shows the strongest activation (across all training examples) for a given feature map, projected down to pixel space using our deconvnet approach. Color contrast is artificially enhanced and the figure is best viewed in electronic form.

5 Experiments

5.1 ImageNet 2012

This dataset consists of 1.3M/50k/100k training/validation/test examples, spread over 1000 categories. Table 1 shows our results on this dataset.

Using the exact architecture specified in Krizhevsky et al. [18], we attempt to replicate their result on the validation set. We achieve an error rate within 0.1% of their reported value on the ImageNet 2012 validation set.

Next we analyze the performance of our model with the architectural changes outlined in Section 4.1 (7x7 filters in layer 1 and stride 2 convolutions in layers 1 & 2). This model, shown in Fig. 3, significantly outperforms the architecture of Krizhevsky et al. [18], beating their single model result by 1.7% (test top-5). When we combine multiple models, we obtain a test error of 14.8%, an improvement of 1.6%. This result is close to that produced by the data-augmentation approaches of Howard [15], which could easily be combined with our architecture. However, our model is some way short of the winner of the 2013 ImageNet classification competition [28].

Fig. 5. (a): 1st layer features without feature scale clipping. Note that one feature dominates. (b): 1st layer features from Krizhevsky et al. [18]. (c): Our 1st layer features. The smaller stride (2 vs 4) and filter size (7x7 vs 11x11) results in more distinctive features and fewer "dead" features. (d): Visualizations of 2nd layer features from Krizhevsky et al. [18]. (e): Visualizations of our 2nd layer features. These are cleaner, with no aliasing artifacts that are visible in (d).

Table 1. ImageNet 2012/2013 classification error rates. The * indicates models that were trained on both ImageNet 2011 and 2012 training sets.

Error %                                                          Val Top-1   Val Top-5   Test Top-5
Gunji et al. [12]                                                    -           -          26.2
DeCAF [7]                                                            -           -          19.2
Krizhevsky et al. [18], 1 convnet                                  40.7        18.2          -
Krizhevsky et al. [18], 5 convnets                                 38.1        16.4         16.4
Krizhevsky et al.* [18], 1 convnet                                 39.0        16.6          -
Krizhevsky et al.* [18], 7 convnets                                36.7        15.4         15.3
Our replication of Krizhevsky et al., 1 convnet                    40.5        18.1          -
1 convnet as per Fig. 3                                            38.4        16.5          -
5 convnets as per Fig. 3 - (a)                                     36.7        15.3         15.3
1 convnet as per Fig. 3, layers 3,4,5: 512,1024,512 maps - (b)     37.5        16.0         16.1
6 convnets, (a) & (b) combined                                     36.0        14.7         14.8
Howard [15]                                                          -           -          13.5
Clarifai [28]                                                        -           -          11.7

[Fig. 6 panels: (a) Input Image; (b) Layer 5, strongest feature map; (c) Layer 5, strongest feature map projections; (d) Classifier, probability of correct class; (e) Classifier, most probable class. True labels of the three examples: Pomeranian, Car Wheel, Afghan Hound.]

Fig. 6. Three test examples where we systematically cover up different portions of the scene with a gray square (1st column) and see how the top (layer 5) feature maps ((b) & (c)) and classifier output ((d) & (e)) changes. (b): for each position of the gray square, we record the total activation in one layer 5 feature map (the one with the strongest response in the unoccluded image). (c): a visualization of this feature map projected down into the input image (black square), along with visualizations of this map from other images. The first row example shows the strongest feature to be the dog's face. When this is covered up, the activity in the feature map decreases (blue area in (b)). (d): a map of correct class probability, as a function of the position of the gray square. E.g. when the dog's face is obscured, the probability for "pomeranian" drops significantly. (e): the most probable label as a function of occluder position. E.g. in the 1st row, for most locations it is "pomeranian", but if the dog's face is obscured but not the ball, then it predicts "tennis ball". In the 2nd example, text on the car is the strongest feature in layer 5, but the classifier is most sensitive to the wheel. The 3rd example contains multiple objects. The strongest feature in layer 5 picks out the faces, but the classifier is sensitive to the dog (blue region in (d)), since it uses multiple feature maps.
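The multi-model rows of Table 1 come from combining several trained convnets. The paper does not spell out the combination rule, so the simple prediction-averaging below is an assumption offered only to make the idea concrete.

```python
import numpy as np

def ensemble_predict(models, image):
    """Average class probabilities from several trained convnets.

    Each element of `models` is assumed to map an image to a vector of
    class probabilities (e.g. softmax outputs of the networks in Table 1).
    """
    probs = np.mean([m(image) for m in models], axis=0)
    return int(np.argmax(probs)), probs
```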
Varying ImageNet Model Sizes: In Table 2, we first explore the architecture of Krizhevsky et al. [18] by adjusting the size of layers, or removing them entirely. In each case, the model is trained from scratch with the revised architecture. Removing the fully connected layers (6, 7) only gives a slight increase in error (in the following, we refer to top-5 validation error). This is surprising, given that they contain the majority of model parameters. Removing two of the middle convolutional layers also makes a relatively small difference to the error rate. However, removing both the middle convolution layers and the fully connected layers yields a model with only 4 layers whose performance is dramatically worse. This would suggest that the overall depth of the model is important for obtaining good performance. We then modify our model, shown in Fig. 3. Changing the size of the fully connected layers makes little difference to performance (same for the model of Krizhevsky et al. [18]). However, increasing the size of the middle convolution layers does give a useful gain in performance. But increasing these, while also enlarging the fully connected layers, results in over-fitting.

Table 2. ImageNet 2012 classification error rates with various architectural changes to the model of Krizhevsky et al. [18] and our model (see Fig. 3).

Error %                                                             Train Top-1   Val Top-1   Val Top-5
Our replication of Krizhevsky et al. [18], 1 convnet                    35.1        40.5        18.1
Removed layers 3,4                                                      41.8        45.4        22.1
Removed layer 7                                                         27.4        40.0        18.4
Removed layers 6,7                                                      27.4        44.8        22.4
Removed layers 3,4,6,7                                                  71.1        71.3        50.1
Adjust layers 6,7: 2048 units                                           40.3        41.7        18.8
Adjust layers 6,7: 8192 units                                           26.8        40.0        18.1
Our Model (as per Fig. 3)                                               33.1        38.4        16.5
Adjust layers 6,7: 2048 units                                           38.2        40.2        17.6
Adjust layers 6,7: 8192 units                                           22.0        38.8        17.0
Adjust layers 3,4,5: 512,1024,512 maps                                  18.8        37.5        16.0
Adjust layers 6,7: 8192 units and layers 3,4,5: 512,1024,512 maps       10.0        38.3        16.9

5.2 Feature Generalization

The experiments above show the importance of the convolutional part of our ImageNet model in obtaining state-of-the-art performance. This is supported by the visualizations of Fig. 2 which show the complex invariances learned in the convolutional layers. We now explore the ability of these feature extraction layers to generalize to other datasets, namely Caltech-101 [9], Caltech-256 [11] and PASCAL VOC 2012. To do this, we keep layers 1-7 of our ImageNet-trained model fixed and train a new softmax classifier on top (for the appropriate number of classes) using the training images of the new dataset. Since the softmax contains relatively few parameters, it can be trained quickly from a relatively small number of examples, as is the case for certain datasets.
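A minimal sketch of this transfer setup, freezing the ImageNet-trained layers and fitting only a new softmax, is shown below. It assumes the hypothetical ZFNetSketch module from Section 4 above and a generic (images, labels) data loader; the optimizer and its settings are assumptions, not choices reported in the paper.

```python
import torch
import torch.nn as nn

def fit_new_softmax(pretrained, num_classes, loader, epochs=10, lr=1e-2):
    """Keep the ImageNet-trained feature layers fixed; train only a new classifier."""
    for p in pretrained.parameters():
        p.requires_grad = False                        # layers 1-7 stay fixed
    head = nn.Linear(4096, num_classes)                # new softmax for the target dataset
    opt = torch.optim.SGD(head.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()                    # softmax + log-loss
    pretrained.eval()
    for _ in range(epochs):
        for images, labels in loader:
            with torch.no_grad():
                feats = pretrained.features(images)                               # conv layers 1-5
                feats = pretrained.classifier[:-1](torch.flatten(feats, 1))       # fc layers 6-7
            loss = loss_fn(head(feats), labels)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return head
```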
The experiments compare our feature representation, obtained from ImageNet, with the hand-crafted features used by other methods. In both our approach and existing ones the Caltech/PASCAL training data is only used to train the classifier. As they are of similar complexity (ours: softmax, others: linear SVM), the feature representation is crucial to performance. It is important to note that both representations were built using images beyond the Caltech and PASCAL training sets. For example, the hyper-parameters in HOG descriptors were determined through systematic experiments on a pedestrian dataset [5]. We also try a second strategy of training a model from scratch, i.e. resetting layers 1-7 to random values and training them, as well as the softmax, on the training images of the PASCAL/Caltech dataset.

One complication is that some of the Caltech datasets have some images that are also in the ImageNet training data. Using normalized correlation, we identified these few "overlap" images(2) and removed them from our ImageNet training set and then retrained our ImageNet models, so avoiding the possibility of train/test contamination.

(2) For Caltech-101, we found 44 images in common (out of 9,144 total images), with a maximum overlap of 10 for any given class. For Caltech-256, we found 243 images in common (out of 30,607 total images), with a maximum overlap of 18 for any given class.

Caltech-101: We follow the procedure of [9] and randomly select 15 or 30 images per class for training and test on up to 50 images per class, reporting the average of the per-class accuracies in Table 3, using 5 train/test folds. Training took 17 minutes for 30 images/class. The pre-trained model beats the best reported result for 30 images/class from [3] by 2.2%. Our result agrees with the recently published result of Donahue et al. [7], who obtain 86.1% accuracy (30 imgs/class). The convnet model trained from scratch however does terribly, only achieving 46.5%, showing the impossibility of training a large convnet on such a small dataset.

Table 3. Caltech-101 classification accuracy for our convnet models, against two leading alternate approaches.

#Train                          15/class        30/class
Bo et al. [3]                      -            81.4 ± 0.33
Yang et al. [17]                  73.2             84.3
Non-pretrained convnet         22.8 ± 1.5       46.5 ± 1.7
ImageNet-pretrained convnet    83.8 ± 0.5       86.5 ± 0.5

Caltech-256: We follow the procedure of [11], selecting 15, 30, 45, or 60 training images per class, reporting the average of the per-class accuracies in Table 4. Our ImageNet-pretrained model beats the current state-of-the-art results obtained by Bo et al. [3] by a significant margin: 74.2% vs 55.2% for 60 training images/class. However, as with Caltech-101, the model trained from scratch does poorly. In Fig. 7, we explore the "one-shot learning" [9] regime. With our pre-trained model, just 6 Caltech-256 training images are needed to beat the leading method using 10 times as many images. This shows the power of the ImageNet feature extractor.

PASCAL 2012: We used the standard training and validation images to train a 20-way softmax on top of the ImageNet-pretrained convnet. This is not ideal, as PASCAL images can contain multiple objects and our model just provides a single exclusive prediction for each image. Table 5 shows the results on the test set, comparing to the leading methods: the top 2 entries in the competition and concurrent work from Oquab et al. [21] who use a convnet with a more appropriate classifier. The PASCAL and ImageNet images are quite different in nature, the former being full scenes unlike the latter. This may explain our mean performance being 3.2% lower than the leading competition result [27]; however, we do beat them on 5 classes, sometimes by large margins.
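The overlap check described above (identifying Caltech images that also appear in ImageNet via normalized correlation) can be sketched as follows. The threshold, the requirement that images be resized to a common resolution, and the brute-force pairwise loop are assumptions; the paper does not give these details.

```python
import numpy as np

def normalized_correlation(a, b):
    """Zero-mean, unit-norm correlation between two equally sized images."""
    a = a.astype(np.float64).ravel()
    b = b.astype(np.float64).ravel()
    a -= a.mean()
    b -= b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b) + 1e-12
    return float(np.dot(a, b) / denom)

def find_overlaps(caltech_images, imagenet_images, threshold=0.99):
    """Return indices of Caltech images that near-duplicate some ImageNet image."""
    overlaps = []
    for i, c in enumerate(caltech_images):
        if any(normalized_correlation(c, t) > threshold for t in imagenet_images):
            overlaps.append(i)
    return overlaps
```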
Table 4. Caltech-256 classification accuracies.

#Train                          15/class      30/class      45/class      60/class
Sohn et al. [24]                  35.1          42.1          45.7          47.9
Bo et al. [3]                  40.5 ± 0.4    48.0 ± 0.2    51.9 ± 0.2    55.2 ± 0.3
Non-pretrained convnet          9.0 ± 1.4    22.5 ± 0.7    31.2 ± 0.5    38.8 ± 1.4
ImageNet-pretrained convnet    65.7 ± 0.2    70.6 ± 0.2    72.7 ± 0.4    74.2 ± 0.3

Fig. 7. Caltech-256 classification performance as the number of training images per class is varied. Using only 6 training examples per class with our pre-trained feature extractor, we surpass the best reported result by Bo et al. [3].

Table 5. PASCAL 2012 classification results, comparing our ImageNet-pretrained convnet against the leading two methods and the recent approach of Oquab et al. [21].

Acc %           [22]    [27]    [21]    Ours
Airplane        92.0    97.3    94.6    96.0
Bicycle         74.2    84.2    82.9    77.1
Bird            73.0    80.8    88.2    88.4
Boat            77.5    85.3    60.3    85.5
Bottle          54.3    60.8    60.3    55.8
Bus             85.2    89.9    89.0    85.8
Car             81.9    86.8    84.4    78.6
Cat             76.4    89.3    90.7    91.2
Chair           65.2    75.4    72.1    65.0
Cow             63.2    77.8    86.8    74.4
Dining table    63.2    77.8    69.0    67.7
Dog             68.9    83.0    92.1    87.8
Horse           78.2    87.5    93.4    86.0
Motorbike       81.0    90.1    88.6    85.1
Person          91.6    95.0    96.1    90.9
Potted plant    55.9    57.8    64.3    52.2
Sheep           69.4    79.2    86.6    83.6
Sofa            65.4    73.4    62.3    61.1
Train           86.7    94.5    91.1    91.8
Tv              77.4    80.7    79.8    76.1
Mean            74.3    82.2    82.8    79.0
# won              0      11       6       3

5.3 Feature Analysis

We explore how discriminative the features in each layer of our ImageNet-pretrained model are. We do this by varying the number of layers retained from the ImageNet model and placing either a linear SVM or softmax classifier on top. Table 6 shows results on Caltech-101 and Caltech-256. For both datasets, a steady improvement can be seen as we ascend the model, with best results being obtained by using all layers. This supports the premise that as the feature hierarchies become deeper, they learn increasingly powerful features.

Table 6. Analysis of the discriminative information contained in each layer of feature maps within our ImageNet-pretrained convnet. We train either a linear SVM or softmax on features from different layers (as indicated in brackets) from the convnet. Higher layers generally produce more discriminative features.

                  Cal-101 (30/class)   Cal-256 (60/class)
SVM (1)              44.8 ± 0.7           24.6 ± 0.4
SVM (2)              66.2 ± 0.5           39.6 ± 0.3
SVM (3)              72.3 ± 0.4           46.0 ± 0.3
SVM (4)              76.6 ± 0.4           51.3 ± 0.1
SVM (5)              86.2 ± 0.8           65.6 ± 0.3
SVM (7)              85.5 ± 0.4           71.7 ± 0.2
Softmax (5)          82.9 ± 0.4           65.7 ± 0.5
Softmax (7)          85.4 ± 0.4           72.6 ± 0.1
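The layer-by-layer probes of Table 6 amount to training a linear classifier on fixed activations from a chosen layer and scoring it with the Caltech protocol (mean of per-class accuracies). The sketch below assumes the layer features have already been extracted and flattened by some hypothetical helper; the SVM regularization constant is an assumed value, not one reported in the paper.

```python
import numpy as np
from sklearn.svm import LinearSVC

def probe_layer(train_feats, train_labels, test_feats, test_labels):
    """Train a linear SVM on fixed layer-n features; report mean per-class accuracy."""
    clf = LinearSVC(C=1.0).fit(train_feats, train_labels)   # C is an assumption
    pred = clf.predict(test_feats)
    classes = np.unique(test_labels)
    per_class = [np.mean(pred[test_labels == c] == c) for c in classes]
    return float(np.mean(per_class))   # Caltech protocol: average of per-class accuracies
```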
6 Discussion

We explored large convolutional neural network models, trained for image classification, in a number of ways. First, we presented a novel way to visualize the activity within the model. This reveals the features to be far from random, uninterpretable patterns. Rather, they show many intuitively desirable properties such as compositionality, increasing invariance and class discrimination as we ascend the layers. We also showed how these visualizations can be used to identify problems with the model and so obtain better results, for example improving on Krizhevsky et al.'s [18] impressive ImageNet 2012 result. We then demonstrated through a series of occlusion experiments that the model, while trained for classification, is highly sensitive to local structure in the image and is not just using broad scene context. An ablation study on the model revealed that having a minimum depth to the network, rather than any individual section, is vital to the model's performance.

Finally, we showed how the ImageNet trained model can generalize well to other datasets. For Caltech-101 and Caltech-256, the datasets are similar enough that we can beat the best reported results, in the latter case by a significant margin. Our convnet model generalized less well to the PASCAL data, perhaps suffering from dataset bias [25], although it was still within 3.2% of the best reported result, despite no tuning for the task. For example, our performance might improve if a different loss function was used that permitted multiple objects per image. This would naturally enable the networks to tackle object detection as well.

Acknowledgments. The authors would like to thank Yann LeCun for helpful discussions and acknowledge support from NSERC, NSF grant #1116923 and Microsoft Research.

References

1. Bengio, Y., Lamblin, P., Popovici, D., Larochelle, H.: Greedy layer-wise training of deep networks. In: NIPS, pp. 153-160 (2007)
2. Berkes, P., Wiskott, L.: On the analysis and interpretation of inhomogeneous quadratic forms as receptive fields. Neural Computation (2006)
3. Bo, L., Ren, X., Fox, D.: Multipath sparse coding using hierarchical matching pursuit. In: CVPR (2013)
4. Ciresan, D.C., Meier, J., Schmidhuber, J.: Multi-column deep neural networks for image classification. In: CVPR (2012)
5. Dalal, N., Triggs, B.: Histograms of oriented gradients for pedestrian detection. In: CVPR (2005)
6. Deng, J., Dong, W., Socher, R., Li, L.J., Li, K., Fei-Fei, L.: ImageNet: A large-scale hierarchical image database. In: CVPR (2009)
7. Donahue, J., Jia, Y., Vinyals, O., Hoffman, J., Zhang, N., Tzeng, E., Darrell, T.: DeCAF: A deep convolutional activation feature for generic visual recognition. arXiv:1310.1531 (2013)
8. Erhan, D., Bengio, Y., Courville, A., Vincent, P.: Visualizing higher-layer features of a deep network. Technical report, University of Montreal (2009)
9. Fei-Fei, L., Fergus, R., Perona, P.: One-shot learning of object categories. IEEE Trans. PAMI (2006)
10. Girshick, R., Donahue, J., Darrell, T., Malik, J.: Rich feature hierarchies for accurate object detection and semantic segmentation. arXiv:1311.2524 (2014)
11. Griffin, G., Holub, A., Perona, P.: The Caltech-256. Caltech Technical Report (2006)
12. Gunji, N., Higuchi, T., Yasumoto, K., Muraoka, H., Ushiku, Y., Harada, T., Kuniyoshi, Y.: Classification entry. ImageNet Competition (2012)
13. Hinton, G.E., Osindero, S., Teh, Y.: A fast learning algorithm for deep belief nets. Neural Computation 18, 1527-1554 (2006)
14. Hinton, G.E., Srivastava, N., Krizhevsky, A., Sutskever, I., Salakhutdinov, R.R.: Improving neural networks by preventing co-adaptation of feature detectors. arXiv:1207.0580 (2012)
15. Howard, A.G.: Some improvements on deep convolutional neural network based image classification. arXiv:1312.5402 (2013)
16. Jarrett, K., Kavukcuoglu, K., Ranzato, M., LeCun, Y.: What is the best multi-stage architecture for object recognition? In: ICCV (2009)
17. Jianchao, Y., Kai, Y., Yihong, G., Thomas, H.: Linear spatial pyramid matching using sparse coding for image classification. In: CVPR (2009)
18. Krizhevsky, A., Sutskever, I., Hinton, G.: ImageNet classification with deep convolutional neural networks. In: NIPS (2012)
19. Le, Q.V., Ngiam, J., Chen, Z., Chia, D., Koh, P., Ng, A.Y.: Tiled convolutional neural networks. In: NIPS (2010)
20. LeCun, Y., Boser, B., Denker, J.S., Henderson, D., Howard, R.E., Hubbard, W., Jackel, L.D.: Backpropagation applied to handwritten zip code recognition. Neural Comput. 1(4), 541-551 (1989)
21. Oquab, M., Bottou, L., Laptev, I., Sivic, J.: Learning and transferring mid-level image representations using convolutional neural networks. In: CVPR (2014)
22. Sande, K., Uijlings, J., Snoek, C., Smeulders, A.: Hybrid coding for selective search. In: PASCAL VOC Classification Challenge 2012 (2012)
23. Simonyan, K., Vedaldi, A., Zisserman, A.: Deep inside convolutional networks: Visualising image classification models and saliency maps. arXiv:1312.6034v1 (2013)
24. Sohn, K., Jung, D., Lee, H., Hero III, A.: Efficient learning of sparse, distributed, convolutional feature representations for object recognition. In: ICCV (2011)
25. Torralba, A., Efros, A.A.: Unbiased look at dataset bias. In: CVPR (2011)
26. Vincent, P., Larochelle, H., Bengio, Y., Manzagol, P.A.: Extracting and composing robust features with denoising autoencoders. In: ICML, pp. 1096-1103 (2008)
27. Yan, S., Dong, J., Chen, Q., Song, Z., Pan, Y., Xia, W., Huang, Z., Hua, Y., Shen, S.: Generalized hierarchical matching for sub-category aware object classification. In: PASCAL VOC Classification Challenge 2012 (2012)
28. Zeiler, M.: Clarifai (2013), http://www.image-net.org/challenges/LSVRC/2013/results.php
29. Zeiler, M., Taylor, G., Fergus, R.: Adaptive deconvolutional networks for mid and high level feature learning. In: ICCV (2011)
80620441-0ab9-4038-9c50-6e7eae53a461
trentmkelly/LessWrong-43k
LessWrong
Meetup : [Cambridge] How To Do Everything Discussion article for the meetup : [Cambridge] How To Do Everything WHEN: 05 May 2013 02:00:00PM (-0400) WHERE: 21 Ames St, Cambridge, MA This meetup will focus on practical skills for organizing and motivating yourself to accomplish short-term goals. We’ll have a handful of speakers give brief (maybe 3-10 minute) talks on individual techniques, which we'll use as topics for short, focused group discussions. I expect this to last 45-60 minutes before we break up for general discussion until dinner. Topics so far: —The Getting Things Done method —Beeminder —Team productivity sessions If you have a useful technique to share, please do! Let me know and I'll add you to the roster. Cambridge/Boston-area Less Wrong meetups are on the first and third Sunday of every month at 2pm in the MIT Whitaker Building (21 Ames St, Bldg 56), room 180. Room number subject to change based on availability. Signs will be posted with the actual room number. Discussion article for the meetup : [Cambridge] How To Do Everything
e2efd7c4-efee-4b7b-842b-05f3679d0b9f
StampyAI/alignment-research-dataset/blogs
Blogs
dm_control: Software and Tasks for Continuous Control

#### Overview

A public colab notebook with a tutorial for dm\_control software is available [here](https://colab.sandbox.google.com/github/deepmind/dm_control/blob/master/tutorial.ipynb).

##### Infrastructure

* An autogenerated MuJoCo Python wrapper provides full access to the underlying engine.
* PyMJCF is a Document Object Model, wherein a hierarchy of Python *Entity* objects corresponds to MuJoCo model elements.
* Composer is the high-level “game engine” which streamlines the composing of Entities into scenes and the defining of observations, rewards, terminations and general game logic.
* The Locomotion framework introduces several abstract Composer entities such as the Arena and Walker, facilitating locomotion-like tasks.

##### Environments

* The [Control Suite](https://www.youtube.com/watch?v=rAai4QzcYbs), including a new [quadruped](https://www.youtube.com/watch?v=RhRLjbb7pBE&t=5s) and [dog](https://www.youtube.com/watch?v=i0_OjDil0Fg) environment.
* Several locomotion tasks, including soccer.
* Single arm robotic manipulation tasks using snap-together bricks.

#### Highlights

##### Named Indexing

Exploiting MuJoCo's support of *names* for all model elements, we allow strings to index and slice into arrays. So instead of writing:

"fingertip\_height = physics.data.geom\_xpos[7, 2]"

...using obscure, fragile numerical indexing, you can write:

"fingertip\_height = physics.named.data.geom\_xpos['fingertip', 'z']"

leading to a much more robust, readable codebase.

#### PyMJCF

The PyMJCF library creates a Python object hierarchy with 1:1 correspondence to a MuJoCo model. It introduces the attach() method which allows models to be attached to one another. For example, in our tutorial we create procedural multi-legged creatures by attaching legs to bodies and creatures to the scene.

##### Composer

Composer is the “game engine” framework, which defines a particular order of runtime function calls, and abstracts the affordances of *reward*, *termination* and *observation*. These abstractions allowed us to create useful submodules:

composer.Observable: An abstract observation wrapper which can add noise, delays, buffering and filtering to any sensor.

composer.Variation: A set of tools for randomising simulation quantities, allowing for agent robustification and sim-to-real via model variation.

![](https://assets-global.website-files.com/621e749a546b7592125f38ed/6231ee766a323a1793376c3b_diagram3.svg)

Diagram showing the life-cycle of Composer callbacks. Rounded rectangles represent callbacks that Tasks and Entities may implement. Blue rectangles represent built-in Composer operations.

#### Locomotion

The Locomotion framework introduced the abstractions:

Walker: A controllable entity with common locomotion-related methods, like projection of vectors into an egocentric frame.

Arena: A self-scaling randomised scene, in which the walker can be placed and given a task to perform.

For example, using just 4 function calls, we can instantiate a humanoid walker, a WallsCorridor arena and combine them in a RunThroughCorridor task.

#### New Control Suite domains

##### Quadruped

* A generic quadruped domain with a passively stable body.
* Several pure locomotion tasks (e.g. walk, run).
* An escape task requiring rough terrain navigation.
* A fetch task requiring ball dribbling.

##### Dog

* An elaborate model based on a skeleton commissioned from [leo3Dmodels](https://www.turbosquid.com/Search/Artists/leo3Dmodels).
* A challenging ball-fetching task that requires precision grasping with the mouth.

##### Showcase

A fast-paced montage of dm\_control based tasks from DeepMind:
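As a rough illustration of the PyMJCF attach() workflow described above, the sketch below builds a simple multi-legged creature by attaching leg models to a torso, loosely following the public tutorial. The specific geom sizes, joint settings and site placements are assumptions for illustration, not values from dm\_control itself.

```python
from dm_control import mjcf

def make_leg(length=0.3):
    # One-capsule leg built as its own MJCF model, so it can be attached repeatedly.
    leg = mjcf.RootElement(model='leg')
    thigh = leg.worldbody.add('body', name='thigh')
    thigh.add('joint', name='hip', type='hinge', axis=[0, 0, 1])
    thigh.add('geom', type='capsule', fromto=[0, 0, 0, length, 0, 0], size=[0.04])
    return leg

creature = mjcf.RootElement(model='creature')
torso = creature.worldbody.add('body', name='torso', pos=[0, 0, 0.15])
torso.add('geom', name='torso_geom', type='sphere', size=[0.1])

# attach() grafts each leg model into the creature at a spawn site.
for i in range(4):
    site = torso.add('site', name=f'leg_site_{i}', pos=[0.1 * (-1) ** i, 0.05 * i, 0])
    site.attach(make_leg())

# Compile and simulate the composite model.
physics = mjcf.Physics.from_mjcf_model(creature)
```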
91999e4e-364c-4107-b6e1-c8fab2d73482
trentmkelly/LessWrong-43k
LessWrong
'oy, girls on lw, want to get together some time?' 2:45:24 PM Katja Grace: The main thing that puts me off in online dating profiles is lack of ambition to save the world 2:45:35 PM Katja Grace: Or do anything much 2:48:03 PM Michael Blume: *nods* 2:48:07 PM Michael Blume: this is indeed a problem 2:57:55 PM Katja Grace: Maybe there is a dating site for smart ambitious nerds somewhere 2:58:25 PM Katja Grace: Need to set up lw extension perhaps 2:59:02 PM Michael Blume: haha, yes ^^ 3:00:40 PM Katja Grace: Plenty of discussion on why few girls, how to get girls, nobody ever says 'oy, girls on lw, want to get together some time?' 3:01:14 PM Michael Blume: somebody really should say that 3:01:34 PM Michael Blume: hell, I'm tempted to just copy that IM into a top-level post and click 'submit' 3:01:48 PM Katja Grace: Haha dare you to