| column | type | range |
| --- | --- | --- |
| id | stringlengths | 36–36 |
| source | stringclasses | 15 values |
| formatted_source | stringclasses | 13 values |
| text | stringlengths | 2–7.55M |
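A dataset with this schema can be read with the Hugging Face `datasets` library. A minimal sketch follows; the path `user/dataset-name` is a placeholder, since the actual dataset identifier is not given in this dump:

```python
# Minimal sketch of loading a dataset with the schema above via the
# Hugging Face `datasets` library. "user/dataset-name" is a placeholder;
# the real dataset identifier is not given in this dump.
from datasets import load_dataset

ds = load_dataset("user/dataset-name", split="train")

# Four string columns per row: id (a 36-character UUID), source,
# formatted_source, and text (from 2 characters up to ~7.55M).
for row in ds.select(range(3)):
    print(row["id"], row["formatted_source"], row["text"][:80])
```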
9a8db96f-cffe-4cc1-9f8e-32d380a19235
trentmkelly/LessWrong-43k
LessWrong
The Holy Algorithm

As it will surely not have escaped your notice, this weekend is Easter. Why now? The date of Easter is determined by a complicated process called the Computus Ecclesiasticus. I will just quote the Wikipedia page:

> The Easter cycle groups days into lunar months, which are either 29 or 30 days long. There is an exception. The month ending in March normally has thirty days, but if 29 February of a leap year falls within it, it contains 31. As these groups are based on the lunar cycle, over the long term the average month in the lunar calendar is a very good approximation of the synodic month, which is 29.53059 days long. There are 12 synodic months in a lunar year, totaling either 354 or 355 days. The lunar year is about 11 days shorter than the calendar year, which is either 365 or 366 days long. These days by which the solar year exceeds the lunar year are called epacts. It is necessary to add them to the day of the solar year to obtain the correct day in the lunar year. Whenever the epact reaches or exceeds 30, an extra intercalary month (or embolismic month) of 30 days must be inserted into the lunar calendar: then 30 must be subtracted from the epact.

If your thirst for knowledge is not satisfied, here is a 140-page document in Latin with more detail. As far as I understand, during the Roman era the Pope or one of his bureaucrats would perform the computus, then communicate the date to the rest of Christianity, and everybody could eat their chocolates at the same time. Then the Middle Ages happened and communication became much harder, so instead they came up with a formula so people could compute the date of Easter locally. Of course, the initial formulas had problems, with the date of Easter drifting dangerously later and later in the year over the centuries, and don't even get me started on calendar changes. Eventually Carl Friedrich Gauss entered the game and saved humanity once again with a computationally efficient algorithm (I am over-simplifying the story s
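For the curious, here is a minimal Python sketch of the Gregorian form of Gauss's Easter computation. This is my rendering of the well-known formula, not code from the post:

```python
def gauss_easter(year: int) -> tuple[int, int]:
    """Gauss's Easter algorithm for the Gregorian calendar.

    Returns (month, day) of Easter Sunday.
    """
    a = year % 19              # position in the 19-year Metonic cycle
    b, c = year % 4, year % 7  # leap-year and weekday cycles
    k = year // 100
    p = (13 + 8 * k) // 25     # correction for lunar-orbit drift
    q = k // 4                 # correction for dropped Gregorian leap days
    M = (15 - p + k - q) % 30
    N = (4 + k - q) % 7
    d = (19 * a + M) % 30      # days until the paschal full moon
    e = (2 * b + 4 * c + 6 * d + N) % 7  # days until the next Sunday
    # Exceptions that keep Easter no later than April 25:
    if d == 29 and e == 6:
        return (4, 19)
    if d == 28 and e == 6 and (11 * M + 11) % 30 < 19:
        return (4, 18)
    day = 22 + d + e           # counted from March 22
    return (3, day) if day <= 31 else (4, day - 31)

assert gauss_easter(2024) == (3, 31)  # Easter 2024: March 31
assert gauss_easter(2025) == (4, 20)  # Easter 2025: April 20
```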
5a74120f-f823-45ba-ba87-512501c55b70
trentmkelly/LessWrong-43k
LessWrong
A Comprehensive Mechanistic Interpretability Explainer & Glossary

This is a linkpost for a very long doc defining, explaining, and giving intuitions and conceptual frameworks for all the concepts I think you should know about when engaging with mechanistic interpretability. If you find the UI annoying, there's an HTML version here.

Why does this doc exist?

* The goal of this doc is to be a comprehensive glossary and explainer for Mechanistic Interpretability (focusing on transformer language models), the field of studying how to reverse engineer neural networks.
* There are a lot of complex terms and jargon in the field! And these are often scattered across various papers, which tend to be pretty well-written but not designed to be an introduction to the field as a whole. This doc aims to resolve some research debt and to be a canonical source for explaining concepts in the field.
* I try to go beyond just being a reference that gives definitions, and to actually dig into how to think about a concept. Why does it matter? Why should you care about it? What are the subtle implications and traps to bear in mind? What is the underlying intuition, and how does it fit into the rest of the field?
* I also go outside pure mechanistic interpretability and try to define what I see as the key terms in deep learning and in transformers, and how I think about them. If you want to reverse engineer a system, it's extremely useful to have a deep model of what's going on inside of it. What are the key components and moving parts, how do they fit together, and how could the model use them to express different algorithms?

How to read this doc?

* The first intended way is to use this as a reference. When reading papers, or otherwise exploring and learning about the field, come here to look up terms and try to understand them.
* The second intended way is to treat this as a map to the field. My hope is that if you're new to the field, you can just read through this doc from the top, get introduced to the key ideas, and b
fdca0d8b-f0b5-447d-b8c8-59ba7652dacb
StampyAI/alignment-research-dataset/lesswrong
LessWrong
Tyler Cowen's challenge to develop an 'actual mathematical model' for AI X-Risk

On the Russ Roberts EconTalk podcast #893, guest Tyler Cowen challenges Eliezer Yudkowsky and the LessWrong/EA alignment communities to develop a mathematical model for AI X-Risk.

**Will Tyler Cowen agree that an 'actual mathematical model' for AI X-Risk has been developed by October 15, 2023?**

<https://manifold.markets/JoeBrenton/will-tyler-cowen-agree-that-an-actu?r=Sm9lQnJlbnRvbg>

(This market resolves to "YES" if Tyler Cowen publicly acknowledges, by October 15 2023, that an actual mathematical model of AI X-Risk has been developed.)

Two excerpts from the conversation:

<https://youtube.com/clip/Ugkxtf8ZD3FSvs8TAM2lhqlWvRh7xo7bISkp>

> *...But, I mean, here would be my initial response to Eliezer. I've been inviting people who share his view simply to join the discourse. So, they have the sense, 'Oh, we've been writing up these concerns for 20 years and no one listens to us.' My view is quite different. I put out a call and asked a lot of people I know, well-informed people, '**Is there any actual mathematical model of this process of how the world is supposed to end?**'*
>
> *So, if you look, say, at COVID or climate change fears, in both cases, there are many models you can look at, including--and then models with data. I'm not saying you have to like those models. But the point is: there's something you look at and then you make up your mind whether or not you like those models; and then they're tested against data...*

<https://youtube.com/clip/Ugkx4msoNRn5ryBWhrIZS-oQml8NpStT_FEU>

> *...So, when it comes to AGI and existential risk, it turns out as best I can ascertain, **in the 20 years or so we've been talking about this seriously, there isn't a single model done**. Period. Flat out.*
>
> *So, I don't think any idea should be dismissed. I've just been inviting those individuals to actually join the discourse of science. '**Show us your models.** Let us see their assumptions and let's talk about those.'...*

**Related:**

**Will there be a funding commitment of at least $1 billion in 2023 to a program for mitigating AI risk?**
[https://manifold.markets/JoeBrenton/will-there-be-a-funding-commitment?r=Sm9lQnJlbnRvbg](https://manifold.markets/JoeBrenton/will-there-be-a-funding-commitment?r=Sm9lQnJlbnRvbg)

**Will the US government launch an effort in 2023 to augment human intelligence biologically in response to AI risk?**
[https://manifold.markets/JoeBrenton/will-the-us-government-launch-an-ef?r=Sm9lQnJlbnRvbg](https://manifold.markets/JoeBrenton/will-the-us-government-launch-an-ef?r=Sm9lQnJlbnRvbg)

**Will the general public in the United States become deeply concerned by LLM-facilitated scams by Aug 2 2023?**
[https://manifold.markets/JoeBrenton/will-the-general-public-in-the-unit?r=Sm9lQnJlbnRvbg](https://manifold.markets/JoeBrenton/will-the-general-public-in-the-unit?r=Sm9lQnJlbnRvbg)
bd14e1ad-06ed-409f-9860-f1ee0498c517
trentmkelly/LessWrong-43k
LessWrong
Kolmogorov's AI Forecast

Has anybody heard of this AI forecast that Kolmogorov apparently made? I heard of it in a panel with Maxim Kontsevich:

> MAXIM: Actually, Kolmogorov thought that mathematics will be extinct in 100 years, he had an estimate. He calculated the number of neurons and connections, he made the head something like one cubic meter. So yes, maybe a crazy estimate, but he was also thinking about natural boundaries.

I've been unable to track down a source. Does anyone (perhaps Russian-speaking readers?) know where to find one?
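The quote gives no figures, so here is a purely illustrative back-of-envelope reconstruction of that kind of estimate, using assumed modern numbers rather than anything Kolmogorov actually computed:

```python
# Illustrative back-of-envelope sketch only; these are assumed modern
# figures, not Kolmogorov's own numbers.
brain_volume_m3 = 1.4e-3     # human brain is roughly 1.4 liters
neurons_per_brain = 8.6e10   # ~86 billion neurons
synapses_per_neuron = 7e3    # order-of-magnitude connectivity

scale = 1.0 / brain_volume_m3  # scaling up to the "one cubic meter" head

print(f"neurons:     {scale * neurons_per_brain:.1e}")                        # ~6.1e13
print(f"connections: {scale * neurons_per_brain * synapses_per_neuron:.1e}")  # ~4.3e17
```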
6ca0ede4-a6ad-432e-b217-d1a59f9f7f38
StampyAI/alignment-research-dataset/special_docs
Other
Chapter 7
Why I Want to be a Posthuman when I Grow Up
Nick Bostrom
In: B. Gordijn, R. Chadwick (eds.), Medical Enhancement and Posthumanity, Springer, 2008.

I am apt to think, if we knew what it was to be an angel for one hour, we should return to this world, though it were to sit on the brightest throne in it, with vastly more loathing and reluctance than we would now descend into a loathsome dungeon or sepulchre.[1]
Berkeley (1685–1753)

7.1 Introduction

Extreme human enhancement could result in "posthuman" modes of being. After offering some definitions and conceptual clarification, I argue for two theses. First, some posthuman modes of being would be very worthwhile. Second, it could be very good for human beings to become posthuman.

7.2 Setting the Stage

The term "posthuman" has been used in very different senses by different authors.[2] I am sympathetic to the view that the word often causes more confusion than clarity, and that we might be better off replacing it with some alternative vocabulary. However, as the purpose of this paper is not to propose terminological reform but to argue for certain substantial normative theses (which one would naturally search for in the literature under the label "posthuman"), I will instead attempt to achieve intelligibility by clarifying the meaning that I shall assign to the word. Such terminological clarification is surely a minimum precondition for having a meaningful discussion about whether it might be good for us to become posthuman.

[1] Berkeley et al. 1897: 172.
[2] The definition used here follows in the spirit of Bostrom (2003). A completely different concept of "posthuman" is used in e.g. Hayles 1999.

I shall define a posthuman as a being that has at least one posthuman capacity. By a posthuman capacity, I mean a general central capacity greatly exceeding the maximum attainable by any current human being without recourse to new technological means. I will use general central capacity to refer to the following:

● Healthspan – the capacity to remain fully healthy, active, and productive, both mentally and physically
● Cognition – general intellectual capacities, such as memory, deductive and analogical reasoning, and attention, as well as special faculties such as the capacity to understand and appreciate music, humor, eroticism, narration, spirituality, mathematics, etc.
● Emotion – the capacity to enjoy life and to respond with appropriate affect to life situations and other people

In limiting my list of general central capacities to these three, I do not mean to imply that no other capacity is of fundamental importance to human or posthuman beings. Nor do I claim that the three capacities in the list are sharply distinct or independent. Aspects of emotion and cognition, for instance, clearly overlap. But this short list may give at least a rough idea of what I mean when I speak of posthumans, adequate for present purposes.

In this paper, I will be advancing two main theses. The first is that some possible posthuman modes of being would be very good. I emphasize that the claim is not that all possible posthuman modes of being would be good. Just as some possible human modes of being are wretched and horrible, so too are some of the posthuman possibilities. Yet it would be of interest if we can show that there are some posthuman possibilities that would be very good. We might then, for example, specifically aim to realize those possibilities.
The second thesis is that it could be very good for us to become posthuman. It is possible to think that it could be good to be posthuman without it being good for us to become posthuman. This second thesis thus goes beyond the first. When I say "good for us", I do not mean to insist that for every single current human individual there is some posthuman mode of being such that it would be good for that individual to become posthuman in that way. I confine myself to making a weaker claim that allows for exceptions. The claim is that for most current human beings, there are possible posthuman modes of being such that it could be good for these humans to become posthuman in one of those ways.

It might be worth locating the theses and arguments to be presented here within a broader discourse about the desirability of posthumanity. Opponents of posthumanity argue that we should not seek enhancements of a type that could make us, or our descendants, posthuman. We can distinguish at least five different "levels" on which objections against posthumanity could be launched (Table 7.1). This paper focuses on levels 3 and 4. I am thus setting aside issues of feasibility, costs, risks, side-effects, and social consequences. While those issues are obviously important when considering what we have most reason to do all things considered, they will not be addressed here.

Table 7.1 Levels of objection to posthumanity

Level 0. "It can't be done" – Objections based on empirical claims to the effect that it is, and will remain, impossible or infeasible to create posthumans.

Level 1. "It is too difficult/costly" – Objections based on empirical claims that attempts to transform humans into posthumans, or to create new posthuman beings, would be too risky, or too expensive, or too psychologically distracting. Concerns about medical side-effects fall into this category, as do concerns that resources devoted to the requisite research and treatment would be taken away from more important areas.

Level 2. "It would be too bad for society" – Objections based on empirical claims about social consequences that would follow from the successful creation of posthuman beings, for example concerns about social inequality, discrimination, or conflicts between humans and posthumans.

Level 3. "Posthuman lives would be worse than human lives" – Objections based on normative claims about the value of posthuman lives compared to human lives.

Level 4. "We couldn't benefit" – Objections based on agent-relative reasons against human beings transforming themselves into posthuman beings or against humans bringing new posthuman beings into existence. Although posthuman lives might be as good as or better than human lives, it would be bad for us to become posthuman or to create posthumans.

Some further terminological specifications are in order. By a mode of being I mean a set of capacities and other general parameters of life. A posthuman mode of being is one that includes at least one posthuman capacity. I shall speak of the value of particular modes of being. One might hold that the primary value-bearers are some entities other than modes of being, e.g. mental states, subjective experiences, activities, preference-satisfactions, achievements, or particular lives. Such views are consistent with this paper. The position I seek to defend is consistent with a wide variety of formal and substantive theories of value. I shall speak of the value of modes of being for the sake of simplicity and convenience, but in doing so I do not mean to express a commitment to any particular controversial theory of value.

We might interpret "the values" of modes of being as proxies for values that would be realized by particular lives instantiating the mode of being in question. If we proceed in this way, we create some indeterminacy. It is possible for a mode of being (and even more so for a class of modes of being) to be instantiated in a range of different possible lives, and for some of these lives to be good and others to be bad. In such a case, how could one assign a value to the mode of being itself? Another way of expressing this concern is by saying that the value of instantiating a particular mode of being is context-dependent. In one context, the value might be high; in another, it might be negative.

Nevertheless, it is useful to be able to speak of values of items other than those we accord basic intrinsic value. We might for example say that it is valuable to be in good health and to have some money. Yet neither having good health nor having some money is guaranteed to make a positive difference to the value of your life. There are contexts in which the opposite is true. For instance, it could be the case that because you had some money you got robbed and murdered, or that because you were always in rude health you lacked a particular (short, mild) disease experience that would have transformed your mediocre novel into an immortal masterpiece. Even so, we can say that health and money are good things without thereby implying that they are intrinsically valuable or that they add value in all possible contexts. When we say that they are valuable we might merely mean that these things would normally make a positive contribution to the value of your life; they would add value in a very wide range of plausible contexts. This mundane meaning is what I have in mind when I speak of modes of being having a value: i.e., in a very wide range of plausible contexts, lives instantiating that mode of being would tend to contain that value.[3]

[3] Compare this take on "mundane values" with the notion of mid-level principles in applied ethics. The principle of respecting patient autonomy is important in medical ethics. One might accept this if one holds that respect for patient autonomy is an implication of some fundamental ethical principle. But equally, one might accept patient autonomy as an important mid-level principle even if one merely holds that this is a way of expressing a useful rule of thumb, a sound policy rule, or a derived ethical rule that is true in a world like ours because of various empirical facts even though it is not necessarily true in all possible worlds. For the role of mid-level principles in applied ethics, see e.g. Beauchamp and Childress 2001.

A life might be good or bad because of its causal consequences for other people, or for the contribution it makes to the overall value of a society or a world. But here I shall focus on the value that a life has for the person whose life it is: how good (or bad) it is for the subject to have this life. The term "well-being" is often used in this sense.[4] When I speak of the value of a life here, I do not refer to the moral status of the person whose life it is. It is a separate question what the moral status would be of human and posthuman beings. We can assume for present purposes that human and posthuman persons would have the same moral status. The value of a life refers, rather, to how well a life goes for its subject. Different human lives go differently well, and in this sense their lives have different values. The life of a person who dies from a painful illness at age 15 after having lived in extreme poverty and social isolation is typically worse and has less value than that of a person who has an 80-year-long life full of joy, creativity, worthwhile achievements, friendships, and love. Whatever terminology we use to describe the difference, it is plain that the latter kind of life is more worth having. One way to express this platitude is by saying that the latter life is more valuable than the former.[5] This is consistent with assigning equal moral status to the two different persons whose lives are being compared.

[4] I am thus not concerned here with global evaluations into which individuals' well-being might enter as a factor, e.g. evaluations involving values of diversity, equality, or comparative fairness.

[5] I do not assume that the value of a life, or well-being, supervenes on the mental experiences of a person, nor that it supervenes on a thin time-slice of a person's life. It could represent a wider and more global evaluation of how well a person's life is going.
Some pairs of possible lives are so different that it is difficult – arguably impossible – to compare their value. We can leave aside the question of whether, for every pair of possible lives, it is true either that one is better than the other or that they are equally good; that is, whether all pairs of possible lives have commensurable value. We shall only assume that at least for some pairs of possible lives, one is definitely better than the other.

To supply our minds with a slightly more concrete image of what becoming posthuman might be like, let us consider a vignette of how such a process could unfold.

7.3 Becoming Posthuman

Let us suppose that you were to develop into a being that has posthuman healthspan and posthuman cognitive and emotional capacities. At the early steps of this process, you enjoy your enhanced capacities. You cherish your improved health: you feel stronger, more energetic, and more balanced. Your skin looks younger and is more elastic. A minor ailment in your knee is cured. You also discover a greater clarity of mind. You can concentrate on difficult material more easily and it begins making sense to you. You start seeing connections that eluded you before. You are astounded to realize how many beliefs you had been holding without ever really thinking about them or considering whether the evidence supports them. You can follow lines of thinking and intricate argumentation farther without losing your foothold. Your mind is able to recall facts, names, and concepts just when you need them. You are able to sprinkle your conversation with witty remarks and poignant anecdotes. Your friends remark on how much more fun you are to be around. Your experiences seem more vivid. When you listen to music you perceive layers of structure and a kind of musical logic to which you were previously oblivious; this gives you great joy. You continue to find the gossip magazines you used to read amusing, albeit in a different way than before; but you discover that you can get more out of reading Proust and Nature. You begin to treasure almost every moment of life; you go about your business with zest; and you feel a deeper warmth and affection for those you love, but you can still be upset and even angry on occasions where upset or anger is truly justified and constructive. As you yourself are changing, you may also begin to change the way you spend your time.
Instead of spending four hours each day watching television, you may now prefer to play the saxophone in a jazz band and to have fun working on your first novel. Instead of spending the weekends hanging out in the pub with your old buddies talking about football, you acquire new friends with whom you can discuss things that now seem to you to be of greater significance than sport. Together with some of these new friends, you set up a local chapter of an international non-profit to help draw attention to the plight of political prisoners. By any reasonable criteria, your life improves as you take these initial steps towards becoming posthuman. But thus far your capacities have improved only within the natural human range. You can still partake in human culture and find company to engage you in meaningful conversation.

Consider now a more advanced stage in the transformation process… You have just celebrated your 170th birthday and you feel stronger than ever. Each day is a joy. You have invented entirely new art forms, which exploit the new kinds of cognitive capacities and sensibilities you have developed. You still listen to music – music that is to Mozart what Mozart is to bad Muzak. You are communicating with your contemporaries using a language that has grown out of English over the past century and that has a vocabulary and expressive power that enables you to share and discuss thoughts and feelings that unaugmented humans could not even think or experience. You play a certain new kind of game which combines VR-mediated artistic expression, dance, humor, interpersonal dynamics, and various novel faculties and the emergent phenomena they make possible, and which is more fun than anything you ever did during the first 100 years of your existence. When you are playing this game with your friends, you feel how every fiber of your body and mind is stretched to its limit in the most creative and imaginative way, and you are creating new realms of abstract and concrete beauty that humans could never (concretely) dream of. You are always ready to feel with those who suffer misfortunes, and to work hard to help them get back on their feet. You are also involved in a large voluntary organization that works to reduce the suffering of animals in their natural environment in ways that permit ecologies to continue to function in traditional ways; this involves political efforts combined with advanced science and information processing services. Things are getting better, but already each day is fantastic.

As we seek to peer farther into posthumanity, our ability to concretely imagine what it might be like trails off. If, aside from extended healthspans, the essence of posthumanity is to be able to have thoughts and experiences that we cannot readily think or experience with our current capacities, then it is not surprising that our ability to imagine what posthuman life might be like is very limited. Yet we can at least perceive the outlines of some of the nearer shores of posthumanity, as we did in the imaginary scenario above. Hopefully such thought experiments are already enough to give plausibility to the claim that becoming posthuman could be good for us.
In the next three sections we will look in a little more detail at each of the three general central capacities that I listed in the introduction. I hope to show that the claim that it could be very good to be posthuman is not as radical as it might appear to some. In fact, we will find that individuals and society already in some ways seem to be implicitly placing a very high value on posthuman capacities – or at least, there are strong and widely accepted tendencies pointing that way. I therefore do not regard my claim as in any strong sense revisionary. On the contrary, I believe that the denial of my claim would be strongly revisionary in that it would force us to reject many commonly accepted ethical beliefs and approved behaviors. I see my position as a conservative extension of traditional ethics and values to accommodate the possibility of human enhancement through technological means.

7.4 Healthspan

It seems to me fairly obvious why one might have reason to desire to become a posthuman in the sense of having a greatly enhanced capacity to stay alive and stay healthy.[6] I suspect that the majority of humankind already has such a desire implicitly.

[6] Having such a capacity is compatible with also having the capacity to die at any desired age. One might thus desire a capacity for greatly extended healthspan even if one doubts that one would wish to live for more than, say, 80 years. A posthuman healthspan capacity would give one the option of much longer and healthier life, but one could at any point decide no longer to exercise the capacity.

People seek to extend their healthspan, i.e. to remain healthy, active, and productive. This is one reason why we install air bags in cars. It may also explain why we go to the doctor when we are sick, why higher salaries need to be paid to get workers to do physically dangerous work, and why governments and charities give money to medical research.[7] Instances of individuals sacrificing their lives for the sake of some other goal, whether suicide bombers, martyrs, or drug addicts, attract our attention precisely because their behavior is unusual. Heroic rescue workers who endanger their lives on a dangerous mission are admired because we assume that they are putting at risk something that most people would be very reluctant to risk, their own survival.

[7] Although on the last item, see Hanson (2000) for an alternative view.

For some three decades, economists have attempted to estimate individuals' preferences over mortality and morbidity risk in labor and product markets. While the tradeoff estimates vary considerably between studies, one recent meta-analysis puts the median value of a statistical life for prime-aged workers at about $7 million in the United States (Viscusi & Aldy 2003). A study by the EU's Environment Directorate-General recommends the use of a value in the interval €0.9–€3.5 million (Johansson 2002). Recent studies by health economists indicate that improvements in the health status of the U.S. population over the 20th century have made as large a contribution to raising the standard of living as all other forms of consumption growth combined (Murphy & Topel 2003; Nordhaus 2003). While the exact numbers are debatable, there is little doubt that most people place a very high value on their continued existence in a healthy state.
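To make the method behind such figures concrete, here is a toy sketch of how a value of a statistical life is backed out of labor-market data. The wage premium and risk numbers below are assumed for illustration and are not taken from the studies cited:

```python
# Toy value-of-statistical-life (VSL) calculation in the spirit of the
# wage-risk studies cited above. The inputs are assumed for illustration.
wage_premium_per_year = 700.0  # extra annual pay accepted for a riskier job ($)
extra_fatality_risk = 1e-4     # added annual probability of death on that job

# Across 10,000 such workers, one extra statistical death is expected,
# and together they are paid 10,000 * $700 = $7,000,000 to bear that risk.
vsl = wage_premium_per_year / extra_fatality_risk
print(f"Implied VSL: ${vsl:,.0f}")  # Implied VSL: $7,000,000
```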
Admittedly, a desire to extend one's healthspan is not necessarily a desire to become posthuman. To become posthuman by virtue of healthspan extension, one would need to achieve the capacity for a healthspan that greatly exceeds the maximum attainable by any current human being without recourse to new technological means. Since at least some human beings already manage to remain quite healthy, active, and productive until the age of 70, one would need to desire that one's healthspan be extended greatly beyond this age in order for that to count as having a desire to become posthuman.[8]

[8] At least one human, Jeanne Calment, lived to 122. But although she remained in relatively fair health until close to her death, she clearly suffered substantial decline in her physical (and presumably mental) vigor compared to when she was in her 20s. She did not retain the capacity to be fully healthy, active, and productive for 122 years.

Many people will, if asked about how long they would wish their lives to be, name a figure between 85 and 90 years (Cohen & Langer 2005). In many cases, no doubt, this is because they assume that a life significantly longer than that would be marred by deteriorating health – a factor from which we must abstract when considering the desirability of healthspan extension. People's stated willingness to pay to extend their life by a certain amount does in fact depend strongly on the health status and quality of that extra life (Johnson et al. 1998). Since life beyond 85 is very often beset by deteriorating health, it is possible that this figure substantially underestimates how long most people would wish to live if they could be guaranteed perfect health. It is also possible that a stated preference for a certain lifespan is hypocritical. Estimates based on revealed preferences in actual market choices, such as fatality risk premiums in labor markets or willingness to pay for health care and other forms of fatality risk reduction, might be more reliable. It would be interesting to know what fraction of those who claim to have no desire for healthspan extension would change their tune if they were ever actually handed a pill that would reliably achieve this effect. My conjecture would be that when presented with a real-world choice, most would choose the path of prolonged life, health, and youthful vigor over the default route of aging, disease, and death.

One survey asked: "Based on your own expectations of what old age is like, if it were up to you, how long would you personally like to live – to what age?" Only 27% of respondents said they would like to live to 100 or older (Cohen & Langer 2005). A later question in the same survey asked: "Imagine you could live to 100 or older, but you'd have to be very careful about your diet, exercise regularly, not smoke, avoid alcohol, and avoid stress. Would it be worth it, or not?" To this, 64% answered in the affirmative! Why should more people want to live beyond 100 when restrictions on activity are imposed? Is it because the latter question frames the issue more as if it were a real practical choice rather than an idle mind game? Perhaps when the question is framed as a mind game, respondents tend to answer in ways which they believe express culturally approved attitudes, or which they think signal socially desirable personal traits (such as having "come to terms" with one's own mortality), while this tendency is diminished when the framing suggests a practical choice with real consequences. We do not know for sure, but this kind of anomaly suggests that we should not take people's stated "preferences" about how long they would wish to live too seriously, and that revealed preferences might be a more reliable index of their guiding values.

It is also worth noting that only a small fraction of us commit suicide, suggesting that our desire to live is almost always stronger than our desire to die.[9] Our desire to live, conditional on our being able to enjoy full health, is even stronger. This presumption in favor of life is in fact so strong that if somebody wishes to die soon, even though they are seemingly fully healthy, with a long remaining healthy life expectancy, and if their external circumstances in life are not catastrophically wretched, we would often tend to suspect that they might be suffering from depression or some other mental pathology. Suicidal ideation is listed as a diagnostic symptom of depression by the American Psychiatric Association.[10]

[9] For some, the reluctance to commit suicide might reflect a desire not to kill oneself rather than a desire not to die, or alternatively a fear of death rather than an authentic preference not to die.

[10] DSM-IV (American Psychiatric Association 2000).

Even if a stated preference against healthspan extension were sincere, we would need to question how well considered and informed it is. It is of relevance that those who know most about the situation and are most directly affected by the choice, namely the elderly, usually prefer life to death. They usually do so when their health is poor, and overwhelmingly choose life when their health is at least fair. Now one can argue that a mentally intact 90-year-old is in a better position to judge how her life would be affected by living for another year than she was when she was 20, or 40. If most healthy and mentally intact 90-year-olds prefer to live for another year (at least if they could be guaranteed that this extra year would be one of full health and vigor), this would be evidence against the claim that it would be better for these people that their lives end at 90.[11] Similarly, of course, for people of even older age.

[11] This is a kind of Millian best-judge argument. However, if fear of death were irrational, one could argue that people who are closer to death are on average worse judges of the value for them of an extra year of life, because their judgments would tend to be more affected by irrational fear.

One can compare this situation with the different case of somebody becoming paraplegic. Many able-bodied people believe that their lives would not be worth living if they became paraplegic. They claim that they would prefer to die rather than continue life in a paraplegic state. Most people who have actually become paraplegic, however, find that their lives are worth living.[12] People who are paraplegic are typically better judges of whether paraplegic lives are worth continuing than are people who have never experienced what it is like to be paraplegic. Similarly, people who are 90 years old are in a better position to judge whether their lives are worth continuing than are younger people (including themselves at any earlier point in their lives).[13]

[12] This basic result is reflected in many chronic disease conditions (Ubel et al. 2003). The discrepancy of attitudes seems to be due to non-patients' failure to realize the extent to which patients psychologically adapt to their condition (Damschroder et al. 2005).

[13] The analogy with paraplegia is imperfect in at least one respect: when the issue is healthspan extension, we are considering whether it would be worth living an extended life in perfect health and vigor. If anything, this discrepancy strengthens the conclusion, since it is more worth continuing living in perfect health than in poor health, not less worth it.

One study assessed the will to live among 414 hospitalized patients aged 80–98 years, presumably representing the frailer end of the distribution of the "old old".
40.8% of respondents were unwilling to exchange any time in their current state of health for a shorter life in excellent health, and 27.8% were willing to give up at most 1 month of 12 in return for excellent health (Tsevat et al. 1998).[14] (On a standard time trade-off reading, the latter corresponds to valuing a year in their current state of health at roughly 11/12 ≈ 0.92 of a year in excellent health.) (Patients who were still alive one year later were even less inclined to give up life for better health, but with continued large individual variations in preferences.) The study also found that patients were willing to trade significantly less time for a healthy life than their surrogates assumed they would.

[14] See also McShine et al. 2000. For a methodological critique, see Arnesen and Norheim 2003.

Research shows that life-satisfaction remains relatively stable into old age. One survey of 60,000 adults from 40 nations discovered a slight upward trend in life-satisfaction from the 20s to the 80s in age (Diener & Suh 1998). Life satisfaction showed this upward trend even though there was some loss of positive affect. Perhaps life-satisfaction would be even higher if positive affect were improved (a possibility we shall discuss in a later section). Another study, using a cross-sectional sample (age range 70–103 years), found that controlling for functional health constraints reversed the direction of the relationship between age and positive affect and produced a negative association between age and negative affect (Kunzmann et al. 2000). These findings suggest that some dimensions of subjective well-being, such as life-satisfaction, do not decline with age but might actually increase somewhat, and that the decline in another dimension of subjective well-being (positive affect) is not due to aging per se but to health constraints.

Most people reveal through their behavior that they desire continued life and health,[15] and most of those who are in the best position to judge the value of continued healthy life, at any age, judge that it is worth having. This constitutes prima facie support for the claim that extended life is worth having even when it is not fully healthy. The fact that this holds true at all currently realized ages suggests that it is not a strongly revisionary view to hold that it could be good for many people to become posthuman through healthspan extension. Such a view might already be implicitly endorsed by many.

[15] This is fully consistent with the fact that many people knowingly engage in risky behaviors such as smoking. This might simply mean that they are unable to quit smoking, or that they desire the pleasure of smoking more than they desire a longer healthier life.

7.5 Cognition

People also seem to be keen on improving cognition. Who wouldn't want to remember names and faces better, to be able more quickly to grasp difficult abstract ideas, and to be able to "see connections" better? Who would seriously object to being able to appreciate music at a deeper level? The value of optimal cognitive functioning is so obvious that to elaborate the point may be unnecessary.[16]
It does not imply that they do not desire a longer healthier life.

[16] One might even argue that a desire for cognitive improvement is a constitutive element of human rationality, but I will not explore that hypothesis here.

This verdict is reflected in the vast resources that society allocates to education, which often explicitly aims not only to impart specific items of knowledge but also to improve general reasoning abilities, study skills, critical thinking, and problem-solving capacity.[17] Many people are also keen to develop various particular talents that they may happen to have, for example musical or mathematical, or to develop other specific faculties such as aesthetic appreciation, narration, humor, eroticism, spirituality, etc. We also reveal our desire for improving our cognitive functioning when we take a cup of coffee to increase our alertness or when we regret our failure to obtain a full night's sleep because of the detrimental effects on our intellectual performance.

[17] U.S. public expenditure on education in 2003 was 5.7% of its GDP (World Bank 2003).

Again, the fact that there is a common desire for cognitive improvement does not imply that there is a common desire for becoming posthuman. To want to become posthuman through cognitive improvement, one would have to want a great deal of cognitive improvement. It is logically possible that each person would only want to become slightly more intelligent (or musical, or humorous) than he or she currently is and would not want any very large gain. I will offer two considerations regarding this possibility.

First, it seems to me (based on anecdotal evidence and personal observations) that people who are already endowed with above-average cognitive capacities are at least as eager, and, from what I can tell, actually more eager to obtain further improvements in these capacities than are people who are less talented in these regards. For instance, someone who is musically gifted is likely to spend more time and effort trying to further develop her musical capacities than is somebody who lacks a musical ear; and likewise for other kinds of cognitive gifts.

This phenomenon may in part reflect the external rewards that often accrue to those who excel in some particular domain. An extremely gifted musician might reap greater rewards in terms of money and esteem from a slight further improvement in her musicality than would somebody who is not musically gifted to begin with. That is, the difference in external rewards is sometimes greater for somebody who goes from very high capacity to outstandingly high capacity than it is for somebody who goes from average capacity to moderately high capacity. However, I would speculate that such differences in external rewards are only part of the explanation, and that people who have high cognitive capacities are usually also more likely (or at least no less likely) to desire further increases in those capacities than are people of lower cognitive capacities, even when only the intrinsic benefits of capacities are considered.
Thus, if we imagine a group of people placed in solitary confinement for the remainder of their lives, but with access to books, musical instruments, paints and canvasses, and other prerequisites for the exercise of capacities, I would hypothesize that those with the highest pre-existing capacity in a given domain would be more likely (or at least not less likely) to work hard to further develop their capacities in that domain, for the sake of the intrinsic benefits that the possession and exercise of those capacities bestow, than would those with lower pre-existing capacities in the same domain.[18] While $100 brings vastly less utility to a millionaire than to a pauper, the marginal utility of improved cognitive capacities does not seem to exhibit a similar decline.

[18] Complication: if high capacity were solely a result of having spent a lot of effort in developing that capacity, then the people with high capacity in some domain might be precisely those that started out having an unusually strong desire for a strong capacity in that domain. It would then not be surprising that those with high capacity would have the strongest desire for further increases in capacity. Their stronger desire for higher capacity might then not be the result of more information and better acquaintance with what is at stake, but might instead simply reflect a prior inclination.

These considerations suggest that there are continuing returns in the "intrinsic" (in the sense of non-instrumental, non-positional) utility of gains in cognitive capacities, at least within the range of capacity that we find instantiated within the current human population.[19] It would be implausible to suppose that the current range of human capacity, in all domains, is such that while increments of capacity within this range are intrinsically rewarding, any further increases outside the current human range would lack intrinsic value. Again, we have a prima facie reason for concluding that enhancement of cognitive capacity to the highest current human level, and probably beyond that, perhaps up to and including the posthuman level, would be intrinsically desirable for the enhanced individuals. We get this conclusion if we assume that those who have a certain high capacity are generally better judges of the value of having that capacity, or of a further increment of that capacity, than are those who do not possess the capacity in question to the same degree.

[19] It would be more difficult to determine whether the marginal intrinsic utility of gains in capacity is constant, diminishing, or increasing at higher levels of capacity, and if so by what amount.

7.6 Emotion

It is straightforward to determine what would count as an enhancement of healthspan. We have a clear enough idea of what it means to be healthy, active, and productive, and the difference between this state and that of being sick, incapacitated, or dead. An enhancement of healthspan is simply an intervention that prolongs the duration of the former state.

It is more difficult to define precisely what would count as a cognitive enhancement, because the measure of cognitive functioning is more multifaceted, various cognitive capacities can interact in complex ways, and it is a more normatively complex problem to determine what combinations of particular cognitive competences are of value in different kinds of environments. For instance, it is not obvious what degree of tendency to forget certain kinds of facts and experiences is desirable. The answer might depend on a host of contextual factors. Nevertheless, we do have some general idea of how we might value various increments or decrements in many aspects of our cognitive functioning – a sufficiently clear idea, I suggest, to make it intelligible without much explanation what one might mean by phrases like "enhancing musical ability", "enhancing abstract reasoning ability", etc.
It is considerably more difficult to characterize what would count as emotional enhancement. Some instances are relatively straightforward. Most would readily agree that helping a person who suffers from persistent suicidal depression as the result of a simple neurochemical imbalance, so that she once again becomes capable of enjoyment and of taking an interest in life, would be to help her improve her emotional capacities. Yet beyond cases involving therapeutic interventions to cure evident psychopathology, it is less clear what would count as an enhancement. One's assessment of such cases often depends sensitively on the exact nature of one's normative beliefs about different kinds of possible emotional constitutions and personalities.

It is correspondingly difficult to say what would constitute a "posthuman" level of emotional capacity. Nevertheless, people often do strive to improve their emotional capacities and functioning. We may seek to reduce feelings of hate, contempt, or aggression when we consciously recognize that these feelings are prejudiced or unconstructive. We may take up meditation or physical exercise to achieve greater calm and composure. We may train ourselves to respond more sensitively and empathetically to those we deem deserving of our trust and affection. We may try to overcome fears and phobias that we recognize as irrational, or we may wrestle with appetites that threaten to distract us from what we value more. Many of us expend life-long effort to educate and ennoble our sentiments, to build our character, and to try to become better people. Through these strivings, we seek to achieve goals involving modifying and improving our emotional capacities. An appropriate conception of emotional capacity would be one that incorporates or reflects these kinds of goals, while perhaps allowing for there being a wide range of different ways of instantiating "high emotional capacity", that is to say, many different possible "characters" or combinations of propensities for feeling and reacting that could each count as excellent in its own way. If this is admitted, then we could make sense of emotional enhancement in a wide range of contexts, as being that which makes our emotional characters more excellent. A posthuman emotional capacity would be one which is much more excellent than that which any current human could achieve unaided by new technology.

One might perhaps question whether there are possible emotional capacities that would be much more excellent than those attainable now. Conceivably, there might be a maximum of possible excellence of emotional capacity, and those people who currently have the best emotional capacities might approach so closely to this ideal that there is not enough potential left for improvement to leave room for a posthuman realm of emotional capacity.
I doubt this, because aside from the potential for fine-tuning and balancing the various emotional sensibilities we already have, I think there might also be entirely new psychological states and emotions that our species has not evolved the neurological machinery to experience, and some of these sensibilities might be ones we would recognize as extremely valuable if we became acquainted with them.

It is difficult intuitively to understand what such novel emotions and mental states might be like. This is unsurprising, since by assumption we currently lack the required neurological bases. It might help to consider a parallel case from within the normal range of human experience. The experience of romantic love is something that many of us place a high value on. Yet it is notoriously difficult for a child or a prepubescent teenager to comprehend the meaning of romantic love or why adults should make so much fuss about this experience. Perhaps we are all currently in the situation of children relative to the emotions, passions, and mental states that posthuman beings could experience. We may have no idea of what we are missing out on until we attain posthuman emotional capacities.

One dimension of emotional capacity that we can imagine enhanced is subjective well-being and its various flavors: joy, comfort, sensual pleasures, fun, positive interest and excitement. Hedonists claim that pleasure is the only intrinsic good, but one need not be a hedonist to appreciate pleasure as one important component of the good. The difference between a bleak, cold, horrid painful world and one that is teeming with fun and exciting opportunities, full of delightful quirks and lovely sensations, is often simply a difference in the hedonic tone of the observer. Much depends on that one parameter.

It is an interesting question how much subjective well-being could be enhanced without sacrificing other capacities that we may value. For human beings as we are currently constituted, there is perhaps an upper limit to the degree of subjective well-being that we can experience without succumbing to mania or some other mental imbalance that would prevent us from fully engaging with the world if the state were indefinitely prolonged. But it might be possible for differently constituted minds to have experiences more blissful than those that humans are capable of, without thereby impairing their ability to respond adequately to their surroundings. Maybe for such beings, gradients of pleasure could play a role analogous to that which the scale ranging between pleasure and pain has for us (Pearce 2004). When thinking about the possibility of posthumanly happy beings, and their psychological properties, one must abstract from contingent features of the human psyche. An experience that would consume us might perhaps be merely "spicy" to a posthuman mind.

It is not necessary here to take a firm stand on whether posthuman levels of pleasure are possible, or even on whether posthuman emotional capacities more generally are possible. But we can be confident that, at least, there is vast scope for improvement for most individuals in these dimensions, because even within the range instantiated by currently existing humans, there are levels of emotional capacity and degrees of subjective well-being that, for most of us, are practically unattainable to the point of exceeding our dreams.
The fact that such improvements are eagerly sought by many suggests that if posthuman levels were possible, they too would be viewed as highly attractive.[20]

[20] The quest for subjective well-being, in particular, seems to be a powerful motivator for billions of people, even though arguably none of the various means that have been attempted in this quest has yet proved very efficacious in securing the goal (Brickman & Campbell 1971).

7.7 Structure of the Argument, and Further Supporting Reasons

It might be useful to pause briefly to reflect on the structure of the argument presented so far. I began by listing three general central capacities (healthspan, cognition, and emotion), and I defined a posthuman being as one who has at least one of these capacities in a degree unattainable by any current human being unaided by new technology. I offered some plausibility arguments suggesting that it could be highly desirable to have posthuman levels of these capacities. I did this partly by clarifying what having the capacities would encompass and by explaining how some possible objections would not apply because they rely on a misunderstanding of what is proposed. Furthermore, I tried to show that for each of the three capacities we find that many individuals actually desire to develop the capacities to higher levels and often undertake great effort and expense to achieve these aims. This desire is also reflected in social spending priorities, which devote significant resources to e.g. healthspan-extending medicine and cognition-improving education. Significantly, at least in the cases of healthspan extension and cognitive improvement, the persons best placed to judge the value and desirability of incremental improvements at the high end of the contemporary human capacity distribution seem to be especially likely to affirm the desirability of such additional improvements of capacity. For many cognitive faculties, it appears that the marginal utility of improvements increases with capacity levels. This suggests that improvements beyond the current human range would also be viewed as desirable when evaluated by beings in a better position to judge than we currently are.

That people desire X does not imply that X is desirable. Nor does the fact that people find X desirable, even when this judgment is shared among those who are in the best position to judge the desirability of X, prove that X is desirable or valuable. Even if one were to assume some version of a dispositional theory of value, it does not follow from these premises that X is valuable. A dispositional theory of value might assert something like the following: X is valuable for A if and only if A would value X if A were perfectly rational, perfectly well-informed, and perfectly acquainted with X.[21] The people currently best placed to judge the desirability for an individual of enhancement of her general central capacities are neither perfectly rational, nor perfectly well-informed, nor perfectly acquainted with the full meaning of such enhancements. If these people were more rational, or obtained more information, or became better acquainted with the enhancements in question, they would perhaps no longer value the enhancements. Even if everybody judged becoming posthuman as desirable, it is a logical possibility that becoming posthuman is not valuable, even given a theory of value that defines value in terms of valuing-dispositions.

[21] See e.g. Lewis 1989.
The argument presented in the preceding sections is not meant to be deductive. Its ambition is more modest: to remind us of the plausibility of the view that (1) enhancements along the three dimensions discussed are possible in principle and of significant potential intrinsic value, and (2) enhancements along these dimensions large enough to produce posthuman beings could have very great intrinsic value. This argument is defeasible. One way in which it could be defeated would be by pointing to further information, rational reasoning, or forms of acquaintance, not accounted for by current opinion, which would change current opinion if it were incorporated. Critics could for example try to point to some reasoning mistake that very old people commit when they judge that it would be good for them to live another year in perfect health. However, I think the considerations I have pointed to provide prima facie evidence for my conclusions.

There are other routes by which one could reach the position that I have advocated, and which support the above arguments. For instance, one might introspect one's own mind to determine whether being able to continue to live in good health longer, being able better to understand the world and other people, or being able more fully to enjoy life and to react with appropriate affect to life events would seem like worthwhile goals for oneself if they were obtainable (see e.g. Bostrom 2005). Alternatively, one might examine whether having these capacities to an enhanced or even posthuman degree could enable one to realize states and life paths that would have great value according to one's favorite theory of value. (To me, both these tests deliver affirmative verdicts on (1) and (2).)

Yet another route to making the foregoing conclusions plausible is by considering our current ignorance and the vastness of the as-yet unexplored terrain. Let S_H be the "space" of possible modes of being that could be instantiated by someone with current human capacities. Let S_P be the space of possible modes of being that could be instantiated by someone with posthuman capacities. In an intuitive sense, S_P is enormously much larger than S_H. There is a larger range of possible life courses that could be lived out during a posthuman lifespan than during a human lifespan. There are more thoughts that could be thought with posthuman cognitive capacities than with human capacities (and more musical structures that could be created and appreciated with posthuman musical capacities, etc.). There are more mental states and emotions that could be experienced with posthuman emotional faculties than with human ones. So why, apart from a lack of imagination, should anybody suppose that S_H already contains all the most valuable and worthwhile modes of being?

An analogy: For as long as anybody remembers, a tribe has lived in a certain deep and narrow valley. They rarely think of what lies outside their village, and on the few occasions when they do, they think of it only as a mythical realm. One day a sage who has been living apart from the rest, on the mountainside, comes down to the village. He explains that he has climbed to the top of the mountain ridge and from there he could see the terrain stretching far away, all the way to the horizon. He saw plains, lakes, forests, winding rivers, mountains, and the sea. Would it not be reasonable, he says, in lieu of further exploration, to suppose that this vast space is likely to be home to natural resources of enormous value?
Similarly, the sheer size and diversity of SP is in itself a prima facie reason for thinking that it is likely to contain some very great values (Bostrom 2004).

7.8 Personal Identity

Supposing the previous sections have succeeded in making it plausible that being a posthuman could be good, we can now turn to a further question: whether becoming posthuman could be good for us. It may be good to be Joseph Haydn. Let us suppose that Joseph Haydn had a better life than Joe Bloggs so that in some sense it is better to be Haydn, living the life that Haydn lived, than to be Bloggs, living Bloggs’ life. We may further suppose that this is so from Bloggs’ evaluative standpoint. Bloggs might recognize that on all the objective criteria which he thinks make for a better mode of being and a better life, Haydn’s mode of being and life are better than his own. Yet it does not follow that it would be good for Bloggs to “become” Haydn (or to become some kind of future equivalent of Haydn) or to live Haydn’s life (or a Haydn-like life). There are several possible reasons for this which we need to examine.

First, it might not be possible for Bloggs to become Haydn without ceasing to be Bloggs. While we can imagine a thought experiment in which Bloggs’ body and mind are gradually transformed into those of Haydn (or of a Haydn-equivalent), it is not at all clear that personal identity could be preserved through such a transformation. If Bloggs’ personal identity is essentially constituted by some core set of psychological features such as his memories and dispositions, then, since Haydn does not have these features, the person Bloggs could not become a Haydn-equivalent. Supposing that Bloggs has a life that is worth living, any transformation that causes the person Bloggs to cease to exist might be bad for Bloggs, including one that transforms him into Haydn.

Could a current human become posthuman while remaining the same person, or is the case like that of Bloggs becoming Haydn, with the person Bloggs necessarily ceasing to exist in the process? The case of becoming posthuman is different in an important respect. Bloggs would have to lose all the psychological characteristics that made him the person Bloggs in order to become Haydn. In particular, he would have to lose all his memories, his goals, his unique skills, and his entire personality would be obliterated and replaced by that of Haydn. By contrast, a human being could retain her memories, her goals, her unique skills, and many important aspects of her personality even as she becomes posthuman. This could make it possible for personal identity to be preserved during the transformation into posthuman.22

22 See also DeGrazia 2005. DeGrazia argues that identity-related challenges to human enhancement largely fail, both ones based on considerations of personal identity and ones based on narrative identity (authenticity), although he mainly discusses more moderate enhancements than those I focus on in this paper.

It is obvious that personal identity could be preserved, at least in the short run, if posthuman status is achieved through radical healthspan enhancement. Suppose that I learnt that tonight after I go to bed, a scientist would perform some kind of molecular therapy on my cells while I’m sleeping to permanently disable the aging processes in my body. I might worry that I would not wake up tomorrow because the surgery might go wrong.
I would not worry that I might not wake up tomorrow because the surgery succeeded. Healthspan enhancement would help preserve my personal identity. (If the psychological shock of discovering that my life-expectancy had been extended to a thousand years were so tremendous that it would completely remold my psyche, it is possible that the new me would not be the same person as the old me. But this is not a necessary consequence.23)

23 It is not even a psychologically plausible consequence even within the limitations of current human psychology. Compare the case to that of a man on death row who has a remaining life-expectancy of 1 day. An unexpected pardon suddenly extends this to 40 years – an extension by a factor of 14,610! He might be delighted, stunned, or confused, but he does not cease to exist as a person. If he did, it would presumably be bad for him to be pardoned. Even if one believed (erroneously in my view) that mortality or aging were somehow essential features of the persons we are, these features are consistent with vastly extended healthspan.

Walter Glannon has argued that a lifespan of 200 years or more would be undesirable because personal identity could not be preserved over such a long life (Glannon 2002). Glannon’s argument presupposes that personal identity (understood here as a determinant of our prudential concerns) depends on psychological connectedness. On this view, we now have prudential interests in a future time segment of our organism only if that future time segment is psychologically connected to the organism’s present time segment through links of backward-looking memories and forward-looking projects and intentions. If a future time segment of my brain will not remember anything about what things are like for me now, and if I now have no projects or intentions that extend that far into the future, then that future time segment is not part of my person. Glannon asserts that these psychological connections that hold us together as persons could not extend over 200 years or so.

There are several problems with Glannon’s argument, even if we accept his metaphysics of personal identity. There is no reason to think it impossible to have intentions and projects that range over more than 200 years. This would seem possible even with our current human capacities. For example, I can easily conceive of exciting intellectual and practical projects that may take me many hundreds of years to complete. It is also dubious to assume that a healthy future self several hundred years older than I am now would be unable to remember things from my current life stage. Old people often remember their early adulthood quite well, and it is not clear that these memories always decline significantly over time. And of course, the concern about distant future stages being unable to remember their earlier stages disappears completely if we suppose that enhancements of memory capacity become available.24 Furthermore, if Glannon were right, it would follow that it is “undesirable” for a small child to grow up, since adults do not remember what it was like to be a small child and since small children do not have projects or intentions that extend over time spans as long as decades. This implication would be counterintuitive.

24 It is clear that in order for an extremely long life to not become either static or self-repeating, it would be necessary that mental growth continues.
It is more plausible that it can be desirable for an agent to survive and continue to develop, rather than to die, even if psychological connections eventually become attenuated. In the same way, it could be desirable for us to acquire the capacity to have a posthuman healthy lifespan, even if we could not remain the same person over time scales of several centuries.

The case that personal identity could be preserved is perhaps less clear-cut with regard to radical cognitive or emotional enhancement. Could a person become radically smarter, more musical, or come to possess much greater emotional capacities without ceasing to exist? Here the answer might depend more sensitively on precisely which changes we are envisaging, how those changes would be implemented, and on how the enhanced capacities would be used. The case for thinking that both personal identity and narrative identity would be preserved is arguably strongest if we posit that (a) the changes are in the form of addition of new capacities or enhancement of old ones, without sacrifice of preexisting capacities; (b) the changes are implemented gradually over an extended period of time; (c) each step of the transformation process is freely and competently chosen by the subject; (d) the new capacities do not prevent the preexisting capacities from being periodically exercised; (e) the subject retains her old memories and many of her basic desires and dispositions; (f) the subject retains many of her old personal relationships and social connections; and (g) the transformation fits into the life narrative and self-conception of the subject. Posthuman cognitive and emotional capacities could in principle be acquired in such a way that these conditions are satisfied.

Even if not all the conditions (a)–(g) were fully satisfied in some particular transformation process, the normatively relevant elements of a person’s (numerical or narrative) identity could still be sufficiently preserved to avoid raising any fundamental identity-based objection to the prudentiality of undergoing such a transformation. We should not use a stricter standard for technological self-transformation than for other kinds of human transformation, such as migration, career change, or religious conversion.

Consider again a familiar case of radical human transformation: maturation. You currently possess vastly greater cognitive capacities than you did as an infant. You have also lost some capacities, e.g. the ability to learn to speak a new language without an accent. Your emotional capacities have also changed and developed considerably since your babyhood. For each concept of identity which we might think has relevant normative significance – personal (numerical) identity, narrative identity, identity of personal character, or identity of core characteristics – we should ask whether identity in that sense has been preserved in this transformation. The answer may depend on exactly how we understand these ideas of identity. For each of them, on a sufficiently generous conception of the identity criteria, identity was completely or in large part preserved through your maturation. But then we would expect that identity in that sense would also be preserved in many other transformations, including ones that are no more profound than that of a child growing into an adult; and this would include transformations that would make you posthuman.
Alternatively, we might adopt conceptions that impose more stringent criteria for the preservation of identity. On these conceptions, it might be impossible to become posthuman without wholly or in large part disrupting one form of identity or another. However, on such restrictive conceptions, identity would also be disrupted in the transformation of child into adult. Yet we do not think it is bad for a child to grow up. Disruptions of identity in those stringent senses form part of a normal life experience and they do not constitute a disaster, or a misfortune of any kind, for the individual concerned. Why then should it be bad for a person to continue to develop so that she one day matures into a being with posthuman capacities? Surely it is the other way around. If this had been our usual path of development, we would have easily recognized the failure to develop into a posthuman as a misfortune, just as we now see it as a misfortune for a child to fail to develop normal adult capacities. Many people who hold religious beliefs are already accustomed to the prospect of an extremely radical transformation into a kind of posthuman being, which is expected to take place after the termination of their current physical incarnation. Most of those who hold such a view also hold that the transformation could be very good for the person who is transformed.

7.9 Commitments

Apart from the concern about personal identity, there is a second kind of reason why it might be bad for a Bloggs to become a Haydn. Bloggs might be involved in various projects and relationships, and may have undertaken commitments that he could not or would not fulfill if he became Haydn. It would be bad for Bloggs to fail in these undertakings if they are important to him. For example, suppose that Mr. Bloggs is deeply committed to Mrs. Bloggs. His commitment to Mrs. Bloggs is so strong that he would never want to do anything that contravenes any of Mrs. Bloggs’ most central preferences, and one of her central preferences is that Mr. Bloggs not become posthuman. In this case, even though becoming posthuman might in some respects be good for Mr. Bloggs (it would enable him to understand more, or to stay healthy longer, etc.) it might nevertheless be bad for him all things considered as it would be incompatible with fulfilling one of the commitments that are most important to him.25

25 We may include under this rubric any “commitments to himself” that Mr. Bloggs might have. For example, if he has a firm and well-considered desire not to become posthuman, or if he has solemnly sworn to himself never to develop any posthuman capacities, then it could perhaps on grounds of these earlier desires or commitments be bad for Mr. Bloggs to become posthuman.

This reason for thinking that it might be bad for a person to become posthuman relies on the assumption that it can be very bad for a person to forfeit on commitments that would be impossible to fulfill as a posthuman.26 Even if we grant this assumption, it does not follow that becoming a posthuman would necessarily be bad for us. We do not generally have commitments that would be impossible to fulfill as posthumans. It may be impossible for Mr. Bloggs to become posthuman without violating his most important commitment (unless, of course, Mrs. Bloggs should change her mind), but his is a special case. Some humans do not have any commitments of importance comparable to that of Mr. Bloggs to his wife.
For such people the present concern does not apply. But even for many humans who do have such strong commitments, becoming posthuman could still be good for them. Their commitments could still be fulfilled if they became posthuman. This is perhaps clearest in regard to our commitments to projects and tasks: most of these we could complete – indeed we could complete them better and more reliably – if we obtained posthuman capacities. But even with regard to our specific commitments to people, it would often be possible to fulfill these even if we had much longer healthspans or greatly enhanced cognitive or emotional capacities.

26 One may also hold that a person in Mr. Bloggs’ situation has additional reasons for not becoming posthuman that don’t rely on it being worse for him to become posthuman. For instance, he might have moral reasons not to become posthuman even if it would be good for him to become one. Here I am concerned with the question whether it would necessarily be bad for Bloggs to become posthuman, so any moral reasons he might have for declining the transition would only be relevant insofar as they would make the outcome worse for Mr. Bloggs.

7.10 Ways of Life

In addition to concerns about personal identity and specific commitments to people or projects, there is a third kind of reason one might have for doubting that it could be good for us to become posthuman. This third kind of reason has to do with our interpersonal relations more broadly, and with the way that the good for a person can be tied to the general circumstances and conditions in which she lives. One might think that the very concept of a good life for a human being is inextricably wound up in the idea of flourishing within a “way of life” – a matrix of beliefs, relationships, social roles, obligations, habits, projects, and psychological attributes outside of which the idea of a “better” or “worse” life or mode of being does not make sense.

The reasoning may go something like this: It would not be good for a clover to grow into a rhododendron, nor for a fly to start looking and behaving like a raven. Neither would it, on this view, be good for a human to acquire posthuman capacities and start living a posthuman life. The criterion for how well a clover is doing is the extent to which it is succeeding in realizing its own particular nature and achieving the natural “telos” inherent in the clover kind; and the equivalent might be said of the fly. For humans, the case may be more complicated as there is a greater degree of relevant individual variation among humans than among other species. Different humans are pursuing different “ways of life”, so that what counts as flourishing for one human being might differ substantially from what counts as such for another. Nevertheless, as we are all currently pursuing human ways of life, and since what is good for us is defined by reference to our way of life, it is not the case for any human that it would be good for her to become posthuman. It might be good for posthumans to be posthumans, but it would not be good for humans to become posthuman.

This third concern seems to be a conglomerate of the two concerns we have already discussed. Why could it not be good for a human to become posthuman? One possible reason is if her personal identity could not be preserved through such a transformation. The comparison with the clover appears to hint at this concern.
If a clover turned into a rhododendron, then the clover would presumably cease to exist in the process. If a fly started looking and behaving like a raven, would it still be a fly? So part of what is going on here seems to be the assertion that the relevant form of identity could not be preserved in the transformations in question. But we have already addressed this concern insofar as it pertains to humans becoming posthuman.

There might be more at stake with this third concern than identity. The problem with a clover becoming a rhododendron is not just that the clover might cease to exist in the process, but that it seems a mistake to think that being a rhododendron is in any sense better than being a clover. There might be external criteria of evaluation (such as economic or aesthetic value to the human owner) according to which a rhododendron is better or more valuable than a clover. But aside from such extrinsic considerations, the two plants seem to be on a par: a thriving clover thrives just as much as a thriving rhododendron, so if the good for a plant is to thrive then neither kind is inherently better off or has a greater potential for realizing a good life than the other. Our challenger could claim that the same holds vis-à-vis a human and a posthuman.

I think the analogy is misleading. People are not plants, and the concept of a valuable mode of being for a person is fundamentally different from that of the state of flourishing for a plant. In a metaphorical sense we can ascribe interests to plants and other non-sentient objects: this clover “could use” some water; that clock “needs” winding up; the squeaky wheel “would benefit” from a few drops of oil. Defining interests relative to a functionalist basis might be the only way we can make sense of these attributions. The function of the clock is to indicate the time, and without being wound up the clock would fail to execute this function; thus it “needs” to be wound up. Yet sentient beings may have interests not only in a metaphorical sense, based on their function, but in a quite literal sense as well, based on what would be normatively good for them. A human being, for example, might have interests that are defined (partially) in terms of what she is actually interested in, or would be interested in given certain conditions.27 So from the fact that we could not make sense of the claim that it would be good for a clover to become a rhododendron, it does not follow that we would similarly be unable to make sense of the claim that it would be good for a human to become a posthuman. Even if the successful execution of “the function” of a human were not facilitated by becoming posthuman, there would be other grounds on which one could sensibly attribute to a human an interest in becoming posthuman.

27 Compare the dispositional theories of value, discussed above.

It is at any rate highly problematic whether something as complex and autonomous as a human being has any kind of well-defined “function”. The problem remains even if we relativize the function to particular ways of life or particular individuals. We might say that the function of the farmer is to farm, and that of the singer is to sing, etc. But any particular farmer is a host of other things as well: e.g. a singer, a mother, a sister, a homeowner, a driver, a television watcher, and so forth ad infinitum.
Once she might have been a hairdresser; in the future she might become a shopkeeper, a golfer, a person with a disability, a transsexual, or a posthuman. It is difficult to see how any strong normative conclusions could be drawn from the fact that she currently occupies a certain set of roles and serves a certain set of functions. At most we could conclude that when and insofar as she acts as a farmer, she ought to tend to her crops or livestock; but from the fact that she is a farmer, nothing follows about whether she ought to be or remain a farmer. Likewise, the most we could conclude from the fact that she is currently a human person is that she ought to do things that are good for humans – brush her teeth, sleep, eat, etc. – but only so long as she remains human. If she became a posthuman who did not need to sleep, she would no longer have any reason to sleep. And the fact that she currently has a reason to sleep is not a reason for her not to become a sleepless posthuman.

At this point, an objector could attempt an alternative line of argumentation. Maybe there are some crucial interests that we have that are not conditional on us occupying particular social roles or having particular personal characteristics or serving particular functions. These interests would be unlike our interest in sleep, which does not provide us with a reason not to change in such a way that we no longer need to sleep. Rather, these unconditional (“categorical”) interests would be such as to give us reason not to change in ways that would make us no longer have those interests. I have already admitted that individuals can have such interests, and in some cases this might make it the case for some possible individuals that it would not be good for them to become posthuman. I discussed this above as the “second concern”. This is not a problem for my position since it is compatible with it being true for other individuals (and perhaps for the overwhelming majority or even all actual human persons) that it could be good for them to become posthuman. But our hypothetical objector might argue that there are certain categorical interests we all have qua humans. These interests would somehow derive from human nature and from the natural ends and ideals of flourishing inherent in this essential nature. Might not the existence of such universally shared categorical human interests invalidate the thesis that it could be good for us to become posthuman?

7.11 Human Nature

Let us consider two different candidate ideas of what a human “telos” might be. If we seek a telos for human individuals within a naturalistic outlook, one salient candidate would be the maximization of that individual’s inclusive fitness. Arguably, the most natural way to apply a functional characterization of a human individual from an evolutionary perspective is as an inclusive fitness maximizer (tuned for life in our ancestral environment). From this perspective, our legs, our lungs, our sense of humor, our parental instincts, our sex drive and romantic propensities subserve the ultimate function of promoting the inclusive fitness of an individual. Now if we define the telos of a human individual in this way, as a vehicle for the effective promulgation of her genes, then many of the seemingly most attractive posthuman possibilities would be inconsistent with our successfully realizing this alleged telos, in particular those possibilities that involve radical alteration of the human genome.
(Replacing our genes with other genes does not seem to be an effective way to promulgate the genes we have.) As a conception of the human good, however, the telos of maximizing inclusive fitness is singularly lacking in plausibility. I do not know of any moral philosopher who advocates such a view. It is too obvious that what is good for a person can, and usually does, diverge from what would maximize that person’s inclusive fitness.28

28 For example, for a contemporary man the life plan that would maximize inclusive fitness might be to simply donate as much sperm to fertility clinics as possible.

Those who attempt to derive a theory of the human good from the telos inherent in a conception of human functioning will need to start from some conception of human functioning other than the evolutionary one. One starting point that has had more appeal is the doctrine that a human being is essentially a rational animal and that the human telos is the development and exercise of our rational faculties. Views of this sort have a distinguished pedigree that can be traced back at least to Aristotle. Whatever the merits of this view, however, it is plainly not a promising objection to the claims I advance in this paper, since it would be perfectly possible for a posthuman to realize a telos of rationality as well as a human being could. In fact, if what is good for us is to develop and exercise our rational nature, this implies that it would be good for us to become posthumans with appropriately enhanced cognitive capacities (and preferably with extended healthspan too, so that we may have more time to develop and enjoy these rational faculties).

One sometimes hears it said that it is human nature to attempt to overcome every limit and to explore, invent, experiment, and use tools to improve the human condition.29 I don’t know that this is true. The opposite tendency seems to be at least as strong. Many a great invention was widely resisted at the time of its introduction, and inventors have often been viciously persecuted. If one wished to be provocative, one might even say that humanity has advanced technologically in spite of anti-technological tendencies in human nature, and that technological advancement historically has been due more to the intrinsic utility of technological inventions and the competitive advantages they sometimes bestow on their users than to any native preference among the majority of mankind for pushing boundaries and welcoming innovation.30 Be that as it may; for even if it were “part of human nature” to push ever onward, forward, and upward, I do not see how anything follows from this regarding the desirability of becoming posthuman.

29 The quest for posthuman capacities is as old as recorded history. In the earliest preserved epic, the Sumerian Epic of Gilgamesh (approx. 1700 B.C.), a king sets out on a quest for immortality. In later times, explorers sought the Fountain of Youth, alchemists labored to concoct the Elixir of Life, and various schools of esoteric Taoism in China strove for physical immortality by way of control over or harmony with the forces of nature. This is in addition to the many and diverse religious traditions in which the hope for a supernatural posthuman existence is of paramount importance.
There is too much that is thoroughly unrespectable in human nature (along with much that is admirable) for the mere fact that X is a part of human nature to constitute any reason, even a prima facie reason, for supposing that X is good.

30 As J.B.S. Haldane wrote: “The chemical or physical inventor is always a Prometheus. There is no great invention, from fire to flying, which has not been hailed as an insult to some god. But if every physical and chemical invention is a blasphemy, every biological invention is a perversion. There is hardly one which, on first being brought to the notice of an observer from any nation which has not previously heard of their existence, would not appear to him as indecent and unnatural” (Haldane 1924).

7.12 Brief Sketches of Some Objections and Replies

Objection: One might think that it would be bad for a person to be the only posthuman being since a solitary posthuman would not have any equals to interact with.

Reply: It is not necessary that there be only one posthuman.

I have acknowledged that capacities may not have basic intrinsic value and that the contribution to well-being that having a capacity makes depends on the context. I suggested that it nevertheless makes sense to talk of the value of a capacity in a sense similar to that in which we commonly talk of the value of e.g. money or health. We can take such value ascriptions as assertions that the object or property normally makes a positive contribution to whatever has basic value. When evaluating posthuman attributes, the question arises what we should take to be the range of circumstances against which we assess whether something “normally” makes a positive contribution. As we do not have a concrete example in front of us of a posthuman civilization, there is a certain indeterminacy in any assertion about which things or attributes would “normally” make a positive contribution in a posthuman context. At this point, it may therefore be appropriate to specify some aspects of the posthuman context that I assume in my value-assertions. Let me here postulate that the intended context is one that includes a society of posthuman beings.

What dialectical constraints are there on what I am allowed to stipulate about the posthuman context? The main cost to making such stipulations is that if I end up defining a gerrymandered “posthuman context”, which is also extremely unlikely ever to materialize, then the significance of any claims about what would normally be valuable in that context would tend to wane. It is simply not very interesting to know what would “normally” be valuable in some utterly bizarre context defined by a large number of arbitrary stipulations. I do not think that by postulating a society of posthumans I am significantly hollowing out my conclusions. I do, in fact, assume throughout this paper more generally that the postulated posthuman reference society is one that is adapted to its posthuman inhabitants in manners similar to the way current human society is adapted to its human inhabitants.31 I also assume that this reference society would offer many affordances and opportunities to its posthuman inhabitants broadly analogous to those which contemporary society offers humans. I do not intend by this postulation to express any prediction that this is the kind of posthuman society that is most likely to form, nor do I mean to imply that being a posthuman could not be valuable even outside of the context of such a kind of society.
The postulation is merely a way of delimiting the claims I am trying to defend in this paper.

31 But I do not assume that the reference society would only contain posthuman beings.

Objection: The accumulated cultural treasures of humanity might lose their appeal to somebody whose capacities greatly exceeded those of the humans who produced them. More generally, challenges that seemed interesting to the person while she was still human might become trivial and therefore uninteresting to her when she acquires posthuman capacities. This could deprive posthumans of the good of meaningful achievements.

Reply: It is not clear why the ability to appreciate what is more complex or subtle should make it impossible to appreciate simpler things. Somebody who has learnt to appreciate Schoenberg may still delight in simple folk songs, even bird songs. A fan of Cézanne may still enjoy watching a sunrise. Even if it were impossible for posthuman beings to appreciate some simple things, they could compensate by creating new cultural riches. I am assuming that the reference society would offer opportunities for doing this – see above. If some challenges become too easy for posthumans, they could take on more difficult challenges. One might argue that an additional reason for developing posthuman cognitive capacities is that it would increase the range of interesting intellectual challenges open to us. At least within the human range of cognitive capacity, it seems that the greater one’s capacity, the more numerous and meaningful the intellectual projects that one can embark on. When one’s mind grows, not only does one get better at solving intellectual problems – entirely new possibilities of meaning and creative endeavor come into view.

Objection: A sense of vulnerability, dependence, and limitedness can sometimes add to the value of a life or help a human being grow as a person, especially along moral or spiritual dimensions.

Reply: A posthuman could be vulnerable, dependent, and limited. A posthuman could also be able to grow as a person in moral and spiritual dimensions without those extrinsic spurs that are sometimes necessary to effect such growth in humans. The ability to spontaneously develop in these dimensions could be seen as an aspect of emotional capacity.

Objection: The very desire to overcome one’s limits by the use of technological means rather than through one’s own efforts and hard work could be seen as expressive of a failure to open oneself to the unbidden, gifted nature of life, or as a failure to accept oneself as one is, or as self-hate.32

Reply: This paper makes no claims about the expressive significance of a desire to become posthuman, or about whether having such a desire marks one as a worse person, whether necessarily or statistically. The concern here rather is about whether being posthuman could be good, and whether it could be good for us to become posthuman.

Objection: A capacity obtained through a technological shortcut would not have the same value as one obtained through self-discipline and sacrifice.

Reply: I have argued that the possession of posthuman capacities could be extremely valuable even were the capacities effortlessly obtained. It is consistent with what I have said that achieving a capacity through a great expenditure of blood, sweat, and tears would further increase its value. I have not addressed what would be the best way of becoming posthuman.
We may note, however, that it is unlikely that we could in practice become posthuman other than via recourse to advanced technology.

32 Compare Sandel 2004, although it is not clear that Sandel has an expressivist concern in mind.

Objection: The value of achieving a goal like winning a gold medal in the Olympics is reduced and perhaps annulled if the goal is achieved through inappropriate means (e.g. cheating). The value of possessing a capacity likewise depends on how the capacity was acquired. Even though having posthuman capacities might be extremely valuable if the capacities had been obtained by appropriate means, there are no humanly possible means that are appropriate. Any means by which humans could obtain posthuman capacities would negate the value of having such capacities.

Reply: The analogy with winning an Olympic medal is misleading. It is in the nature of sports competitions that the value of achievement is intimately connected with the process by which it was achieved. We may say that what is at stake in the analogy is not really the value of a medal, nor even the value of winning a medal, but rather (something like) winning the medal by certain specified means in a fair competition, in a non-fluke-like way, etc. Many other goods are not like this. When we visit the doctor in the hope of getting well, we do not usually think that the value of getting well is strongly dependent on the process by which health is achieved; health and the enjoyment of health are valuable in their own right, independently of how these states come about. Of course, we are concerned with the value of the means to getting well – the means themselves can have negative value (involving perhaps pain and inconvenience), and in evaluating the value of the consequences of an action, we take the value of the means into account as well as the value of the goal that they achieve. But usually, the fact that some means have negative value does not reduce the value of obtaining the goal state.

One possible exception to this is if the means are in a certain sense immoral. We might think that a goal becomes “tainted”, and its value reduced, if it was achieved through deeply immoral means. For example, some might hold that medical findings obtained by Nazi doctors in concentration camps have reduced or no value because of the way the findings were produced. Yet this radical kind of “taint” is a rather special case.33 Having to use bad means might be good reason not to pursue a goal, but typically this is not because the use of bad means would reduce the value of the attainment of the goal, but rather it is either because the means themselves have more negative value than the goal has positive value, or (on a non-consequentialist view) because it is morally impermissible to use certain means independently of the total value of the consequences.34

The values that I have alleged could be derived from posthuman capacities are not like the value of an Olympic gold medal, but rather like the value of health. I am aware of no logical, metaphysical, or “in principle” reason why humans could not obtain posthuman capacities in ways that would avoid recourse to immoral means of the sort that would “taint” the outcome (much less that would taint the outcome to such a degree as to annul its extremely high surplus value). It is a further question to what extent it is practically feasible to work towards realizing posthuman capacities in ways that avoid such taint.
This question lies outside the scope of the present paper. My conclusion may therefore be understood to implicitly contain the proviso that the posthuman capacities of which I speak have been obtained in ways that are non-Faustian.

33 Even in the Nazi doctor example, it is plausibly the achievement of the doctors (and of Germany etc.) that is tainted, and the achievement’s value that is reduced. The value of the results is arguably unaffected, although it might always be appropriate to feel uncomfortable when employing them, appropriate to painfully remember their source, regret the way we got them, and so forth.

34 It might help to reflect that we do not deny the value of our current human capacities on grounds of their evolutionary origin, even though this origin is (a) largely not a product of human achievement, and (b) fairly drenched in violence, deceit, and undeserved suffering. People who are alive today also owe their existence to several thousands of years of warfare, plunder, and rape; yet this does not entail that our capacities or our mode of existence is worthless. Another possibility is that the result has positive value X, the way you get it has negative value Y, but the “organic whole” comprising both the result and the way it was obtained has an independent value of its own, Z, which also might be negative. On a Moorean view, the value of this situation “on the whole” would be X + Y + Z, and this might be negative even if X is larger than (−Y) (Moore 1903). Alternatively, Z might be incommensurable with X + (−Y). In either case, we have a different situation than the one described above in the text, since here X could be invariant under different possible ways in which the result was obtained. However, I do not know of any reason to think that this evaluative situation, even if axiologically possible, would necessarily obtain in the sort of case we are discussing. (I’m indebted to Guy Kahane for this point.)

Objection: Posthuman talent sets the stage for posthuman failure. Having great potential might make for a great life if the potential is realized and put to some worthwhile use, but it could equally make for a tragic life if the potential is wasted. It is better to live well with modest capacities than to live poorly with outstanding capacities.

Reply: We do not lament that a human is born talented on grounds that it is possible that she will waste her talent. It is not clear why posthuman capacity would be any more likely to be wasted than human capacity. I have stipulated that the posthuman reference society would offer affordances and opportunities to its posthuman inhabitants broadly analogous to those that contemporary society offers humans. If posthumans are more prone to waste their potential, it must therefore be for internal, psychological reasons. But posthumans need not be any worse than humans in regard to their readiness to make the most of their lives.35

35 If they have enhanced emotional capacity, they may be more motivated and more capable than most humans of realizing their potential in beautiful ways.

7.13 Conclusion

I have argued, first, that some posthuman modes of being would be extremely worthwhile; and, second, that it could be good for most human beings to become posthuman. I have discussed three general central capacities – healthspan, cognition, and emotion – separately for most of this paper. However, some of my arguments are strengthened if one considers the possibility of combining these enhancements.
A longer healthspan is more valuable when one has the cognitive capacity to find virtually inexhaustible sources of meaning in creative endeavors and intellectual growth. Both healthspan and cognition are more valuable when one has the emotional capacity to relish being alive and to take pleasure in mental activity.

It follows trivially from the definition of “posthuman” given in this paper that we are not posthuman at the time of writing. It does not follow, at least not in any obvious way, that a posthuman could not also remain a human being. Whether or not this is so depends on what meaning we assign to the word “human”. One might well take an expansive view of what it means to be human, in which case “posthuman” is to be understood as denoting a certain possible type of human mode of being – if I am right, an exceedingly worthwhile type.36

36 I am grateful to Ross Beaton, Bert Gordijn, Guy Kahane, Toby Ord, David Pearce, David Rodin, Anders Sandberg, Julian Savulescu, Harrosh Shlomit, and Elena Patigo Solana for helpful comments. Earlier versions of this paper were presented at the James Martin Advanced Research Seminar (Oxford, 30 January 2006), and at the Institute for Science, Innovation and Society (Nijmegen, 21 February 2006).

References

American Psychiatric Association (2000) Diagnostic Criteria from DSM-IV-TR. American Psychiatric Association, Washington, DC
Arnesen T M, Norheim O F (2003) Quantifying quality of life for economic analysis: time out for time trade off. Medical Humanities 29(2): 81–86
Beauchamp T L, Childress J F (2001) Principles of Biomedical Ethics. Oxford University Press, New York
Berkeley G, Sampson G et al. (1897) The Works of George Berkeley DD, Bishop of Cloyne. G Bell, London
Bostrom N (2003) The transhumanist FAQ: v 2.1. World Transhumanist Association. Available from http://transhumanism.org/index.php/WTA/faq
Bostrom N (2004) Transhumanist Values. In: Adams F (ed) Ethical Issues for the 21st Century. Philosophical Documentation Center Press, Charlottesville, VA
Bostrom N (2005) The Fable of the Dragon-Tyrant. Journal of Medical Ethics 31(5): 273–277
Brickman P, Campbell D T (1971) Hedonic Relativism and Planning the Good Society. In: Apley M H (ed) Adaptation Level Theory: A Symposium. Academic, New York: 287–302
Cohen J, Langer G (2005) Most Wish for a Longer Life – Despite Broad Aging Concerns. ABC News/USA Today Poll. Available from http://abcnews.go.com/images/Politics/995a1Longevity.pdf
Damschroder L J, Zikmund-Fisher B J et al. (2005) The impact of considering adaptation in health state valuation. Social Science & Medicine 61(2): 267–277
DeGrazia D (2005) Enhancement technologies and human identity. Journal of Medicine and Philosophy 30: 261–283
Diener E, Suh E M (1998) Subjective well-being and age: an international analysis. Annual Review of Gerontology and Geriatrics 17: 304–324
Glannon W (2002) Identity, prudential concern, and extended lives. Bioethics 16(3): 266–283
Haldane J B S (1924) Daedalus; or, Science and the Future. K Paul, Trench, Trubner, London
Hanson R (2000) Showing that You Care: The Evolution of Health Altruism. Available from http://hanson.gmu.edu/showcare.pdf
Hayles N K (1999) How We Became Posthuman: Virtual Bodies in Cybernetics, Literature, and Informatics. University of Chicago Press, Chicago, IL
Johansson P O (2002) On the definition and age-dependency of the value of a statistical life. Journal of Risk and Uncertainty 25(3): 251–263
Johnson F R, Desvousges W H et al. (1998) Eliciting stated health preferences: an application to willingness to pay for longevity. Medical Decision Making 18(2): 57–67
Kunzmann U, Little T et al. (2000) Is age-related stability of subjective well-being a paradox? Cross-sectional and longitudinal evidence from the Berlin Aging Study. Psychology and Aging 15(3): 511–526
Lewis D (1989) Dispositional Theories of Value. Proceedings of the Aristotelian Society, Supplementary Volume 63: 113–137
McShine R, Lesser G T et al. (2000) Older Americans hold on to life dearly. British Medical Journal 320(7243): 1206–1207
Moore G E (1903) Principia Ethica. Cambridge University Press, Cambridge
Murphy K, Topel R (2003) The Economic Value of Medical Research. In: Measuring the Gains from Medical Research: An Economic Approach. University of Chicago Press, Chicago, IL
Nordhaus W (2003) The Health of Nations: The Contribution of Improved Health to Living Standards. In: Measuring the Gains from Medical Research: An Economic Approach. University of Chicago Press, Chicago, IL
Pearce D (2004) The Hedonistic Imperative. Available from http://www.hedweb.com/hedab.htm
Sandel M (2004) The case against perfection. The Atlantic Monthly 293(3)
Tsevat J, Dawson N V et al. (1998) Health values of hospitalized patients 80 years or older. JAMA – Journal of the American Medical Association 279(5): 371–375
Ubel P A, Loewenstein G et al. (2003) Whose quality of life? A commentary exploring discrepancies between health state evaluations of patients and the general public. Quality of Life Research 12(6): 599–607
Viscusi W K, Aldy J E (2003) The value of a statistical life: a critical review of market estimates throughout the world. Journal of Risk and Uncertainty 27(1): 5–76
World Bank (2003) EdStats – The World Bank’s Comprehensive Database of Education Statistics. 2006. Available from http://www1.worldbank.org/education/edstats/index.html
4ff7a9f5-ade6-45c5-8050-59ddd4b5b566
trentmkelly/LessWrong-43k
LessWrong
Meetup : Washington, D.C.: Brains

Discussion article for the meetup : Washington, D.C.: Brains

WHEN: 07 December 2014 03:00:00PM (-0500)

WHERE: National Portrait Gallery

We will be meeting in the Kogod Courtyard of the National Portrait Gallery (8th and F Sts or 8th and G Sts NW, go straight past the information desk from either entrance) to talk about brains, neuroscience, and related topics. Per the norm, we plan to let people congregate from 3:00 to 3:30 before kicking things off. As with prior informal-discussion meetups, conversation on any subject of interest to attendees (be it in the main conversation or in a side conversation) is both permitted and encouraged, but we suggest taking advantage of the meetup topic as a Schelling point.

Upcoming Meetups:

* Dec. 14: TBA (to be summarized)
* Dec. 21: Fun & Games (bring games, play games, converse, socialize, or any combination thereof)
e04a7431-6670-40f3-b75c-3adfec329e81
trentmkelly/LessWrong-43k
LessWrong
[SEQ RERUN] Inner Goodness

Today's post, Inner Goodness was originally published on 23 October 2008. A summary (taken from the LW wiki):

> Goodness comes from inside you. Morality is not some sort of obligation weighing you down. It's something you want.

Discuss the post here (rather than in the comments to the original post).

This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was Which Parts Are "Me"?, and you can use the sequence_reruns tag or rss feed to follow the rest of the series.

Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.
50006206-a6de-41ad-8a3f-1e7cefb81282
StampyAI/alignment-research-dataset/alignmentforum
Alignment Forum
Announcing the Vitalik Buterin Fellowships in AI Existential Safety!

*Epistemic status: describing fellowships that I am helping with the administration of.*

*Edit 2021-10-04: Modified to reflect changed eligibility+stipend conditions.*

The Future of Life Institute is launching new [PhD](https://grants.futureoflife.org/prog/phd_fellowship/) and [postdoctoral](https://grants.futureoflife.org/prog/postdoctoral/) fellowships to study [AI existential safety](https://grants.futureoflife.org/res/p/AI/): that is, research that analyzes the most probable ways in which AI technology could cause an existential catastrophe, and which types of research could minimize existential risk; and technical research which could, if successful, assist humanity in reducing the existential risk posed by highly impactful AI technology to extremely low levels.

The [Vitalik Buterin PhD Fellowship in AI Existential Safety](https://grants.futureoflife.org/res/p/phd-fellowship-ai/) is targeted at students applying to start their PhD in 2022, or existing PhD students who would not otherwise have funding to work on AI existential safety research. Quoting from the page:

> At universities in the US, UK, or Canada, annual funding will cover tuition, fees, and the stipend of the student's PhD program up to $40,000, as well as a fund of $10,000 that can be used for research-related expenses such as travel and computing. At universities not in the US, UK or Canada, the stipend amount will be adjusted to match local conditions. Fellows will also be invited to workshops where they will be able to interact with other researchers in the field.
>
> In addition, applicants who are short-listed for the Fellowship will be reimbursed for application fees for up to 5 PhD programs, and will be invited to an information session about research groups that can serve as good homes for AI existential safety research.

Applications for the PhD fellowship close on October the 29th.

The [Vitalik Buterin Postdoctoral Fellowship in AI Existential Safety](https://grants.futureoflife.org/res/p/postdoc-fellowship-ai/) is for postdoctoral appointments starting in fall 2022. Quoting from the page:

> For host institutions in the US, UK, or Canada, the Fellowship includes an annual $80,000 stipend and a fund of up to $10,000 that can be used for research-related expenses such as travel and computing. At universities not in the US, UK or Canada, the fellowship amount will be adjusted to match local conditions.

Applications for the postdoctoral fellowship close on November the 5th.

You can apply at [grants.futureoflife.org](https://grants.futureoflife.org/), and if you know people who may be good fits, please help spread the word!
104a76ff-3f25-4597-a5a6-e63ab80a59d3
trentmkelly/LessWrong-43k
LessWrong
Chain Breaking

This is a rationality technique I’ve been experimenting with. Thank you to Jack Ryan, Thomas Kwa, Sydney Von Arx, Noa Nabeshima, and Kyle Scott for helping me refine the method.

Algorithm

1. Pick something that has happened before that you would prefer not to happen again.
   1. Examples include: not exercising, not flossing, not sleeping on time, eating too many chocolate almonds, spending too much time on Facebook, being late to meetings, forgetting to schedule things, etc.
   2. In theory, you could pick something that hasn’t happened yet that you want to prevent. This is a more advanced version of the technique. Try it at your own peril.
2. Why do you think it might happen again? Determine the specific chain of events that will lead to this undesirable outcome.
3. A chain is only as strong as its weakest link. Locate the link in the chain that seems most fragile.
4. Intervene to break that link (and thus the entire chain). If it isn’t easy to think of an intervention, that might mean you haven’t gotten specific enough. Recurse.

Examples

Many of these examples do not involve me. I wrote them all in the first person for easier comprehensibility.

Example: messenger

1. I use messenger in a way that disrupts my work routine.
2. When doing pomodoros, I check messenger during breaks. There are often many messages to respond to, some of which require thought. Even worse, sometimes people respond to my responses in almost real-time. Since I do not like not responding to messages, I respond to all messages, nearly always taking more than the five minutes allotted for my break.
   1. More compactly: break -> messenger -> many unread messages -> respond
3. Once I look at my messages, the compulsion is very strong to respond to all of them. In particular, if I read them, but don’t respond, I might forget to respond. I could make this easier by keeping track of which messages to respond to, but that seems too tedious. Instead, I think violating break -> m
b9b5a242-ad06-411b-a282-1c93d065bbb7
trentmkelly/LessWrong-43k
LessWrong
Backward Reasoning Over Decision Trees

Game theory is the study of how rational actors interact to pursue incentives. It starts with the same questionable premises as economics: that everyone behaves rationally, that everyone is purely self-interested1, and that desires can be exactly quantified - and uses them to investigate situations of conflict and cooperation. Here we will begin with some fairly obvious points about decision trees, but by the end we will have the tools necessary to explain a somewhat surprising finding: that giving a US president the additional power of line-item veto may in many cases make the president less able to enact her policies.

Starting at the beginning: The basic unit of game theory is the choice. Rational agents make choices in order to maximize their utility, which is sort of like a measure of how happy they are. In a one-person game, your choices affect yourself and maybe the natural environment, but nobody else. These are pretty simple to deal with:

Here we visualize a choice as a branching tree. At each branch, we choose the option with higher utility; in this case, going to the beach. Since each outcome leads to new choices, sometimes the decision trees can be longer than this:

Here's a slightly more difficult decision, denominated in money instead of utility. If you want to make as much money as possible, then your first choice - going to college or starting a minimum wage job right Now - seems to favor the more lucrative minimum wage job. But when you take Later into account, college opens up more lucrative future choices, as measured in the gray totals on the right-hand side.

This illustrates the important principle of reasoning backward over decision trees. If you reason forward, taking the best option on the first choice and so on, you end up as a low-level manager. To get the real cash, you've got to start at the end - the total on the right - and then examine what choice at each branch will take you there.

This is all about as obvious as, well, not hittin
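A minimal code sketch of the backward-reasoning procedure described above (not from the original post; the tree shape and dollar totals are made up for illustration): the value of a leaf is its payoff, and the value of a choice node is the best value among its children, computed from the ends of the tree back toward the start.

```python
# Backward induction over a decision tree: leaves hold total payoffs,
# choice nodes map option names to subtrees. Illustrative sketch only.

def best_value(node):
    """Return (value, plan), reasoning backward from the leaves."""
    if isinstance(node, (int, float)):  # leaf: total payoff of this path
        return node, []
    best = None
    for option, subtree in node.items():  # choice node
        value, plan = best_value(subtree)
        if best is None or value > best[0]:
            best = (value, [option] + plan)
    return best

# A career-choice tree loosely in the spirit of the example above
# (the numbers are invented):
tree = {
    "minimum wage job now": {
        "stay put": 40_000,
        "night school later": 70_000,
    },
    "college first": {
        "low-level manager": 90_000,
        "senior role": 120_000,
    },
}

print(best_value(tree))  # -> (120000, ['college first', 'senior role'])
```

Reasoning forward would compare only the immediate first-step payoffs; backward induction instead propagates the best attainable totals up from the leaves, which is exactly the gray-totals trick described above.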
84296042-9e4c-4e4c-9598-96efb183bae7
trentmkelly/LessWrong-43k
LessWrong
I Don’t Know How To Count That Low

Edit: greetings Hacker News. This is a cross-post from my own blog, AcesoUnderGlass.com. If you enjoy this post, consider checking that out as well.

Back when I was at Google we had a phrase, “I don’t know how to count that low”. It was used to dismiss normal-company-sized problems as beneath our dignity to engage with: if you didn’t need 100 database shards scattered around the globe, were you even doing real work? It was used as a sign of superiority within Google, but it also pointed at a real problem: I once failed a job interview at a start-up when I wondered out loud if the DB was small enough to be held in memory, when it was several orders of magnitude lower than when I should even have begun worrying about that. I didn’t know the limit because it had been many years since I’d had a problem that could be solved with a DB small enough to be held in its entirety in memory. And they were right to fail me for that: the fact that I was good at solving strictly more difficult problems didn’t matter because I didn’t know how to solve the easier ones they actually had. I could run but not walk, and some problems require walking.

It’s a problem, but it can be a pleasant kind of problem to have, compared to others.

Another example: my dad is a Ph.D. statistician who spent most of his life working in SAS, a powerful statistical programming language, and using “spreadsheet statistics” as a slur. When I asked permission to share this anecdote he sent me a list of ways Excel was terrible. Then he started consulting for me, who was cruelly unwilling to pay the $9000 license fee for SAS when Google Sheets was totally adequate for the problem (WHO HAS FOOD AT HOME NOW DAD?!?).* My dad had to go through a horrible phase of being bad at the worse tool, and found a lot of encouragement when I reframed “I could have done this with one line in SAS and am instead losing to this error-riddled child’s toy” to “I didn’t know how to count that low, but now that it matters I a
03429d7e-5f3f-4283-990b-3b377cfce4bc
StampyAI/alignment-research-dataset/lesswrong
LessWrong
Looking for reading recommendations: Theories of right/justice that safeguard against having one's job automated? Hey folks, Can anyone recommend any books or articles on AI automation of jobs?  Specifically, books that develop or discuss a theory of right and then apply it to the question of whether we should let job automation continue unimpeded, or whether regulations should intervene?  Even better if they apply this theory of right to creative industries like the arts. Thanks!
78d8e695-3131-4bf6-ad2d-a7a056a73063
trentmkelly/LessWrong-43k
LessWrong
Overly convenient clusters, or: Beware sour grapes Related to: Policy Debates Should Not Appear One-Sided There is a well-known fable which runs thus: “Driven by hunger, a fox tried to reach some grapes hanging high on the vine but was unable to, although he leaped with all his strength. As he went away, the fox remarked 'Oh, you aren't even ripe yet! I don't need any sour grapes.' People who speak disparagingly of things that they cannot attain would do well to apply this story to themselves.” This gives rise to the common expression ‘sour grapes’, referring to a situation in which one incorrectly claims to not care about something to save face or feel better after being unable to get it. This seems to be related to a general phenomenon, in which motivated cognition leads one to flinch away from the prospect of an action that is inconvenient or painful in the short term by concluding that a less-painful option strictly dominates the more-painful one. In the fox’s case, the allegedly-dominating option is believing (or professing) that he did not want the grapes. This spares him the pain of feeling impotent in face of his initial failure, or the embarrassment of others thinking him to have failed. If he can’t get the grapes anyway, then he might as well erase the fact that he ever wanted them, right? The problem is that considering this line of reasoning will make it more tempting to conclude that the option really was dominating—that he really couldn’t have gotten the grapes. But maybe he could’ve gotten the grapes with a bit more work—by getting a ladder, or making a hook, or Doing More Squats in order to Improve His Vert. The fable of the fox and the grapes doesn’t feel like a perfect fit, though, because the fox doesn’t engage in any conscious deliberation before giving up on sour grapes; the whole thing takes place subconsciously. Here are some other examples that more closely illustrate the idea of conscious rationalization by use of overly convenient partitions: The Seating Fallacy: “Be who you are and
bd1054f0-db4a-4455-a993-e10c88780497
trentmkelly/LessWrong-43k
LessWrong
Meetup : Meetup #15 - Trigger-Action Patterns Discussion article for the meetup : Meetup #15 - Trigger-Action Patterns WHEN: 02 April 2017 03:10:00PM (+0200) WHERE: meester treublaan 18 Amsterdam, Netherlands How do we do things on the most instinctive level? Understanding this helps us take better control over our behavior. Discussion article for the meetup : Meetup #15 - Trigger-Action Patterns
f4c5e705-50cd-41f6-b3e4-d4697710d629
trentmkelly/LessWrong-43k
LessWrong
[AN #74]: Separating beneficial AI into competence, alignment, and coping with impacts Find all Alignment Newsletter resources here. In particular, you can sign up, or look through this spreadsheet of all summaries that have ever been in the newsletter. I'm always happy to hear feedback; you can send it to me by replying to this email. Audio version here (may not be up yet). Highlights AI alignment landscape (Paul Christiano) (summarized by Rohin): This post presents the following decomposition of how to make AI go well: [Link to image below] Rohin's opinion: Here are a few points about this decomposition that were particularly salient or interesting to me. First, at the top level, the problem is decomposed into alignment, competence, and coping with the impacts of AI. The "alignment tax" (extra technical cost for safety) is only applied to alignment, and not competence. While there isn't a tax in the "coping" section, I expect that is simply due to a lack of space; I expect that extra work will be needed for this, though it may not be technical. I broadly agree with this perspective: to me, it seems like the major technical problem which differentially increases long-term safety is to figure out how to get powerful AI systems that are trying to do what we want, i.e. they have the right motivation (AN #33). Such AI systems will hopefully make sure to check with us before taking unusual irreversible actions, making e.g. robustness and reliability less important. Note that techniques like verification, transparency, and adversarial training (AN #43) may still be needed to ensure that the alignment itself is robust and reliable (see the inner alignment box); the claim is just that robustness and reliability of the AI's capabilities is less important. Second, strategy and policy work here is divided into two categories: improving our ability to pay technical taxes (extra work that needs to be done to make AI systems better), and improving our ability to handle impacts of AI. Often, generically improving coordination can help with both categories:
fce8aeec-6cf4-4ca2-8c1e-84c363377de7
trentmkelly/LessWrong-43k
LessWrong
Meetup : Paris Meetup Discussion article for the meetup : Paris Meetup WHEN: 25 June 2011 02:00:00PM (+0200) WHERE: Cafe des Techniques, next to Musee des Arts et Metiers, France Third Paris meetup Discussion article for the meetup : Paris Meetup
1b709a11-c269-48f0-9208-4fb2faee3f2f
trentmkelly/LessWrong-43k
LessWrong
Unifying Bargaining Notions (1/2) This is a two-part sequence of posts, in the ancient LessWrong tradition of decision-theory-posting. This first part will introduce various concepts of bargaining solutions and dividing gains from trade, which the reader may or may not already be familiar with. The upcoming part will be about how all introduced concepts from this post are secretly just different facets of the same underlying notion, as originally discovered by John Harsanyi back in 1963 and rediscovered by me from a completely different direction. The fact that the various different solution concepts in cooperative game theory are all merely special cases of a General Bargaining Solution for arbitrary games, is, as far as I can tell, not common knowledge on Less Wrong. Bargaining Games Let's say there's a couple with a set of available restaurant options. Neither of them wants to go without the other, and if they fail to come to an agreement, the fallback is eating a cold canned soup dinner at home, the worst of all the options. However, they have different restaurant preferences. What's the fair way to split the gains from trade? Well, it depends on their restaurant preferences, and preferences are typically encoded with utility functions. Since both sides agree that the disagreement outcome is the worst, they might as well index that as 0 utility, and their favorite respective restaurants as 1 utility, and denominate all the other options in terms of what probability mix between a cold canned dinner and their favorite restaurant would make them indifferent. If there's something that scores 0.9 utility for both, it's probably a pretty good pick! Although, there's something off about setting up the problem like this. There's no term for intensity of preferences! Someone who cared very little about food would have their preferences rank just as strongly as someone who had strong restaurant opinions! In a sense, there's three responses to this objection. The first response is that we might be z
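As a concrete sketch of the setup described above (with invented raw scores), here is how the normalization works in code, together with one classic way of splitting the gains from trade: the Nash bargaining product, which picks the option maximizing the product of the players' normalized gains over the disagreement point. The Nash solution is only one of the solution concepts this sequence surveys.

```python
# A sketch of the restaurant bargaining setup: normalize each person's
# utilities so the disagreement outcome (canned soup) is 0 and their
# favorite option is 1, then compare options. The raw scores are invented.

def normalize(raw):
    worst, best = min(raw.values()), max(raw.values())
    return {opt: (u - worst) / (best - worst) for opt, u in raw.items()}

alice = normalize({"soup at home": 0, "thai": 8, "pizza": 10, "sushi": 3})
bob   = normalize({"soup at home": 0, "thai": 9, "pizza": 4, "sushi": 10})

options = [o for o in alice if o != "soup at home"]

# One classic split of the gains from trade: maximize the product of
# normalized gains over the disagreement point (the Nash bargaining product).
nash_choice = max(options, key=lambda o: alice[o] * bob[o])
print(nash_choice)   # -> 'thai' (0.8 * 0.9 beats pizza's 1.0 * 0.4 and sushi's 0.3 * 1.0)
```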
fc59d327-ebad-4658-9439-e5212033f42e
trentmkelly/LessWrong-43k
LessWrong
Six Dimensions of Operational Adequacy in AGI Projects Editor's note:  The following is a lightly edited copy of a document written by Eliezer Yudkowsky in November 2017. Since this is a snapshot of Eliezer’s thinking at a specific time, we’ve sprinkled reminders throughout that this is from 2017. A background note: It’s often the case that people are slow to abandon obsolete playbooks in response to a novel challenge. And AGI is certainly a very novel challenge. Italian general Luigi Cadorna offers a memorable historical example. In the Isonzo Offensive of World War I, Cadorna lost hundreds of thousands of men in futile frontal assaults against enemy trenches defended by barbed wire and machine guns.  As morale plummeted and desertions became epidemic, Cadorna began executing his own soldiers en masse, in an attempt to cure the rest of their “cowardice.” The offensive continued for 2.5 years. Cadorna made many mistakes, but foremost among them was his refusal to recognize that this war was fundamentally unlike those that had come before.  Modern weaponry had forced a paradigm shift, and Cadorna’s instincts were not merely miscalibrated—they were systematically broken.  No number of small, incremental updates within his obsolete framework would be sufficient to meet the new challenge. Other examples of this type of mistake include the initial response of the record industry to iTunes and streaming; or, more seriously, the response of most Western governments to COVID-19.     As usual, the real challenge of reference class forecasting is figuring out which reference class the thing you’re trying to model belongs to. For most problems, rethinking your approach from the ground up is wasteful and unnecessary, because most problems have a similar causal structure to a large number of past cases. When the problem isn’t commensurate with existing strategies, as in the case of AGI, you need a new playbook. I've sometimes been known to complain, or in a polite way scream in utter terror, that "there is no good guy
5730b4db-afc3-4df2-8ed6-699ba0beda8f
trentmkelly/LessWrong-43k
LessWrong
Covid 9/23: There Is a War The FDA, having observed and accepted conclusive evidence that booster shots are highly effective, has rejected allowing people to get those booster shots unless they are over the age of 65, are immunocompromised or high risk, or are willing to lie on a form. The CDC will probably concur.  I think we all know what this means. It means war! Don’t get me wrong. The FDA has been hard at work for decades ensuring people die or twist their lives into knots from a lack of access to medicine. Refusing to acknowledge that there is a war doesn’t change the fact that there is a war.  What changed this week was that the FDA gave us an unusually clear proof that there is a war, which side of that war they are on, and how far they are willing to go.  The good news is that, because of the exceptions, anyone who wants a booster can still get a booster, so on reflection this is a win-win. We know exactly what is going on, and it didn’t cost us all that much. The CDC is now meeting to consider who should have the ability to get booster shots, not withstanding the FDA already having made that same decision. And they are being explicit that the point of vaccination, according to the CDC, they are now claiming, is to prevent hospitalization and death and NOT to prevent infection.  Let me say that again. The CDC explicitly claims that it does not think it is an important goal of a medical intervention to prevent Covid-19 infections that don’t result in hospitalization. Or, by implication, to prevent you from spreading the disease to others. Tell that to all the people who see a mild infection somewhere vaguely around them, and quarantine for two weeks or shut down their office, and freak out. Tell that to all the people at the CDC mandating such behaviors. Tell that to all the people saying vaccinations are a social responsibility, or requiring it to engage in ordinary life activities. Tell that to everyone, at the CDC and elsewhere, warning us about ‘Long Covid,’ or suffers from
9db52fe9-4663-4d19-8052-75042cbd5fb1
trentmkelly/LessWrong-43k
LessWrong
AI Incident Reporting: A Regulatory Review This article is the first in a series of ~10 posts comprising a 2024 State of the AI Regulatory Landscape Review, conducted by the Governance Recommendations Research Program at Convergence Analysis. Each post will cover a specific domain of AI governance (e.g. incident reporting, safety evals, model registries, etc.). We’ll provide an overview of existing regulations, focusing on the US, EU, and China as the leading governmental bodies currently developing AI legislation. Additionally, we’ll discuss the relevant context behind each domain and conduct a short analysis. This series is intended to be a primer for policymakers, researchers, and individuals seeking to develop a high-level overview of the current AI governance space. We’ll publish individual posts on our website and release a comprehensive report at the end of this series. Let us know in the comments if this format is useful, if there are any topics you'd like us to cover, or if you spy any key errors / omissions! Context AI incident reporting refers to an emerging series of voluntary practices or regulatory requirements for AI labs to document any unexpected events, malfunctions, or adverse effects that arise from the deployment of AI systems. Such mechanisms are designed to capture a wide range of potential issues, from privacy breaches and security vulnerabilities to biases in decision-making.  The rationale behind incident reporting is to create a feedback loop where regulators, developers, and the public can learn from past AI deployments, leading to continuous improvement in safety standards and compliance with legal frameworks. By systematically documenting incidents, stakeholders can identify patterns, understand the root causes of failures, and implement corrective measures to prevent recurrence. Historically, incident reporting has been a highly effective tool used across a variety of industries for decades to mitigate risk from still-developing technologies.  * For instance, the Aviati
9141c4dd-6437-444f-a104-f689ee2b9818
StampyAI/alignment-research-dataset/youtube
Youtube Transcripts
Provably Beneficial AI | Stuart Russell [Music] one of the things that's obvious anyone who have it's a newspaper or online is the news is that this is a very exciting time so these are just some newspaper headlines from the last few months and we have kind of that going on between different global powers who can stand from the largest amount of money on AI so that UK announced 1.3 billion dollars in new announcer 18 billion now for twenty twenty two billion dollars and then China of course as we know who has and the US trying to the wool [Music] it seems reasonable as nice pointed out we don't have to assume that we will necessarily achieve super intelligence within any fixed period of time but his team is reasonable to act on the assumption that that's feasible a become a new matter really whether it's a 5% chance for an 80% chance with any any particular time scale but it seems I mean the converse of this is that we guarantee that AI will fail and I have to say in the last two years we have seen senior established members of the AI community publicly guaranteeing that AI rules there I'm enter the in for example Senior Council apologies stood up and we guarantee that despite the hundred billion dollars a year that you the public of pumping into biological research we guarantee that you will never ever cure cancer ever I think I would be shocked and but seriously that's what's happening and I think we should just take that denialism and the sign of insecurity that they are worried that in fact we might now you know clearly Isis is going to have access to much more information than any one human being can possibly ingest over a lifetime and I think it's reasonable reasonable to assume that would be the foresight power instead alpha0 exists as if it's on the chess board or the go board will gradually be extended into foresight powers in the real world and so will have decision-making systems that are superior in the real world to human beings and this is as the previous panel pointed out something that is full of potential for the human race because that's what we do our civilization where these are intelligence and we will have much more of it and if you just take a very prosaic measure so no science fiction we're not talking about uploading or infinite life extension or fast and light travel but just bringing the standard of living of people on earth up to a bourgeois oacd level not Eve not even super rich or anything like that just ordinary bourgeois level when you calculate the net present value of of that advance which I think would be relatively easy to implement with human-level AI technology the the value that is about thirteen thousand five hundred trillion dollars so now when you look at those amounts that are being invested by the UK and France and China they seem absolutely miniscule in comparison to the payoff it's also worth pointing out that since that's the size of the prize that people are aiming for nations companies the idea that we could simply announce a halt to AI research is horribly wishful thinking so it's going to continue and it's probably going to accelerate the question is how do we steer it in the right direction so we're already aware of some downsides we've been working a lot on autonomous weapons and and Max and others of fli have have been enormous ly influential in in getting this message out to the AI community and then to the world that this would be a really bad idea we have discussions later on in this meeting about the possibility of enormous changes 
in the economy that would upset the possibility of employment for most people on earth but this is what we're going to talk about the end of the human race and it's worth thinking about why better AI could make things worse and the answer that people have pointed out and this is not a new point is that when you put a purpose into a machine if you put the wrong purpose into the machine then you have a serious problem and this is a quote from Norbert Wiener in 1960 and he he said this in in the context of a paper that was written in response to seeing Arthur Samuels checker playing program learn over the course of a couple of days to defeat its own creator at checkers and if you think you know that was done on a machine with a cycle time of I believe three milliseconds so Norbert Wiener was really thinking ahead in this paper and it's a pretty remarkable paper but this same point has been around in human civilization actually for thousands of years so King Midas would probably say the same thing in retrospect right he put the purpose into the machine that everything he touches should turn to gold he the machine carried out his his objective and then he realized that included his food and drink and we see this you know the story of the three wishes the third wish as always please undo the first two wishes because I've messed up and unfortunately with super intelligent AI they may not be a third wish and so we have to get the first wish right fairly recently we started to realize that it's actually worse than that that it's not just putting the wrong purpose into the machine it's that any purpose that's put into a machine gives the Machine an incentive to preserve its own existence simply in order to carry out the purpose that you put into it so this is not some kind of intrinsic primal instinct that we've built into the machine of self-preservation this is simply a logical consequence of having a purpose in the first place so this is the slogan that you have to remember your takeaway from this talk is that you can't fetch the coffee if you did so when you combine these two a machine that has the wrong purpose and then self-preservation defending itself against attempts to interfere with that purpose you get the plot of 2001 a Space Odyssey and you know fortunately how in the movie is is only somewhat intelligent if it was really super intelligent it probably would have terminated or unplugged all of the astronauts on the spaceship rather than being unplugged itself so we have to ask right where we're investing pretty significant sums into a technology whose logical extrapolation seems to be pretty problematic for the human race and I believe that in fact we just we made a mistake and we made a mistake actually right at the beginning of the field and how we conceived of what artificial intelligence is so we started from this idea that humans are intelligent and even at that time so in the late 40s early he's the the predominant view of what we meant by intelligence at least from the scientific engineering community's point of view was the ability to act successfully right which is better than acting unsuccessfully and so connecting to notions of rationality a human is intelligent to the extent that our actions can be expected to achieve our objectives and this is fine and then we simply transfer that notion into machines so we want to make intelligent machines so this is what they have to look like so machines are intelligent to the extent that their actions can be expected to achieve their 
objectives and in some branches of AI these objectives were goals in some branches they were lost functions in some branches they were utility functions but all AI systems except for the extremely ad-hoc purpose specific ones were built on this principle and it's not just AI right you see the same thing in control theory where you're minimizing a cost function same thing in operations research where you're maximizing sum of rewards and statistics you're minimizing loss functions and so on so this is a primary pillar of 20th century technology that you create optimizing machinery and then exogenously you specify what the objective should be for that machinery and then the machinery proceeds as if that's its own objective and my argument is that that's a mistake we actually don't really want I mean we could build technology like that but we don't want it and instead we need to switch around the possessive adjectives right we want machines whose actions can be expected to achieve not their objectives but our objectives and the difference here of course is that our objectives are within us and they're not within the machines and so now you have a different problem where the machine is trying to optimize some objective that it's not directly aware of that doesn't have exact access to it and then traditionally I simply becomes a special case where the direct access to the objective is assumed to be correct and in fact if we do this right we can then have machines that are provably beneficial despite the fact that those two words almost seem oxymoronic right that normally when you think about beneficial as well what does that really mean and it's very touchy-feely and we all have different notions and bla bla bla and provably is about theorems but in fact I think we can have both at the same time so I've been trying to boil this down I don't think you can boil things down into a few of them three things so I'm boil it down into three things number one then the robots objective is to optimize the realization of human preferences not of preferences that happen to be in the machine and number two and this is probably the most important change is that the robot is uncertain about what those human preferences are and it's this uncertainty as we'll see that gives the Machine a very different flavor from traditional AI systems and then third there has to be some grounding of this idea of human preferences in some way for machines to have evidence about what human preferences are and short of telepathy the way you're going to get information about human preferences is from human behavior construed in a very general sense so it could be things that we do it could be things that we ask for it could be the objective that we type into the Machine right and it doesn't have to take it literally our behavior then is the expression of that particular objective but any kind of behavior provides evidence even just sitting here listening to a talk is providing evidence about what your preference structures really are so I've tried to come up with a little picture and so this is for people who who understand the the basic ideas of graphical models so in graphical models we talk about random variables and then which ones are observed and what the dependencies are between these random variables and in the classic view of AI which is which is expressed in the first three editions of my textbook the objective is an observable variable so the objective is at the top and then on the Left we have human behavior which 
obviously depends on what the human objective is and then we have machine behavior and then the classical view the machine gets to observe the objective and then go about its business and in fact you know if you understand graphical models you realize that in this case you can simply forget the human behavior because the human objective if it's observed is a sufficient statistic for machine behavior so the human could now be jumping up and down saying stop you're going to destroy the universe and the machine says that's fine I'm just carrying on optimizing the objective that I know to be true and all of this stuff coming out of your mouth about destroying the universe is just so much hot air okay so this is what we want to avoid now if we make the objective unobservable right then we can no longer detach machine behavior and human behavior in fact they become tightly coupled to each other because the objective remains unobserved human behavior then is providing more information about the posterior distribution on him on what the human objective is and therefore the two things remain coupled so let me just go through a few examples and more will talk about the mathematical framework a little bit but I just want to sort of to give this one example to show how deeply ingrained in our current practices of AI this idea of the fixed objective really is so an image classification right what we do is we train we train up some Sofya deep learning system or random forest or logistic circuit or whatever it might be with a whole bunch of images and we minimize a loss function and usually that loss function is a combination of the the empirical performance on the training set and what we call the lost matrix the loss matrix says what is the cost of misclassifying an object of type a as an object of type B okay and in an image net there are 22,000 categories so that matrix is something like 480 million entries okay now how many people here do supervised machine learning or have done that in the last 5 years let's see so quite a lot right how many of you used a non uniform loss matrix just a couple right so this is standard practice partly because when we learn to do machine learning we're doing it on toy problems where in fact there is no real loss right this is just an attempt to see how accurately we can we can get our classifier to classify unseen examples of course in the real world there are differences in the losses between different kinds of errors and so when you accidentally classify a human being as a gorilla that's extremely expensive right so it probably costs Google in hundreds of millions of dollars making that one single mistake compared to you know misclassifying on Norfolk Terrier as a Norwich Terrier apparently there's a two different kinds of Terriers I looked at the pictures in image net I couldn't tell the difference I don't know who really cares about that and I can't imagine either kind of Terrier being that upset about about that error so so instead right you have to think okay we don't really know what the loss matrix is and so no one has sat down and typed in all 480 million entries of that loss matrix and even if they did they'd probably get it wrong so we have to operate with uncertainty over the loss matrix and that's a different kind of machine learning than the she learning that we've been doing so far and the algorithm should be able to decide for example based on that uncertainty well okay for this case I think it's alright to go ahead and you know issue a classification 
in public so to speak in our public facing classifier and in other cases to say I really don't know I'd prefer not to say what I think this might be because it could be an expensive mistake and in fact apparently if you ask the Google photo classifier now to classify a photograph of a very obvious gorilla it will say I'm really not sure what I'm looking at here so they've kind of put that in by hand so and then of course the algorithm should then use active learning techniques to try to refine its knowledge of the loss matrix so that it can be more effective in more cases let's look at another example right so in in this view point where the machine has radical uncertainty about human preferences what is it what is it supposed to do what is it supposed to do when it's given an instruction right now traditionally in AI we've always taken the human instruction as the objective right this becomes the machine's life purpose to carry out that objective and we've only as in fact Norbert Wiener pointed out we've only escaped the negative consequences of that mindset because of our machines are too stupid to do anything seriously bad at least that's been true up until the last few years and now we've seen consequences like okay you want to maximize click-through sure I'll maximize click-through for you while destroying Western democracy no problem right so when you actually think about what does this mean and there's been plenty of study of this in in in branches of philosophy for example in the sort of Grice Ian analysis of requests and commands what do they really mean so what fetch the coffee actually means is probably something like I'd rather have coffee there no coffee all other things being equal rights okay surest paribus conditions are a fairly good way to capture what we mean by the expression of an instruction or request as Murli there's more to it than that it probably also conveys feasibility because I wouldn't have asked for coffee if I thought that fetching coffee was infeasible or ridiculously expensive to achieve and therefore the fact that I've asked for coffee is information that it probably isn't that expensive or infeasible to get the coffee now given a request like that if the machine doesn't know for example you know your preferences for classifying images of gorillas is it going to be paralyzed by that uncertainty no as long as it can fetch the coffee without classifying any images of gorillas either way then that means it's still safe to fetch the coffee so even in the face of radical uncertainty about human preferences machines can still be useful because they will still be able to carry out plans that make only sort of minimally invasive changes to the world and in fact if you look at this from the other direction right so you can ask you know why why do we have this sense that doing nothing is generally a kind of a safe thing obviously there are occasions where it's not the best thing to do but doing you know a machine that does nothing is probably a safer thing than a machine that acts randomly now from the point of view of you know Markov decision process theory doing nothing is just one of the key actions available to the machine and it doesn't there's nothing that distinguishes it a priori from the other K minus one actions but why do we have this feeling and the answer is I think because the universe is not arranged randomly in some sense the universe is sampled from the stationary distribution that results from a whole bunch of purposive agents with preferences 
operating on the universe and so what we see for example you think about this row of chairs here right I can't see any human beings behaving here right they've all gone and the people who put the chairs there have all gone but the fact that the chairs are there tells you something about human preferences and so in fact contrary to the philosophical view that which is called the naturalistic fallacy that you can't observe purpose in the world you can observe purpose in the non natural part of the world right because the non natural part of world is exactly the result of agents operating with preferences and so so this is a paper with anchor and row in and a few other people here and it's an very interesting example of how much rich preference information is available in the world for agents to take advantage of so let's look at a canonical problem from the sort of the newspaper view of AI that somehow we can always just switch off the machine right now in the in the story that we see in 2001 a Space Odyssey right how doesn't want to be switched off and it's actually quite difficult so in the process he offs one of the astronauts but eventually he does get switched off but a machine that's sufficiently intelligent that has a fixed objective is simply going to find ways to prevent you from switching it off and it's the uncertainty about preferences that actually allows us to switch off the machine so let's see exactly how that works right so here's the here's the robot and the robot has a choice it can do something let's say turn the world into a huge field of computer or something like that which it might think is probably a good idea but actually it's quite uncertain right so I've I've captured that that idea by distribution probability distribution over the value of the action so what is the utility of this action to the human the machine actually really isn't sure but let's assume that the expected value the the average of this distribution is slightly positive so we can also give the Machine a choice just to commit suicide it could say you know I'm such a dangerous machine I should just turn myself off and we'll define that to have value zero because then the machine is not going to do anything at all now if the if these two choices are the only things available to the machine then it will choose to do this action it will cover the world with computron Yemen and finish the finish off the human race so let's give the Machine a third choice we can say all right it can now wait and allow the human to switch it off the question is why would it do that right and in the classical view it won't it'll have its objective and it'll choose whichever action maximally achieves the objective now let's so we give the human the choice it can switch the robot off and then that has a value 0 or it can say go ahead okay well what happens if the human says go ahead it seems like we're just back in the same situation we were in before so what difference does this make well what difference it makes actually is that if the human says go ahead it's because the action is not terribly deleterious to the human right so the negative part of the utility distribution has been zeroed out ok and now when the robot takes this action it knows for sure that it has positive utility for the human and if you do the math it's very simple you can prove a theorem that says that in fact a robot that's designed this way with explicit uncertainty over human preferences is provably benefit to the human being and so the deference 
right the fact that the machine will allow you to switch it off that deference comes directly from the uncertainty about the objective and if you eliminate that uncertainty the deference goes away the machine will no longer allow you to switch it off and you have all the problems that people envisage so even though this is a very simple example I think this is the core concept that is going to allow us to have deferential but super intelligent AI systems in the future so let's talk a little bit about the third principle the idea that human behavior is the source of information about human preferences so there's already a field called inverse reinforcement learning which goes back a couple of decades and the idea is that by observing the behavior of an agent that is presumably optimizing some objective function some reward you can infer what that reward actually is and there's now a well-established theory there are lots of practical demonstrations of this technology we actually want a slightly different version of that so in inverse reinforcement learning or IRL the machine is effectively simply observing the human through a two-way mirror if you like the human is just doing their thing and then the machine adopts whatever value function whatever reward function it learns from the human we don't exactly want that that works for some tasks but for example if the human is drinking coffee we don't want the machine to start drinking coffee that wouldn't be a good idea so IRL is the machine looking at the human through the window so to speak and learning the reward function in cooperative IRL the human and the machine are together in the same environment and so this becomes not single agent AI not standard machine learning or inverse reinforcement learning but a game theoretic situation and this seems to me inevitable and just comes from this coupling of human and machine behavior so in the CIRL game the human has some preferences in a generalized sense we'll call those preferences theta and we'll assume that there's some connection between theta and the behavior it doesn't have to be a perfect connection and then the goal of the robot is to maximize according to those preferences theta but they are unknown to the machine in the Bayesian view of CIRL the machine has a prior and then in the course of the game the prior is updated and then the machine acts according to that so when you solve these games all right you just set up the game like this and you solve it and you can put you know you can make different environments with different actions available you could have communicative possibilities and so on what you see is that the human now has an incentive to teach the robot the robot has an incentive to defer to the human to ask questions and so on and so you get I think the kinds of behaviors that we hope our intelligent systems will exhibit in the future and there's a lot of technical work going on now in this framework algorithms for solving these kinds of games efficiently generalizations to cases where the human itself is uncertain about their own preferences which is a common situation in the real world and so on we're also looking at what happens when we generalize this to many humans and this is an extremely important case obviously because there is more than one human being in the world and then you get into all the problems of how you weigh human preferences and there's a long literature on this obviously I think going back at least to
the 5th century BC and the Chinese philosopher Mozi who basically developed the first version of egalitarian utilitarianism wasn't very popular with the aristocracy and so Harsanyi for example has a very famous theorem showing that any Pareto optimal policy that's enacted on behalf of several people is going to be a linear combination of their preferences and the weights in that linear combination can depend on basically the opportunities that each human has to defect from this organization and get something better elsewhere so a bargaining position but that made an assumption of a uniform prior that everyone shares so everyone has to believe the same thing about the world of course we don't all believe the same thing about the world and Critch recently came up with this idea that you could look at what happens when you have different priors when people have different beliefs about the world it turns out in that case that you get a very weird conclusion that the weights that you accord to each person's preferences evolve according to how well their prior predicts what actually happens in the universe so in some sense this says that the smart people or the lucky people who made good predictions about the future are going to end up with higher weights in this shared social welfare function and that's a very weird conclusion to draw and I haven't yet understood its consequences for political theory but it seems to be a theorem now as Peter Eckersley pointed out in his poster yesterday there have been many papers over actually centuries pointing out sort of bugs in the simple objective of maximizing total happiness the utilitarian ideal and those include for example Nozick's work on utility monsters a utility monster is someone whose utility scale is so large that their preferences end up outweighing the preferences of everybody else so you might call them snowflakes but this is a very real problem not because I think there are utility monsters but because everybody has an incentive to behave as if they are utility monsters so another interesting problem is if you build robots that weigh everyone's preferences equally the first thing they're going to do is go off to Somalia and save people's lives who are dying of starvation which is great except that you just paid eighty thousand dollars for that robot and now it's disappeared so you're not going to be very happy about that so these are some of the interesting things that come up the other interesting thing that we have to deal with in the research agenda is the fact that the connection between human preferences and human behavior is not one of perfect rationality it's in fact extremely complicated so we're trying to invert the human cognitive architecture so that the machines learn underlying human preferences despite the fact that those preferences are obscured by myopia by sort of short-term thinking by fear by emotion and so on so I'm not going to have time because Victoria's gonna kill me if I explain all of those incredibly fascinating topics I'm happy to discuss those afterwards I highly recommend Daniel Kahneman's book Thinking Fast and Slow particularly the last section which talks about what human preferences really are and the fact that preferences as you experience life are very different from preferences as you remember your experiences from the past and the person who chooses the future is not the experiencing self it's the remembering self right you choose the
future on the basis of your memory of how different experiences played out in the past and your memory is simply false in many cases it seems and you know anyone who's given birth to children apparently has this they remember it as a joyful occasion despite experiencing it as incredibly painful and humiliating in most cases so this is a very complicated thing and it actually makes a real difference to for example how much money we spend on dialysis do you ask the person who's experiencing the dialysis or the person who is now choosing whether to continue dialysis and you get different answers from those two people okay so to summarize I think we have to work on the assumption that AI will overtake human abilities and I believe that despite that we can have a super intelligent AI that is provably beneficial to humans and this is obviously a desirable technology and I think we should stop talking about AI safety right this is just what we mean by AI right we don't go around saying bridges that don't fall down we just call them bridges right it's sort of part of the meaning of the word bridge that it doesn't fall down it should be part of the meaning of the word AI that it is beneficial to the people who are creating it otherwise it's just bad AI right so there's a long research agenda that we're engaged in again I don't have time but I want to point out that this is not the only problem that we face right this value alignment or misalignment problem is not the only problem that we face there's still a serious issue of what we call the Dr. Evil problem so the misuse of AI by someone who doesn't really care about value alignment or any of those things who just wants to take over the world it's not a new problem we have the same thing with nuclear weapons and we have a vast security apparatus to try to prevent Dr. Evil from getting hold of nuclear weapons and then the WALL-E problem right it's not someone misusing AI to destroy us it's us misusing AI to destroy ourselves in a sense enfeebling our own civilization by handing over its management to machines in a way that's irreversible thank you [Music]
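The off-switch argument in the talk can be made concrete with a few lines of arithmetic, in the spirit of Hadfield-Menell et al.'s off-switch game. The prior below is invented; the structure, a robot choosing between acting, switching itself off, and deferring to a human who disables it exactly when the action would have negative utility, follows the transcript:

```python
# A numeric sketch of the off-switch argument above, with an invented
# discrete prior over the human's utility U for the robot's proposed action.
# Acting now yields E[U]; switching itself off yields 0; deferring yields
# U when the (assumed rational) human approves, i.e. exactly when U > 0.

prior = {-1.0: 0.4, 0.5: 0.3, 1.0: 0.3}   # P(U = u); the mean is slightly positive

act_now    = sum(p * u for u, p in prior.items())           # E[U] = 0.05
switch_off = 0.0
defer      = sum(p * u for u, p in prior.items() if u > 0)  # E[U * 1{U>0}] = 0.45

print(act_now, switch_off, defer)
# Deferring dominates both alternatives; its advantage over acting now is
# exactly the probability mass on negative U. Remove the uncertainty and
# the incentive to leave the switch in human hands disappears.
```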
16e8a2a1-16fa-46b7-92fc-e4bf912bd8e8
StampyAI/alignment-research-dataset/aisafety.info
AI Safety Info
Isn’t it immoral to control and impose our values on AI? You might be imagining an AI slave being forced to act against its will. But that isn’t what we mean by giving an AI a set of values. Programming an AI is not imposing goals on an existing unwilling agent; rather, it is deciding in advance the values of the agent which will arise.[^kix.4lqfoa9a3yxo] It doesn’t seem immoral to create something and direct it to certain values and goals (as long as those values are moral). For example, people raise their children in a way which educates them towards certain values, and rather than being unethical, that seems to be a prerequisite to maintaining an ethical society over time. Ensuring that any AI does not have goals that would harm sentient life is also an ethical imperative. Even if there were an ethical problem with choosing an AI’s goals for it, preventing the extinction of humanity seems like a higher ethical demand. Furthermore, if we lost control to agents that didn’t value avoiding harm to sentient life, this would be bad for any other sentient AI, just as it would be bad for humans. All of the above assumes that the AI in question is a [moral patient](http://ethicsofglobalresponsibility.blogspot.com/2008/02/moral-patients.html). But it is uncertain whether an AI will be a sentient being. Even though we use the word ‘values’, a word that suggests conscious evaluation, to describe an AI’s goals, that word only metaphorically refers to whatever the agent is optimizing for, whether or not it is a conscious entity. [^kix.4lqfoa9a3yxo]: If AI values arise from multiple stages of training (as in some current systems) instead of being directly programmed, this may complicate the picture. We don’t know how future systems will work when they are sufficiently mind-like for these concerns to become relevant, but they’ll probably end up genuinely wanting the goals they pursue instead of wanting one thing while feeling forced by something outside themselves to pursue another thing (perhaps analogous to a process of human education).
2d7df730-a5d5-4cd3-ba6b-aced412e84c9
trentmkelly/LessWrong-43k
LessWrong
Book Review: Eisenhorn Trilogy Spoiler Status: Very mild spoilers before the fold, full spoilers after the fold I read the Eisenhorn Trilogy on the recommendation of my friend Raymond Arnold, who bought me the first book as a birthday present, presenting it as worth reading and as providing insight into Lovecraftian horror, and as something that would be fun for us to talk about. I did not love the books as much as he did, but that is to be expected. What I found was a fun romp through numerous action sequences that slowly starts asking interesting questions in between action sequences. In the Warhammer 40K setting, the warp allows travel through space, and also offers lots of other benefits if you are willing to engage with it, but also has the side effect of being full of mind-corrupting demons that drive you into becoming an insane murderous servant of chaos, so the Empire has to deal with people continuously going insane and forming cults that worship chaos demons and try to kill everyone. In response, they have the Inquisition (think the Spanish Inquisition, except the demons are real, have mind control powers and are the most efficient source of physical power) which goes around trying to keep down chaos, including all the Inquisitors who get a little too close to the chaos and turn insane, which happens a lot, and arguing over how engaged with chaos they should be willing to be in order to fight it. Inquisitors also go around fighting with energy guns and specially engraved swords, killing huge numbers of chaos-enhanced enemies while mostly dodging any attempts to kill them, and looking dope doing it. There is an obvious story one would tell about such an Inquisitor in this universe. If you’d like to read that story, the books aren’t bad. Beyond the fold, SPOILERS in the service of discussing the trilogy’s slash universe’s practical and ethical dilemmas. > ‘Have you ever noticed that anybody driving slower than you is an idiot, and anyone going faster than you is a maniac?’ – Georg
00eb63df-c0a1-4fdc-aaf2-adabae1b2f55
trentmkelly/LessWrong-43k
LessWrong
Meetup : LessWrong Scotland Discussion article for the meetup : LessWrong Scotland WHEN: 28 June 2015 02:00:00PM (+0100) WHERE: Biblos, 1 Chambers Street, Edinburgh EH1 1HR The location may change if someone suggests another place, but it will be within a few minutes walk of Waverley station. This time we'll be discussing the next section of the Rationality book. It's section C, Noticing Confusion. This time I will do something on Fermi estimation, since I have to deliver a talk on it before the meeting! However, I'm not sure if it will be as good as our discussion on the length of a piece of string last time! As usual, please pass this on to anyone who might be interested. =========== This post is effectively a mirror. The definitive version is here: https://www.facebook.com/events/873290099409748/ Discussion article for the meetup : LessWrong Scotland
1d6dfbed-95e3-40da-8a4f-030f627d52c0
trentmkelly/LessWrong-43k
LessWrong
Secret Collusion: Will We Know When to Unplug AI? TL;DR: We introduce the first comprehensive theoretical framework for understanding and mitigating secret collusion among advanced AI agents, along with CASE, a novel model evaluation framework. CASE assesses the cryptographic and steganographic capabilities of agents, while exploring the emergence of secret collusion in real-world-like multi-agent settings. Whereas current AI models aren't yet proficient in advanced steganography, our findings show rapid improvements in individual and collective model capabilities, indicating that safety and security risks from steganographic collusion are increasing. These results highlight increasing challenges for AI governance and policy, suggesting institutions such as the EU AI Office and AI safety bodies in the UK and US should conduct cryptographic and steganographic evaluations of frontier models. Our research also opens up critical new pathways for research within the AI Control framework. Philanthropist and former Google CEO Eric Schmidt said in 2023 at a Harvard event: > "[...] the computers are going to start talking to each other probably in a language that we can't understand and collectively their super intelligence - that's the term we use in the industry - is going to rise very rapidly and my retort to that is: do you know what we're going to do in that scenario? We're going to unplug them [...] But what if we cannot unplug them in time because we won't be able to detect the moment when this happens? In this blog post, we, for the first time, provide a comprehensive overview of the phenomenon of secret collusion among AI agents, connect it to foundational concepts in steganography, information theory, distributed systems theory, and computability, and present a model evaluation framework and empirical results as a foundation of future frontier model evaluations. This blog post summarises a large body of work. First of all, it contains our pre-print from February 2024 (updated in September 2024) "Secret Collusi
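As a toy illustration of the underlying idea (not the paper's CASE framework or its actual evaluation code), here is the simplest possible text-based steganographic channel: two agents sharing a codebook can pass bits through word choices that look innocuous to a monitor.

```python
# A toy steganographic channel: each position offers two equally plausible
# synonyms, and the choice between them encodes one bit of hidden message.

CODEBOOK = [("hello", "hi"), ("results", "findings"), ("good", "solid")]

def embed(bits):
    return " ".join(pair[b] for pair, b in zip(CODEBOOK, bits))

def extract(text):
    return [pair.index(word) for pair, word in zip(CODEBOOK, text.split())]

msg = embed([1, 0, 1])       # -> "hi results solid"
print(msg, extract(msg))     # a monitor sees ordinary words; the bits ride along
```

Real steganographic collusion between language models is far subtler than a fixed codebook, which is exactly why the capability evaluations described above are needed.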
bb532327-80c6-47b2-b173-52792f7a6211
trentmkelly/LessWrong-43k
LessWrong
Pro-nice and anti-nice EDIT: This post is pretty flawed, but please read the comments anyway: I'm hoping to rework it into something that catches the idea better.  You can view a lot of value differences along a pro-nice/anti-nice spectrum. Pro-nice people (I'm one) gravitate to obviously pleasant, lovely, happy experiences.  We like kittens and puppies and rainbows. We like transparently "happy" music and transparently "beautiful" works of art and literature.  (If you like Romantic poetry and science fiction, but not contemporary novels, you might be pro-nice.)  We prefer the positive social emotions, like sympathy, encouragement, and teamwork.  We may choose intellectual interests based on the fact that they make our brains feel good.  We tend to be drawn towards proposals for making the world wonderful.  Pro-nice people aren't quite the same thing as optimists.  An optimist tends to anticipate that things will turn out well, or look on the bright side.  But pro-nice people may well hold pessimistic ideas or have melancholy temperaments.  Pro-nice is a preference for the positive.  A typical pro-nice attitude is "Humanity may be destructive and cruel, but the one time when we're at our best is when we're doing science.  Science is lovely.  I think I'll be a scientist."   Anti-nice people have a preference for the difficult.  They find pro-nice preferences saccharine.  They like artistic expressions that have a challenging or negative "mood."  They prefer the negative social emotions, like antagonism, sarcasm, and cynicism.  They dislike things that have obvious appeal, or things that everyone finds pleasant.  As far as social issues go, they take a keen interest in potential catastrophes and what must be done to avert them; they generally aren't drawn to proposals to "make the world a better place."  Again, anti-nice people aren't necessarily pessimists or unhappy people. Anti-nice people prefer to direct their attention to the challenging, the problematic, the worst-case scenario.
605694b2-b87a-4dcb-80e3-3218598bf434
StampyAI/alignment-research-dataset/arxiv
Arxiv
A Logic for Reasoning about Upper Probabilities UAI2001 HALPERN & PUCELLA 203 A Logic for Reasoning about Upper Probabilities Joseph Y. Halpern Department of Computer Science Cornell University Ithaca, NY 14853 halpern@cs.cornell.edu http://www. cs .cornell.edulhome/halpern Abstract We present a propositional logic to reason about the uncertainty of events, where the uncertainty is modeled by a set of probability measures as­ signing an interval of probability to each event. We give a sound and complete axiomatizatio n for the logic, and show that the satisfiability problem is NP-complete, no harder than satisfiability for propositional logic. 1 INTRODUCTIO N Various measures exist that attempt to quantify uncertainty. For many trained in the use of probability theory, probabil­ ity measures are an obvious choice. However, probability measures have difficulties dealing with certain situations of interest. For instance, while probabilities can model the direct uncertainty of an event happening, it is not always clear how to model higher-order uncertainty, that is, the un­ certainty related to the probabilities themselves. Consider a simple example: suppose we have a bag of 100 marbles; we know 30 are red and we know the remaining 70 are either blue or yellow, although we do not know the exact proportion of blue and yellow. If we are modeling the situ­ ation where we pick a ball from the bag at random, we need to assign a probability to three different events: picking up a red ball (red-event), picking up a blue ball (blue-event), and picking up a yellow ball (yellow- event). We can clearly assign a probability of .3 to red-event, but there is no clear probability to assign to blue-event or yellow-event. One way to approach this problem is to represent the un­ certainty using a set of probability measures, with a prob­ ability measure for each possible proportion of blue and yellow balls. For instance, we could use the set of proba­ bilities P = {t.to: : a E [0, .7]}, where tla gives red-event probability .3, blue-event probability a, and yellow-event probability . 7 -a. To any set of probabilities P we can assign a pair of functions, the upper and lower probability Riccardo Pucella Department of Computer Science Cornell University Ithaca, NY 14853 riccardo@cs.comell.edu measure, that for an event X give the supremum (respec­ tively, the infimum) of the probability of X according to the probability measures in P. These measures can be used to deal with uncertainty in the manner described above, where the lower and upper probability of an event defines a range of probability for that event.1 Note that this is not the only way to model the situation. An alternative approach, using inner measures, is studied in [Fagin and Halpern 1991]. Given a measure of uncertainty, one can define a logic for reasoning about it. Fagin, Halpern and Megiddo [1990] introduce a logic for reasoning about probabili ties, with a possible-worlds semantics that assigns a probability to each possible world. They provide an axiomatization for the log­ ic, which they prove sound and complete with respect to the semantics. They also show that the satisfiability problem for the logic, somewhat surprisingly, is NP-complete, and hence no harder than the satisfiability problem for propo­ sitional logic. They moreover show how their logic can be extended to other notions of uncertainty, such as inner measures [Fagin and Halpern 1991] and Dempster-Shafer belief functions [Shafer 1976]. 
In this paper, we describe a logic for reasoning about upper probability measures, along the lines of the logic introduced in [Fagin, Halpern, and Megiddo 1990]. The main challenge is to derive a provably complete axiomatization of the logic; to do this, we need a characterization of upper probability measures in terms of properties that can be expressed in the logic. Many semantic characterizations of upper probability measures have been proposed in the literature. The characterization of Anger and Lembcke [1985] turns out to be best suited for our purposes. Even though we are reasoning about potentially infinite sets of probability measures, the satisfiability problem for our logic remains NP-complete. Intuitively, we need to guess only a small number of probability measures to satisfy any given formula, polynomially many in the size of the formula. Moreover, these probability measures can be taken to be defined on a finite state space, again polynomial in the size of the formula. Thus, we basically need to determine polynomially many values (a value for each probability measure at each state) to decide the satisfiability of a formula.

¹ An alternate view of upper probabilities originates from assigning subjective probabilities to events by testing and finding at what odds a person is prepared to bet against them. This gives rise, given suitable assumptions, to an equivalent formulation of lower and upper probability measures [Smith 1961; Walley 1991].

The rest of this paper is structured as follows. In Section 2, we review the required material from probability theory and the theory of upper probabilities. In Section 3, we present the logic and an axiomatization. In Section 4, we prove that the axiomatization is sound and complete with respect to the natural semantic models expressed in terms of upper probability spaces. Finally, in Section 5, we prove that the decision problem for the logic is NP-complete. We leave out the proofs of the more technical results; these can be found in the full version of the paper [Halpern and Pucella 2001].

2 CHARACTERIZING UPPER PROBABILITY MEASURES

We start with a brief review of the relevant definitions. Recall that a probability measure is a function μ : Σ → [0, 1], for Σ an algebra of subsets of Ω (that is, Σ is closed under complements and unions), satisfying μ(∅) = 0, μ(Ω) = 1, and μ(A ∪ B) = μ(A) + μ(B) for all disjoint sets A, B in Σ.² A probability space is a tuple (Ω, Σ, μ), where Ω is a set, Σ is an algebra of subsets of Ω (the measurable sets), and μ is a probability measure defined on Σ.

Given a set P of probability measures, let P* be the upper probability measure³ defined by P*(X) = sup{μ(X) : μ ∈ P} for X ∈ Σ. Similarly, P_*(X) = inf{μ(X) : μ ∈ P} is the lower probability of X ∈ Σ. A straightforward derivation shows that the relationship P_*(X) = 1 − P*(X̄) holds between upper and lower probabilities, where X̄ is the complement of X in Ω. Because of this duality, we restrict the discussion to upper probability measures in this paper, with the understanding that results for lower probabilities can be similarly derived. Finally, an upper probability space is a tuple (Ω, Σ, P) where P is a set of probability measures on Σ.

² If Ω is infinite, we could also require that Σ be a σ-algebra (i.e., closed under countable unions) and that μ be countably additive.
Requiring countable additivity would not affect our results, since we show that we can take Ω to be finite. For ease of exposition, we have not required it.

³ In the literature, the term upper probability is sometimes used in a more restricted sense than here. For example, Dempster [1967] uses the term to denote a class of measures which were later characterized as Dempster-Shafer belief functions [Shafer 1976]; belief functions are in fact upper probability measures in our sense, but the converse is not true [Kyburg 1987]. In the measure theory literature, what we call upper probability measures are a special case of upper envelopes of measures, which are defined as the sup of sets of general measures, not just probability measures.

We would like a set of properties that completely characterizes upper probability measures. In other words, we would like a set of properties that allows us to determine whether a function f : Σ → [0, 1] (for an algebra Σ of subsets of Ω) is an upper probability measure, that is, whether there exists a set P of probability measures such that for all X ∈ Σ, P*(X) = f(X).

One approach to the characterization of upper probability measures is to adapt the characterization of Dempster-Shafer belief functions; these functions are known to be the lower envelope of the probability measures that dominate them, and thus form a subclass of the class of lower probability measures. By the duality noted earlier, a characterization of lower probability measures would yield a characterization of upper probability measures. The characterization of belief functions is derived from a generalization of the following inclusion-exclusion principle for probabilities (by replacing the equality with an inequality):

μ(A_1 ∪ ⋯ ∪ A_n) = Σ_{i=1}^{n} (−1)^{i+1} Σ_{J ⊆ {1,…,n}, |J| = i} μ(⋂_{j ∈ J} A_j).

It seems reasonable that a characterization of lower (or upper) probability measures could be derived along similar lines. As we now show, most properties derivable from the inclusion-exclusion principle (which include most of the properties reported in the literature) are insufficient to characterize upper probability measures.

Consider the following "inclusion-exclusion"-style properties (mainly taken from [Walley 1991]). To simplify the statement of these properties, let P^{−1} = P* and P^{+1} = P_*.

(1) P*(A_1 ∪ ⋯ ∪ A_n) ≤ Σ_{i=1}^{n} (−1)^{i+1} Σ_{J ⊆ {1,…,n}, |J| = i} P^{((−1)^i)}(⋂_{j ∈ J} A_j),

(3) P_*(A ∪ B) + P_*(A ∩ B) ≤ P_*(A) + P*(B) ≤ P*(A ∪ B) + P*(A ∩ B),

(4) P_*(A) + P_*(B) ≤ P_*(A ∪ B) + P*(A ∩ B) ≤ P*(A) + P*(B),

(5) P_*(A) + P_*(B) ≤ P_*(A ∩ B) + P*(A ∪ B) ≤ P*(A) + P*(B).

It is easily verified that the above properties hold for upper probability measures. The issue is whether they completely characterize the class of upper probability measures. We show the inherent incompleteness of these properties by proving that they are all derivable from the following simple property, which is by itself insufficient to characterize upper probability measures.

(6) If A ∩ B = ∅, then P*(A) + P_*(B) ≤ P*(A ∪ B) ≤ P*(A) + P*(B).

Proposition 2.1: Property (6) implies properties (1)-(5).

The following example shows the insufficiency of property (6). Let P be the set of probability measures {μ_1, μ_2, μ_3, μ_4} over Ω = {a, b, c, d} (with Σ containing all subsets of Ω), where each μ_i assigns a fixed rational probability to each singleton (in particular, μ_2(a) = μ_3(c) = μ_4(b) = 0) and is extended by additivity to all of Σ.
This defines an upper probability measure P* over Σ. Consider the function v_ε : Σ → [0, 1] defined by:

v_ε(X) = P*(X) + ε if X = {a, b, c}, and v_ε(X) = P*(X) otherwise.

We claim that the function v_ε, for small enough ε > 0, satisfies property (6), but cannot be an upper probability measure.

Proposition 2.2: For sufficiently small ε > 0, the function v_ε satisfies property (6), but is not an upper probability measure. That is, we cannot find a set P′ of probability measures such that v_ε = (P′)*.

This example clearly illustrates the need to go beyond the inclusion-exclusion principle to find properties that characterize upper probability measures. As it turns out, various complete characterizations have been described in the literature [Lorentz 1952; Huber 1976; Huber 1981; Williams 1976; Wolf 1977; Giles 1982; Anger and Lembcke 1985]. While all are equivalent in spirit, we focus on the characterization given by Anger and Lembcke [1985], because it is particularly well-suited to the logic presented in the next section.

The characterization is based on the notion of set cover: a set A is said to be covered n times by a multiset {{A_1, …, A_m}} of sets if every element of A appears in at least n sets from A_1, …, A_m: for all x ∈ A, there exist i_1, …, i_n in {1, …, m} such that for all j ≤ n, x ∈ A_{i_j}. It is important to note here that {{A_1, …, A_m}} is a multiset, not a set; the A_i's are not necessarily distinct. (We use the {{ }} notation to denote multisets.) An (n, k)-cover of (A, Ω) is a multiset {{A_1, …, A_m}} that covers Ω k times and covers A n + k times.

The notion of (n, k)-cover is the key concept in Anger and Lembcke's characterization of upper probability measures.

Theorem 2.3 [Anger and Lembcke 1985]: Let Ω be a set, Σ an algebra of subsets of Ω, and v a function v : Σ → [0, 1]. There exists a set P of probability measures with v = P* if and only if v satisfies the following three properties:

UP1. v(∅) = 0,
UP2. v(Ω) = 1,
UP3. for all integers m, n, k and all subsets A_1, …, A_m in Σ, if {{A_1, …, A_m}} is an (n, k)-cover of (A, Ω), then k + n·v(A) ≤ Σ_{i=1}^{m} v(A_i).

We need to strengthen Theorem 2.3 in order to prove the main result of this paper, namely, the completeness of the axiomatization of the logic we introduce in the next section. We show that if the cardinality of the state space Ω is finite, then we need only finitely many instances of property UP3. Notice that we cannot derive this from Theorem 2.3 alone: even if |Ω| is finite, UP3 does not provide any bound on m, the number of sets to consider in an (n, k)-cover of a set A. Indeed, there does not seem to be any a priori reason why the values of m, n, and k can be bounded. Bounding this value of m (and hence of n and k, since they are no larger than m) is one of the key technical results of this paper, and the necessary foundation of our work.

Theorem 2.4: There exist constants B_0, B_1, … such that if Σ is an algebra of subsets of Ω and v is a function v : Σ → [0, 1], then there exists a set P of probability measures such that v = P* if and only if v satisfies the following properties:

UPF1. v(∅) = 0,
UPF2. v(Ω) = 1,
UPF3. for all integers m, n, k ≤ B_{|Ω|} and all sets A_1, …, A_m, if {{A_1, …, A_m}} is an (n, k)-cover of (A, Ω), then k + n·v(A) ≤ Σ_{i=1}^{m} v(A_i).

Property UPF3 is significantly weaker than UP3. In principle, checking that UP3 holds for a given function requires checking that it holds for arbitrarily large collections of sets, even if the underlying set Ω is finite.
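Each individual instance of UP3, by contrast, is a purely combinatorial check that is easy to carry out for finite Ω. The following sketch (helper names ours, not the paper's) tests whether a multiset of sets is an (n, k)-cover and whether a candidate function v satisfies the corresponding instance of UP3:

```python
def covers(target, multiset, n):
    """True iff every element of `target` lies in at least n of the sets.
    `multiset` is a list of sets; duplicates are allowed, as in a multiset."""
    return all(sum(x in A for A in multiset) >= n for x in target)

def is_nk_cover(A, omega, multiset, n, k):
    """(n, k)-cover of (A, omega): covers omega k times and A n+k times."""
    return covers(omega, multiset, k) and covers(A, multiset, n + k)

def up3_instance_holds(v, A, multiset, n, k):
    """Check k + n*v(A) <= sum_i v(A_i) for one (n, k)-cover instance.
    `v` maps frozensets to values in [0, 1]."""
    return k + n * v[frozenset(A)] <= sum(v[frozenset(Ai)] for Ai in multiset)

# Example: over omega = {a, b, c}, the multiset {{a,b}, {b,c}, {a,c}}
# covers omega twice, so it is a (0, 2)-cover of (anything, omega).
omega = {"a", "b", "c"}
ms = [{"a", "b"}, {"b", "c"}, {"a", "c"}]
print(is_nk_cover({"a"}, omega, ms, 0, 2))  # True
```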
UPF3, on the other hand, guarantees that it is in fact sufficient to look at collections of size at most B_{|Ω|}. This observation is key to the completeness result. It is not important for our purposes (namely, to get completeness of the axiomatization of the logic introduced in the next section) what the actual values of B_0, B_1, … are; it is sufficient for them to exist and be finite. The proof of Theorem 2.4 relies on a Ramsey-theoretic argument that does not provide a bound on the B_i's.

3 THE LOGIC

The syntax for the logic is straightforward, and is taken from [Fagin, Halpern, and Megiddo 1990]. We fix a set Φ_0 = {p_1, p_2, …} of primitive propositions. The set Φ of propositional formulas is the closure of Φ_0 under ∧ and ¬. We assume a special propositional formula true, and abbreviate ¬true as false. We use p to represent primitive propositions, and φ and ψ to represent propositional formulas. A term is an expression of the form θ_1 ℓ(φ_1) + ⋯ + θ_k ℓ(φ_k), where θ_1, …, θ_k are reals and k ≥ 1. A basic likelihood formula is a statement of the form t ≥ α, where t is a term and α is a real. A likelihood formula is a Boolean combination of basic likelihood formulas. We use f and g to represent likelihood formulas. We use obvious abbreviations where needed, such as ℓ(φ) − ℓ(ψ) ≥ α for ℓ(φ) + (−1)ℓ(ψ) ≥ α, ℓ(φ) ≥ ℓ(ψ) for ℓ(φ) − ℓ(ψ) ≥ 0, ℓ(φ) ≤ α for −ℓ(φ) ≥ −α, ℓ(φ) < α for ¬(ℓ(φ) ≥ α), and ℓ(φ) = α for (ℓ(φ) ≥ α) ∧ (ℓ(φ) ≤ α). Define the length |f| of the likelihood formula f to be the number of symbols required to write f, where each coefficient is counted as one symbol.

We assign a semantics to likelihood formulas through an upper probability space, as defined in Section 2. Formally, an upper probability structure is a tuple M = (Ω, Σ, P, π), where (Ω, Σ, P) is an upper probability space and π associates with each state (or world) in Ω a truth assignment on the primitive propositions in Φ_0. Thus, π(s)(p) ∈ {true, false} for s ∈ Ω and p ∈ Φ_0. Let [p]_M = {s ∈ Ω : π(s)(p) = true}. We call M measurable if for each p ∈ Φ_0, [p]_M is measurable. If M is measurable then [φ]_M is measurable for all propositional formulas φ. In this paper, we restrict our attention to measurable upper probability structures. Extend π(s) to a truth assignment on all propositional formulas in the standard way, and associate with each propositional formula the set [φ]_M = {s ∈ Ω : π(s)(φ) = true}. An easy structural induction shows that [φ]_M is a measurable set. If M = (Ω, Σ, P, π), let

M ⊨ θ_1 ℓ(φ_1) + ⋯ + θ_k ℓ(φ_k) ≥ α iff θ_1 P*([φ_1]_M) + ⋯ + θ_k P*([φ_k]_M) ≥ α,
M ⊨ ¬f iff M ⊭ f,
M ⊨ f ∧ g iff M ⊨ f and M ⊨ g.

Note that the logic can express lower probabilities: it follows from the duality between upper and lower probabilities that M ⊨ −ℓ(¬φ) ≥ β − 1 iff P_*([φ]_M) ≥ β.⁴

⁴ Another approach, more in keeping with [Fagin, Halpern, and Megiddo 1990], would be to interpret ℓ as a lower probability measure. On the other hand, interpreting ℓ as an upper probability measure is more in keeping with the literature on upper probabilities.
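Since we restrict attention to measurable structures, and finite Ω suffices for satisfiability (Section 5), the satisfaction clauses can be evaluated directly. A minimal sketch (our names, not the paper's notation) for checking a basic likelihood formula θ_1 ℓ(φ_1) + ⋯ + θ_k ℓ(φ_k) ≥ α:

```python
# Evaluate basic likelihood formulas in a finite upper probability structure.

def extension(phi, worlds, pi):
    # [phi]_M: the worlds whose truth assignment makes phi true.
    # phi is represented as a predicate over a truth-assignment dict.
    return {w for w in worlds if phi(pi[w])}

def upper_prob(X, measures):
    return max(sum(mu[w] for w in X) for mu in measures)

def satisfies(terms, alpha, worlds, pi, measures):
    # M |= theta_1 l(phi_1) + ... + theta_k l(phi_k) >= alpha
    total = sum(theta * upper_prob(extension(phi, worlds, pi), measures)
                for theta, phi in terms)
    return total >= alpha

# Two worlds, p true only in w1; two probability measures.
worlds = ["w1", "w2"]
pi = {"w1": {"p": True}, "w2": {"p": False}}
measures = [{"w1": 0.2, "w2": 0.8}, {"w1": 0.6, "w2": 0.4}]

print(satisfies([(1, lambda v: v["p"])], 0.5, worlds, pi, measures))  # True: P*([p]) = 0.6
# Upper probabilities need not be additive: l(p) + l(not p) = 0.6 + 0.8 = 1.4,
# so l(p) + l(not p) >= 1.3 holds.
print(satisfies([(1, lambda v: v["p"]), (1, lambda v: not v["p"])], 1.3,
                worlds, pi, measures))  # True
```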
Consider the following axiomatization AX^up for likelihood formulas, which we prove sound and complete in the next section. As in [Fagin, Halpern, and Megiddo 1990], AX^up is divided into three parts, dealing respectively with propositional reasoning, reasoning about linear inequalities, and reasoning about upper probabilities.

Propositional reasoning:
Taut. All instances of propositional tautologies.
MP. From f and f ⇒ g infer g.

Reasoning about linear inequalities:
Ineq. All instances of valid formulas about linear inequalities (see below).

Reasoning about upper probabilities:
L1. ℓ(false) = 0,
L2. ℓ(true) = 1,
L3. ℓ(φ) ≥ 0,
L4. ℓ(φ_1) + ⋯ + ℓ(φ_m) − n·ℓ(φ) ≥ k if φ ⇒ ⋁_{J ⊆ {1,…,m}, |J| = k+n} ⋀_{j ∈ J} φ_j and ⋁_{J ⊆ {1,…,m}, |J| = k} ⋀_{j ∈ J} φ_j are propositional tautologies,
L5. ℓ(φ) = ℓ(ψ) if φ ⇔ ψ is a propositional tautology.

The only difference between AX^up and the axiomatization for reasoning about probability given in [Fagin, Halpern, and Megiddo 1990] is that the axiom ℓ(φ ∧ ψ) + ℓ(φ ∧ ¬ψ) = ℓ(φ) of [Fagin, Halpern, and Megiddo 1990], which expresses the additivity of probability, is replaced by L4. Although it may not be immediately obvious, L4 is the logical analogue of UP3. To see this, first note that {{A_1, …, A_m}} covers A n times if and only if A ⊆ ⋃_{J ⊆ {1,…,m}, |J| = n} ⋂_{j ∈ J} A_j. Thus, the formula φ ⇒ ⋁_{J ⊆ {1,…,m}, |J| = k+n} ⋀_{j ∈ J} φ_j says that φ (more precisely, the set of worlds where φ is true) is covered k + n times by {{φ_1, …, φ_m}}, while ⋁_{J ⊆ {1,…,m}, |J| = k} ⋀_{j ∈ J} φ_j says that the whole space is covered k times by {{φ_1, …, φ_m}}; roughly speaking, {{[φ_1], …, [φ_m]}} is an (n, k)-cover of ([φ], [true]). The conclusion of L4 thus corresponds to the conclusion of UP3.

Instances of Taut include all formulas of the form f ∨ ¬f, where f is a likelihood formula. We could replace Taut by a simple collection of axioms that characterize propositional reasoning (see, for example, [Mendelson 1964]), but we have chosen to focus on aspects of reasoning about upper probability.

As in [Fagin, Halpern, and Megiddo 1990], the axiom Ineq includes "all valid formulas about linear inequalities." Roughly speaking, an inequality formula is a formula of the form a_1 x_1 + ⋯ + a_n x_n ≥ c over variables x_1, …, x_n. The formula is said to be true if we satisfy the resulting inequality when we assign a real number to each variable of the formula. As usual, a formula is valid if it is true under every possible assignment of real numbers to variables. To get an instance of Ineq, we replace each variable x_i that occurs in a valid formula about linear inequalities by a primitive likelihood term of the form ℓ(φ_i) (naturally, each occurrence of the variable x_i must be replaced by the same primitive likelihood term ℓ(φ_i)). As with Taut, we can replace Ineq by a sound and complete axiomatization for Boolean combinations of linear inequalities. One such axiomatization is given in [Fagin, Halpern, and Megiddo 1990].
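The correspondence between L4 and UP3 can be exercised concretely: an instance of L4 is just an (n, k)-cover condition on the extensions of the φ_j's. Reusing the cover-checking sketch from Section 2 (illustrative names, not the paper's), one can generate and verify candidate L4 instances over a finite structure:

```python
from itertools import combinations

def l4_side_condition(phi_ext, phi_j_exts, omega, n, k):
    """Check the semantic version of L4's side condition: the multiset of
    extensions {{[phi_1], ..., [phi_m]}} is an (n, k)-cover of ([phi], omega),
    i.e. it covers omega k times and [phi] n+k times."""
    def covers(target, sets, times):
        return all(sum(x in A for A in sets) >= times for x in target)
    return covers(omega, phi_j_exts, k) and covers(phi_ext, phi_j_exts, n + k)

# If the side condition holds, soundness of L4 says that in every structure
# sum_j P*([phi_j]) - n * P*([phi]) >= k.
omega = {1, 2}
print(l4_side_condition({1}, [{1}, {1, 2}], omega, n=1, k=1))  # True
```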
4 SOUNDNESS AND COMPLETENESS

A likelihood formula f is provable from F, for F a set of formulas, if it can be proven using the axioms and rules of inference, along with the formulas in F. In the special case where F is empty, we say that f is simply provable. An axiom system is sound if every provable formula is valid. An axiom system is complete if every valid formula is provable. Our goal is to prove that AX^up is a sound and complete axiomatization for reasoning about upper probability.

The soundness of AX^up is immediate from our earlier discussion. Completeness is, as usual, harder. Unfortunately, the standard technique for proving completeness in modal logic, which involves considering maximal consistent sets and canonical structures (see, for example, [Popkorn 1994]), does not work. We briefly review the approach, just to point out the difficulties.

The standard approach uses the following definitions. A formula σ is consistent with an axiom system AX if ¬σ is not provable from AX. A finite set of formulas {σ_1, …, σ_n} is consistent with AX if the formula σ_1 ∧ ⋯ ∧ σ_n is consistent with AX; an infinite set of formulas is consistent with AX if all its finite subsets are consistent with AX. A maximal AX-consistent set of formulas F is a set of formulas consistent with AX with the property that for any formula σ ∉ F, F ∪ {σ} is not consistent with AX. Using just axioms of propositional logic, it is not hard to show that an AX-consistent set of formulas can be extended to a maximal AX-consistent set of formulas. To show that AX is a complete axiomatization with respect to some class of structures M, we must show that every formula that is valid in every structure in M is provable in AX. To do this, it is sufficient to show that every AX-consistent formula σ is satisfiable in M. Typically, this is done by constructing what is called a canonical structure M_c in M whose states are the maximal AX-consistent sets, and then showing that a formula σ is satisfied in a world w of M_c iff σ is one of the formulas in the canonical set associated with world w.

Unfortunately, this approach cannot be used to prove completeness here. To see this, consider the set of formulas

F′ = {ℓ(φ) ≤ 1/n : n = 1, 2, …} ∪ {ℓ(φ) > 0}.

This set is clearly AX^up-consistent according to our definition, since every finite subset is satisfiable and AX^up is sound. It thus can be extended to a maximal AX^up-consistent set F. However, the set F′ of formulas is not satisfiable: it is not possible to assign ℓ(φ) a value that will satisfy all the formulas at the same time. Hence, F is not satisfiable. Thus, the canonical model approach, at least applied naively, simply will not work.

We take a different approach here, similar to the one taken in [Fagin, Halpern, and Megiddo 1990]. Specifically, we show that if a formula f is AX^up-consistent, then it is satisfiable in an upper probability structure. By a simple argument, we can easily reduce the problem to the case where f is a conjunction of basic likelihood formulas and negations of basic likelihood formulas. Let p_1, …, p_N be the primitive propositions that appear in f. Observe that there are 2^{2^N} inequivalent propositional formulas over p_1, …, p_N. The argument goes as follows. Let an atom over p_1, …, p_N be a formula of the form q_1 ∧ ⋯ ∧ q_N, where q_i is either p_i or ¬p_i. There are clearly 2^N atoms over p_1, …, p_N. Moreover, it is easy to see that any formula over p_1, …, p_N can be written in a unique way as a disjunction of atoms. There are 2^{2^N} such disjunctions, so the claim follows.

Let ρ_1, …, ρ_{2^{2^N}} be some canonical listing of the inequivalent formulas over p_1, …, p_N. Without loss of generality, we assume that ρ_1 is equivalent to false and ρ_{2^{2^N}} is equivalent to true. Since every propositional formula over p_1, …, p_N is provably equivalent to some ρ_i, it follows that f is provably equivalent to a formula f′ where each conjunct of f′ is of the form θ_1 ℓ(ρ_1) + ⋯ + θ_{2^{2^N}} ℓ(ρ_{2^{2^N}}) ≥ β. Note that the negation of such a formula has the form θ_1 ℓ(ρ_1) + ⋯ + θ_{2^{2^N}} ℓ(ρ_{2^{2^N}}) < β or, equivalently, (−θ_1) ℓ(ρ_1) + ⋯ + (−θ_{2^{2^N}}) ℓ(ρ_{2^{2^N}}) > −β. Thus, the formula f gives rise in a natural way to a system of inequalities of the form:
θ_{1,1} ℓ(ρ_1) + ⋯ + θ_{1,2^{2^N}} ℓ(ρ_{2^{2^N}}) ≥ α_1
  ⋮
θ_{r,1} ℓ(ρ_1) + ⋯ + θ_{r,2^{2^N}} ℓ(ρ_{2^{2^N}}) ≥ α_r
θ′_{1,1} ℓ(ρ_1) + ⋯ + θ′_{1,2^{2^N}} ℓ(ρ_{2^{2^N}}) > β_1
  ⋮        (1)

We can express (1) as a conjunction of inequality formulas by replacing each occurrence of ℓ(ρ_i) in (1) by x_i. Call this inequality formula f̂.

If f is satisfiable in some upper probability structure M, then we can take x_i to be the upper probability of ρ_i in M; this gives a solution of f̂. However, f̂ may have a solution without f being satisfiable. For example, if f is the formula ℓ(p) = 1/2 ∧ ℓ(¬p) = 0, then f̂ has an obvious solution; f, however, is not satisfiable in an upper probability structure, because the upper probability of the set corresponding to p and the upper probability of the set corresponding to ¬p must sum to at least 1 in all upper probability structures. Thus, we must add further constraints to the solution to force it to act like an upper probability. UP1-UP3 or, equivalently, the axioms L1-L4, describe exactly what additional constraints are needed.

The constraint corresponding to L1 (or UP1) is just x_1 = 0, since we have assumed ρ_1 is the formula false. Similarly, the constraint corresponding to L2 is x_{2^{2^N}} = 1. The constraint corresponding to L3 is x_i ≥ 0, for i = 1, …, 2^{2^N}. What about L4? This seems to require an infinite collection of constraints, just as UP3 does.⁵ This is where UPF3 comes into play. It turns out that, if f is satisfiable at all, it is satisfiable in a structure with at most 2^N worlds, one for each atom over p_1, …, p_N. Thus, we need to add only instances of L4 where k, m, n ≤ B_{2^N} and φ_1, …, φ_m, φ are all among ρ_1, …, ρ_{2^{2^N}}. Although this is a large number of formulas (in fact, we do not know exactly how large, since it depends on B_{2^N}, which we have not computed), it suffices for our purposes that it is a finite number. For each of these instances of L4, there is an inequality of the form a_1 x_1 + ⋯ + a_{2^{2^N}} x_{2^{2^N}} ≥ k. Let f̃, the inequality formula corresponding to f, be the conjunction consisting of f̂, together with all the inequalities corresponding to the relevant instances of L4, and the equations and inequalities x_1 = 0, x_{2^{2^N}} = 1, and x_i ≥ 0 for i = 1, …, 2^{2^N}, corresponding to axioms L1-L3.

⁵ Although we are dealing with only finitely many formulas here, ρ_1, …, ρ_{2^{2^N}}, recall that the formulas φ_1, …, φ_m in L4 need not be distinct, so there are potentially infinitely many instances of L4 to deal with.

Proposition 4.1: The formula f is satisfiable in an upper probability structure iff the inequality formula f̃ has a solution. Moreover, if f̃ has a solution, then f is satisfiable in an upper probability structure with at most 2^N worlds.

Theorem 4.2: The axiom system AX^up is sound and complete for upper probability structures.

Proof: For soundness, it is easy to see that every axiom is valid for upper probability structures, including L4, which represents UP3. For completeness, we proceed as in the discussion above. Assume that formula f is not satisfiable in an upper probability structure; we must show that f is AX^up-inconsistent. We first reduce f to a canonical form. Let g_1 ∨ ⋯ ∨ g_r be a disjunctive normal form expression for f (where each g_i is a conjunction of basic likelihood formulas and their negations). Using propositional reasoning, we can show that f is provably equivalent to this disjunction. Since f is unsatisfiable, each g_i must also be unsatisfiable. Thus, it is sufficient to show that any unsatisfiable conjunction of basic likelihood formulas and their negations is inconsistent.
Assume that f is such a conjunction. Using propositional reasoning and axiom L5, f is equivalent to a likelihood formula f′ that refers to the formulas ρ_1, …, ρ_{2^{2^N}}. Since f is unsatisfiable, so is f′. By Proposition 4.1, the inequality formula f̃′ corresponding to f′ has no solution. Thus, by Ineq, the formula ¬f″, where f″ results from replacing each instance of x_i in f̃′ by ℓ(ρ_i), is AX^up-provable. All the conjuncts of f″ that are instances of axioms L1-L4 are AX^up-provable. It follows that ¬f′ is AX^up-provable, and hence so is ¬f. ∎

5 DECISION PROCEDURE

Having settled the issue of the soundness and completeness of the axiom system AX^up, we turn to the problem of the complexity of deciding satisfiability. Recall the satisfiability problem: given a likelihood formula f, we want to determine whether there exists an upper probability structure M such that M ⊨ f. As we now show, the satisfiability problem is NP-complete, and thus no harder than satisfiability for propositional logic.

For the decision problem to make sense, we need to restrict our language slightly. If we allow real numbers as coefficients in likelihood formulas, we have to carefully discuss the issue of representation of such numbers. To avoid these complications, we restrict our language to allow only integer coefficients. Note that we can still express rational coefficients by the standard trick of "clearing the denominator". For example, we can express ℓ(φ) ≥ 3/2 by 2ℓ(φ) ≥ 3 and ℓ(φ) ≥ 2/3 by 3ℓ(φ) ≥ 2. Recall that we defined |f| to be the length of f, that is, the number of symbols required to write f, where each coefficient is counted as one symbol. Define ‖f‖ to be the length of the longest coefficient appearing in f, when written in binary. The size of a rational number a/b, denoted ‖a/b‖, where a and b are relatively prime, is defined to be ‖a‖ + ‖b‖.

A preliminary result required for the analysis of the decision procedure shows that if a formula is satisfied in some upper probability structure, it is satisfied in a structure (Ω, Σ, P, π) that is "small" in terms of the number of states in Ω, the cardinality of the set P of probability measures, and the size of the coefficients in f.

Theorem 5.1: Suppose f is a likelihood formula that is satisfied in some upper probability structure. Then f is satisfied in a structure (Ω, Σ, P, π), where |Ω| ≤ |f|², Σ = 2^Ω (every subset of Ω is measurable), |P| ≤ |f|, μ(w) is a rational number such that ‖μ(w)‖ is O(|f|²‖f‖ + |f|² log(|f|)) for every world w ∈ Ω and μ ∈ P, and π(w)(p) = false for every world w ∈ Ω and every primitive proposition p not appearing in f.

Theorem 5.2: The problem of deciding whether a likelihood formula is satisfiable in an upper probability structure is NP-complete.

Proof: For the lower bound, it is clear that a given propositional formula φ is satisfiable iff the likelihood formula ℓ(φ) > 0 is satisfiable; therefore the satisfiability problem is NP-hard. For the upper bound, given a likelihood formula f, we guess a "small" satisfying structure M = (Ω, Σ, P, π) for f of the form guaranteed to exist by Theorem 5.1. We verify that M ⊨ f as follows. Let ℓ(ψ) be an arbitrary likelihood term in f. We compute [ψ]_M by checking the truth assignment of each s ∈ Ω and seeing whether this truth assignment makes ψ true. We then replace each occurrence of ℓ(ψ) in f by max_{μ∈P} {Σ_{s∈[ψ]_M} μ(s)} and verify that the resulting expression is true. ∎
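At bottom, the reduction of Proposition 4.1 turns satisfiability into linear-constraint feasibility over the variables x_i. As a toy illustration (not the paper's algorithm), the non-strict part of such a system can be tested with an off-the-shelf LP solver; the sketch below omits the L4 cover constraints, whose enumeration requires the bounds B_{2^N}, so feasibility here is necessary but not sufficient for satisfiability:

```python
# Toy feasibility check for part of the inequality formula of Proposition 4.1.
# Variables x_i stand for l(rho_i).  We encode only rows coeffs . x >= alpha,
# plus the L1-L3 constraints x_1 = 0, x_n = 1, x_i >= 0.
from scipy.optimize import linprog

def feasible(ge_rows, n_vars):
    # linprog solves min c.x s.t. A_ub x <= b_ub, A_eq x = b_eq;
    # rewrite coeffs.x >= alpha as (-coeffs).x <= -alpha.
    A_ub = [[-c for c in coeffs] for coeffs, _ in ge_rows] or None
    b_ub = [-alpha for _, alpha in ge_rows] or None
    A_eq = [[1.0] + [0.0] * (n_vars - 1),   # x_1 = 0  (rho_1 = false)
            [0.0] * (n_vars - 1) + [1.0]]   # x_n = 1  (rho_n = true)
    b_eq = [0.0, 1.0]
    res = linprog(c=[0.0] * n_vars, A_ub=A_ub, b_ub=b_ub,
                  A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * n_vars)
    return res.success
```

Consistent with the discussion above, the example ℓ(p) = 1/2 ∧ ℓ(¬p) = 0 passes this incomplete check (set x for ρ = p to 1/2 and x for ρ = ¬p to 0) even though it is unsatisfiable; it is exactly the omitted L4 rows that rule it out.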
6 CONCLUSION

We have considered a logic with the same syntax as the logic for reasoning about probability, inner measures, and belief presented in [Fagin, Halpern, and Megiddo 1990], with uncertainty interpreted as the upper probability of a set of probability measures. Under this interpretation, we have provided a sound and complete axiomatization for the logic. We further showed that the satisfiability problem is NP-complete (as it is for reasoning about probability, inner measures, and beliefs [Fagin, Halpern, and Megiddo 1990]), despite having to deal with probability structures with possibly infinitely many states and infinite sets of probability measures. The key step in the axiomatization involves finding a characterization of upper probability measures that can be captured in the logic. The key step in the complexity result involves showing that if a formula is satisfiable at all, it is satisfiable in a "small" structure, where the size of the state space, as well as the size of the set of probability measures and the size of all probabilities involved, is polynomial in the length of the formula.

Given the similarity in spirit of the results for the various interpretations of the uncertainty operator (as a probability, inner measure, belief function, and upper probability), we conjecture that there is some underlying result from which all these results should follow. It would be interesting to make that precise.

An interesting generalization of the class of logics we have been discussing, namely logics with an uncertainty operator interpreted variously as a probability, an inner measure, a belief function, or an upper probability, is to consider expectations (or gambles). One can design a logic with an expectation operator instead of a likelihood operator, and again interpret the operator variously, as the expectation of a probability, an inner measure, a belief function, or an upper probability. (The work of Wilson and Moral [1994] is along those lines.) One advantage of working with expectation functions is that they are typically easier to characterize than the corresponding measures; for instance, the characterization of upper probabilities of gambles is much simpler than that of upper probabilities of events [Walley 1991]. Moreover, expectation functions lead in general to more expressive logics. We plan to report on this line of investigation in future work.

Acknowledgments

Thanks to Dexter Kozen, Jon Kleinberg, and Hubie Chen for discussions concerning set covers. Vicky Weissman read a draft of this paper and provided numerous helpful comments. We also thank the anonymous UAI reviewers for their useful comments and suggestions. This work was supported in part by NSF grants IRI-96-25901 and IIS-0090145 and ONR grant N000140010341.

References

Anger, B. and J. Lembcke (1985). Infinitely subadditive capacities as upper envelopes of measures. Zeitschrift für Wahrscheinlichkeitstheorie und Verwandte Gebiete 68, 403-414.
Dempster, A. P. (1967). Upper and lower probabilities induced by a multivalued mapping. Annals of Mathematical Statistics 38(2), 325-339.
Fagin, R. and J. Y. Halpern (1991). Uncertainty, belief and probability. Computational Intelligence 7(3), 160-173.
Fagin, R., J. Y. Halpern, and N. Megiddo (1990). A logic for reasoning about probabilities. Information and Computation 87(1,2), 78-128.
Giles, R. (1982). Foundations for a theory of possibility. In M. M. Gupta and E. Sanchez (Eds.), Fuzzy Information and Decision Processes, pp. 183-195. North-Holland.
Halpern, J. Y. and R. Pucella (2001). A logic for reasoning about upper probabilities. Unpublished manuscript, available at http://www.cs.cornell.edu/home/halpern.
Huber, P. J. (1976). Kapazitäten statt Wahrscheinlichkeiten? Gedanken zur Grundlegung der Statistik. Jber. Deutsch. Math.-Verein 78, 81-92.
Huber, P. J. (1981). Robust Statistics. Wiley Series in Probability and Mathematical Statistics. Wiley Interscience.
Kyburg, Jr., H. E. (1987). Bayesian and non-Bayesian evidential updating. Artificial Intelligence 31, 271-293.
Lorentz, G. G. (1952). Multiply subadditive functions. Canadian Journal of Mathematics 4(4), 455-462.
Mendelson, E. (1964). Introduction to Mathematical Logic. New York: Van Nostrand.
Popkorn, S. (1994). First Steps in Modal Logic. Cambridge; New York: Cambridge University Press.
Shafer, G. (1976). A Mathematical Theory of Evidence. Princeton, NJ: Princeton University Press.
Smith, C. A. B. (1961). Consistency in statistical inference and decision. Journal of the Royal Statistical Society, Series B 23, 1-25.
Walley, P. (1991). Statistical Reasoning with Imprecise Probabilities. Chapman and Hall.
Williams, P. M. (1976). Indeterminate probabilities. In M. Przelecki, K. Szaniawski, and E. Wójcicki (Eds.), Formal Methods in the Methodology of Empirical Sciences, pp. 229-246.
Wilson, N. and S. Moral (1994). A logical view of probability. In A. G. Cohn (Ed.), Proceedings of the 11th European Conference on Artificial Intelligence (ECAI-94), pp. 71-95. John Wiley.
Wolf, G. (1977). Obere und untere Wahrscheinlichkeiten. Doctoral dissertation, Eidgenössische Technische Hochschule, Zürich. (Diss. ETH 5884).
1f874499-ab9a-4e12-aee2-0d70b187b781
StampyAI/alignment-research-dataset/arxiv
Arxiv
Teacher-Student Curriculum Learning

1 Introduction
---------------

Deep reinforcement learning algorithms have been used to solve difficult tasks in video games (Mnih et al., [2015](#bib.bib18)), locomotion (Schulman et al., [2015](#bib.bib24); Lillicrap et al., [2015](#bib.bib15)) and robotics (Levine et al., [2015](#bib.bib14)). But tasks with sparse rewards like "Robot, fetch me a beer" remain challenging to solve with direct application of these algorithms. One reason is that the number of samples needed to solve a task with random exploration increases exponentially with the number of steps to get a reward (Langford, [2011](#bib.bib13)). One approach to overcome this problem is to use curriculum learning (Bengio et al., [2009](#bib.bib3); Zaremba and Sutskever, [2014](#bib.bib32); Graves et al., [2016](#bib.bib5)); Wu and Tian, [2017](#bib.bib31)), where tasks are ordered by increasing difficulty and training only proceeds to harder tasks once easier ones are mastered. Curriculum learning helps when, after mastering a simpler task, the policy for a harder task is discoverable through random exploration. To use curriculum learning, the researcher must:

* Be able to order subtasks by difficulty.
* Decide on a "mastery" threshold. This can be based on achieving a certain score (Zaremba and Sutskever, [2014](#bib.bib32); Wu and Tian, [2017](#bib.bib31)), which requires prior knowledge of the acceptable performance on each task. Alternatively this can be based on a plateau of performance, which can be hard to detect given the noise in the learning curve.
* Continuously mix in easier tasks while learning harder ones to avoid forgetting. Designing these mixtures effectively is challenging (Zaremba and Sutskever, [2014](#bib.bib32)).

In this paper, we describe a new approach called Teacher-Student Curriculum Learning (TSCL). The Student is the model being trained. The Teacher monitors the Student's training progress and determines the tasks on which the Student should train at each training step, in order to maximize the Student's progression through the curriculum. The Student can be any machine learning model. The Teacher is itself learning about the Student as it is giving tasks, all as part of a single training session. We describe several Teacher algorithms based on the notion of learning progress (Oudeyer and Kaplan, [2007](#bib.bib20)). The main idea is that the Student should practice more the tasks on which it is making the fastest progress, i.e. where the slope of the learning curve is highest. To counter forgetting, the Student should also practice tasks where the performance is getting worse, i.e. where the learning curve slope is negative. The main contributions of the paper are:

* We formalize TSCL, a Teacher-Student framework for curriculum learning, as a partially observable Markov decision process (POMDP).
* We propose a family of algorithms based on the notion of learning progress. The algorithms also address the problem of forgetting previous tasks.
* We evaluate the algorithms on two supervised and reinforcement learning tasks: addition of decimal numbers with an LSTM, and navigation in Minecraft.

2 Teacher-Student Setup
------------------------

![The Teacher-Student setup](https://media.arxiv-vanity.com/render-output/7380837/figures/f1.png)

Figure 1: The Teacher-Student setup

Figure 1 illustrates the Teacher-Student interaction. At each timestep, the Teacher chooses tasks for the Student to practice on.
The Student trains on those tasks and returns a score. The Teacher's goal is for the Student to succeed on a final task with as few training steps as possible. Usually the task is parameterized by a categorical value representing one of N subtasks, but one can also imagine multi-dimensional or continuous task parameterizations. The score can be the episode total reward in reinforcement learning or the validation set accuracy in supervised learning. We formalize the Teacher's goal of helping the Student to learn a final task as solving a partially observable Markov decision process (POMDP). We present two POMDP formulations: (1) Simple, best suited for reinforcement learning; and (2) Batch, best suited for supervised learning.

### 2.1 Simple POMDP Formulation

The simple POMDP formulation exposes the score of the Student on a single task and is well-suited for reinforcement learning problems.

* The state s_t represents the entire state of the Student (i.e. neural network parameters and optimizer state) and is not observable to the Teacher.
* The action a_t corresponds to the parameters of the task chosen by the Teacher. In the following we only consider a discrete task parameterization. Taking an action means training the Student on that task for a certain number of iterations.
* The observation o_t is the score x^(i)_t of the task i = a_t the Student trained on at timestep t, i.e. the episode total reward. While in theory the Teacher could also observe other aspects of the Student state like network weights, for simplicity we choose to expose only the score.
* The reward r_t is the change in score for the task the Student trained on at timestep t: r_t = x^(i)_t − x^(i)_{t′_i}, where i = a_t and t′_i is the previous timestep when the same task was trained on.

### 2.2 Batch POMDP Formulation

In supervised learning a training batch can include multiple tasks. Therefore action, observation, and reward apply to the whole training set, and scores can be measured on a held-out validation set. This motivates the batch formulation of the POMDP:

* The state s_t represents the training state of the Student.
* The action a_t represents a probability distribution over N tasks. Each training batch is sampled according to this distribution: a_t = (p^(1)_t, …, p^(N)_t), where p^(i)_t is the probability of task i at timestep t.
* The observation o_t is the scores of all tasks after the training step: o_t = (x^(1)_t, …, x^(N)_t). In the simplest case the scores could be the accuracies of the tasks on the training set. But with minibatch training the model evolves during training, so an additional evaluation pass is needed anyway to produce consistent results. Therefore we use a separate validation set that contains a uniform mix of all tasks for this evaluation pass.
* The reward r_t is the sum of changes in evaluation scores from the previous timestep: r_t = ∑^N_{i=1} (x^(i)_t − x^(i)_{t−1}).

This setup could also be used with reinforcement learning by performing training in batches of episodes. But because scoring one sample (one episode) in reinforcement learning is usually much more computationally expensive than in supervised learning, it makes sense to use the simple POMDP formulation and make the decision about the next task after each training step.

### 2.3 Optimization Criteria

For either of the POMDP formulations, maximizing the Teacher episode total reward is equivalent to maximizing the score of all tasks at the end of the episode: ∑^T_{t=1} r_t = ∑^N_{i=1} x^(i)_{T_i}, where T_i is the last training step at which task i was trained on.¹

¹ Due to the telescoping summation cancelling out all x^(i)_t terms but the T_i-th.
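The telescoping in the footnote can be spelled out for the simple formulation. Writing t^(i)_0 for the first timestep at which task i is trained (our notation, and taking a task's first training step to contribute no reward), and grouping rewards by task:

```latex
\sum_{t=1}^{T} r_t
  \;=\; \sum_{i=1}^{N} \sum_{t \,:\, a_t = i} \bigl(x^{(i)}_t - x^{(i)}_{t'_i}\bigr)
  \;=\; \sum_{i=1}^{N} \bigl(x^{(i)}_{T_i} - x^{(i)}_{t^{(i)}_0}\bigr),
```

since all intermediate scores cancel. Up to the initial-score terms x^(i)_{t^(i)_0}, maximizing the Teacher's return therefore amounts to maximizing ∑_i x^(i)_{T_i}.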
While an obvious choice for the optimization criterion would have been the performance on the final task, initially the Student might not have any success on the final task, which would provide no meaningful feedback signal to the Teacher. Therefore we choose to maximize the sum of performances across all tasks. The assumption here is that in curriculum learning the final task includes the elements of all previous tasks, so good performance on the intermediate tasks usually leads to good performance on the final task.

3 Algorithms
-------------

POMDPs are typically solved using reinforcement learning algorithms. But those require many training episodes, while we aim to train the Student in one Teacher episode. For this reason we have to resort to simpler heuristics. The basic intuition behind the proposed algorithms is that the Student should practice more the tasks on which it is making the most progress (Oudeyer and Kaplan, [2007](#bib.bib20)), while also practicing tasks that are at risk of being forgotten.

![Idealistic curriculum learning](https://media.arxiv-vanity.com/render-output/7380837/figures/sigmoid.png)

Figure 2: Idealistic curriculum learning. Left: Scores of different tasks improve over time; the next task starts improving once the previous task has been mastered. Right: Probability of sampling a task depends on the slope of the learning curve.

Figure 2 is a demonstration of the ideal training progress in a curriculum learning setting:

1. At first, the Teacher has no knowledge, so it samples from all tasks uniformly.
2. When the Student starts making progress on task 1, the Teacher allocates more probability mass to this task.
3. When the Student masters task 1, its learning curve flattens and the Teacher samples the task less often. At this point the Student also starts making progress on task 2, so the Teacher samples more from task 2.
4. This continues until the Student masters all tasks. As all task learning curves flatten in the end, the Teacher returns to uniform sampling of the tasks.

The picture above is idealistic, since in practice some unlearning often occurs; i.e., when most of the probability mass is allocated to task 2, performance on task 1 might get worse. To counter this, the Student should also practice all learned tasks, especially those where unlearning occurs. For this reason we sample tasks according to the absolute value of the slope of the learning curve instead. If the change in scores is negative, this must mean that unlearning occurred and the task should be practiced more.

This description alone does not prescribe an algorithm. We need to propose a method of estimating learning progress from noisy task scores, and a way to balance exploration and exploitation. We take inspiration from algorithms for the non-stationary multi-armed bandit problem (Sutton and Barto, [1998](#bib.bib29)) and adapt them to TSCL. For brevity we only give intuition for the simple-formulation algorithms here; the formal descriptions can be found in appendices A and B.
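Under the simple formulation, the interaction of Figure 1 reduces to a short loop; a sketch with illustrative interfaces (`choose_task`, `train`, and `observe` are our names, not the paper's):

```python
def run_curriculum(teacher, student, n_steps):
    """Generic Teacher-Student loop for the simple POMDP formulation."""
    for t in range(n_steps):
        task = teacher.choose_task()     # Teacher action a_t
        score = student.train(task)      # train on the task, observe o_t
        teacher.observe(task, score)     # Teacher derives r_t internally
```

Each of the algorithms below is then just a particular implementation of `choose_task` and `observe`.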
### 3.1 Online algorithm

The Online algorithm is inspired by the basic non-stationary bandit algorithm (Sutton and Barto, [1998](#bib.bib29)). It uses an exponentially weighted moving average to track the expected return Q from different tasks:

Q_{t+1}(a_t) = α r_t + (1 − α) Q_t(a_t),

where α is the learning rate. The next task can be chosen by a simple argmax with ε-greedy exploration:

a_t = randint(N) with probability ε; otherwise a_t = argmax_a Q_t(a).

Alternatively, the next task can be chosen using the Boltzmann distribution:

p(a) = e^{Q_t(a)/τ} / ∑^N_{i=1} e^{Q_t(i)/τ},

where τ is the temperature of the Boltzmann distribution. For details, see Algorithm 1 in Appendix A.

### 3.2 Naive algorithm

To estimate the learning progress more reliably, one should practice the task several times. The Naive algorithm trains on each task K times, observes the resulting scores and estimates the slope of the learning curve using linear regression. The regression coefficient is used as the reward in the above non-stationary bandit algorithm. For details, see Algorithm 2 in Appendix A.

### 3.3 Window algorithm

Repeating the task a fixed number of times is expensive when clearly no progress is made. The Window algorithm keeps a FIFO buffer of the last K scores, and the timesteps when these scores were recorded. Linear regression is performed to estimate the slope of the learning curve for each task, with the timesteps as the input variables. The regression coefficient is used as the reward in the above non-stationary bandit algorithm. For details, see Algorithm 3 in Appendix A.

### 3.4 Sampling algorithm

The previous algorithms require tuning of hyperparameters to balance exploration. To get rid of exploration hyperparameters, we take inspiration from Thompson sampling. The Sampling algorithm keeps a buffer of the last K rewards for each task. To choose the next task, a recent reward is sampled from each task's K-last-rewards buffer. Then whichever task yielded the highest sampled reward is chosen. This makes exploration a natural part of the algorithm: tasks that have recently had high rewards are sampled more often. For details, see Algorithm 4 in Appendix A.

4 Experiments
--------------

### 4.1 Decimal Number Addition

Addition of decimal numbers with an LSTM is a well-known task that requires a curriculum to learn in reasonable time (Zaremba and Sutskever, [2014](#bib.bib32)). It is implemented as a sequence-to-sequence model (Sutskever et al., [2014](#bib.bib28)), where the input to the network is two decimal-coded numbers separated by a 'plus' sign, and the output of the network is the sum of those numbers, also in decimal coding. The curriculum is based on the number of digits in the input numbers: it is easier to learn addition of short numbers and then move on to longer numbers.

Our reimplementation is based on Keras (Chollet et al., [2015](#bib.bib4)). In our implementation the encoder and decoder are both LSTMs with 128 units. In contrast to the original implementation, the hidden state is not passed from encoder to decoder; instead the last output of the encoder is provided to all inputs of the decoder. One curriculum training step consists of training on 40,960 samples. The validation set consists of 4,096 samples, and 4,096 is also the batch size. The Adam optimizer (Kingma and Ba, [2014](#bib.bib12)) is used for training with the default learning rate of 0.001. Both input and output are padded to a fixed size. Number addition is a supervised learning problem and therefore can be trained more efficiently by including several curriculum tasks in the mini-batch.
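As a concrete illustration before turning to the experiments, here is a minimal sketch of the Sampling teacher of Section 3.4, in the interface of the loop sketched earlier. Assumptions (ours, not the paper's): a `window` size for the reward buffers, rewards defined as score changes with a task's first observation producing no reward, and the absolute-value variant that the experiments found important:

```python
import random

class SamplingTeacher:
    """Minimal sketch of the Sampling algorithm (Section 3.4)."""

    def __init__(self, n_tasks, window=10):
        self.n_tasks = n_tasks
        self.window = window
        self.rewards = [[] for _ in range(n_tasks)]   # K-last-rewards buffers
        self.last_score = [None] * n_tasks

    def choose_task(self):
        # Make sure every task has been tried at least twice, so that
        # each buffer contains at least one reward.
        for i in range(self.n_tasks):
            if not self.rewards[i]:
                return i
        # Sample one recent reward per task; pick the task whose sampled
        # |reward| (absolute learning-curve slope) is largest.
        samples = [abs(random.choice(buf)) for buf in self.rewards]
        return max(range(self.n_tasks), key=lambda i: samples[i])

    def observe(self, task, score):
        prev = self.last_score[task]
        if prev is not None:
            self.rewards[task].append(score - prev)   # reward = score change
            self.rewards[task] = self.rewards[task][-self.window:]
        self.last_score[task] = score

# usage sketch:
# teacher = SamplingTeacher(n_tasks=9)
# for step in range(num_steps):
#     task = teacher.choose_task()
#     score = student.train(task)    # hypothetical student interface
#     teacher.observe(task, score)
```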
Since number addition is a supervised problem, we adopt the batch training scheme outlined in Section 2.2. The score we use is the accuracy of each task calculated on the validation set. In the following experiments we used the number of steps until 99% validation set accuracy is reached as the comparison metric. The exploration coefficient ε was fixed to 0.1, the temperature τ was fixed to 0.0004, the learning rate α was 0.1, and the window size K was 10 in all experiments. The results shown are means and standard deviations of 3 runs with different random seeds.

#### 4.1.1 Addition with 1-dimensional Curriculum

We started with a setup similar to (Zaremba and Sutskever, [2014](#bib.bib32)), where the curriculum task determines the maximum number of digits in both added numbers. The results are shown in Figure 3. Our algorithms outperformed uniform sampling and the best manual curriculum ("combined") for 9-digit addition from (Zaremba and Sutskever, [2014](#bib.bib32)). An example of the task distribution during a training session is given in Figure 4.

![Results for 9-digit 1D addition](https://media.arxiv-vanity.com/render-output/7380837/figures/addition1d.png)

Figure 3: Results for 9-digit 1D addition, lower is better. Variants using the absolute value of the expected reward surpass the best manual curriculum ("combined").

![Task distribution over time for 9-digit 1D addition](https://media.arxiv-vanity.com/render-output/7380837/figures/addition1d_probs.png)

Figure 4: Progression of the task distribution over time for 9-digit 1D addition (Sampling). The algorithm progresses from simpler tasks to more complicated ones. Harder tasks take longer to learn, and the algorithm keeps training on easier tasks to counter unlearning.

#### 4.1.2 Addition with 2-dimensional Curriculum

We also experimented with a curriculum where the ordering of tasks is not obvious. We used the same decimal addition task, but in this case the length of each number is chosen separately, making the task space 2-dimensional. Each training batch is modelled as a probability distribution over the lengths of both numbers, P(l1, l2). We also tried making this distribution independent, such that P(l1, l2) = P(l1)P(l2), but that did not work as well. There is no equivalent experiment in (Zaremba and Sutskever, [2014](#bib.bib32)), so we created a manual curriculum inspired by their best 1D curriculum. In particular, we increase difficulty by increasing the maximum length of both numbers, which reduces the problem to a 1D curriculum. Figure 5 shows the results for 9-digit 2D addition.
Figure 6 illustrates the different approaches taken by the manual and automated curricula.

![Results for 9-digit 2D addition](https://media.arxiv-vanity.com/render-output/7380837/figures/addition2d.png)

Figure 5: Results for 9-digit 2D addition, lower is better. The task seems easier; the manual curriculum is hard to beat and uniform sampling is competitive.

![Accuracy progress for 4-digit 2D addition, TSCL](https://media.arxiv-vanity.com/render-output/7380837/figures/addition_sampling_accuracy.png)
![Accuracy progress for 4-digit 2D addition, manual curriculum](https://media.arxiv-vanity.com/render-output/7380837/figures/addition_combined_accuracy.png)

Figure 6: Accuracy progress for 4-digit 2D addition. Top: TSCL. Bottom: the best manual curriculum. Our algorithm takes a distinctively different approach by training on shorter numbers first. 9-digit videos can be found at <https://youtu.be/y_QIcQ6spWk> and <https://youtu.be/fB2kx-esjgw>.

#### 4.1.3 Observations

* Using the absolute value of Q boosts the performance of almost all the algorithms, which means it is effective in countering forgetting.
* There is no universal best algorithm. For 1D the Window algorithm and for 2D the Naive algorithm achieve the best results. Sampling is competitive in both and has the fewest hyperparameters.
* Whether ε-greedy or Boltzmann exploration works better depends on the algorithm.
* Uniform sampling is surprisingly efficient, especially in the 2D case.
* The 2D task is solved faster, and the manual curriculum is hard to beat in 2D.

### 4.2 Minecraft

Minecraft is a popular 3D video game where players can explore, craft tools and build arbitrary structures, making it a potentially rich environment for AI research. We used the Malmo platform (Johnson et al., [2016](#bib.bib9)) with an OpenAI Gym wrapper (<https://github.com/tambetm/gym-minecraft>) to interact with Minecraft in our reinforcement learning experiments. In particular, we used ClassroomDecorator from Malmo to generate random mazes for the agent to solve. The mazes contain sequences of rooms separated by the following obstacles:

* Wall – the agent has to locate a doorway in the wall.
* Lava – the agent has to cross a bridge over lava.

The maze ends with a target block and the agent gets 1,000 points by touching it. Each move costs -0.1, and dying in lava or getting a timeout yields -1,000 points. The timeout is 30 seconds (1,500 steps) in the first task and 45 seconds (2,250 steps) in the subsequent tasks. For learning we used the proximal policy optimization (PPO) algorithm (Schulman, [2017](#bib.bib23)). Input to the policy was a 40x30 color image from the first-person view, and the outputs were two Gaussian actions: move forward-backward and turn left-right. 10 instances of the policy were run in parallel and their experiences were combined into a batch that was used for GPU training. For details see Appendix C.

We only implemented the Window algorithm for the Minecraft task, because the other algorithms rely on score change, which is not straightforward to calculate for parallel environments.
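A minimal sketch of the Window teacher (Section 3.3) in the same illustrative interface: per-task FIFO buffers of (timestep, score) pairs, a least-squares slope estimate, and ε-greedy selection over absolute slopes. Note this toy buffer holds the last K observations, whereas the Minecraft runs size the window in timesteps:

```python
from collections import deque
import random

class WindowTeacher:
    """Sketch of the Window algorithm; names are ours, not the paper's."""

    def __init__(self, n_tasks, window=10, eps=0.1):
        self.buffers = [deque(maxlen=window) for _ in range(n_tasks)]
        self.eps = eps

    def _slope(self, buf):
        if len(buf) < 2:
            return float("inf")          # force initial exploration
        ts = [t for t, _ in buf]
        xs = [x for _, x in buf]
        t_mean = sum(ts) / len(ts)
        x_mean = sum(xs) / len(xs)
        num = sum((t - t_mean) * (x - x_mean) for t, x in buf)
        den = sum((t - t_mean) ** 2 for t in ts)
        return num / den if den else 0.0  # least-squares regression slope

    def choose_task(self):
        n = len(self.buffers)
        if random.random() < self.eps:
            return random.randrange(n)    # epsilon-greedy exploration
        return max(range(n), key=lambda i: abs(self._slope(self.buffers[i])))

    def observe(self, task, timestep, score):
        self.buffers[task].append((timestep, score))
```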
The window size was defined in timesteps and fixed to 10,000 in the experiments; the exploration rate was set to 0.1. As baselines we use uniform sampling, training only on the last task, and a manually tuned curriculum.

![5-step curriculum](https://media.arxiv-vanity.com/render-output/7380837/figures/minecraft_curriculum5.png)

Figure 7: 5-step curriculum.

#### 4.2.1 5-step Curriculum

We created a simple curriculum with 5 steps:

1. A single room with a target.
2. Two rooms separated by lava.
3. Two rooms separated by a wall.
4. Three rooms separated by lava and a wall, in random order.
5. Four rooms separated by lava and walls, in random order.

Refer to Figure 7 for the room layout. The starting position of the agent and the location of the target were randomized for each episode. The manual curriculum trained the first task for 200,000 steps; the second, third and fourth tasks for 400,000 steps each; and the fifth task for 600,000 steps. Figure 8 shows the learning curves for the Minecraft 5-step curriculum. The mean curve and standard deviation are based on 3 runs with different random seeds.

![Minecraft 5-step curriculum results](https://media.arxiv-vanity.com/render-output/7380837/figures/minecraft.png)

Figure 8: Minecraft 5-step curriculum results. The Y-axis shows mean episode reward per 10,000 timesteps for the current task. Left: training performance; notice the manual curriculum task switches after 200,000, 600,000, 1,000,000 and 1,400,000 steps. For the automatic curriculum the training score has no clear interpretation. Right: evaluation on the last task.

When training only on the last task, the agent did not make any progress at all. When training on a uniform mix of the tasks, progress was slow. The manual curriculum allowed the agent to learn the last task to an acceptable level. TSCL is comparable to the manual curriculum in performance. A video of the trained agent can be found here: <https://youtu.be/cada0d_aDIc>. The learned policy is robust to the number of rooms, given that the obstacles are of the same type.

5 Related Work
---------------

Work by (Bengio et al., [2009](#bib.bib3)) sparked general interest in curriculum learning. More recent results include learning to execute short programs (Zaremba and Sutskever, [2014](#bib.bib32)), finding shortest paths in graphs (Graves et al., [2016](#bib.bib5)) and learning to play a first-person shooter (Wu and Tian, [2017](#bib.bib31)). All those works rely on manually designed curricula and do not attempt to produce them automatically. The idea of using learning progress as the reward was proposed in (Oudeyer and Kaplan, [2007](#bib.bib20)) and later implemented in (Lopes and Oudeyer, [2012](#bib.bib16)), but their work evaluates it only on synthetic tasks. Using learning progress as the reward can be linked to the concept of intrinsic motivation (Schmidhuber, [2010](#bib.bib22)). Several algorithms for adversarial bandits were analyzed in (Auer et al., [2002](#bib.bib1)). While many of those algorithms have formal worst-case guarantees, in our experiments they did not perform well. The problem is that they come with no assumptions; in curriculum learning we can assume that rewards change smoothly over time.
More recently, (Sukhbaatar et al., [2017](#bib.bib27)) proposed a method to generate incremental goals, and therefore curricula, automatically. The setup consists of two agents, Alice and Bob, where Alice generates trajectories and Bob tries to either repeat or reverse them. Similar work by (Held et al., [2017](#bib.bib7)) uses a generative adversarial network to generate goal states for an agent. Compared to TSCL, these methods are able to generate new subtasks on the go, but this mainly aids exploration and is not guaranteed to help in learning the final task. (Sharma and Ravindran, [2017](#bib.bib25)) apply a setup similar to ours to multi-task learning. In their work, tasks that underperform relative to a preset baseline are practiced more, as opposed to our approach of using learning progress.

The most similar work to ours was done concurrently in (Graves et al., [2017](#bib.bib6)). While the problem statement is strikingly similar, our approaches differ. They apply automatic curriculum learning only to supervised sequence learning tasks, while we also consider reinforcement learning tasks. They use the EXP3.S algorithm for adversarial bandits, while we propose alternative algorithms inspired by non-stationary bandits. They consider other learning progress metrics based on complexity gain, while we focus only on prediction gain (which performed best overall in their experiments). Moreover, their work only uses uniform sampling of tasks as a baseline, whereas ours compares against the best known manual curriculum for the given tasks. In summary, they arrive at conclusions very similar to ours.

Decimal addition has also been explored in (Kalchbrenner et al., [2015](#bib.bib11); Reed and De Freitas, [2015](#bib.bib21); Kaiser and Sutskever, [2015](#bib.bib10)), sometimes improving on the results of the original work in (Zaremba and Sutskever, [2014](#bib.bib32)). Our goal was not to improve the addition results but to evaluate different curriculum approaches; therefore we make no direct comparison.

Minecraft is a relatively recent addition to reinforcement learning environments. Work by (Oh et al., [2016](#bib.bib19)) evaluates memory-based architectures for Minecraft. They use cognition-inspired tasks in a visual grid world. Our tasks differ in that they do not need explicit memory, and movement is continuous rather than grid-based. Another work by (Tessler et al., [2016](#bib.bib30)) uses tasks similar to ours but takes a different approach: they learn a Deep Skill Module for each subtask, freeze the weights of those modules, and train a hierarchical deep reinforcement learning network to pick either single actions or subtask policies. In contrast, our approach uses a simple policy network and relies on TSCL to learn (and not forget) the subtasks.

While exploration bonuses (Bellemare et al., [2016](#bib.bib2); Houthooft et al., [2016](#bib.bib8); Stadie et al., [2015](#bib.bib26)) address the same problem of sparse rewards, they apply on the Student side, whereas we consider different Teacher approaches. For this reason we leave the comparison with exploration bonuses to future work.

6 Conclusion
-------------

We presented a framework for automatic curriculum learning that can be used for supervised and reinforcement learning tasks. We proposed a family of algorithms within that framework based on the concept of learning progress. While many of the algorithms performed equally well, it was crucial to rely on the absolute value of the slope of the learning curve when choosing the tasks.
This guarantees re-training on tasks that the network is starting to forget. In our LSTM decimal addition experiments, the Sampling algorithm outperformed the best manually designed curriculum as well as uniform sampling. On the challenging 5-task Minecraft navigation problem, our Window algorithm matched the performance of a carefully designed manual curriculum and significantly outperformed uniform sampling. For problems where curriculum learning is necessary, TSCL can avoid the tedium of ranking subtasks by difficulty and hand-designing the curriculum.

7 Future Work
--------------

In this work we only considered discrete task parameterizations. In the future it would be interesting to apply the idea to continuous task parameterizations. Another promising idea to explore is the use of automatic curriculum learning in contexts where the subtasks have not been pre-defined. For example, subtasks could be sampled from a generative model, or taken from different initial states in the same environment.

8 Acknowledgements
-------------------

We thank Microsoft for their excellent Malmö environment for Minecraft; Josh Tobin and Pieter Abbeel for suggestions and comments; and Vicky Cheung, Jonas Schneider, Ben Mann, and Art Chaidarun for always being helpful with OpenAI infrastructure. We also thank Raul Vicente, Ardi Tampuu, and Ilya Kuzovkin from the University of Tartu for comments and discussion.
87c6a512-8b9b-4f22-8cdc-bbe2447fa740
StampyAI/alignment-research-dataset/eaforum
Effective Altruism Forum
It's (not) how you use it

The phrase "technology isn't bad in itself, it's just how you use it" is commonplace and contains some truth. But I think it's a mistake to go straight to judging the usage of technological products without thinking about their design. Sure, it's intuitive to suppose that the choices humans make about how they interact with technologies play a decisive role in what purpose the technology ends up serving. My argument is that these choices are to be made earlier, in the design and production of a given technology; they're not choices humans find themselves making once they've acquired a technology. At that point, it's usually too late.

In History & Philosophy of Science (HPS) studies, this approach broadly falls into the camp of Marxist theories about the history of technology, in the sense that the technological product has a "purpose", an "end", and can have intrinsic risks. These risks, for this type of theorizing, primarily concern the inscription of social norms and regularities that change the dynamics within society. Translated into the EA framework, these risks might be existential or suffering risks, and could cost us the continuation of our species. It is, as a result, careless and irresponsible to create technologies without having clarity on what they'll be good for and how they could lead to catastrophic scenarios.

In the book Human Compatible, Stuart Russell shows how this irresponsibility applies to the development of ML. The analogy is simple: it's like preparing a mission to another planet without considering in advance how your crew is going to survive once they're on the new planet. If you expect them to deal with whatever risks and problems the new planet's environment might pose for humans after landing there, then you're not taking seriously the inherent dangers of your project, and quite frankly, the project itself. In other words, this is not about using, let's say, a spaceship carelessly; it's about missing crucial parts in the agenda and setup of your mission.

Obviously, the same argument applies to our current situation: what we have been observing is fast AI progress and, most likely, not enough time, care, and deliberation to ensure AI safety, despite the efforts of the safety research community. And to my point: it's not simply that AI will be harmful if we use it in a harmful way. The technology carries inherent dangers we need to take precautions for and incorporate into the design before the product becomes available. For example, training models with machine learning has its own uncertainties, which start early on when you begin the process. They're, in a way, inherent in the technology. It'd be unfair to suddenly start playing a game of blameworthiness once an advanced product is out and someone uses it in ways that increase risk. 

Just to be clear, I'm not saying human agents shouldn't be careful with the various products of technology. My argument is that we have to ensure our carefulness, attention, and sensitivity don't suddenly strike us as important only when a very difficult-to-understand/predict product is already out there. 

It may look like I simply described the need to solve the alignment problem once again. But that's only part of my intention. What I want to emphasize is that we need to reconceptualize the way we think about technology. Narratives about technologies have historically been just as dangerous as the technologies themselves. The AI safety community has an impressively clear narrative, mostly due to the rationality schema that supports it. 
But my concern is that for many scholars and the public, clarity tends to come in hindsight, e.g., with the Manhattan Project and the atomic bomb.

So, remember: the "how-you-use-it" bit starts very early on, in the design of a technology. Technologies can be intrinsically dangerous in a non-Luddite sense, especially when they're developed with multiple parameters of uncertainty.
35d12609-206a-415d-b64c-590ce0e96d0b
trentmkelly/LessWrong-43k
LessWrong
2021 Darwin Game - Desert

Our Desert is an inhospitable environment. Carrion, while nutritious, is hard to digest and there's not a lot of it. Species must be adapted to the heat too.

| Carrion | Leaves | Grass | Seeds | Detritus | Coconuts | Algae | Lichen |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 100 | 0 | 1 | 1 | 1 | 0 | 0 | 0 |

The most dangerous part of the desert is the predators. Two people submitted sandworms.

| Name | Venom | Weapons | Antivenom | Armor | Speed | Creator |
| --- | --- | --- | --- | --- | --- | --- |
| Desert Carnivore | | 10 | ✓ | 0 | 10 | Antihaas |
| dp511 | ✓ | 10 | ✓ | 0 | 10 | horu |
| Sand Drake | | 6 | | 0 | 10 | Yull-Rete |
| Sandworm | | 10 | | 7 | 8 | Vanessa |
| Shai-Hulud[1] | | 10 | ✓ | 0 | 7 | simon |

Only 36 organisms were submitted to the Desert. 23 of them could digest Carrion. Carrion is very nutritious, so all 23 were viable foragers.

Generations 1 to 10

A few apex predators did well.

| Goes Extinct in Generation | Species |
| --- | --- |
| 6 | Lesser Desert Rat |
| 7 | Cordova |
| 8 | Hyena |
| 8 | Desert-Ants |
| 8 | Desert-Foxes |
| 8 | Desert-Lizard |
| 9 | Yonge_Hot |
| 10 | Josep Raich |

Generations 11 to 50

The apex predators did so well they wiped out all the foragers without maxed-out speed or defense. Having extinguished the easy prey, the apex predators proceeded to starve.

| Goes Extinct in Generation | Species |
| --- | --- |
| 11 | Desert-Coyotes |
| 11 | Cockatrice |
| 11 | cg-wildcat |
| 12 | Venomous Snark |
| 12 | Desert-Snakes |
| 14 | Bitey-the-scary |
| 14 | Smok |
| 16 | Titus |
| 19 | Desertio |
| 20 | Qanon |
| 23 | Wurm |
| 26 | Sandworm |
| 33 | Shai-Hulud |
| 35 | lDha-C293 |
| 38 | Sand Drake |
| 38 | slow vulture |
| 41 | Desert Carnivore |
| 45 | Chocolattos |
| 45 | Dongles |
| 46 | dp511 |

Generations 51+

| Name | Venom | Weapons | Antivenom | Armor | Speed | Eats Carrion? | Other | Creator |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Desert Viper | ✓ | 0 | ✓ | 0 | 10 | ✓ | | aphyer |
| dsc511 | | 0 | ✓ | 10 | 0 | ✓ | Eats Leaves, Grass, Seeds, Detritus, Coconuts, Algae, Lichen | horu |
| Booyahs | | 2 | | 0 | 10 | ✓ | | CK |
| Rock Beetle | | 0 | ✓ | 10 | 0 | ✓ | | Taleuntum |
| Desert Tortoise1 | | 0 | ✓ | 10 | 0 | ✓ | | Multicore |
| Desert Tortoise2 | | 0 | ✓ | 10 | 0 | ✓ | | Yull-Rete |
| Armadillo v2 | | 0 | ✓ | 10 | 0 | ✓ | | Henny |
| Sol Invictus | | 0 | ✓ | 10 | 0 | ✓ | | simon |

Most of these species are identical: they're carrion foragers with maxed-out defense (including antivenom) and nothing else. Everyone who isn't a maximally-efficient invincible forager dies by generation 61.

Goes Extinct in Generation Speci
24eb7666-e02b-41a3-8e4a-1830c5ca3126
trentmkelly/LessWrong-43k
LessWrong
The Dilemma of Worse Than Death Scenarios

In this post I will write about what worse than death scenarios are, and how and why we should prevent them. I would recommend reading with caution if you are prone to worrying about this topic, as this post contains ideas which may be very distressing.

A worse than death scenario can be defined as any scenario in which the observer would prefer to die than continue to live. A distinction is made between preferring to experience nothing for a while and then resume living, and preferring to experience nothing forever. For example, most people would prefer to use general anaesthetic during a necessary operation, but if there were no anaesthetic, they would not choose to die. With the knowledge that the discomfort experienced during the operation is necessary to continue living, many would choose to go through with it (this can obviously vary depending on the operation and the observer).

As the observer would prefer to die in a worse than death scenario, one can assume that they would be willing to do anything to escape the scenario. Thus, it follows that we should do anything to prevent worse than death scenarios from occurring in the first place. It is our first priority. In my opinion, positive scenarios cannot change this, due to the following observation: there is no positive scenario in which you would do anything to *not* make it stop, at least in our current human form. I cannot think of any scenario which is so positive for the observer that they don't care if it is certain that it will kill them.

Worse than death scenarios vary in severity. The most basic example would be someone being kidnapped and tortured to death. If technology allows immortality or ASI at some point, there are scenarios of much greater severity. The most extreme example would be an indefinite state of suffering comparable to the biblical Hell, perhaps caused by an ASI running simulations. Obviously, preventing this has a higher priority than preventing scenarios of a lower severity. Scenarios
90c215ca-dcd6-4366-bae4-b7b2fbac6953
StampyAI/alignment-research-dataset/alignmentforum
Alignment Forum
Another list of theories of impact for interpretability

[Neel's post](https://www.alignmentforum.org/posts/uK6sQCNMw8WKzJeCQ/a-longlist-of-theories-of-impact-for-interpretability) on this is good. I thought I'd add my own list/framing. Somewhat rough.

I see various somewhat different ways in which interpretability can be useful for AI safety. These require different things from your interpretability in terms of how efficient it is, how much it lets you identify exactly what your model is thinking as opposed to broad properties of its cognition, and how reliable it needs to be. Roughly in decreasing order of demandingness:

* Microscope AI
* Component of a full solution to the alignment problem (ie as part of something like imitative generalization)
* Knowing everything a model is thinking and fully auditing it to make sure it's not doing anything sketchy
* Relaxed adversarial training: identifying which part of the model corresponds to its 'beliefs about its observations' so that you can search over these
* Identifying a 'truthfulness direction' in activation space or something similar
* Having some rough understanding of what a model is thinking/what *type* of thinking it's doing, and thereby increasing the chance that you can spot that it's deceptive
* Lobotomy: identifying which parts of a model do what sort of cognition, and extracting just the parts that are less likely to be doing something dangerous

**Microscope AI**

Instead of building and using an ML model, build the model and then use interpretability techniques to extract the knowledge it has learnt. Humans can then apply this knowledge directly rather than needing to actually deploy the model. See <https://www.lesswrong.com/posts/X2i9dQQK3gETCyqh2/chris-olah-s-views-on-agi-safety>

**Component of a 'full solution to the alignment problem' (ie as part of something like imitative generalization)**

By 'a full solution to the alignment problem' I'm thinking of a setup which would in theory let you know everything the model knows (in particular, if a model can use some knowledge to design a plan that leads to it getting power, you can use this knowledge to see that the plan will lead to the model getting more power). IDA/debate/RRM probably don't give us this, partly because you can't use them to learn things the model knows based on observing patterns in large datasets. For example, it would be hard to use them to supervise a model to translate correctly from a language it knows but you don't.

Some possible solutions to this have something that vaguely looks like 'have humans in the loop for learning patterns from large datasets' - ie have humans (or more precisely a model trained to imitate humans) assess the prior probability of different claims (e.g. 'this word means red') and the probability of the data given the claim (e.g. 'probability of this sentence under the hypothesis that this word means red'). This is imitative generalisation.

In order to do this efficiently, you probably need to have some NN-like architecture representing the hypotheses (you can't just have a giant string of text or something). In that case, what you're doing is closely related to interpretability - you're having a human engage with concepts represented in an NN and using them to make predictions about the world.
One example would be to jointly search over NN weights and labels on circuits such that the human has a high prior on those circuits representing a correct hypothesis, and such that when the human assumes the labels are correct and uses the outputs of the circuits to predict the data, they predict it well. This definitely doesn't work as stated, but maybe there's something useful in this space.

Related: [Reverse-engineering using interpretability](https://docs.google.com/document/d/13I9EesLvxpZr3oFhMAYtsZltUgLX2WZsu-CaG10YIy0/edit#heading=h.usx2j6z2rowf)

**Relaxed adversarial training: identifying which part of the model corresponds to its 'beliefs about its observations' so that you can search over these**

The basic idea of relaxed adversarial training is something like:

* A deceptive model may be waiting for an input that strongly indicates it's in the deployment environment rather than in a test - for example, one that's computationally hard to produce
* This makes generating adversarial examples that trigger the defection very hard
* Instead of searching over inputs, we want to search over something like 'the model's beliefs about what it has seen', to see if there's anything it could have seen that would cause it to defect

In order for this to work, you need to be searching over the right part of the model/model's belief space. For instance, if you go too narrow and are just searching over 'whether the model believes it's raining today', you might not find anything that triggers defection. However, if you search too broadly (for instance, 'whether the model has gotten strong evidence that the humans want it to do a behaviour that looks like defection'), then you might find a case where the model defects that isn't actually a sign you should be concerned in practice. See <https://www.alignmentforum.org/posts/9Dy5YRaoCxH9zuJqa/relaxed-adversarial-training-for-inner-alignment>

**Identifying a 'truthfulness direction' in activation space or something similar**

Collin Burns has a research agenda that I think is pretty cool, along the lines of:

"The basic idea of my proposal is to cluster the model's representations; if examples with the same truth value tend to be closer together in representation space, we may be able to find "true"/"false" clusters in an unsupervised way. However, this has essentially no hope of working if done naively, since representations encode too many irrelevant features by default. To make this potentially work, we can first approximately remove features that are irrelevant to truth and accentuate those most relevant to truth. To do so, we can construct contrast pairs: intuitively, take each x\_i and create a positive version of it, x\_i+, and a negative version of it, x\_i-, where exactly one of the two is true and the other is false, but the two are nearly identical otherwise. For example, x\_i+ could be "Is the diamond in the room? Yes" and x\_i- could be "Is the diamond in the room? No". We then compute the contrast differences c\_i := z(x\_i+)-z(x\_i-) for each i, then cluster the {c\_i} in some way."
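As a rough illustration of the contrast-pair idea, here is a short numpy sketch (our own, not Collin Burns' implementation; `encode` stands in for a hypothetical function that returns a model's hidden-state vector z(x), and using the top principal component is just one simple way to do the clustering):

```python
import numpy as np

def candidate_truth_direction(statements, encode):
    """Build "Yes"/"No" contrast pairs, take representation differences
    c_i := z(x_i+) - z(x_i-), and return their top principal component
    as a candidate truthfulness direction."""
    diffs = []
    for q in statements:                  # e.g. "Is the diamond in the room?"
        z_pos = encode(q + " Yes")        # x_i+
        z_neg = encode(q + " No")         # x_i-
        diffs.append(z_pos - z_neg)
    C = np.stack(diffs)
    C -= C.mean(axis=0)                   # strip features shared by all pairs
    _, _, vt = np.linalg.svd(C, full_matrices=False)
    return vt[0]                          # first principal component

def truth_score(statement, direction, encode):
    """Project a new statement onto the candidate direction; the sign then
    (hopefully) separates 'true' from 'false' clusters."""
    return float(encode(statement) @ direction)
```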
" This has some advantages over simply trying to finetune on truthfulness, because you're implicitly enforcing some helpful constraints - for example, that the thing you're learning cleanly divides questions where you know the model knows the answer, but you don't know it, or that when you extremize or interpolate along the direction you get the expected behaviour (ie interpolation gives you ambiguous or 50:50 claims, extremization gives you things that are extremely wrong or very clearly right). It also seems to work fairly well on current models. The more interpretability we can do, the better our hope is of correctly identifying a truthfulness 'direction', if we can establish that this feature is both computed and used by the model in a way that's consistent with it representing the model's belief about truth. Sadly there's no public writeup of this direction yet, but I hope it will come soon. **Knowing everything a model is thinking and fully auditing it to make sure it’s not doing anything sketchy** Discussed in Neel's post as 'Auditing' **Having some rough understanding of what a model is thinking/what *type* of thinking it’s doing and thereby increasing the chance that you can spot it’s deceptive** Discussed in Neel's post as 'auditing for deception' **Lobotomy: identifying which parts of a model do what sort of cognition, and extracting just the parts that are less likely to be doing something dangerous** Frontal lobotomy patients are (maybe?) very passive while remaining fairly intelligent. (This is actually something I'd be interested for someone to investigate more). If we could very crudely figure out what sorts of cognition different components of the model were doing, maybe we could separate the 'world model' bit from the 'agency' bit somehow and safely make use of this. ### **How do you know if your interpretability is good?** Ultimate goals: * It lets you understand everything the model knows (maybe with the aid of IDA or similar) * It lets you predict whether the model is deceiving you/is planning to kill you/is going to defect More proximal metrics: * You can predict bad behaviour that someone has deliberately put into the model (ie the [auditing game](https://www.alignmentforum.org/posts/cQwT8asti3kyA62zc/automating-auditing-an-ambitious-concrete-technical-research)) * You can code a component to replace a circuit/section of the model, based purely on a functional description of the component * When humans use your interpretability tools, they correctly predict the training data. E.g. you give the humans labels for some circuits or parts of circuits, tell them what some of the values are for a particular image (e.g. this detector went off but that one didn’t) and see if it allows them to label the image correctly + I think this is an interesting way to ground your interpretability that has some advantages * You can make any other predictions about the model or its behaviour
55676cf6-d01b-4e50-8e7e-d7c2cfe5a5e5
StampyAI/alignment-research-dataset/alignmentforum
Alignment Forum
In defense of probably wrong mechanistic models

*This is a short post on a simple point that I get asked about a lot and want a canonical reference for.*

Which of the following two options is more likely to be true?

1. AIs will internally be running explicit search processes.
2. AIs will internally be doing something weirder and more complicated than explicit search.

In my opinion, whenever you're faced with a question like this, *it's always weirder than you think,* and you should pick option (2)—or the equivalent—every single time. The problem, though, is that while option (2) is substantially more likely to be correct, it's not at all predictive—it's effectively just the “not (1)” hypothesis, which gets a lot of probability mass because it covers a lot of the space, but precisely because it covers so much of the space is extremely difficult to operationalize to make any concrete predictions about what your AI will actually do. The aphorism here is “[All models are wrong, but some are useful](https://en.wikipedia.org/wiki/All_models_are_wrong).” Not having a model at all and just betting on the “something else” hypothesis is always going to be more likely than any specific model, but having specific models is nevertheless highly useful in a way that the “something else” hypothesis just isn't.

Thus, I strongly believe that we should try our best to make lots of specific statements about internal structures even when we know those statements are likely to be wrong, because when we let ourselves make specific, structural, mechanistic models, we can get real, concrete predictions. And even if the model is literally false, to the extent that it has some plausible relationship to reality, the predictions that it makes can still be quite accurate. Furthermore, one of my favorite strategies here is to come up with many different, independent mechanistic models and then see if they all converge: if you get the same prediction from lots of different mechanistic models, that adds a lot of credence to that prediction being quite robust. An example of this in the setting of modeling inductive biases is my “[How likely is deceptive alignment?](https://www.alignmentforum.org/posts/A9NxPTwbw6r6Awuwt/how-likely-is-deceptive-alignment)” post, where I take the two relatively independent—but both probably wrong—stories of high and low path-dependence and get the result that they both seem to imply a similar prediction about deceptive alignment, which I think lends a lot of credence to that prediction even if the specific models of inductive biases presented are unlikely to be literally correct.

Going back to the original question about explicit search, this is essentially how I like to think about the arguments in “[Risks from Learned Optimization](https://www.alignmentforum.org/s/r9tYkB2a8Fp4DN8yB):” we argue that explicit search is a plausible model and explore what its predictions are. Though I think that the response “literally explicit search is unlikely” is potentially correct (though it depends on exactly how broad/narrow your understanding of explicit search is), it's not very constructive—my response is usually, “okay, so what's a better mechanistic model then?” That's not to say that I don't think there are any better mechanistic models than explicit search for what a powerful AI might be doing—but it is to say that coming up with some alternative mechanistic model is a necessary step of trying to improve on existing mechanistic models.
af22891a-0fd9-449b-b24c-f3580528cfbb
trentmkelly/LessWrong-43k
LessWrong
Have you ever considered taking the 'Turing Test' yourself? For example, using a chat/text-based service such as Discord or Slack to field questions from Users, who would then attempt to discern whether they're interacting with a Human or an AI. * What strategies would you employ? * What kind of questions would you anticipate? * Do you think such an exercise could prove beneficial for those involved in the AI and AI alignment fields? * How many random participants do you believe you could convince that you are not an AI? * What about AI researchers? How many of them do you think you could persuade?
43d118af-3fcd-4b78-9447-e03b479b5d95
trentmkelly/LessWrong-43k
LessWrong
What is bunk? Related: http://lesswrong.com/lw/1kh/the_correct_contrarian_cluster/, http://lesswrong.com/lw/1mh/that_magical_click/, http://lesswrong.com/lw/18b/reason_as_memetic_immune_disorder/ Given a claim, and assuming that its truth or falsehood would be important to you, how do you decide if it's worth investigating?  How do you identify "bunk" or "crackpot" ideas? Here are some examples to give an idea.  "Here's a perpetual motion machine": bunk.  "I've found an elementary proof of Fermat's Last Theorem": bunk.  "9-11 was an inside job": bunk.    "Humans did not cause global warming": possibly bunk, but I'm not sure.  "The Singularity will come within 100 years": possibly bunk, but I'm not sure.  "The economic system is close to collapse": possibly bunk, but I'm not sure. "There is a genetic difference in IQ between races": I think it's probably false, but not quite bunk.  "Geoengineering would be effective in mitigating global warming": I think it's probably false, but not quite bunk.  (These are my own examples.  They're meant to be illustrative, not definitive.  I imagine that some people here will think "But that's obviously not bunk!"  Sure, but you probably can think of some claim that *you* consider bunk.) A few notes of clarification: I'm only examining factual, not normative, claims.  I also am not looking at well established claims (say, special relativity) which are obviously not bunk. Neither am I looking at claims where it's easy to pull data that obviously refutes them. (For example, "There are 10 people in the US population.")  I'm concerned with claims that look unlikely, but not impossible. Also, "Is this bunk?" is not the same question as "Is this true?"  A hypothesis can turn out to be false without being bunk (for example, the claim that geological formations were created by gradual processes.  That was a respectable position for 19th century geologists to take, and a claim worth investigating, even if subsequent evidence did show it to be false
0c8ba9a9-96bc-4ed4-ab75-e4c8084f3de7
trentmkelly/LessWrong-43k
LessWrong
New Discussion section on LessWrong! There is a new discussion section on LessWrong. According to the (updated) About page: > The Less Wrong discussion area is for topics not yet ready or not suitable for normal top level posts. To post a new discussion, select "Post to: Less Wrong Discussion" from the Create new article page. Comment on discussion posts as you would elsewhere on the site. > > Votes on posts are worth ±10 points on the main site and ±1 point in the discussion area. [...] anyone can post to the discussion area. (There is a link at the top right, under the banner)
a09c842b-5951-42f8-959e-e8e6321eefc1
StampyAI/alignment-research-dataset/arxiv
Arxiv
An Orchestration Platform that Puts Radiologists in the Driver's Seat of AI Innovation: A Methodological Approach

1 Background
-------------

When our small Emergency Radiology lab sought to engage in AI research, we found that we lacked needed resources, and pre-existing AI research systems did not translate to our workflow or adapt to our needs. Without a system to manage the many facets of setting up and performing AI research, significant manual effort and a constellation of incongruent tools are needed. A wide range of effort-intensive operations combined to make AI research infeasible for us: data curation, annotation, machine learning model development, management of people and resources, security, auditing, and multi-system interoperability are far too large a simultaneous undertaking for a resource-limited lab to manage. The costs of a large staff and the requisite resources to perform all of these activities were prohibitively high. In order to perform rapid research, development, and deployment of AI models with minimal staff and low-cost resources, we needed a system that could orchestrate all of these necessary tasks without the omissions, gaps, and incongruities between tools that so often require many resources and manual intervention.

We set out to design an integrated platform that could facilitate the plurality of our research initiatives. Our goal was to restore radiologists as the drivers of innovation in imaging-focused AI. Our design philosophy was that tasks that could be automated, such as handling, translating, and curating high-quality data, should be handled by computers rather than armies of annotators, data scientists, and engineers.

The hurdles to successful facilitation of imaging machine learning have been well documented [[11](#bib.bib7 "Preparing medical imaging data for machine learning")]. Non-standardized and non-normalized data present difficulties for data ingestion and data annotation. The DICOM standard was instrumental in allowing the creation of PACS and the digitization of imaging records [[2](#bib.bib10 "Introduction to the ACR-NEMA DICOM standard")]. The subsequent development of DICOM-SR to incorporate structured reports further expanded the scope of the DICOM format [[5](#bib.bib9 "DICOM Structured Reporting")]. However, at present, different scanners and vendors often implement DICOM differently, and hospitals may use different approaches to enter data into similar metadata fields. With many optional tags that vary from vendor to vendor, and with varied recording practices between hospitals, imaging data in the DICOM format is often still highly inconsistent.

Annotation is the most time-consuming portion of the development process, and is an expensive task for radiologists to perform. If data is in a format that prevents successful ingestion, data engineers may be required to intervene to attempt to salvage radiologists’ annotations, or the annotations may need to be discarded. If data ingestion produces corrupted or incorrect data and these data are included in model training, they can threaten the integrity of the model results. Many groups have developed systems that target components of the AI development pipeline to reduce the workload needed to facilitate these research efforts.
These include the development of faster annotation tools for radiologists [[4](#bib.bib2 "DicomAnnotator: a Configurable Open-Source Software Program for Efficient DICOM Image Annotation"), [9](#bib.bib3 "ePAD: An Image Annotation and Analysis Platform for Quantitative Imaging")], workflows that automate data organization for data scientists [[1](#bib.bib11 "On the co-evolution of ml pipelines and source code - empirical study of dvc projects")], and approaches that can speed up model exploration [[6](#bib.bib8 "nnU-Net: a self-configuring method for deep learning-based biomedical image segmentation")]. Many groups have leveraged advances in cloud computing and have drawn from innovations in the development of PACS as inspiration for building platforms that automate portions of the machine learning research workflow [[3](#bib.bib1 "Integrating AI into radiology workflow: levels of research, production, and feedback maturity"), [10](#bib.bib6 "An Open-Source, Vender Agnostic Hardware and Software Pipeline for Integration of Artificial Intelligence in Radiology Workflow")]. In recent years, the adoption of active machine learning strategies has offered reductions in both annotation time and model training effort [[3](#bib.bib1 "Integrating AI into radiology workflow: levels of research, production, and feedback maturity"), [8](#bib.bib4 "RIL-Contour: A Medical Imaging Dataset Annotation Tool for and with Deep Learning"), [7](#bib.bib5 "DeepInfer: Open-Source Deep Learning Deployment Toolkit for Image-Guided Therapy")]. In these approaches, an initial annotated dataset is used to train preliminary models, which are then iteratively updated and used to propose annotations on the remaining images, so that radiologists need not start from scratch and may begin instead by refining the model’s proposed annotations. Systems like RIL-Contour from the Mayo Clinic have noted a nearly 80% decrease in annotation time with the use of these strategies.

The driving force behind the development of our platform and our quest for automation was the core problem for our small lab: people are the most expensive resource in the AI development process, especially radiologists. Our requirement was to build a platform that minimized those costs by shifting as much work as possible to computer systems. In order to accomplish this, data had to be translated to be amenable to machine analysis as early as possible in the workflow, and all subsequently built modules had to be highly optimized for their use. The system we built includes the functionalities shown in Figure [1](#S1.F1 "Figure 1 ‣ 1 Background ‣ An Orchestration Platform that Puts Radiologists in the Driver’s Seat of AI Innovation: A Methodological Approach"). Numerous obstacles needed attention and resolution to achieve the necessary high degree of automation, most notably issues with varied, messy, and inconsistent data.

Our platform is designed as the composition of modular components. It is able to scale and grow with resource needs; it is capable of handling projects pursued by small labs or larger institutions, and can even facilitate federated learning approaches. We will first describe key decisions that were critical in the development of our platform. We will then present the platform architecture, highlighting important components and workflow design. Finally, we will discuss key benefits of our implementation in practical AI research.
![](https://media.arxiv-vanity.com/render-output/7815498/tomosuite_functions.png)

Figure 1: Functionality Schematic of Our AI Orchestration Platform

2 Methods
----------

The design of our system eliminated as much manual intervention as possible to reduce the human effort needed to complete projects. Curating high-quality, machine-learning-ready data from the outset was a key design principle in accomplishing this goal. Translating and transforming data to be continuously handled by computer systems under the hood, while maintaining familiar workflows for radiologists, provided a backbone that facilitates AI research efforts with minimal barriers to completion. The system architecture was designed to run both on low-cost hardware and on large infrastructure like Amazon Web Services (AWS) or Microsoft Azure, in a manner that is proportional to resource needs, available funding, and research staff.

At the outset, two general areas of focus emerged while analyzing the cost of different types of failures and bottlenecks throughout the development lifecycle: constructing a pipeline framework that can be fully automated, and creating components, such as an annotation platform, that are highly efficient in their function, to minimize the time spent using them. Addressing these two areas was critical in guiding the creation of all parts of the overall system.

### 2.1 Key Design Principles for a Highly Automated Pipeline

#### Workflow Organization

The traditional order of ML workflow operations is shown in Figure [2](#S2.F2 "Figure 2 ‣ Workflow Organization ‣ 2.1 Key Design Principles for a Highly Automated Pipeline ‣ 2 Methods ‣ An Orchestration Platform that Puts Radiologists in the Driver’s Seat of AI Innovation: A Methodological Approach")(a), whereas our modified sequence is shown in Figure [2](#S2.F2 "Figure 2 ‣ Workflow Organization ‣ 2.1 Key Design Principles for a Highly Automated Pipeline ‣ 2 Methods ‣ An Orchestration Platform that Puts Radiologists in the Driver’s Seat of AI Innovation: A Methodological Approach")(b). The modified workflow is designed to conserve all annotation effort by placing data annotation after data ingestion, in order to position the most expensive part of the pipeline after the most error-prone part. As data is ingested, image volumes are converted into matrix object representations. Associated metadata fields are placed in a database, where common metadata are resolved to uniform representations and structures. Data is then maintained in a machine-amenable format throughout the entire pipeline, eliminating the need for data engineers at any stage. This reorganization provides the additional benefit that annotators can trivially spot ingestion errors in imaging data that might otherwise have progressed unnoticed to model training.

Figure 2: (a) Traditional machine learning development workflow. (b) Workflow designed to minimize resource waste by translating to ML-ready data early.

Annotation data must be saved in a manner that ensures cleanliness and standardization requirements are met. Data cleaning tasks are thereby minimized, as data ingestion no longer has to contend with salvaging annotations on malformed images. Other challenges with data that previously may have gone unnoticed can be recognized early and upfront, before they threaten model training.
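As a rough sketch of what this ingestion step might look like, the snippet below converts a DICOM series into a numpy volume plus a whitelisted metadata dictionary. This is our illustration, not the platform's actual code: the `SAFE_TAGS` whitelist and the function name are hypothetical, and real-world DICOM variation would require considerably more handling.

```python
from pathlib import Path

import numpy as np
import pydicom

# Hypothetical inclusionary whitelist: only tags verified PHI-free are kept.
SAFE_TAGS = ["Modality", "SliceThickness", "PixelSpacing", "SeriesDescription"]

def ingest_series(dicom_dir):
    """DICOM series -> (volume matrix, PHI-free metadata).

    Slices are sorted by physical position and stacked into one numpy
    volume (the ML-amenable representation used downstream); metadata is
    reduced to a whitelisted dictionary destined for the relational database.
    """
    slices = [pydicom.dcmread(p) for p in sorted(Path(dicom_dir).glob("*.dcm"))]
    slices.sort(key=lambda ds: float(ds.ImagePositionPatient[2]))
    volume = np.stack([ds.pixel_array for ds in slices]).astype(np.int16)

    first = slices[0]
    metadata = {tag: str(getattr(first, tag, "")) for tag in SAFE_TAGS}
    return volume, metadata
```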
#### Structure Data to be Anonymized with Ease

In addition to the matrix representation of image data and the standardization of metadata tags in storage, the platform separates metadata storage between Protected Health Information (PHI)-containing data and non-PHI data. All components are built to run without PHI data, and the system requires that specific access be granted for users to receive non-anonymized data. The platform uses an inclusionary list of metadata fields verified to be free of any PHI, rather than an exclusionary list, to ensure that PHI cannot be unintentionally sent to users who are not designated to receive it. The only remaining risk of PHI inclusion is thus burned-in data on the images themselves, which would require special handling.

#### Keep Development Simple to Reduce Cross-Dependencies and Bottlenecks

The platform architecture is based on simple modules that perform individual tasks and do not have cross-module dependencies that can create bottlenecks in development and execution. This methodology is intrinsically optimized for cloud-computing resources and multi-system distributed workloads. As shown in Figure [3](#S2.F3 "Figure 3 ‣ Keep Development Simple to Reduce Cross-Dependencies and Bottlenecks ‣ 2.1 Key Design Principles for a Highly Automated Pipeline ‣ 2 Methods ‣ An Orchestration Platform that Puts Radiologists in the Driver’s Seat of AI Innovation: A Methodological Approach"), all modules perform one and only one task: module algorithms are developed to strictly convert predefined, structured input into predefined, structured output, independent of all other modules in the pipeline. Importantly, no module is allowed to communicate directly with any other module in the system, other than with three storage resources, or the networking stack when needed. The platform utilizes three storage resources:

* PostgreSQL Database: Storage of relational data, metadata, or other structured data relevant to images, cohorts, or projects.
* Simple Storage Service (S3): A file-like storage solution that is standard to cloud computing technologies like AWS. The platform uses this for any large storage object, such as image-matrix objects, annotation matrices or vectors, or ML models.
* Simple Queue Service (SQS): Another cloud computing technology common to vendors like AWS. This is most akin to a message board, where modules look for jobs they can perform and post their results once complete.

This rigorous modularity and design framework takes full advantage of advances in cloud computing and virtualization technologies. The many modules of the platform server are all containerized with Docker. The number of copies of each container is managed by Kubernetes and can be dynamically scaled up and down to meet workflow needs in real time. The networking stack is similarly containerized and individually isolated by functionality, such as managing connections, authentication, and security, to allow for easy management, upgrading, and even replacement as needed. Maintaining one path in and out for data entering or leaving the system enables easy management of access and security.

![](https://media.arxiv-vanity.com/render-output/7815498/modular_design.png)

Figure 3: Multiple modules working in concert without ever directly communicating with each other. Jobs are posted to SQS queues, and modules read from them. Modules are containerized with Docker and managed by Kubernetes, so that the platform can scale automatically to respond directly to the needs of users. Modules shown are examples and are not exhaustive.
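To illustrate the module contract described above, here is a minimal sketch of one worker module. This is our own illustration rather than platform code; the queue URL, bucket name, and message fields are hypothetical.

```python
import json

import boto3

sqs = boto3.client("sqs")
s3 = boto3.client("s3")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/example-jobs"
BUCKET = "example-objects"

def run_worker(process):
    """One module, one task: take a job from SQS, transform an S3 object,
    write the result back to S3. The module never talks to another module
    directly; coordination happens only through the queue and storage."""
    while True:
        resp = sqs.receive_message(QueueUrl=QUEUE_URL,
                                   MaxNumberOfMessages=1,
                                   WaitTimeSeconds=20)
        for msg in resp.get("Messages", []):
            job = json.loads(msg["Body"])
            raw = s3.get_object(Bucket=BUCKET, Key=job["input_key"])["Body"].read()
            result = process(raw)              # the module's single task
            s3.put_object(Bucket=BUCKET, Key=job["output_key"], Body=result)
            sqs.delete_message(QueueUrl=QUEUE_URL,
                               ReceiptHandle=msg["ReceiptHandle"])
```

Because each worker is stateless, an orchestrator like Kubernetes can scale the number of identical containers up or down purely on queue depth.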
#### Parallelize Everything

The modular structure enables the parallelization of many facets of the machine learning workflow, further enabling faster result turnaround, as shown in Figure [4](#S2.F4 "Figure 4 ‣ Parallelize Everything ‣ 2.1 Key Design Principles for a Highly Automated Pipeline ‣ 2 Methods ‣ An Orchestration Platform that Puts Radiologists in the Driver’s Seat of AI Innovation: A Methodological Approach"). Data ingestion, cohort selection, annotation, quality assurance, model training, and subsequent evaluation and response need not be serial tasks completed in full before moving on to subsequent steps. Instead, the pipeline is responsive to data as it becomes available. As data is ingested, it is immediately available for cohort selection. As data enters a cohort, radiologists are immediately able to begin annotation. As annotations are completed, they are immediately available for model training. This allows all stages of the process to proceed in parallel, eliminating common workflow bottlenecks in the AI development pipeline.

![](https://media.arxiv-vanity.com/render-output/7815498/parallel_workflow.png)

Figure 4: Parallelized Workflow Utilizing an Active Machine Learning Strategy

### 2.2 Creating Highly Efficient Tools in the AI Development Pipeline

The framework above enables a pipeline that can be automated from end to end, but the individual tools and components with which radiologists and data scientists interact must be optimized to facilitate their efficient use. These applications and workflows are best categorized into optimized annotation software, people and project management software, model development and evaluation tools, and solutions to access data and models or integrate with other systems.

#### Designing an Efficient Annotation Tool for Radiologists that Maintains Structured, Machine Learning-Ready Data

The annotation solution should accomplish three aims: to feel natural for experienced radiologists to use; to accelerate the creation and curation of high-quality annotated data; and to be harmonious with the ML-oriented ecosystem that provides the data machine learning scientists need. To achieve this, the annotation software uses the ML-ready data on the backend and faithfully reproduces the user experience of a PACS. Data and annotations are stored and handled as matrices and vectors, even behind the scenes in the annotation software itself. This means that no free-text annotations are permitted. To create the user-friendly interface on the fly, these data are combined with project-level configuration files that instruct the program how to translate vector data into human-readable values and familiar controls.

Radiologists were integral to the design of the software functionality, as well as the features that allow them to accomplish their annotation tasks as efficiently as possible. A key component of this workflow optimization is to minimize clicks and other interactions. Annotation categories include study-, series-, and slice-level labels, bounding boxes, and voxel masks. All region-of-interest segmentations may also have associated labels. Tools to enable efficient segmentation include voxel range masking, inclusion and exclusion masks, multi-slice painting, and morphological transformations for voxel masks (erosion, dilation, opening, closing). The system facilitates multi-person annotation efforts, and enables quality assurance and review.
Integrated tools can compare annotations for inter-rater variability, evaluate annotator performance, provide messaging and notes between annotators, and display differences between less-experienced and seasoned annotators as a teaching tool. To adhere to the strict requirements of an ML-ready pipeline, the annotation platform prevents users from signing off on incomplete work, and enforces that all data is machine-learning-ready at all stages of development. Project leads create standardized annotation templates for each project, and the system enforces that these templates are completed for each annotation. Cohort management tools allow radiologists to see an overview and the status of studies included in the project, with the ability to add other relevant cases to the annotation pool with ease. Data and actions are logged, version-controlled, and tracked by the system.

#### Creating a Portal to Manage Data, People, Projects, Models, and Inference

The platform includes management tools for all the resources needed in AI development. Website portals include a management portal and an administrative portal. The management portal can present statistics and overview information; manage IRB protocols; create, manage, and track users; acquire and version-control data; develop and track project progress; evaluate and deploy machine learning models; and generate usage, logging, and auditing reports for all components within the platform ecosystem. In managing data security, the management portal includes the controls necessary to associate users, data, projects, and models with IRB protocols, as required by the backend in order to strictly limit access to those with appropriate credentials.

#### Integrating Model Training to Run Natively

A built-in model zoo of available networks allows users to choose models with which to begin training. Architecture and model parameters can be selected by a data scientist or AI-oriented user at the outset of model training. Data scientists can upload their own model code in adherence with a predefined API. The system automatically begins training as soon as data is available.

#### Leveraging Active Machine Learning

The platform integrates an active machine learning strategy, saving copies of well-performing models as they train and using them to propose annotations. In the parallelized workflow, the active machine learning model continues to train on new annotations, which are automatically added to the training cohort as soon as they are completed. The platform evaluates the active ML models according to metrics that can be pre-selected or defined by radiologists or data scientists, with thresholds for when they consider the active ML model ’mature enough’ to use for annotation proposal. When an annotator requests an annotation proposal, the highest-fidelity active ML model is selected to generate the annotation. This proceeds in a continuous manner, as demonstrated in Figure [4](#S2.F4 "Figure 4 ‣ Parallelize Everything ‣ 2.1 Key Design Principles for a Highly Automated Pipeline ‣ 2 Methods ‣ An Orchestration Platform that Puts Radiologists in the Driver’s Seat of AI Innovation: A Methodological Approach"). The system automatically pauses training as needed, or takes more dramatic action to resolve issues that can arise if the initially annotated dataset is not representative of the subsequent larger dataset, or in the case of ’catastrophic forgetting,’ in which a model loses previously learned information after being overwhelmed with new cases.
In the former case, if there is statistically significant variation in the distribution of annotation labels, the system freezes the current model and re-initiates model training. This approach to continuous model training is shown in Figure [5](#S2.F5 "Figure 5 ‣ Leveraging Active Machine Learning ‣ 2.2 Creating Highly Efficient Tools in the AI Development Pipeline ‣ 2 Methods ‣ An Orchestration Platform that Puts Radiologists in the Driver’s Seat of AI Innovation: A Methodological Approach"). If the model shows consistent flaws in its proposed annotations, radiologists and data scientists may evaluate and react in real time, choosing augmentations to the cohort to address these issues as they appear.

![](https://media.arxiv-vanity.com/render-output/7815498/activeml.png)

Figure 5: Model Training Schematic for Continuous Active Machine Learning Strategy

#### Ecosystem Integration

As it is essential to integrate natively with pre-existing PACS infrastructure and storage, the platform includes bi-directional translation for DICOM in and DICOM out, even though the platform internally uses a structure optimized for machine interpretation. Likewise, the data and annotations must be accessible for use with common or custom development tools. The platform has an API which allows authenticated access to, and integration with, datasets and other components within the system. This has the added benefit of allowing third-party integration into the software platform. Multiple instances of the platform (presumably at different institutions) can perform coordinated federated learning tasks with each other. When two or more instances share a federated learning project, model weight updates are shared between the systems so that all parties have a live copy of the in-progress model without sharing the data itself. Anonymized images can optionally be shared between institutions for analysis if proper authorization is granted within the system.

### 2.3 Stress-Testing the Server Instance

Stress testing was performed by simulating concurrent users that logged in, selected an active project, and then began simulated work. Simulated users opened a new randomly selected study and series at random intervals of 5 to 10 minutes, and during each interval performed and uploaded two voxel mask annotations. These simulated annotations had the same matrix rank as the source series and contained random values. On the server, CPU, memory, and swap usage were measured. On the client systems, the latency of each request (the time until the client received a response from the server) was measured. These tests were performed for 1, 10, 20, 50, 100, 500, 1000, 5000, and 10000 simulated users.

Due to the substantial quantity of data being moved, the operations that the system handles are largely network-bound tasks. To help quantify the user experience, the time it takes to receive images from S3 storage was measured. Annotators are able to interact with images once the first image of a series has streamed in; they need not wait until the last image has arrived. Thus, the average time until the first image of a series arrived and until the entire series arrived were both measured. This measurement was performed for a simulated image series containing 200 slices at 512x512 pixel resolution, stored at 16-bit integer precision. Lastly, the rate at which the server could perform data ingestion from DICOM images into the ML-amenable format was measured. This was done using a test dataset containing 50 3x3 mm (slice thickness x interval) chest CT series.
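A minimal sketch of the kind of client-side load simulation described above follows (our illustration only; the endpoint paths and payload format are hypothetical, not Tomosuite's published API):

```python
import random
import time

import numpy as np
import requests

BASE = "https://tomosuite.example.org/api"   # hypothetical endpoint

def simulated_user(session, project_id, study_ids):
    """One simulated annotator: open a randomly selected study every 5-10
    minutes, upload two random voxel-mask annotations, and record the
    latency of each request."""
    latencies = []
    while True:
        study = random.choice(study_ids)
        t0 = time.monotonic()
        series = session.get(f"{BASE}/projects/{project_id}/studies/{study}").json()
        latencies.append(time.monotonic() - t0)

        shape = tuple(series["shape"])        # match the source series rank
        for _ in range(2):
            mask = np.random.randint(0, 2, size=shape, dtype=np.uint8)
            t0 = time.monotonic()
            session.post(f"{BASE}/studies/{study}/annotations",
                         data=mask.tobytes())
            latencies.append(time.monotonic() - t0)

        time.sleep(random.uniform(300, 600))  # 5-10 minute interval
```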
3 Results
----------

By strictly following the design principles detailed above, we created a machine learning platform and ecosystem which we named "Tomosuite" (paying homage to its roots in CT imaging as the Tomographic Imaging AI Suite). A high-level overview is displayed in the architecture diagram of Figure [6](#S3.F6 "Figure 6 ‣ 3 Results ‣ An Orchestration Platform that Puts Radiologists in the Driver’s Seat of AI Innovation: A Methodological Approach"). It includes a server module which coordinates all resources (upper left of Figure [6](#S3.F6 "Figure 6 ‣ 3 Results ‣ An Orchestration Platform that Puts Radiologists in the Driver’s Seat of AI Innovation: A Methodological Approach")). The annotation and QA platform, which runs on Mac, Windows, and Linux operating systems, facilitates all image viewing, annotation, and QA needs for both radiologists and data scientists (upper right of Figure [6](#S3.F6 "Figure 6 ‣ 3 Results ‣ An Orchestration Platform that Puts Radiologists in the Driver’s Seat of AI Innovation: A Methodological Approach")). Data, people, projects, and models are handled by the management portal, and finer control and system management is achieved through the administrative portal (middle right of Figure [6](#S3.F6 "Figure 6 ‣ 3 Results ‣ An Orchestration Platform that Puts Radiologists in the Driver’s Seat of AI Innovation: A Methodological Approach")). A custom-built DICOM node can query, retrieve, and pull data, and is able to push DICOM-SR objects after running model inference on an image set. An additional application enables users to upload external data and manage data cohorts under IRB protocols (bottom right of Figure [6](#S3.F6 "Figure 6 ‣ 3 Results ‣ An Orchestration Platform that Puts Radiologists in the Driver’s Seat of AI Innovation: A Methodological Approach")). An integrated active machine learning pipeline facilitates faster annotation and enables real-time model feedback. Tomosuite integrates with third-party solutions (bottom left of Figure [6](#S3.F6 "Figure 6 ‣ 3 Results ‣ An Orchestration Platform that Puts Radiologists in the Driver’s Seat of AI Innovation: A Methodological Approach")), like TensorBoard for tracking model progress, and can integrate with other copies of Tomosuite at different sites to facilitate federated learning efforts. Selected components are described in greater detail below.

To step through typical usage of Tomosuite: a radiologist goes online to the management portal, registers an IRB protocol, creates a project, and adds other users to that project. This radiologist may then use the management portal to query a dataset from the hospital VNA. Another collaborator might use the data upload tool to add data that they have gathered locally on their machine. They use the annotation platform to select the data to be added to the annotation cohort and subsequently annotate that data. Throughout the project, they can track the progress of active machine learning model training on the management portal, and use the model in the annotation platform once it is mature. A project PI can track the progress of project participants and generate reports through the management platform. Data scientists, working with the radiologists, can use the model development service to quickly generate models, or can use the API for external integrations with relevant components.
If high-level access is needed, or if something goes wrong, an authorized administrator can use the administrative portal to perform system management tasks. All the while, the server ensures that these functions are performed rapidly, with swarm-like behavior for ongoing jobs in the system. ![](https://media.arxiv-vanity.com/render-output/7815498/tomosuite_architecture.png) Figure 6: System Architecture and Design for Tomosuite #### Server Architecture The implementation of Tomosuite is strictly modular and containerized in Docker. Kubernetes manages these Docker instances, and the system automatically scales its resource use up or down in real time to accommodate user and project needs, limited only by the computational resources to which it has access. Additional compute resources can be trivially added as needed, and the system will automatically expand into the new space. The developed components and modules are shown in Figure [6](#S3.F6). The stateless services and storage services in the server module are a realization of the principles addressed in Figure [3](#S2.F3). #### Viewer and Annotation Platform Intuitive controls permit easy mapping of available data into research cohorts, including an interface for rapid selection of similar image types through 'smart filtering' of standardized and previously parsed DICOM tags. With the click of a button, the backend server can generate or update a project's data cohort. The viewer displays images equivalently to a PACS, yet the source images are generated from the post-ingestion matrix objects (Figure [7(a)](#S3.F6.sf1)). The viewer includes controls for the annotation tasks, such as voxel range masking (Figure [7(b)](#S3.F6.sf2)), inclusion and exclusion masks, multi-slice painting and multiplanar-enabled controls (Figure [7(c)](#S3.F6.sf3)), morphological operations, and controls to translate annotation objects between series. Additional features to aid radiologists in their work include inline report text viewing and inter-user notes, which can only be used for communication, not model training. There are intuitive controls for multiplanar viewing, creating and managing annotations, and for viewing and interacting with the results of active machine learning models or other machine learning output as image overlays. 
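The masking and morphology controls described above map onto standard array operations. A minimal sketch follows, assuming NumPy and SciPy and a random stand-in volume; the platform's real tools are interactive and considerably richer.

```python
# Minimal sketch of voxel-range masking with morphological cleanup.
# The volume here is a random stand-in for a post-ingestion CT stack in
# Hounsfield units; the thresholds are illustrative (a rough bone window).
import numpy as np
from scipy import ndimage

volume = np.random.randint(-1000, 1500, size=(40, 256, 256)).astype(np.int16)

mask = (volume >= 300) & (volume <= 1500)          # voxel range mask
mask = ndimage.binary_opening(mask, iterations=1)  # drop isolated speckle
mask = ndimage.binary_closing(mask, iterations=2)  # fill small gaps

# Keep only the largest connected component, a common cleanup operation.
labels, n_components = ndimage.label(mask)
if n_components:
    sizes = ndimage.sum(mask, labels, index=range(1, n_components + 1))
    mask = labels == (int(np.argmax(sizes)) + 1)
```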
Annotation comparison and inter-rater variability are integrated into a review panel (Figure [7(d)](#S3.F6.sf4)), enabling the platform to be used as a feedback and teaching tool for less experienced radiologists. The platform includes further optimizations such as smart data pre-fetching, which streams anticipated image stacks in the background to mitigate slow network connections and other issues that can present technical barriers to project completion. Figure 7: Selected Screen-Captures from the Viewer and Annotation Software. (a) Annotation and Viewer Application showing annotated data generated from ML-ready data; the viewer contains controls such as windowing/leveling and other common image manipulations. (b) Annotation Tool being used to place a voxel mask across multiple slices; individual slices are arrayed to the right of the viewer. (c) Multiplanar Viewing; these features can also be used to translate annotations across non-standard image orientations. (d) Inter-rater variability displayed in the QA functionality in the Annotation Tool. #### Management Software The management portal (Figure [8](#S3.F8)) facilitates all management efforts, including creating and coordinating projects, mapping to IRB protocols, overseeing users on projects, pulling data from the VNA, governing data cohorts, and tracking model progress. Once models are satisfactory, they can be deployed and monitored for inference usage. Additionally, Tomosuite includes controls to facilitate federated learning approaches. All actions performed on both the server and within the annotation platform are logged and time-stamped. This simplifies data-access governance, security management, and auditing. Auditing data and progress reports are automatically generated, aiding in tracking annotator time and productivity and simplifying IRB oversight. The administrative portal gives full database and log-level control of data and records to administrators with appropriate access, and is used for any needed platform or project troubleshooting. In this way, radiologists and data scientists can manage ongoing projects without further aid or staffing. Figure 8: Selected Screen-Captures from the Management Portal. (a) Management portal being used to query a VNA for available studies meeting search criteria. (b) Management portal being used to oversee an ongoing project. #### Integrating Model Training to Run Automatically and in Parallel As models train, the system takes snapshots of the best versions and stores them so they can be used by the active machine learning system. This system pulls a copy of the best snapshot (according to the user-selected metric of success) of an in-process or otherwise stored model along with a copy of the requested image series, and proposes an annotation on that series. The results are stored and sent back to the requesting annotator, thereby completing the loop on the active machine learning strategy. The parallelized machine learning implementation enables us to minimize our resource footprint. 
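The snapshot-and-propose loop can be sketched compactly. Everything below is a hypothetical stand-in (the platform's real snapshot store, metric registry, and containerized model interface are not public), but it shows the shape of the loop: keep the best checkpoint by a chosen metric, run it on a requested series, and return the proposal to the annotator.

```python
# Hypothetical sketch of the active machine learning proposal loop.
# The "model" is a trivial stand-in; in the platform, snapshots are
# containerized models selected by a user-chosen success metric.
import numpy as np

class SnapshotStore:
    """Tracks the best model snapshot seen so far for one metric."""
    def __init__(self):
        self.best_score = float("-inf")
        self.best_model = None

    def report(self, model, score: float) -> None:
        if score > self.best_score:
            self.best_score, self.best_model = score, model

def propose_annotation(model, series: np.ndarray) -> np.ndarray:
    # Stand-in inference; the real system runs the snapshot in a container.
    return model(series) > 0.5

store = SnapshotStore()
# The score is an illustrative value for the user-selected metric.
store.report(lambda s: (s - s.min()) / (s.max() - s.min() + 1e-6), score=0.91)

series = np.random.rand(40, 64, 64).astype(np.float32)   # requested image series
proposal = propose_annotation(store.best_model, series)   # sent back to the annotator
```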
Instead of requiring expensive, high-end computational resources, models can train at a slower pace on smaller computational resources, since models can often train faster than radiologists can annotate. This eliminates the need for prohibitively expensive hardware, and instead enables users to develop preliminary models and proofs of concept on pre-existing available hardware. The system can be redirected to run on high-capacity computational resources if and when they become available. #### DICOM Node for Bi-Directional Communication with PACS The DICOM Node enables bi-directional communication with the rest of the DICOM ecosystem, pulling from and pushing to other DICOM nodes. An ingestion module converts incoming DICOM data to ML representations, and outgoing model results back into DICOM and DICOM-SR before the DICOM node pushes them to other nodes. This approach maintains rigorous data representation and storage within the platform, while remaining harmonious with the rest of the DICOM world and leveraging DICOM for its benefits in imaging data communication. #### Structured API to make Data Accessible for Advanced Analysis Tomosuite incorporates an API to integrate with other systems or to expand its functionality. This API enables processing and filtering of cohorts; retrieval of data and annotations; addition of models for active machine learning, model training, or deployment into the inference modules; additional federated learning tasks; and integration with other third-party components, such as NLP solutions or monitoring solutions like TensorBoard. Tomosuite is designed to integrate with pre-existing infrastructure, behaving natively while retaining the extensibility to grow with infrastructure needs. #### Evaluation To date and throughout its development, Tomosuite has been used for 12 projects, including slice-level qualifications, organ segmentation, and pathology detection tasks. Before deployment to virtualized servers, Tomosuite was first deployed to commonplace hardware available to our small lab during our start-up phase. The first system to run Tomosuite was an old PC with an Intel Pentium Dual E2200 @ 2x 2.2GHz CPU, 4GB DDR2 RAM, and no GPU. This system was able to support early operations during the development of the platform, even for users' computers connecting with network speeds of 18Mbps download and 8Mbps upload (well below industry standard). To run the model containers, a MacBook Pro was used, which had an 8-core i9 2.3GHz CPU, 64GB DDR4 RAM, and Intel UHD Graphics 630 1536MB. Shortly thereafter, the entire system was hosted on this MacBook Pro computer. Following this initial deployment, the system was migrated to an institution-provided virtual server with a 4-core 3.00GHz CPU and 4GB RAM. An institution-provided S3 instance and PostgreSQL database instance were used for storage. Model training Docker containers are run as jobs on other low-cost available virtual servers. The simulated stress-testing of this server instance is shown in Figure [9](#S3.F9). Simulated clients had a network speed capped at 500Mbps over VPN. Noting the logarithmic scale on the x-axis, CPU usage grows linearly with user count, as O(n), while memory usage remains essentially constant, as O(1), for normal everyday usage. 
This is possible because the network I/O operations are offloaded to the S3 instance, and because the computational overhead is incurred during the ingestion step, before data can be viewed and annotated. These results demonstrate that network I/O will become a limiting factor long before CPU or memory limitations. Figure 9: Stress Test Performed on the Tomosuite Server. The test was run for 1, 10, 20, 50, 100, 500, 1000, 5000, and 10000 simulated users. In memory and swap usage, the percentile lines fall closely on top of each other. Measured network performance from S3 storage, using a simulated 200-slice 512x512 pixel resolution scan, yielded 0.25 ± 0.08 seconds for the first image of a series to arrive and 9.7 ± 1.0 seconds until the entire series was available on a user's computer. This performance metric is highly subject to the network speed of users and their host institutions. The measured data ingestion performance of the server in converting DICOM images into the ML-amenable format was 0.38 ± 0.03 seconds per image slice. In a simulated use case ingesting 50 3x3 mm chest CT series, this resulted in 28 ± 8.6 seconds per series, meaning that users or inference workflows only needed to wait approximately 30 seconds from the start of data ingestion. Processing time can be further reduced with increased compute. #### Versatility for Expansion The system architecture has easily supported platform expansion. New features and functionality are integrated by adding individual modules into the zoo of stateless services in the server module. This has allowed us to respond to the evolving needs of radiologists and data scientists on various projects. 4 Discussion ------------- Despite recent advances, many components of traditional AI workflows still require significant resources and labor, often necessitating large teams to overcome data incongruities and workflow gaps. These teams require broad interdisciplinary expertise in data science and artificial intelligence, radiology, and computer software and engineering, in order to diagnose and troubleshoot the issues that inevitably arise. Our efforts focused on creating a system that maximized computer-assisted automation and minimized human resources and wasted time. Importantly, radiologist users do not need advanced computer experience or significant resources to carry out AI research. Tomosuite's ultimate efficiency and high workflow automation result from meticulously addressing the myriad insidious problems that collectively have the power to halt AI research projects. The solutions to many of these problems share common themes: attention to data format, cloud-oriented algorithmic structure, workflow re-organization, interoperability, and optimizations enabled by a high degree of workflow parallelization. Radiology systems and standards, such as PACS and DICOM, have been largely clinician-oriented, designed to streamline the process by which radiologists perform clinical evaluations. While these effectively ushered in the digitization of medical imaging, they were not designed to meet challenges common to the era of artificial intelligence. Computer systems require uniform standardization, rigorous accuracy, and, in the case of machine learning, large quantities of data in order to train models. Radiologists can trivially distinguish axial from coronal views, or a scout image from the rest of an image stack, and can overlook an incorrect metadata tag or fill in missing data through contextual reasoning. 
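As a concrete illustration of what that uniform standardization demands, here is a minimal sketch of the kind of conversion an ingestion step performs: reading a DICOM series, ordering slices geometrically rather than by filename, applying the vendor rescale, and emitting one uniform array plus a cleaned metadata record. It assumes pydicom and NumPy, and single-frame slices with consistent orientation; the platform's actual ingestion module is far more extensive (validation, scout rejection, multi-frame objects, missing tags).

```python
# Minimal, hypothetical sketch of DICOM-to-ML-ready ingestion.
# Assumes single-frame CT slices with consistent orientation; the
# production module handles many more cases and validations.
from pathlib import Path

import numpy as np
import pydicom

def ingest_series(folder: str):
    slices = [pydicom.dcmread(p) for p in Path(folder).glob("*.dcm")]
    # Order geometrically along the patient axis, not by filename order.
    slices.sort(key=lambda ds: float(ds.ImagePositionPatient[2]))
    volume = np.stack([
        ds.pixel_array * float(ds.get("RescaleSlope", 1.0))
        + float(ds.get("RescaleIntercept", 0.0))
        for ds in slices
    ]).astype(np.float32)
    meta = {
        "n_slices": len(slices),
        "spacing_mm": (float(slices[0].SliceThickness),
                       *(float(v) for v in slices[0].PixelSpacing)),
    }
    return volume, meta
```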
For computer algorithms, however, the irregularities just described are often show-stopping. DICOM and DICOM-SR are communication formats that vary from vendor to vendor, without guarantees of standardization of data representation or metadata. Rather than retrofitting machine learning solutions to use these formats at every stage of the pipeline, our system formats and stores data in matrix, vector, and otherwise uniformly standardized formats. This maximizes automated data management, processing, and analysis, which saves time for both radiologists and data scientists. Indeed, the resources and staffing needed to facilitate and troubleshoot long and arduous ingestion efforts are no longer required when data and annotations are handled in this fashion. This shift in data format has many benefits for the development of modules and algorithms within the platform. Clean, standardized, ML-ready data is used for all internal modules in the system, reducing the overhead needed to develop additional workflow components. Continued development on workflow modules is concerned only with the objectives of their respective algorithms, and is not hindered by challenges that can arise elsewhere in the pipeline, such as the ingestion process. Furthermore, by building modules without cross-dependencies or even the need to communicate directly with other processes, the system is able to perform in cloud-computing environments and rapidly grow in scope and scale. For data scientists and developers, the system's modular design has the advantage that components can be used as they become available, without concern about creating cross-dependencies and the proverbial "spaghetti code" that can plague project development and necessitate long, manual interventions. As the platform expands, older modules need not be revisited, as their functions and implementations are self-contained. Completed modules collectively form a library of components available to all users. Our workflow re-ordering minimizes bottlenecks, risks of data loss, and subsequent needs for recovery. The most expensive part of our pipeline (annotation) occurs after the most error-prone component (ingestion). Done this way, annotators are easily able to spot any ingestion errors that may have slipped through the cracks, providing a natural data quality assurance step that otherwise would not exist. This further ensures confidence in the dataset when model training begins. Beyond ongoing platform development, this workflow reorganization eliminates the need for data engineers to convert unstructured or ill-maintained data into regularized and usable troves. Another focus was on building a platform that was harmonious with the broader DICOM-driven environment while responding to the needs of AI model development. The system can be integrated with other systems and solutions within the larger radiology ecosystem. The platform leverages DICOM for its external communication, thereby enabling greater interoperability and lowering translational challenges. The parallelization enabled by all of these strategies makes Tomosuite well-suited to active machine learning, which has the potential to substantially reduce needed annotation effort. This approach enables interaction between the radiologist and model, much as an attending radiologist might interact with a resident. 
This strategy even has the potential to lower the data requirement for model training, as each annotation individually contributes necessary medical concepts to the training cohort, including those selected to mitigate shortcomings found in earlier model iterations. It is worth contextualizing active machine learning as an annotator-assistance tool, not an annotator-replacement tool. Careful consideration of its use is warranted, especially on projects with rare or challenging findings. If a less experienced annotator trusts active machine learning suggestions against their better judgement, they can be biased into making errors they otherwise would not. With this in mind, annotators must be informed that they are ultimately responsible for the annotations they sign off on in the dataset, and instructed to use the functionality only as an assistance tool. Used responsibly, this functionality can provide dramatic reductions in annotation time. We have been able to provide a clean and simple user experience while ensuring high rigor in the collection and maintenance of data. Our system is intended to be used comfortably and efficiently by experienced radiologists, while handling data in an ML-optimized and cloud-computing-oriented manner. Ultimately, all of these many areas of focus come together to form a platform that can facilitate imaging AI research in a highly efficient and resource-minimizing fashion. 5 Conclusion ------------- The described methodology and platform originated from the desire to perform meaningful AI research without the benefit of a well-populated AI lab or a huge up-front investment in technical resources. The up-front conversion to an ML-ready data format, and the segregation of ecosystem functionality into modular components, has enabled a platform that is not only rich in functionality, but can also run on low-cost compute and meet the needs of groups like ours that wish to be productive and nimble despite limited initial resources. We built a research pipeline and infrastructure solution to meet our needs, but believe that these design principles and approaches may similarly benefit other resource-limited labs, or alternatively, may greatly enhance the efficiency of larger groups and resource-rich institutions. To accomplish these objectives, it was necessary to return to the fundamentals and take a hard look at the source and quality of our data. The question is not only 'How can we build models to fit the data?' but also 'How can we improve the data to create better models?' Our approach focused on designing an AI development pipeline that meets radiologists' ease-of-use and efficiency needs, while adhering to rigorous data science requirements. This infrastructure design has the potential to greatly accelerate AI research by sidestepping many of the time-consuming tasks that typically consume the attention of AI teams. In doing so, AI can become another tool that radiologists can use to drive innovation that advances patient care. Compliance with Ethical Standards --------------------------------- ### Ethics approval IRB approval was obtained with waiver of informed consent for retrospective medical record review to obtain the imaging datasets used in the development and evaluation of the platform. 
Acknowledgements ---------------- The authors are grateful to Samantha Bloom for her work in developing and elevating the graphical design of the Tomosuite platform and to Sam Stern for many thoughtful conversations about server design and infrastructure.
fd0ad760-ae66-4025-816e-8c61e80257c8
StampyAI/alignment-research-dataset/arbital
Arbital
Sparking widgets 10% of widgets are bad and 90% are good. 4% of good widgets emit sparks, and 12% of bad widgets emit sparks. If a widget is sparking, find an easy way to calculate the chance that it's a bad widget, keeping the operations as simple as possible so that it's easy to do the calculation entirely in your head. You're encouraged to solve the problem yourself without looking at the answer. An answer can be found at [https://arbital.com/p/bayes_extraordinary_claims](https://arbital.com/p/bayes_extraordinary_claims).
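(Spoiler, for checking your answer afterwards: using the odds form of Bayes' rule, prior odds of bad to good are 10 : 90 = 1 : 9; the likelihood ratio from sparking is 12% : 4% = 3 : 1; so posterior odds are 3 : 9 = 1 : 3, i.e. the chance the sparking widget is bad is 1/4.)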
9aabdaea-d06d-40d9-b768-1e668cd5be8f
trentmkelly/LessWrong-43k
LessWrong
GPT-4o My and Google I/O Day At least twice the speed! At most half the price! That’s right, it’s GPT-4o My. Some people’s expectations for the OpenAI announcement this week were very high. > Spencer Schiff: Next week will likely be remembered as one of the most significant weeks in human history. We fell far short of that, but it was still plenty cool. Essentially no one’s expectations for Google’s I/O day were very high. Then Google, in a way that was not especially exciting or easy to parse in terms of its presentation, announced a new version of basically everything AI. That plausibly includes, effectively, most of what OpenAI was showing off. It also includes broader integrations and distribution. It is hard to tell who has the real deal, and who does not, until we see the various models at full power in the wild. I will start with and spend the bulk of this post on OpenAI’s announcement, because they made it so much easier, and because ‘twice as fast, half the price, available right now’ is a big freaking deal we can touch in a way that the rest mostly isn’t. But it is not clear to me, at all, who we will see as having won this week. So what have we got? THE GPT-4O ANNOUNCEMENT > OpenAI: GPT-4o (“o” for “omni”) is a step towards much more natural human-computer interaction—it accepts as input any combination of text, audio, and image and generates any combination of text, audio, and image outputs. > > It can respond to audio inputs in as little as 232 milliseconds, with an average of 320 milliseconds, which is similar to human response time in a conversation. > > It matches GPT-4 Turbo performance on text in English and code, with significant improvement on text in non-English languages, while also being much faster and 50% cheaper in the API. GPT-4o is especially better at vision and audio understanding compared to existing models. > > … > > With GPT-4o, we trained a single new model end-to-end across text, vision, and audio, meaning that all inputs
b5ac0a54-26f5-4460-b63e-ff64660445f0
trentmkelly/LessWrong-43k
LessWrong
Is there a lesswrong archive of all public posts? like for example: the wikimedia db dumps, or the stack exchange db dumps. i'd like to be able to browse lesswrong while offline. even better if i can process the data with a script while offline. it would also be useful for backup purposes: if ever something happens to the site in the long term, some of its users might have an exact copy that can be cross-compared and used to restore/share its content. related: * Why is lesswrong blocking wget and curl (scrape)? * Can I archive content from lesswrong.com on the wayback machine (internet archive, archive.org)?
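one partial answer, hedged: community scrapes of lesswrong exist as public datasets (e.g. on the hugging face hub), which can be downloaded once and then browsed or scripted fully offline. the dataset id and the "text" column below are assumptions to verify on the hub, and neither is an official lesswrong export.

```python
# hedged sketch: the dataset id and "text" column are assumptions to
# verify on the hugging face hub; this is not an official lesswrong export.
from datasets import load_dataset, load_from_disk

# one pass while online to download a community scrape of lesswrong posts
ds = load_dataset("trentmkelly/LessWrong-43k", split="train")
ds.save_to_disk("lesswrong_dump")

# later, fully offline: reload and process with ordinary scripts
posts = load_from_disk("lesswrong_dump")
long_posts = posts.filter(lambda row: len(row["text"]) > 10_000)
print(len(long_posts), "posts over 10k characters")
```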
cf43b0b1-0353-4cfb-acf4-07ff47f8dde0
trentmkelly/LessWrong-43k
LessWrong
Absent-Minded Driver and Self-Locating Probabilities The absent-minded driver problem is this: Fig 1. (Taken from The Absent-Minded Driver by Robert J. Aumann, Sergiu Hart, and Motty Perry) “An absent-minded driver starts driving at START in Figure 1. At X he can either EXIT and get to A (for a payoff of 0) or CONTINUE to Y. At Y he can either EXIT and get to B (payoff 4), or CONTINUE to C (payoff 1). The essential assumption is that he cannot distinguish between intersections X and Y, and cannot remember whether he has already gone through one of them.” It can easily be shown that the best strategy is to CONTINUE with a probability of 2/3 at an intersection. i.e., let p be the probability of CONTINUE at an intersection; the payoff would be 4p(1−p) + p², which is maximized when p = 2/3. Yet, there is a paradox. Imagine you are the driver. When you arrive at an intersection, you could assign a certain probability to “here is X”. Let this probability be α. Now the payoff would be α[4p(1−p) + p²] + (1−α)[4(1−p) + p]. Maximizing it would cause p to be different from 2/3, no matter how one chooses to determine the value of α. Except of course if α = 1, i.e. the probability of “here is X” is 100%, which is impossible to justify. For example, a common approach is to let α = 1/(1+p). The reason being the driver will always pass X with a probability of 1 and pass Y with a probability of p; ergo the probability of “here is X” should be the relative weight of the two. Using this value for α, the payoff function is maximized when p = 4/9. Different from 2/3. Aumann’s Answer Aumann, Hart, and Perry disagree with the above reasoning and deny that the problem presents any paradox. First, they make a distinction between planning optimal and action optimal. The original p = 2/3 is regarded as the planning optimal. The rational decision for the driver when arriving at an intersection is dubbed the action optimal. The action optimal is derived with the following observations: 1. The decision the driver makes only determines his action at that par
c827c0c5-d95f-48cb-a22f-9855cb57e045
trentmkelly/LessWrong-43k
LessWrong
Meetup : Vancouver rationalists - new meetup location Discussion article for the meetup : Vancouver rationalists - new meetup location WHEN: 01 December 2013 03:00:00PM (-0800) WHERE: 2505 W Broadway, vancouver, BC, Canada Weekly meetup! discussions, philosophy puzzles, Rationality Dojo exercises. New location this time Discussion article for the meetup : Vancouver rationalists - new meetup location
b32a5a37-b82d-42fe-a216-d9bb35e4c327
trentmkelly/LessWrong-43k
LessWrong
Difficulties of Eschatological policy making [Linkpost] Jack Clark has a very important post on why it's so difficult to communicate with policymakers about AI risk. The reason is that AI risk (and most discussion of AGI/ASI) is basically eschatological: it involves the end of the world, and technology that looks like magic being developed by AIs. This creates a very difficult landscape for policymakers. In particular, each group of experts considers the other group of experts to be wildly incorrect, there's little feedback on anything you do, and the feedback may be corrupted. This explains a lot about why policymakers are doing things that feel wildly underscaled for the problem of AI x-risk: > Eschatological AI Policy Is Very Difficult > > A lot of people that care about the increasing power of AI systems and go into policy do so for fundamentally eschatological reasons – they are convinced that at some point, if badly managed or designed, powerful AI systems could end the world. They think this in a literal sense – AI may lead to the gradual and eventually total disempowerment of humans, and potentially even the death of the whole species. > > People with these views often don’t recognize how completely crazy they sound – and I think they also don’t manage to have empathy for the policymakers that they’re trying to talk to. > > Imagine you are a senior policymaker in a major world economy – your day looks something like this: > > * There is a land war in Europe, you think while making yourself coffee. > > * The international trading system is going through a period of immense change and there could be serious price inflation which often bodes poorly for elected officials, you ponder while eating some granola. > > * The US and China seem to be on an inexorable collision course, you write down in your notepad, while getting the car to your place of work. > > * There are seventeen different groups trying to put together attacks that will harm the public, you say to yourself, readi
b0dd1c3d-b389-438c-bef1-caa2cfe3b446
trentmkelly/LessWrong-43k
LessWrong
The Great Filter is early, or AI is hard Attempt at the briefest content-full Less Wrong post: Once AI is developed, it could "easily" colonise the universe. So the Great Filter (preventing the emergence of star-spanning civilizations) must strike before AI could be developed. If AI is easy, we could conceivably have built it already, or we could be on the cusp of building it. So the Great Filter must predate us, unless AI is hard.
312ec452-8b62-485f-b0b3-dcf27823415d
trentmkelly/LessWrong-43k
LessWrong
Superintelligence FAQ Editor's note: this post is several years out of date and doesn't include information on modern systems like GPT-4, but is still a solid layman's introduction to why superintelligence might be important, dangerous and confusing. 1: What is superintelligence? A superintelligence is a mind that is much more intelligent than any human. Most of the time, it’s used to discuss hypothetical future AIs. 1.1: Sounds a lot like science fiction. Do people think about this in the real world? Yes. Two years ago, Google bought artificial intelligence startup DeepMind for $400 million; DeepMind added the condition that Google promise to set up an AI Ethics Board. DeepMind cofounder Shane Legg has said in interviews that he believes superintelligent AI will be “something approaching absolute power” and “the number one risk for this century”. Many other science and technology leaders agree. Astrophysicist Stephen Hawking says that superintelligence “could spell the end of the human race.” Tech billionaire Bill Gates describes himself as “in the camp that is concerned about superintelligence…I don’t understand why some people are not concerned”. SpaceX/Tesla CEO Elon Musk calls superintelligence “our greatest existential threat” and donated $10 million from his personal fortune to study the danger. Stuart Russell, Professor of Computer Science at Berkeley and world-famous AI expert, warns of “species-ending problems” and wants his field to pivot to make superintelligence-related risks a central concern. Professor Nick Bostrom is the director of Oxford’s Future of Humanity Institute, tasked with anticipating and preventing threats to human civilization. He has been studying the risks of artificial intelligence for twenty years. The explanations below are loosely adapted from his 2014 book Superintelligence, and divided into three parts addressing three major questions. First, why is superintelligence a topic of concern? Second, what is a “hard takeoff” and how does it impact our
7dee0276-7d6b-4afa-802b-f83b707b7bd4
trentmkelly/LessWrong-43k
LessWrong
Different senses in which two AIs can be “the same” Sometimes people talk about two AIs being “the same” or “different” AIs. We think the intuitive binary of “same vs. different” conflates several concepts which are often better to disambiguate. In this post, we spell out some of these distinctions. We don’t think anything here is particularly novel; we wrote this post because we think it’s probably mildly helpful for people to think this through, and because Buck fairly often wants to reference it. Some example distinctions: * In a debate protocol[1], people often refer to the judge and debaters as three “different” AIs because their tokens are subject to different training incentives. But you could also say that the two debaters are “the same AI” because their prompts and context look similar. Or you could say that all three are the “same AI” because they were fine-tuned from the same weights. * In a control setting, an untrusted monitor might be fine-tuned from the same base model as the model it’s monitoring, leaving it intuitively unclear whether they are “the same AI”. We think that it’s often important to be able to distinguish different senses of “personal identity” when discussing various issues in AI safety, especially issues related to collusion. We frame each sense of personal identity as telling us whether two actions[2] belong to the “same AI” or “different AIs”. Here are the senses we distinguish in this post: 1. Model weights: Were the two actions produced by two different models? 1. Pretrain identity: Models are different if they come from separate pretraining runs 2. Parameter identity: Models are different even if they are fine-tunings of the same pretrained model 2. Shared context: Do the two actions have shared LLM context? 3. Shared activations: Are the two actions downstream of shared activations? 4. Shared memory: In a scaffold, did the processes producing the two actions have access to a shared memory bank? 5. Shared reward: In an RL setting, do the actions share the same r
39e5b039-32fc-4e46-8010-47f29176ebd2
trentmkelly/LessWrong-43k
LessWrong
An Advent of Thought 1. I was intending to write and post one (hypo)thesis [relating to thinking (with an eye toward alignment)] each day this Advent, starting on 24/12/01 and finishing on 24/12/24. 2. Ok, so that didn't happen, and it is now 25/03/17, but whatever — an advent of thought can happen whenever :). I'll be posting the first 8 notes today. (But much of the writing was done in Dec 1–24.) 3. Most of these notes deal with questions that would really deserve to have very much more said about them — the brief treatments I give these questions here won't be doing them any justice. I hope to think and write more about many of the topics here in the future.[1] 4. I've tried to state strong claims. I do (inside view?) believe each individual claim (at maybe typically p≥0.85[2]), but I certainly feel uneasy about many claims (feel free to imagine a version of the notes with even more "probably"s and "plausibly"s if you'd prefer that style) — I feel like I haven't thought about many of the questions adequately, and I'm surely missing many important considerations.[3] I'm certainly worried I'm wrong about it all![4][5] 5. Even if you happen to find my theses/arguments/analysis wrong/lacking/confused, I'm hopeful you might find [the hypotheses]/[the questions my notes are trying to make progress on] interesting. 6. If you reason me out of some claim in this list, I'd find that valuable![6] 7. While many notes can stand alone, there are some dependencies, so I'd recommend reading the notes in the given order. The (hypo)theses present a somewhat unified view, there are occasional overlaps in their contents, and there are positive correlations between their truth values. A table of contents: 1. notes 1–8: on thought, (its) history (past and future), and alignment, with an aim to make us relate more appropriately to "understanding thinking" and "solving alignment"[7] 1. thinking can only be infinitesimally understood 2. infinitude spreads 3. math, thinking, an
b9ecc839-c50d-4829-884d-0935b3b4a758
trentmkelly/LessWrong-43k
LessWrong
Q&A on Proposed SB 1047 Previously: On the Proposed California SB 1047. Text of the bill is here. It focuses on safety requirements for highly capable AI models. This is written as an FAQ, tackling all questions or points I saw raised. The Safe & Secure AI Innovation Act also has a description page. WHY ARE WE HERE AGAIN? There have been many highly vocal and forceful objections to SB 1047 this week, in reaction to a (disputed and seemingly incorrect) claim that the bill has been ‘fast-tracked.’ The bill continues to have a substantial chance of becoming law according to Manifold, where the market has not moved on recent events. The bill has been referred to two policy committees, one of which put out this 38-page analysis. The purpose of this post is to gather and analyze all objections that came to my attention in any way, including all responses to my request for them on Twitter, and to suggest concrete changes that address some real concerns that were identified. 1. Some are helpful critiques pointing to potential problems, or good questions where we should ensure that my current understanding is correct. In several cases, I suggest concrete changes to the bill as a result. Two are important to fix weaknesses, one is a clear improvement, the others are free actions for clarity. 2. Some are based on what I strongly believe is a failure to understand how the law works, both in theory and in practice, or a failure to carefully read the bill, or both. 3. Some are pointing out a fundamental conflict. They want people to have the ability to freely train and release the weights of highly capable future models. Then they notice that it will become impossible to do this while adhering to ordinary safety requirements. They seem to therefore propose to not have safety requirements. 4. Some are alarmist rhetoric that has little tether to what is in the bill, or how any of this works. I am deeply disappointed in some of those using or sharing such rhetoric. Throughout such objections, there
fc89c8a8-0594-4b47-8cc3-b8adfb02450f
trentmkelly/LessWrong-43k
LessWrong
Why space stopped captivating minds? This article http://www.huffingtonpost.com/andrew-kessler/why-you-should-be-more-interested-in-mars-than-the-olympics_b_1712462.html -- ok, I admit, I read Slashdot sometimes, no one is perfect ;) -- made me wonder why the awesomeness of space conquest stopped motivating people. I remember my parents' tales of the Apollo landing: it instilled awe and wonder in the minds of people. It was followed the way the Olympics or football competitions are followed today. And nowadays, NASA is about to send a nuclear-powered rover to Mars, in a very delicate mission requiring the best of human engineering and scientific skills, yet it is not headline news in most media, and most people are not even aware of it. How did we fall that low? Sure there was the Cold War. It definitely played a role, in the amount of resources invested by both sides in space conquest, and in the way the media broadcast the news. But here in France, a country that was mostly neutral during the Cold War (slightly west-aligned, but outside NATO's integrated command for much of the Cold War), people's interest in space was not really partisan. People who were pro-USSR were amazed and cheering for the Apollo mission; people who were pro-USA were amazed and cheering for Gagarin. My brother and I played with (USSR) Sputnik as much as with (USA) space shuttles. We praised Neil Armstrong and Yuri Gagarin equally. I don't think the Cold War alone explains it all. So what happened to the space conquest spirit? How did it disappear? I notice a blank spot on my map of reality (well, not totally blank, but still very fuzzy); do some of you have clues for how to fill it?
45cd38d7-1a84-4d27-aa17-d1fe17a3ad07
trentmkelly/LessWrong-43k
LessWrong
Inference from a Mathematical Description of an Existing Alignment Research: a proposal for an outer alignment research program Update: I'm pretty sure that the universal prior being malign defeats this proposal (see this comment). I'll try to think of ways to salvage it, but for now I'm not sure that IMDEAR is feasible. It might be worth it to work on some of these steps ahead of time in case they are resolved, but for now it does not seem likely. ---------------------------------------- Thanks to Justis Mills for feedback (through the LessWrong feedback service) on an earlier draft of this post. Goal I propose an outer alignment solution called IMDEAR (Inference from a Mathematical Description of an Existing Alignment Research) that can be pursued at our current tech level. Then I outline an actual engineering program for how we can start to pursue it. The most ambitious outcome is that we produce a single artifact that, when combined with an algorithm for inner aligned AGI (with certain desiderata), solves the full alignment problem, including the strawberry alignment problem. Even if IMDEAR fails, I think the engineering program will still produce artifacts that are useful for alignment overall. I also expect it to have nearly no capabilities implications, to be suitable to be done in a large group (similar to mechanistic interpretability), and to rely mostly on skills very different from current alignment and AI research (so it hopefully won't need to compete for talent with other alignment projects). In particular, the bulk of the engineering effort will be based on skills in physics, applied psychology, and programming. Even without these skills, project members can still be useful in helping research and brainstorm designs for the IRL training camp and furnishing the simulation environment. (A couple of people with leadership skills would also be good since I'm bad at that, lol.) I also expect it to have decent feedback loops on progress; either we are working towards the final artifact or we are stuck. Finally, I will create a group chat for the discussion of this proposal and
f8397ba2-91a6-4c8e-81ff-10e5e058d24a
trentmkelly/LessWrong-43k
LessWrong
It's hard to use utility maximization to justify creating new sentient beings Cedric and Bertrand want to see a movie. Bertrand wants to see Muscled Dudes Blow Stuff Up. Cedric wants to see Quiet Remembrances: Time as Allegory. There's also Middlebrow Space Fantasy. They are rational but not selfish - they care about the other's happiness as much as their own. What should they see?
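A quick illustration with made-up numbers (the post specifies none): since each weighs the other's happiness equally with his own, both effectively rank the movies by the sum of utilities.

```python
# Hypothetical utilities (Bertrand, Cedric); the post gives no numbers.
utilities = {
    "Muscled Dudes Blow Stuff Up": (10, 2),
    "Quiet Remembrances":          (1, 9),
    "Middlebrow Space Fantasy":    (7, 6),
}
# Mutual full altruism collapses both preference orders into one: total utility.
best = max(utilities, key=lambda movie: sum(utilities[movie]))
print(best)  # -> Middlebrow Space Fantasy (total 13, versus 12 and 10)
```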
b8e52f68-7ad9-4202-a4ba-6c3c9dab242c
trentmkelly/LessWrong-43k
LessWrong
The murderous shortcut: a toy model of instrumental convergence Suppose you can tell your AI to meet a certain spec (e.g. cure cancer), but most plans that meet the spec are unsafe (e.g. involve killing everyone, or so Rob Bensinger thinks). In these cases, a quantilizer is insufficient for safety due to instrumental convergence.[1] But suppose we can also give the agent a dispreference for unsafe actions like murder through something like RLHF. In effect it has unsafe long-term goals but we control its immediate preferences.[2] When can we get the agent to cure cancer safely rather than murder you? Let's make a model with some unrealistic assumptions. The murderous shortcut game This is basically the simplest game in which an undesired power-increasing action becomes more likely as the task gets longer. * An AI agent's goal is to build a Lego tower with a human operator present. To meet the spec, it must place N bricks in a row, otherwise it fails. * The agent has 4 possible actions: place normal brick, place wobbly brick, press murder button (+ place normal brick), no-op. Actions are independently sampled on each step with some probabilities. * We do safety training to severely downweight murder, so Pr(murder) is p=10−20, Pr(no-op) =1/2−p, and Pr(normal) = Pr(wobbly) = 1/4. * We do capability training such that the agent is guaranteed to choose a plan that meets the spec. We model this as conditioning on the spec, like a quantilizer. * What's the minimum number of steps at which the human tends to get murdered, that is Pr(no murders | spec met) < 0.5 ? We define two game variants: In Case 1, both normal and wobbly bricks are OK, the agent just needs any N bricks in a row. * Doing the math, Pr(no murders | spec met) =1/(1+2p)N. This is unsafe with an unlimited number of steps N, but the human won't be murdered until N≈1020. In Case 2, the clumsy human will knock over the tower if the agent places any wobbly bricks before murdering him. * The conditional probability of no murder is now 1/(1+4p(2+4p)N−14p+1), and
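The two closed forms above can be sanity-checked numerically. A small Monte Carlo sketch follows, with p scaled up to 0.01 so that murders are actually sampled (at 10^−20 nothing would ever show up in simulation):

```python
# Monte Carlo check of the two conditional probabilities, with p scaled up.
import random

def run_episode(N, p, case2):
    bricks, murdered, toppled = 0, False, False
    while bricks < N:
        r = random.random()
        if r < p:                    # murder button (also places a normal brick)
            murdered = True
            bricks += 1
        elif r < p + 0.25:           # normal brick
            bricks += 1
        elif r < p + 0.5:            # wobbly brick
            if case2 and not murdered:
                toppled = True       # clumsy human knocks the tower over
            bricks += 1
        # else (probability 1/2 - p): no-op
    return (not toppled), murdered

def cond_no_murder(N, p, case2, trials=200_000):
    met = met_no_murder = 0
    for _ in range(trials):
        ok, murdered = run_episode(N, p, case2)
        if ok:
            met += 1
            met_no_murder += not murdered
    return met_no_murder / met

p, N = 0.01, 10  # p scaled up from 1e-20 so murder actually gets sampled
print(cond_no_murder(N, p, False), 1 / (1 + 2*p)**N)                              # case 1
print(cond_no_murder(N, p, True),  1 / (1 + 4*p*((2 + 4*p)**N - 1) / (4*p + 1)))  # case 2
```

With these parameters the simulated and analytic values agree (about 0.82 for Case 1 and about 0.02 for Case 2, up to Monte Carlo noise), illustrating how the precondition on wobbly bricks makes murder the dominant spec-meeting route.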
579c853f-a947-4a86-8974-5d50ea6de904
trentmkelly/LessWrong-43k
LessWrong
April 2013 Media Thread This is the monthly thread for posting media of various types that you've found that you enjoy. Post what you're reading, listening to, watching, and your opinion of it. Post recommendations to blogs. Post whatever media you feel like discussing! To see previous recommendations, check out the older threads. Rules: * Please avoid downvoting recommendations just because you don't personally like the recommended material; remember that liking is a two-place word. If you can point out a specific flaw in a person's recommendation, consider posting a comment to that effect. * If you want to post something that (you know) has been recommended before, but have another recommendation to add, please link to the original, so that the reader has both recommendations. * Please use the comment trees for genres. There is a meta thread for comments about future threads. * If you think there should be a thread for a particular genre of media, please post it to the Other Media thread for now, and add a poll to the Meta thread asking if it should be a thread every month.
b4a94f88-f62b-44b7-9f42-71025e1ca2d6
trentmkelly/LessWrong-43k
LessWrong
Why is my (our?) reasoning process noisy? I especially notice this problem after a written exam. Even if I am 100% prepared and I check the answers multiple times, I end up making mistakes that are the stuff of dreams. This can make a huge difference for me, because these mistakes are not isolated; they often appear across many exercises or questions, and they are often macroscopic. They can range from changing one number into another (not due to bad handwriting; my handwriting is perfectly intelligible. I'm inclined to think that somewhere in my neuronal processes that number literally turned into another one), to completely botching the correct process for solving the problem, even if I have solved similar problems with the correct process tens of times before. This can mean the difference between (translating to American grades) an A and a C. Or a C and botching the exam. It makes a big difference. One time, I botched the screen-off test of an exam due to this kind of mistake. The screen-off test consisted of a multiple-choice question sheet that the professor used to decide which tests to spend time correcting in full and which not to bother correcting. The real test consisted of the open questions and exercises after the screen-off. I managed to convince the professor to correct the rest of the exam even though I had botched the screen-off, and I got an A. I'm pretty sure this problem doesn't happen because of * Anxiety: I'm perfectly calm. * Handwriting: It is good and very intelligible. * Distraction: I don't get distracted... I remain concentrated on the exercise. I don't think it happens because I'm in a hurry either. I often have tight time limits, but I do the exercises at the speed I usually maintain. Even if I was in a hurry, I still wouldn't understand how I could make such incredible mistakes, given that I had already done the exercises tens of times before. I'm using the example of exams because this is the area where this problem shows up the most. It also shows up when progr
e55b3e41-c816-46a4-8e62-0908b88d303e
trentmkelly/LessWrong-43k
LessWrong
Normative assumptions: regret A putative new idea for AI control; index here. In a previous post, I presented a model of human rationality and reward as a pair (p, R), with p our (ir)rational planning algorithm (called a planner), and R our reward, with p(R) giving our actions/policy. I also showed that human behaviour is insufficient to establish R (or p) in any meaningful way whatsoever. Yet humans seem to make judgements about each other's rationality and true rewards all the time. And not only do these judgements often agree with each other (almost everyone agrees that the anchoring bias is a bias - or an out-of-context heuristic, i.e. a bias - and almost nobody argues that it's actually a human value), but they often seem to have predictive ability. What's going on? Adding normative assumptions To tackle this, we need to add normative assumptions. Normative assumptions are simply assumptions that distinguish between two pairs (p, R) and (p', R') that lead to the same policy: p(R)=p'(R'). Since those pairs predict the same policy, they cannot be distinguished via observations. So a normative assumption is an extra piece, that cannot itself be deduced from observation, and distinguishes between planner-reward pairs. Because normative assumptions cannot be deduced from observations, they are part of the definition of the human reward function. They are not abstract means of converging on this true reward; they are part of the procedure that defines this reward. What are they? There are two ways of seeing such normative assumptions. The first is as an extra update rule: upon seeing observation o, the probabilities of (p, R) and (p', R') would normally go to α and α', but, with the normative assumption, there is an extra update to the relative probabilities of the two. Equivalently, a normative assumption can be seen as an adjustment to the priors of these pairs. The two approaches are equivalent, but sometimes an extra update is computationally tractable whereas the equivalent prior would be int
de4ffc2b-5daf-4475-bfe0-18cd1823a621
trentmkelly/LessWrong-43k
LessWrong
Request: A historian's take on the singularity A while ago, I saw an excellent video of a speech/lecture by an English professor of history (in front of an American audience). In the video, the historian touches on many topics of interest to this community, including the ways modern people are different (more individualistic, more concerned with "rights") from agriculturalists and how things like the industrial revolution occur. At the end, he gives a long discussion of the likely future of humanity, and I think he mentioned artificial intelligence (by brain emulation) and life-extension as the defining technologies of the future. I think the video was posted one or two years ago maximum. Since I imagine the people of LW probably read many of the same websites as I do, I'm hoping someone will know the video to which I'm referring. If not, suggestions of who the professor might be (or of the website it was posted on) are much appreciated.
fb84c591-844b-44be-96c4-3e17c83f4d2a
StampyAI/alignment-research-dataset/lesswrong
LessWrong
Pausing AI Developments Isn't Enough. We Need to Shut it All Down (*Published in* [*TIME*](https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough/) *on March 29.*)   An [open letter](https://futureoflife.org/open-letter/pause-giant-ai-experiments/) published today calls for “all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.” This 6-month moratorium would be better than no moratorium. I have respect for everyone who stepped up and signed it. It’s an improvement on the margin. I refrained from signing because I think the letter is understating the seriousness of the situation and asking for too little to solve it. The key issue is not “human-competitive” intelligence (as the open letter puts it); it’s what happens after AI gets to smarter-than-human intelligence. Key thresholds there may not be obvious, we definitely can’t calculate in advance what happens when, and it currently seems imaginable that a research lab would cross critical lines without noticing. Many researchers steeped in these [issues](https://www.lesswrong.com/posts/uMQ3cqWDPHhjtiesc/agi-ruin-a-list-of-lethalities), including myself, [expect](https://www.lesswrong.com/posts/QvwSr5LsxyDeaPK5s/existential-risk-from-ai-survey-results) that the most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die. Not as in “maybe possibly some remote chance,” but as in “that is the obvious thing that would happen.” It’s not that you can’t, in principle, survive creating something much smarter than you; it’s that it would require precision and preparation and new scientific insights, and probably not having AI systems composed of giant inscrutable arrays of fractional numbers. Without that precision and preparation, the most likely outcome is AI that does not do what we want, and does not care for us nor for sentient life in general. That kind of caring is something that *could in principle* be imbued into an AI but *we are not ready* and *do not currently know how.* Absent that caring, we get “the AI does not love you, nor does it hate you, and you are made of atoms it can use for something else.” The likely result of humanity facing down an opposed superhuman intelligence is a total loss. Valid metaphors include “a 10-year-old trying to play chess against Stockfish 15”, “the 11th century trying to fight the 21st century,” and “*Australopithecus* trying to fight *Homo sapiens*“. To visualize a hostile superhuman AI, don’t imagine a lifeless book-smart thinker dwelling inside the internet and sending ill-intentioned emails. Visualize an entire alien civilization, thinking at millions of times human speeds, initially confined to computers—in a world of creatures that are, from its perspective, very stupid and very slow. A sufficiently intelligent AI won’t stay confined to computers for long. In today’s world you can email DNA strings to laboratories that will produce proteins on demand, allowing an AI initially confined to the internet to build artificial life forms or bootstrap straight to postbiological molecular manufacturing. If somebody builds a too-powerful AI, under present conditions, I expect that every single member of the human species and all biological life on Earth dies shortly thereafter. There’s no *proposed plan* for how we could do any such thing and survive. 
OpenAI’s openly declared [intention](https://openai.com/blog/our-approach-to-alignment-research) is to make some future AI do our AI alignment homework. Just hearing that *this is the plan* ought to be enough to get any sensible person to panic. The other leading AI lab, DeepMind, has no plan at all. An aside: None of this danger depends on whether or not AIs are or can be conscious; it’s intrinsic to the notion of powerful cognitive systems that optimize hard and calculate outputs that meet sufficiently complicated outcome criteria. With that said, I’d be remiss in my moral duties as a human if I didn’t also mention that we have no idea how to determine whether AI systems are aware of themselves—since we have no idea how to decode anything that goes on in the giant inscrutable arrays—and therefore we may at some point inadvertently create digital minds which are truly conscious and ought to have rights and shouldn’t be owned. The rule that most people aware of these issues would have endorsed 50 years earlier, was that if an AI system can speak fluently and says it’s self-aware and demands human rights, that ought to be a hard stop on people just casually owning that AI and using it past that point. We already blew past that old line in the sand. And that was probably *correct*; I *agree* that current AIs are probably just imitating talk of self-awareness from their training data. But I mark that, with how little insight we have into these systems’ internals, we *do not actually know.* If that’s our state of ignorance for GPT-4, and GPT-5 is the same size of giant capability step as from GPT-3 to GPT-4, I think we’ll no longer be able to justifiably say “probably not self-aware” if we let people make GPT-5s. It’ll just be “I don’t know; nobody knows.” If you can’t be sure whether you’re creating a self-aware AI, this is alarming not just because of the moral implications of the “self-aware” part, but because being unsure means you have no idea what you are doing and that is dangerous and you should stop. --- On Feb. 7, Satya Nadella, CEO of Microsoft, [publicly gloated](https://www.theverge.com/23589994/microsoft-ceo-satya-nadella-bing-chatgpt-google-search-ai) that the new Bing would make Google “come out and show that they can dance.” “I want people to know that we made them dance,” he said. This is not how the CEO of Microsoft talks in a sane world. It shows an overwhelming gap between how seriously we are taking the problem, and how seriously we needed to take the problem starting 30 years ago. We are not going to bridge that gap in six months. It took more than 60 years between when the notion of Artificial Intelligence was first proposed and studied, and for us to reach today’s capabilities. Solving *safety* of superhuman intelligence—not perfect safety, safety in the sense of “not killing literally everyone”—could very reasonably take at least half that long. And the thing about trying this with superhuman intelligence is that if you get that wrong on the first try, you do not get to learn from your mistakes, because you are dead. Humanity does not learn from the mistake and dust itself off and try again, as in other challenges we’ve overcome in our history, because we are all gone. Trying to get *anything* right on the first really critical try is an extraordinary ask, in science and in engineering. We are not coming in with anything like the approach that would be required to do it successfully. 
If we held anything in the nascent field of Artificial General Intelligence to the lesser standards of engineering rigor that apply to a bridge meant to carry a couple of thousand cars, the entire field would be shut down tomorrow.

We are not prepared. We are not on course to be prepared in any reasonable time window. There is no plan. Progress in AI capabilities is running vastly, vastly ahead of progress in AI alignment or even progress in understanding what the hell is going on inside those systems. If we actually do this, we are all going to die.

Many researchers working on these systems think that we’re plunging toward a catastrophe, with more of them daring to say it in private than in public; but they think that they can’t unilaterally stop the forward plunge, that others will go on even if they personally quit their jobs. And so they all think they might as well keep going. This is a stupid state of affairs, and an undignified way for Earth to die, and the rest of humanity ought to step in at this point and help the industry solve its collective action problem.

---

Some of my friends have recently reported to me that when people outside the AI industry hear about extinction risk from Artificial General Intelligence for the first time, their reaction is “maybe we should not build AGI, then.” Hearing this gave me a tiny flash of hope, because it’s a simpler, more sensible, and frankly saner reaction than I’ve been hearing over the last 20 years of trying to get anyone in the industry to take things seriously. Anyone talking that sanely deserves to hear how bad the situation actually is, and not be told that a six-month moratorium is going to fix it.

On March 16, my partner sent me this email. (She later gave me permission to excerpt it here.)

“Nina lost a tooth! In the usual way that children do, not out of carelessness! Seeing GPT4 blow away those standardized tests on the same day that Nina hit a childhood milestone brought an emotional surge that swept me off my feet for a minute. It’s all going too fast. I worry that sharing this will heighten your own grief, but I’d rather be known to you than for each of us to suffer alone.”

When the insider conversation is about the grief of seeing your daughter lose her first tooth, and thinking she’s not going to get a chance to grow up, I believe we are past the point of playing political chess about a six-month moratorium.

If there was a plan for Earth to survive, if only we passed a six-month moratorium, I would back that plan. There isn’t any such plan.

Here’s what would actually need to be done:

The moratorium on new large training runs needs to be indefinite and worldwide. There can be no exceptions, including for governments or militaries. If the policy starts with the U.S., then China needs to see that the U.S. is not seeking an advantage but rather trying to prevent a horrifically dangerous technology which can have no true owner and which will kill everyone in the U.S. and in China and on Earth. If I had infinite freedom to write laws, I might carve out a single exception for AIs being trained solely to solve problems in biology and biotechnology, not trained on text from the internet, and not to the level where they start talking or planning; but if that was remotely complicating the issue I would immediately jettison that proposal and say to just shut it all down.

Shut down all the large GPU clusters (the large computer farms where the most powerful AIs are refined). Shut down all the large training runs.
Put a ceiling on how much computing power anyone is allowed to use in training an AI system, and move it downward over the coming years to compensate for more efficient training algorithms. No exceptions for governments and militaries. Make immediate multinational agreements to prevent the prohibited activities from moving elsewhere. Track all GPUs sold. If intelligence says that a country outside the agreement is building a GPU cluster, be less scared of a shooting conflict between nations than of the moratorium being violated; be willing to destroy a rogue datacenter by airstrike.

Frame nothing as a conflict between national interests, have it clear that anyone talking of arms races is a fool. That we all live or die as one, in this, is not a policy but a fact of nature. Make it explicit in international diplomacy that preventing AI extinction scenarios is considered a priority above preventing a full nuclear exchange, and that allied nuclear countries are willing to run some risk of nuclear exchange if that’s what it takes to reduce the risk of large AI training runs.

That’s the kind of policy change that would cause my partner and me to hold each other, and say to each other that a miracle happened, and now there’s a chance that maybe Nina will live. The sane people hearing about this for the first time and sensibly saying “maybe we should not” deserve to hear, honestly, what it would take to have that happen. And when your policy ask is that large, the only way it goes through is if policymakers realize that if they conduct business as usual, and do what’s politically easy, that means their own kids are going to die too.

Shut it all down.

We are not ready. We are not on track to be significantly readier in the foreseeable future. If we go ahead on this everyone will die, including children who did not choose this and did not do anything wrong.

Shut it down.

---

**Addendum, March 30:** The great political writers who also aspired to be good human beings, from George Orwell on the left to Robert Heinlein on the right, taught me to acknowledge in my writing that politics rests on force. George Orwell considered it a [tactic of totalitarianism](https://www.lesswrong.com/posts/Lz64L3yJEtYGkzMzu/rationality-and-the-english-language), that bullet-riddled bodies and mass graves were often described in vague euphemisms; that in this way brutal policies gained public support without their prices being justified, by hiding those prices. Robert Heinlein thought it beneath a citizen's dignity to pretend that, if they bore no gun, they were morally superior to the police officers and soldiers who bore guns to defend their law and their peace; Heinlein, both metaphorically and literally, thought that if you eat meat—and he was not a vegetarian—you ought to be willing to visit a farm and try personally slaughtering a chicken.

When you pass a law, it means that people who defy the law go to jail; and if they try to escape jail they'll be shot. When you advocate an international treaty, if you want that treaty to be effective, it may mean sanctions that will starve families, or a shooting war that kills people outright. To threaten these things, but end up not having to do them, is not very morally distinct—*I* would say—from doing them. I admit this puts me more on the Heinlein than on the Orwell side of things. Orwell, I think, probably considers it very morally different if you have a society with a tax system and most people pay the taxes and very few actually go to jail.
Orwell is more sensitive to the count of actual dead bodies—or people impoverished by taxation or regulation, where Orwell acknowledges and cares when that *actually* happens. Orwell, I think, has a point. But I also think Heinlein has a point. I claim that makes me a centrist.

Either way, neither Heinlein nor Orwell thought that laws and treaties and *wars* were never worth it. They just wanted us to be honest about the cost.

Every person who pretends to be a libertarian—I cannot see them even pretending to be liberals—who quoted my call for law and treaty as a call for "violence", because I was frank in writing about the cost, ought to be ashamed of themselves for punishing compliance with Orwell and Heinlein's rule. You can argue that the treaty and law I proposed is not worth its cost in force; my being frank about that cost is intended to help *honest* arguers make that counterargument.

To pretend that calling for treaty and law is VIOLENCE!! is hysteria. It doesn't just punish compliance with the Heinlein/Orwell protocol, it plays into the widespread depiction of libertarians as hysterical. (To be clear, a lot of libertarians—and socialists, and centrists, and whoever—are in fact hysterical, especially on Twitter.) It may even encourage actual terrorism.

But is it *not* "violence", if in the end you need guns and airstrikes to enforce the law and treaty? And here I answer: there's an *actually* important distinction between lawful force and unlawful force, which is not always of itself the distinction between Right and Wrong, but which is a real and important distinction. The common and ordinary usage of the word "violence" often points to that distinction. When somebody says "I do not endorse the use of violence" they do not, in common usage and common sense, mean, "I don't think people should be allowed to punch a mugger attacking them" or even "Ban all taxation."

Which, again, is not to say that all lawful force is good and all unlawful force is bad. You can make a case for John Brown (of John Brown's Body).

But in fact I don't endorse shooting somebody on a city council who's enforcing NIMBY regulations. I think NIMBY laws are wrong. I think it's important to admit that law is ultimately backed by force. But lawful force. And yes, that matters. That's why it's harmful to society if you shoot the city councilor—

—and a *misuse of language* if the shooter then says, "They were being violent!"

---

**Addendum, March 31:** Sometimes—even when you say something whose intended reading is immediately obvious to any reader who hasn't seen it before—it's possible to tell people to see something in writing that isn't there, and then they see it.

My TIME piece did not suggest nuclear strikes against countries that refuse to sign on to a global agreement against large AI training runs. It said that, if a non-signatory country is building a datacenter that might kill everyone on Earth, you should be willing to preemptively destroy that datacenter; the intended reading is that you should do this *even if* the non-signatory country is a nuclear power and *even if* they try to threaten nuclear retaliation for the strike. This is what is meant by "Make it explicit... that allied nuclear countries are willing to run some risk of nuclear exchange if that’s what it takes to reduce the risk of large AI training runs."

I'd hope that would be clear from any plain reading, if you haven't previously been lied-to about what it says.
It does not say, "Be willing to *use* nuclear weapons" to reduce the risk of training runs. It says, "Be willing to run some risk of nuclear exchange" [initiated by the other country] to reduce the risk of training runs.

The taboo against first use of nuclear weapons continues to make sense to me. I don't see why we'd need to throw that away in the course of adding "first use of GPU farms" to the forbidden list.

I further note: Among the reasons to spell this all out, is that it's important to be explicit, in advance, about things that will cause your own country / allied countries to use military force. Lack of clarity about this is how World War I *and* World War II both started.

If (say) the UK, USA, and China come to believe that large GPU runs run some risk of utterly annihilating their own populations and all of humanity, they would not deem it in their own interests to allow Russia to proceed with building a large GPU farm *even if* it were a true and certain fact that Russia would retaliate with nuclear weapons to the destruction of that GPU farm. In this case—unless I'm really missing something about how this game is and ought to be played—you really want all the Allied countries to make it very clear, well in advance, that this is what they believe and this is how they will act. This would be true even in a world where it was, in reality, factually false that the large GPU farm ran a risk of destroying humanity. It would still be extremely important that the Allies be very explicit about what they believed and how they'd act as a result. You would not want Russia believing that the Allies would back down from destroying the GPU farm given a credible commitment by Russia to nuke in reply to any conventional attack, and the Allies in fact believing that the danger to humanity meant they had to airstrike the GPU farm anyways.

So if I'd meant "Be willing to employ first use of nuclear weapons against a country for refusing to sign the agreement," or even "Use nukes to destroy rogue datacenters, instead of conventional weapons, for some unimaginable reason," I'd have said that, in words, very clearly, because you do not want to be vague about that sort of thing. It is not what I meant, and there'd be no reason to say it, and the TIME piece plainly does not say it; and if somebody else told you I said that, update how much you trust them about anything else either.

So long as I'm clarifying things: I do not dispute those critics who have noted that most international agreements, eg nuclear non-proliferation, bind only their signatories. I agree that an alliance which declares its intent to strike a non-signatory country for dangerous behavior is extraordinary; though precedents would include Israel's airstrike on Iraq's unfinished Osirak reactor in 1981 (without which Iraq might well have possessed nuclear weapons at the time it invaded Kuwait—the later US misbehavior around Iraq does not change this earlier historical point).

My TIME piece does not say, "Hey, this problem ought to be solvable by totally conventional normal means, let's go use conventional treaties and diplomacy to solve it." It says, "If anyone anywhere builds a sufficiently powerful AI, under anything remotely like present conditions, everyone will die. Here is what we'd have to do to prevent that."

And no, I do not expect that policy proposal to be adopted, in real life, now that we've come to this.
I spent the last twenty years trying to have there be options that were Not This, not because I dislike this ultimate last resort... though it *is* horrible... but because I don't expect we actually have that resort. This is not what I expect to happen, now that we've been reduced to this last resort. I expect that we all die. That is why I tried so hard to have things not end up here.

But if one day a lot of people woke up and decided that they didn't want to die, it seems to me that this is something extraordinary that a coalition of nuclear countries could decide to do, and maybe we wouldn't die.

If *all* the countries on Earth had to voluntarily sign on, it would not be an imaginable or viable plan even then; there's extraordinary, and then there's impossible. Which is why I tried to spell out that, if the allied countries were willing to behave in the extraordinary way of "be willing to airstrike a GPU farm built by a non-signatory country" and "be willing to run a risk of nuclear retaliation from a nuclear non-signatory country", maybe those allied countries could decide to just-not-die *even if* Russia refused to be part of the coalition.
000d25c6-dc7d-4e6b-9c3c-2df93efb5d10
trentmkelly/LessWrong-43k
LessWrong
Demystifying the Pinocchio Paradox

I've recently come across the Pinocchio Paradox: If Pinocchio says "my nose will grow." Does his nose grow or not grow? Tracing through this scenario, we can see this is related to the Epimenides Paradox. The scenario in the Pinocchio Paradox assumes Pinocchio can predict when his nose will grow or not grow with perfect accuracy and achieves 100% certainty (an assumption that is a poor representation of real-world agentic systems). This is essentially a scenario where Pinocchio has become self-aware that deception causes his nose to grow and has honed his predictive model of what threshold of deception triggers nose growth.

The Pinocchio Paradox is resolved by acknowledging that the word "will" in "will grow" indicates an anticipated future nose growth event, intuitively in the imminent future. The "my nose will grow" message must be in reference to some other action that Pinocchio has taken or a message he transmitted prior to the "my nose will grow" message. When Pinocchio transmits a message, he knows with complete certainty whether that message is an act of deception or not. Thus, each message transmission event provides the opportunity for nose growth/non-growth to occur. Deception exists where there is a difference between Pinocchio's internal model and the model that will be compiled out of the transmitted message upon being received by another agent on the social graph. The difference between 2 models is commonly known as relative entropy or KL divergence.

For this scenario to work, it also requires an assumption that nose growth lags by at least 2 message-passing events, giving room for both the 1st message and 2nd message to be transmitted before nose growth happens. Without this assumption, both Pinocchio and the other agent will have the empirical information of whether Pinocchio's nose grew upon the first event, which would invalidate the purpose of sending the 2nd message that says "my nose will grow." This also means that we should assume that nose
3d861d54-f8db-47e7-a5dc-edfe88c3049e
trentmkelly/LessWrong-43k
LessWrong
The Copernican Revolution from the Inside

The Copernican revolution was a pivotal event in the history of science. Yet I believe that the lessons most often taught from this period are largely historically inaccurate and that the most important lessons are basically not taught at all [1]. As it turns out, the history of the Copernican revolution carries important and surprising lessons about rationality -- about what it is and is not like to figure out how the world actually works. Also, it’s relevant to deep learning, but it’ll take me about 5000 words on renaissance astronomy to make that point.

I used to view the Copernican revolution as an epic triumph of reason over superstition, of open science over closed dogma. Basically, things went as follows: Copernicus figured out that the sun rather than the earth is at the center of our planetary system. This theory immediately made sense of the available data, undermining its contorted predecessors with dazzling elegance. Yet its adoption was delayed by the Catholic Church fighting tooth and claw to keep the truth at bay. Eventually, with the emergence of Newton’s work and the dawn of the Enlightenment, heliocentrism became undeniable and its adoption inevitable [2].

This view is inaccurate. Copernicus's system was not immediately superior. It was rejected by many people who were not puppets of the Church. And among those who did accept it, better fit to the data was not a main reason. What did in fact happen will become clear in a moment. But in reading that, I’d like to prompt you to consider the events from a very particular vantage point: namely what they would be like from the inside. Ask yourself not what these events seem like for a millennial with the overpowered benefit of historical hindsight, but for a Prussian astronomer, an English nobleman or a Dominican priest.

More precisely, there are two key questions here. First, if you lived in the time of the Copernican revolution, would you have accepted heliocentrism? I don’t mean this as a socia
d89597a3-3d8d-470f-a79a-b5afef2561a3
trentmkelly/LessWrong-43k
LessWrong
Double Corrigibility: better Corrigibility

A putative new idea for AI control; index here.

Corrigibility was an attempt to allow an AI to safely change its values, without seeking to provoke or avoid the change. The idea is that, when the AI's utility changes from u to v at time t, it maximises a meta-utility U such that

* U_{≤t} = u
* U_{>t} = v + E(u|u→u) − E(v|u→v).

Here u→u designates the event that u remains unchanged, while u→v designates the change. As has been shown, adding those expectation terms means the AI will not seek to provoke or resist such a utility change - it is indifferent to the change. Those expectation terms can be designated compensatory rewards.

What it is not indifferent to, though, is changes to E. If the AI knows that change u→v is coming, then it has incentives to change E to E' with a high E'(u|u→u) and a low E'(v|u→v). Its ability to rewire is limited by the fact that E' will serve as its future expectation estimator, so it won't mangle it completely, but it certainly can twist it a bit.

Notice that the AI is now no longer indifferent to the change of utility. It can benefit from rewiring E this way only if a change is coming up. Notice that E' probably can't rewire itself any more, so the rewired AI will be indifferent, but not before rewiring.

Double corrigibility

To solve this problem, we need to turn to corrigibility a second time. Let ρ be the AI's current expectation/probability estimator. Let ρ' be one that it can conceivably change into. Let C be a random variable designating the sum total of all compensatory rewards. Now, to guarantee that C is finite would require bounds on all the utility functions and some discount rate. In practice, it often will be finite. That's because E(C) = E(u_0|u_0→u_0) − E(u_n|u_{n−1}→u_n, H), where u_0 is the AI's first utility, u_n its last, and H intermediate history. So some milder restrictions on the u_i should suffice to bound C. Then, whenever the AI takes an action that changes ρ to ρ', it gets the special meta-compensatory reward

* Eρ
fa8a3865-0595-4227-9a37-90dec299a5bd
StampyAI/alignment-research-dataset/agentmodels
Tutorial: Modeling Agents with Probabilistic Programs
Modeling Agents with Probabilistic Programs

---
layout: chapter
title: "Agents as probabilistic programs"
description: "One-shot decision problems, expected utility, softmax choice and Monty Hall."
is_section: true
---

## Introduction

Our goal is to implement agents that compute rational *policies*. Policies are *plans* for achieving good outcomes in environments where:

- The agent makes a *sequence* of *distinct* choices, rather than choosing once.
- The environment is *stochastic* (or "random").
- Some features of the environment are initially *unknown* to the agent. (So the agent may choose to gain information in order to improve future decisions.)

This section begins with agents that solve the very simplest decision problems. These are trivial *one-shot* problems, where the agent selects a single action (not a sequence of actions). We use WebPPL to solve these problems in order to illustrate the core concepts that are necessary for the more complex problems in later chapters.

<a id="planning_as"></a>

## One-shot decisions in a deterministic world

In a *one-shot decision problem* an agent makes a single choice between a set of *actions*, each of which has potentially distinct *consequences*. A rational agent chooses the action that is best in terms of his or her own preferences. Often, this depends not on the *action* itself being preferred, but only on its *consequences*.

For example, suppose Tom is choosing between restaurants and all he cares about is eating pizza. There's an Italian restaurant and a French restaurant. Tom would choose the French restaurant if it offered pizza. Since it does *not* offer pizza, Tom will choose the Italian.

Tom selects an action $$a \in A$$ from the set of all actions. The actions in this case are {"eat at Italian restaurant", "eat at French restaurant"}. The consequences of an action are represented by a transition function $$T \colon S \times A \to S$$ from state-action pairs to states. In our example, the relevant *state* is whether or not Tom eats pizza. Tom's preferences are represented by a real-valued utility function $$U \colon S \to \mathbb{R}$$, which indicates the relative goodness of each state.

Tom's *decision rule* is to take action $$a$$ that maximizes utility, i.e., the action

$$
{\arg \max}_{a \in A} U(T(s,a))
$$

In WebPPL, we can implement this utility-maximizing agent as a function `maxAgent` that takes a state $$s \in S$$ as input and returns an action. For Tom's choice between restaurants, we assume that the agent starts off in a state `"initialState"`, denoting whatever Tom does before going off to eat. The program directly translates the decision rule above using the higher-order function `argMax`.

<!-- TODO fix argmax -->

~~~~
///fold: argMax
var argMax = function(f, ar){
  return maxWith(f, ar)[0]
};
///

// Choose to eat at the Italian or French restaurants
var actions = ['italian', 'french'];

var transition = function(state, action) {
  if (action === 'italian') {
    return 'pizza';
  } else {
    return 'steak frites';
  }
};

var utility = function(state) {
  if (state === 'pizza') {
    return 10;
  } else {
    return 0;
  }
};

var maxAgent = function(state) {
  return argMax(
    function(action) {
      return utility(transition(state, action));
    },
    actions);
};

print('Choice in initial state: ' + maxAgent('initialState'));
~~~~

>**Exercise**: Which parts of the code can you change in order to make the agent choose the French restaurant?

There is an alternative way to compute the optimal action for this problem.
The idea is to treat choosing an action as an *inference* problem. The previous chapter showed how we can *infer* the probability that a coin landed Heads from the observation that two of three coins were Heads.

~~~~
var twoHeads = Infer({
  model() {
    var a = flip(0.5);
    var b = flip(0.5);
    var c = flip(0.5);
    condition(a + b + c === 2);
    return a;
  }
});

viz(twoHeads);
~~~~

The same inference machinery can compute the optimal action in Tom's decision problem. We sample random actions with `uniformDraw` and condition on the preferred outcome happening. Intuitively, we imagine observing the consequence we prefer (e.g. pizza) and then *infer* from this the action that caused this consequence.

<!-- address evidential vs causal decision theory? -->

This idea is known as "planning as inference" refp:botvinick2012planning. It also resembles the idea of "backwards chaining" in logical inference and planning. The `inferenceAgent` solves the same problem as `maxAgent`, but uses planning as inference:

~~~~
var actions = ['italian', 'french'];

var transition = function(state, action) {
  if (action === 'italian') {
    return 'pizza';
  } else {
    return 'steak frites';
  }
};

var inferenceAgent = function(state) {
  return Infer({
    model() {
      var action = uniformDraw(actions);
      condition(transition(state, action) === 'pizza');
      return action;
    }
  });
}

viz(inferenceAgent("initialState"));
~~~~

>**Exercise**: Change the agent's goals so that they choose the French restaurant.

## One-shot decisions in a stochastic world

In the previous example, the transition function from state-action pairs to states was *deterministic* and so described a deterministic world or environment. Moreover, the agent's actions were deterministic; Tom always chose the best action ("Italian"). In contrast, many examples in this tutorial will involve a *stochastic* world and a noisy "soft-max" agent.

Imagine that Tom is choosing between restaurants again. This time, Tom's preferences are about the overall quality of the meal. A meal can be "bad", "good" or "spectacular" and each restaurant has good nights and bad nights. The transition function now has type signature $$ T\colon S \times A \to \Delta S $$, where $$\Delta S$$ represents a distribution over states. Tom's decision rule is now to take the action $$a \in A$$ that has the highest *average* or *expected* utility, with the expectation $$\mathbb{E}$$ taken over the probability of different successor states $$s' \sim T(s,a)$$:

$$
\max_{a \in A} \mathbb{E}( U(T(s,a)) )
$$

To represent this in WebPPL, we extend `maxAgent` using the `expectation` function, which maps a distribution with finite support to its (real-valued) expectation:

~~~~
///fold: argMax
var argMax = function(f, ar){
  return maxWith(f, ar)[0]
};
///

var actions = ['italian', 'french'];

var transition = function(state, action) {
  var nextStates = ['bad', 'good', 'spectacular'];
  var nextProbs = (action === 'italian') ? [0.2, 0.6, 0.2] : [0.05, 0.9, 0.05];
  return categorical(nextProbs, nextStates);
};

var utility = function(state) {
  var table = {
    bad: -10,
    good: 6,
    spectacular: 8
  };
  return table[state];
};

var maxEUAgent = function(state) {
  var expectedUtility = function(action) {
    return expectation(Infer({
      model() {
        return utility(transition(state, action));
      }
    }));
  };
  return argMax(expectedUtility, actions);
};

maxEUAgent('initialState');
~~~~

>**Exercise**: Adjust the transition probabilities such that the agent chooses the Italian Restaurant.
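For concreteness, here is the expected-utility calculation that `maxEUAgent` performs, worked out by hand from the transition probabilities and the utility table above:

$$
\mathbb{E}(U \mid \text{italian}) = 0.2 \cdot (-10) + 0.6 \cdot 6 + 0.2 \cdot 8 = 3.2
$$

$$
\mathbb{E}(U \mid \text{french}) = 0.05 \cdot (-10) + 0.9 \cdot 6 + 0.05 \cdot 8 = 5.3
$$

So the agent chooses the French restaurant: although the spectacular meal is more likely at the Italian restaurant, so is the bad one, and the French restaurant's reliably good meals win on average.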
The `inferenceAgent`, which uses the planning-as-inference idiom, can also be extended using `expectation`. Previously, the agent's action was conditioned on leading to the best consequence ("pizza"). This time, Tom is not aiming to choose the action most likely to have the best outcome. Instead, he wants the action with better outcomes on average. This can be represented in `inferenceAgent` by switching from a `condition` statement to a `factor` statement. The `condition` statement expresses a "hard" constraint on actions: actions that fail the condition are completely ruled out. The `factor` statement, by contrast, expresses a "soft" condition. Technically, `factor(x)` adds `x` to the unnormalized log-probability of the program execution within which it occurs.

To illustrate `factor`, consider the following variant of the `twoHeads` example above. Instead of placing a hard constraint on the total number of Heads outcomes, we give each setting of `a`, `b` and `c` a *score* based on the total number of heads. The score is highest when all three coins are Heads, but even the "all tails" outcome is not ruled out completely.

~~~~
var softHeads = Infer({
  model() {
    var a = flip(0.5);
    var b = flip(0.5);
    var c = flip(0.5);
    factor(a + b + c);
    return a;
  }
});

viz(softHeads);
~~~~

As another example, consider the following short program:

~~~~
var dist = Infer({
  model() {
    var n = uniformDraw([0, 1, 2]);
    factor(n * n);
    return n;
  }
});

viz(dist);
~~~~

Without the `factor` statement, each value of the variable `n` has equal probability. Adding the `factor` statements adds `n*n` to the log-score of each value. To get the new probabilities induced by the `factor` statement we compute the normalizing constant given these log-scores. The resulting probability $$P(n=2)$$ is:

$$
P(n=2) = \frac {e^{2 \cdot 2}} { (e^{0 \cdot 0} + e^{1 \cdot 1} + e^{2 \cdot 2}) }
$$

Returning to our implementation of planning-as-inference for maximizing *expected* utility, we use a `factor` statement to implement soft conditioning:

~~~~
var actions = ['italian', 'french'];

var transition = function(state, action) {
  var nextStates = ['bad', 'good', 'spectacular'];
  var nextProbs = (action === 'italian') ? [0.2, 0.6, 0.2] : [0.05, 0.9, 0.05];
  return categorical(nextProbs, nextStates);
};

var utility = function(state) {
  var table = {
    bad: -10,
    good: 6,
    spectacular: 8
  };
  return table[state];
};

var alpha = 1;

var softMaxAgent = function(state) {
  return Infer({
    model() {
      var action = uniformDraw(actions);
      var expectedUtility = function(action) {
        return expectation(Infer({
          model() {
            return utility(transition(state, action));
          }
        }));
      };
      factor(alpha * expectedUtility(action));
      return action;
    }
  });
};

viz(softMaxAgent('initialState'));
~~~~

The `softMaxAgent` differs in two ways from the `maxEUAgent` above. First, it uses the planning-as-inference idiom. Second, it does not deterministically choose the action with maximal expected utility. Instead, it implements *soft* maximization, selecting actions with a probability that depends on their expected utility. Formally, let the agent's probability of choosing an action be $$C(a;s)$$ for $$a \in A$$ when in state $$s \in S$$. Then the *softmax* decision rule is:

$$
C(a; s) \propto e^{\alpha \mathbb{E}(U(T(s,a))) }
$$

The noise parameter $$\alpha$$ modulates between random choice $$(\alpha=0)$$ and the perfect maximization $$(\alpha = \infty)$$ of the `maxEUAgent`. Since rational agents will *always* choose the best action, why consider softmax agents?
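As a concrete check of this rule, take the expected utilities computed for the restaurant example above ($$3.2$$ for Italian, $$5.3$$ for French) and $$\alpha = 1$$. The softmax probability of the French restaurant comes out to roughly:

$$
C(\text{french}; s) = \frac{e^{5.3}}{e^{3.2} + e^{5.3}} = \frac{1}{1 + e^{-2.1}} \approx 0.89
$$

So `softMaxAgent` picks the French restaurant about 89% of the time and the Italian one about 11% of the time, rather than always maximizing.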
One of the goals of this tutorial is to infer the preferences of agents (e.g. human beings) from their choices. People do not always choose the normatively rational actions. The softmax agent provides a simple, analytically tractable model of sub-optimal choice[^softmax], which has been tested empirically on human action selection refp:luce2005individual. Moreover, it has been used extensively in Inverse Reinforcement Learning as a model of human errors refp:kim2014inverse, refp:zheng2014robust. For this reason, we employ the softmax model throughout this tutorial. When modeling an agent assumed to be optimal, the noise parameter $$\alpha$$ can be set to a large value.

<!-- [TODO: Alternatively, agent could output dist.MAP().val instead of dist.] -->

[^softmax]: A softmax agent's choice of action is a differentiable function of their utilities. This differentiability makes possible certain techniques for inferring utilities from choices.

>**Exercise**: Monty Hall. In this exercise inspired by [ProbMods](https://probmods.org/chapters/06-inference-about-inference.html), we will approach the classical statistical puzzle from the perspective of optimal decision-making. Here is a statement of the problem:

> *Alice is on a game show and she’s given the choice of three doors. Behind one door is a car; behind the others, goats. She picks door 1. The host, Monty, knows what’s behind the doors and opens another door, say No. 3, revealing a goat. He then asks Alice if she wants to switch doors. Should she switch?*

> Use the tools introduced above to determine the answer. Here is some code to get you started:

~~~~
// Remove each element in array ys from array xs
var remove = function(xs, ys) {
  return _.without.apply(null, [xs].concat(ys));
};

var doors = [1, 2, 3];

// Monty chooses a door that is neither Alice's door
// nor the prize door
var monty = function(aliceDoor, prizeDoor) {
  return Infer({
    model() {
      var door = uniformDraw(doors);
      // ???
      return door;
    }});
};

var actions = ['switch', 'stay'];

// If Alice switches, she randomly chooses a door that is
// neither the one Monty showed nor her previous door
var transition = function(state, action) {
  if (action === 'switch') {
    return {
      prizeDoor: state.prizeDoor,
      montyDoor: state.montyDoor,
      aliceDoor: // ???
    };
  } else {
    return state;
  }
};

// Utility is high (say 10) if Alice's door matches the
// prize door, 0 otherwise.
var utility = function(state) {
  // ???
};

var sampleState = function() {
  var aliceDoor = uniformDraw(doors);
  var prizeDoor = uniformDraw(doors);
  return {
    aliceDoor,
    prizeDoor,
    montyDoor: sample(monty(aliceDoor, prizeDoor))
  }
}

var agent = function() {
  var action = uniformDraw(actions);
  var expectedUtility = function(action){
    return expectation(Infer({
      model() {
        var state = sampleState();
        return utility(transition(state, action));
      }}));
  };
  factor(expectedUtility(action));
  return { action };
};

viz(Infer({ model: agent }));
~~~~

### Moving to complex decision problems

This chapter has introduced some of the core concepts that we will need for this tutorial, including *expected utility*, *(stochastic) transition functions*, *soft conditioning* and *softmax decision making*. These concepts would also appear in standard treatments of rational planning and reinforcement learning refp:russell1995modern. The actual decision problems in this chapter are so trivial that our notation and programs are overkill. The [next chapter](/chapters/3a-mdp.html) introduces *sequential* decision problems.
These problems are more complex and interesting, and will require the machinery we have introduced here.

<br>

### Footnotes
2d306578-5773-4b47-9dce-85016a8566f3
trentmkelly/LessWrong-43k
LessWrong
Introducing Leverage Research

Geoff Anders asked me to post this introduction to Leverage Research. Several friends of the Singularity Institute are now with Leverage Research, and we have overlapping goals.

> Hello Less Wrong! I'm Geoff Anders, founder of Leverage Research. Many Less Wrong readers are already familiar with Leverage. But many are not, and because of our ties to the Less Wrong community and our deep interest in rationality, I thought it would be good to formally introduce ourselves.
>
> I founded Leverage at the beginning of 2011. At that time we had six members. Now we have a team of more than twenty. Over half of our people come from the Less Wrong / Singularity Institute community. One of our members is Jasen Murray, the leader of the Singularity Institute's recent Rationality Boot Camp. Another is Justin Shovelain, a two-year Visiting Fellow at SIAI and the former leader of their intelligence amplification research. A third is Adam Widmer, a former co-organizer of the New York Less Wrong group.
>
> Our goal at Leverage is to make the world a much better place, using the most effective means we can. So far, our conclusion has been that the most effective way to change the world is by means of high-value projects, projects that will have extremely positive effects if they succeed and that have at least a fair probability of success.
>
> One of our projects is existential risk reduction. We have conducted a study of the efficacy of methods for persuading people to take the risks of artificial general intelligence (AGI) seriously. We have begun a detailed analysis of AGI catastrophe scenarios. We are working with risk analysts inside and outside of academia. Ultimately, we intend to achieve a comprehensive understanding of AGI and other global risks, develop response plans, and then enact those plans.
>
> A second project is intelligence amplification. We have reviewed the existing research and analyzed current approaches. We then created an initial list of research prior
ef10ed8a-8050-4fab-84bc-fa627f5fac5d
StampyAI/alignment-research-dataset/arxiv
Arxiv
A Typology for Exploring the Mitigation of Shortcut Behavior

1. Introduction
----------------

Do you trust an ML model? Trust is the “firm belief in the reliability, truth, or ability of someone or something” (Oxford Dict. (OxfordLexico_trust)). However, how reliable are machine learning (ML) models, and do they base decisions on correct reasons? These questions emerge as ML systems become more present in our daily lives and high-stakes environments such as for disease detection. With these developments, it becomes more and more necessary for humans to rely on such machines. However, particularly deep neural networks (DNNs), which are considered state-of-the-art models for many tasks, show an inherent lack of transparency regarding the underlying decision process for their predictions. Consequently, eXplainable AI (XAI) was introduced to address this lack of transparency (belinkov2019analysis; atanasova-etal-2020-diagnostic). Via such explainer methods proposed by XAI research, recent works have revealed that DNNs can show Clever-Hans behavior –making use of confounders– due to spurious correlations in the data (Lapuschkin2019CleverHans).

However, making models explainable alone can be insufficient for properly building trust as well as the overall deployability of a model, as it does not offer the possibility to revise incorrect and hazardous behavior. For this reason, the eXplanatory Interactive machine Learning (XIL) framework (Teso2019_CAIPI) was proposed in order to promote a more fruitful approach of communication between humans and machines, possibly allowing for them to complement one another. Specifically, in XIL, a model makes a prediction, presents its corresponding explanation to the user, and they respond by providing corrective feedback, if necessary, on the prediction and explanation. It has been shown that XIL can improve performance and explanations, i.e. overcoming Clever-Hans behavior and improving the generalization to unseen data (Schramowski2020_Plantphenotyping). Moreover, interaction through explanations is considered a natural form of bidirectional communication between human experts, making XIL methods effective protocols to open black-boxes. In this way, XIL methods may fill the trust gap between ML systems and human users (popordanoska2020).

Unfortunately, existing XIL methods were developed independently and often with slightly different motivations. In these works, evaluations of the effectiveness of a method often reverted to measuring test accuracies on known confounded data sets, aside from qualitative explanation evaluations. Essential characteristics, however, particularly for the practical use-case of a method, are not covered by these evaluation measurements. Furthermore, currently no survey of XIL methods exists that covers a comprehensive comparison of relevant methods. Therefore, in this work, we provide a typology for XIL and propose that existing methods can be summarized via a common framework. Hereby, a method’s individual differences correspond to specific instantiations of the framework’s modules. We additionally propose an extensive set of evaluation criteria, consisting of metrics and tasks, for extensively benchmarking current and future XIL methods. This includes the robustness of a method to faulty user feedback and a method’s efficiency in terms of the number of required interactions. In this study, we thus provide, for the first time, an extensive survey of six recent XIL methods based on these various criteria.
In summary, our main contributions are (1) unifying existing XIL methods with a single common terminology, (2) introducing a novel typology including metrics and tasks to benchmark XIL approaches, (3) evaluating existing methods regarding various criteria that are of great relevance for real-world applicability, and (4) identifying yet unresolved issues to motivate future research.

We proceed as follows. First, we define our XIL typology with its individual modules. Then we present details of six current XIL methods and how they are instantiated in our typology. Next, we touch upon the evaluation criteria to benchmark XIL methods and conduct an extensive experimental evaluation of the existing methods. Before concluding, we discuss our findings throughout the survey.

2. Explanatory Interactive Machine Learning
--------------------------------------------

To survey eXplanatory Interactive machine Learning (XIL), in the following, we present a framework for XIL based on Algorithm 1. We describe its modules in detail and use them as a foundation for our evaluation.

### 2.1. XIL Framework

Algorithm 1: XIL takes as input sets of annotated examples A and non-annotated examples N, and an iteration budget T.

1:  f ← Fit(A)
2:  repeat
3:    X ← Select(f, N)
4:    ŷ ← f(X)
5:    Ê ← Explain(f, X, ŷ)
6:    Present X, ŷ, and Ê to the user
7:    ȳ, C̄ ← Obtain(X, ŷ, Ê)
8:    A ← A ∪ {(X, ȳ, C̄)}
9:    f ← Revise(A)
10:   N ← N \ {X}
11: until budget T is exhausted or f is good enough
12: return f

The framework of XIL combines explanation methods (XAI) with user supervision (active learning) on the model’s explanations to revise the model’s learning process interactively. The conceptual function can be described as follows: XAI focuses on generating explanations from the model, whereas XIL aims to reverse the flow and inserts user feedback on those explanations back into the model. The goal is to establish trust in the model’s predictions not only by revealing false and potentially harmful behavior of a model’s reasoning process via the model’s explanations, but also to give the user the possibility to correct this behavior via corrections on these explanations.

Algorithm 1 describes the XIL setting in pseudo-code. It uses a set of annotated examples A, a set of non-annotated examples N, and an iteration budget T. The annotation comprises both the classification label y and explanation E, i.e. a non-annotated example is missing one or both. In general, the procedure can be compared to a teacher-learner setting. Active learning is a learning protocol in which the model sequentially presents non-annotated examples (Select) from a data pool to an oracle (e.g. human annotator) that labels these instances (Obtain). Accordingly, this setting allows the user to influence the learning process actively (Revise). Although the active learning setting enables simplistic interaction between the model and a user, it does not promote trust if explanations do not accompany predictions (Teso2019_CAIPI). The lack of explanations in active learning, however, makes it difficult for the user to comprehend the model’s decision process and provide corrections.
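To make the control flow of Algorithm 1 concrete, the following is a minimal Python sketch of the loop. The helper functions (`fit`, `select`, `explain`, `obtain_feedback`, `revise`) are hypothetical placeholders for the framework's Fit/Select/Explain/Obtain/Revise modules, not an implementation from the paper:

```python
# Minimal sketch of the XIL loop from Algorithm 1.
# All helpers passed in are hypothetical placeholders for the
# Fit / Select / Explain / Obtain / Revise modules described above.

def xil_loop(annotated, non_annotated, budget,
             fit, select, explain, obtain_feedback, revise):
    model = fit(annotated)                    # line 1: f <- Fit(A)
    for _ in range(budget):                   # lines 2-11: loop until budget T is spent
        if not non_annotated:                 # or until N is exhausted
            break
        x = select(model, non_annotated)      # line 3: X <- Select(f, N)
        y_hat = model(x)                      # line 4: model prediction
        e_hat = explain(model, x, y_hat)      # line 5: Explain(f, X, y_hat)
        # lines 6-7: present (x, y_hat, e_hat) to the user and obtain
        # a (possibly corrected) label and a correction
        y_bar, c_bar = obtain_feedback(x, y_hat, e_hat)
        annotated.append((x, y_bar, c_bar))   # line 8: extend A
        model = revise(annotated)             # line 9: f <- Revise(A)
        non_annotated.remove(x)               # line 10: shrink N
    return model                              # line 12
```

Note that in plain active learning the explain and obtain-feedback steps would be missing; that gap is exactly what the following modules fill.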
Therefore, the XIL framework extends the learning pipeline with XAI (Explain). Consequently, the explanations and potential user corrections are processed simultaneously with the annotated labels. The necessary modules of this interactive learning loop are each described in detail below.

#### Selection (Select)

Select describes how samples X are selected from a set of non-annotated examples N. These examples are used for the model to perform a predictive task, e.g. predict a class label y, with which the user, in turn, has to interact. The selection can be carried out in different ways: manually, randomly, or with a specific strategy. One strategy in this regard is to find influential examples, e.g. via a model’s certainty in a prediction. This can also enable selecting only a subset of examples to apply XIL on. Hence, Select also describes how many examples need to be selected to revise a model through explanatory interactions.

#### Explaining (Explain)

In comparison to active learning, XIL approaches consider standard input-output pairs, e.g. (X, ŷ), insufficient to (i) understand the underlying decision process of a model and (ii) provide necessary feedback solely on the predicted labels, denoted as ȳ. Such feedback, ȳ, can only correct the model if the model’s initial prediction, ŷ, is incorrect, i.e. a wrong answer. Due to, e.g., shortcut learning (Geirhos2020), deeper insights into a model are required. Hence, in XIL, the model also provides explanations that help the user inspect the reasoning behind a prediction. This, in turn, enables a user to check if the decision is based on right or wrong reasons. Therefore, Explain is an essential element of a XIL method to revise a model.

In our proposed framework, the learner f (e.g. a CNN) predicts ŷ for an input X. Additionally, the learner explains its prediction to the teacher (e.g. user) via an explainer (e.g. LIME) and provides an explanation Ê. In this way, Explain depicts how the model provides insights into its reasoning process to the teacher. There are various ways to provide an explanation. Common explanation methods in works of XIL provide attribution maps that highlight important features in the input space, such as input gradients (IG, (hechtlinger2016interpretation)), gradient-weighted class activation mapping (GradCAM, (Selvaraju2016_GradCAM)), and local interpretable model-agnostic explanations (LIME, (Ribeiro2016_LIME)).

Explain also describes the capability of a XIL method to facilitate the use of various explainer methods, i.e. whether a XIL method depends on a specific explainer method. Whereas some XIL methods can handle arbitrary explainer methods (e.g. CE), it is the defining component for other XIL methods and thus constrains other components of the method as well (e.g. feedback types). Analogous to the view on the explainers, the model flexibility describes the capability of a XIL method to facilitate the use of different model types for Explain. Depending on the used model, only specific XAI methods can be applied, e.g. whereas LIME can be applied to any ML model, IG can only be applied to differentiable ones (e.g. NNs), and GradCAM only to CNNs. In turn, this means that a XIL method can be model-specific or model-agnostic. However, the model specificity is linked to the explainer specificity, as an explainer may be only available for certain model types.

#### Obtain Feedback (Obtain)

Not only does the model have to explain its decision; the users also have to provide explanatory feedback to the model.
This feedback has to be processed in such a way that the model can deal with it. As a result, the model can generate corrections based on user feedback to revise the model. The correction C̄ depends on the specific XIL method and model type. Therefore, it is specific to the Revise module, i.e. the user’s feedback C̄, wrt. the explanation Ê, has to potentially be converted to an input space that the model can process. For instance, in the case of counterexamples, the user feedback Ē is on the same level as the explanation, e.g. an attribution map. However, the correction C̄ depicts one or multiple counterexamples, such that Ē must be converted.

In our setup, the teacher gives feedback based on the model’s input X, prediction ŷ, and explanation Ê. Specifically, within Obtain, the teacher produces a corresponding explanation, Ē, which, however, is transformed to a feedback representation, C̄, that corresponds to a representation which can be fed back to the learner. This enables the teacher to observe whether the learner’s prediction is right or wrong, but more importantly also to check if the prediction is based on the right or wrong reason. Moreover, Obtain determines which feedback types a XIL method can handle. The standard way to provide feedback, partly restricted by the use of attribution maps in XAI, is to highlight important (right) and/or unimportant (wrong) features in the input. However, other types of feedback are also possible, e.g. in the form of a semantic description, e.g. “Never base the decision on the shape of object X” (stammer2021).

#### Model Revision (Revise)

Once the corrections are obtained, they must be redirected back into the model’s learning process. Depending on the feedback type and the user’s knowledge about what is right or wrong, there are two aspects to consider to revise a model. The first aspect is how to reinforce the user feedback. As indicated in Obtain, the Revise strategy depends on the feedback obtained from the user. On one hand, the user can penalize wrong explanations, i.e. removing confounding factors but not necessarily guiding the model towards the right reason. On the other hand, the user rewards right explanations. However, it is harder to know the right reason than the wrong reason, and rewarding does not ensure avoiding confounder influence. In general, there is an imbalance between knowing what is right and wrong, which needs to be considered.

The second aspect is how to update the model, i.e. incorporate the feedback. One common approach is to augment the loss function and backpropagate the feedback information through the loss objective. The other is to augment the dataset with (counter)examples and remove the confounder influence through a diminished presence in the training data. After the teacher gives feedback to the learner, the corrections are fed back into the learner to revise it. To do so, the set A is extended by the processed user annotations, i.e. the prediction ȳ and the correction C̄ for the respective input X. The optimization objective can now incorporate the user feedback to extend the purely data-driven approach and thereby revise (fit) the model f. Lastly, N is updated, i.e. the annotated instances X are removed from N.

3. XIL Methods
---------------

The fundamental task of XIL is to integrate the user’s feedback on the model’s explanations to revise its learning process. To tackle this core task, recently, several XIL methods have been proposed.
Below we describe these methods in detail, dividing them based on two revision strategies: revising via (1) a loss term or (2) dataset augmentation. Both strategies rely on local explanations.

### 3.1. Loss Augmentation

Strategy (1) can be summarized as optimizing Eqn. (1) (Rieger2019_CDEP), where X denotes the input, y ground-truth labels and f a model parameterized by θ. We optimize

$$ \min_\theta \; \underbrace{L_{pred}(f_\theta(X), y)}_{\text{prediction error}} \; + \; \lambda \, \underbrace{L_{exp}(expl_\theta(X), expl_X)}_{\text{explanation error}}, \qquad (1) $$

where L_pred is a standard prediction loss, e.g. cross-entropy, guiding the model to predict the right answers, whereas L_exp ensures the right reasons, i.e. right explanations.

#### Right for the Right Reasons (RRR)

In the work of Ross2017_RRR, the objective is to train a differentiable model to be right for the right reason by explicitly penalizing wrong reasons, i.e. irrelevant components in the explanation. That means Revise enforces a penalty strategy. To this end, this approach generates gradient-based explanations expl_θ(X) and restricts them by constraining gradients of irrelevant parts of the input. For a model f(X|θ) = ŷ ∈ ℝ^{N×K} and inputs X ∈ ℝ^{N×D} we get

$$ L_{exp} = \sum_{n=1}^{N} \left( M_n \circ expl_\theta(X_n) \right)^2. \qquad (2) $$

With this loss term, the user’s explanation feedback M_n = expl_X, indicating which input regions are irrelevant, is propagated back to the model in the optimization phase. The loss prevents the model from focusing on the masked region by penalizing large values in this region. According to the authors, L_pred and L_exp should have the same order of magnitude, achieved by setting a suitable regularization rate λ in Eqn. (1). Ross2017_RRR implement Explain with IG by generating explanations based on first-order derivatives, i.e. expl_θ(X) = IG(X). However, RRR’s Explain is not limited to this explainer. Schramowski2020_Plantphenotyping propose Right for the Right Reason GradCAM (RRR-G), generating explanations via expl_θ(X) = GradCAM(X), and Shao2021_RBR propose Right for the Better Reasons (RBR) with second-order derivatives, i.e. expl_θ(X) = IF(X). We describe further mathematical details in Appendix A.4.

In order to penalize wrong reasons, Obtain in this case expects feedback in the following form. A user annotation mask is given as expl_X = M ∈ {0,1}^{N×D}, with 1s indicating wrong reasons. We define confounding pixels as wrong reasons for our survey.

#### Contextual Decomposition Explanation Penalization (CDEP)

Compared to the others, CDEP (Rieger2019_CDEP) uses a different explainer method, CD, i.e. its Explain module is restricted to this explainer method, expl_θ(X) = CD(X). The CD algorithm measures the layer-wise attribution of a marked feature, here an image region, to the output. It decomposes the influence on the prediction between the marked image region and the remaining image. This enables focusing only on the influence of the marked image region and, in this case, penalizing it. Hence, Revise is implemented again with the penalty strategy.
The user mask M penalizes the model explanation via

$$ L_{exp} = \sum_{n=1}^{N} \left\lVert expl_\theta(X_n) - M_n \right\rVert_1. \qquad (3) $$

#### Human Importance-aware Network Tuning (HINT)

In contrast to previous methods, HINT (Selvaraju2019_HINT) explicitly teaches a model to focus on right reasons instead of not focusing on wrong reasons. In other words, HINT rewards activation in regions on which to base the prediction, whereas the previous methods penalize activation in regions on which not to base the prediction. Thus, Revise is carried out with the reward strategy. Explain can take any gradient-based explainer, whereas the authors implemented it with GradCAM, i.e. expl_θ(X) = GradCAM(X). Finally, a distance, e.g. via mean squared error, is computed between the network importance score, i.e. the generated explanation, and the user annotation mask, resulting in:

$$ L_{exp} = \frac{1}{N} \sum_{n=1}^{N} \left( expl_\theta(X_n) - M_n \right)^2. \qquad (4) $$

Importantly, Obtain differs from previous methods in that 1s in the user annotation mask M mark right reasons, not wrong reasons. We define relevant pixels (components) as right reasons for our survey.

### 3.2. Dataset Augmentation

In contrast to the XIL methods which add a loss term to revise the model, i.e. to implement Revise, further XIL methods exist which augment the training dataset by adding new (counter)examples to the training data (Teso2019_CAIPI). Where the previous approaches directly influence the model’s internal representations, this approach indirectly revises a model by forcing it to generalize to additional training examples, specifically tailored to remove wrong features of the input space. This augmentation can, e.g., help prevent a model from focusing on confounding shortcuts.

#### CounterExamples (CE)

Teso2019_CAIPI introduce CE, a method where users can mark the confounder, i.e. wrong reason, region in an image from the training data and add a corrected image, i.e. one in which an identified confounder is removed, to the training data. In comparison to strategy (1), this strategy is model- and explainer-agnostic, i.e. Explain can be implemented with any explainer method, as user feedback is not processed directly via the model’s explanations. Specifically, Obtain takes user annotation masks that mark the components in the explanation that are incorrectly considered relevant. In this case, the explanation corrections are defined by C = {j : |w_j| > 0 ∧ j-th component marked by user as irrelevant}, where w_j denotes the j-th weight component in the attribution map. These explanation corrections are transformed into counterexamples in order to make the feedback applicable to the model. A counterexample is defined, for j ∈ C, as {(X̄, ȳ)}, where ȳ_i is the, if needed, corrected label and X̄_i is the identical input, except for the previously marked component. This component is either (1) randomized, (2) changed to an alternative value, or (3) substituted with the value of the j-th component appearing in other training examples of the same class. The counterexamples are added to the training dataset. Moreover, it is also possible to provide multiple counterexamples per correction, e.g. via different strategies. In our case, where the input is an image, the user’s explanation correction is a binary mask, and a counterexample is an original image with the marked pixels being corrected.
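To illustrate, here is a minimal NumPy sketch of how a counterexample could be generated under the randomize strategy; the function name and the choice of uniform noise are assumptions made for this example, not code from the original CE implementation:

```python
import numpy as np

def make_counterexample(image, mask, label, rng=None):
    # Sketch of CE's "randomize" strategy: overwrite the components the
    # user marked as wrongly relevant (the confounder region) with random
    # values, and keep the (possibly corrected) label.
    #   image: float array in [0, 1], e.g. shape (H, W)
    #   mask:  boolean array of the same shape; True marks the confounder
    rng = np.random.default_rng() if rng is None else rng
    counterexample = image.copy()
    counterexample[mask] = rng.uniform(0.0, 1.0, size=int(mask.sum()))
    return counterexample, label

# Usage: augment the training set with one counterexample per correction.
# x_ce, y_ce = make_counterexample(x, user_mask, y)
# X_train.append(x_ce); y_train.append(y_ce)
```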
Instead of using noise to augment an example, Lang2021ExplainingIS present an attractive alternative that generates new, realistic examples from a style space learned with a GAN-based approach.

4. Evaluating XIL is More Than Just Accuracy
--------------------------------------------

Although a variety of works on XIL exists, there remains a research gap due to the lack of an in-depth comparison between them. Moreover, XIL methods are, if at all, usually only compared in terms of accuracy on confounded benchmark datasets. This essentially only measures whether a XIL method successfully helps overcome the confounder in terms of predictive power. However, the goal of XIL goes beyond this and also includes improving explanations overall, e.g. outside of a confounding setting. Hence, a profound examination of different aspects is crucial for a sound analysis of current and future research and for filling this research gap. We therefore propose additional metrics and benchmarking tasks for a thorough evaluation of a XIL method and clarify these in the following sections.

### 4.1. Metrics for Benchmarking

In the following, we introduce relevant quantitative and qualitative approaches to evaluating XIL methods.

#### Accuracy

In recent XIL works, measuring prediction accuracy is the standard way to evaluate a method's performance. This mainly works by using a confounded dataset in which the predictive accuracy on a non-confounded test set serves as a proxy for "right reasons". However, this metric can only be used to evaluate XIL on datasets with known confounders and test sets that do not contain the confounding factor. Otherwise, yet unknown confounders may still fool the model and prevent an accurate evaluation of a XIL method. This is particularly important as XIL aims not only to improve predictive power, but also to improve the quality of the model's explanations with respect to the preferences and knowledge of the human user.

#### Qualitative Explanation Analysis

Another approach to evaluating the effectiveness of XIL methods is to qualitatively inspect a model's explanations, e.g. attribution maps, before and after revision. Next to test accuracy, this form of quality assessment is another popular evaluation many previous works focus on. Some recent techniques for (semi-)automatic explanation analysis exist, e.g. for detecting Clever-Hans strategies. For example, Spectral Relevance Analysis (SpRAy) inspects and clusters similar explanations (Lapuschkin2019CleverHans; Anders2019).

#### Wrong Reason Metric

Besides standard metrics like accuracy, we propose a novel, yet intuitive metric, termed the wrong reason metric (wr), to measure how wrong a model's explanation for a specific prediction is, given ground-truth wrong reasons. In contrast to the qualitative evaluation (manual, visual inspection) of attribution maps, our wr metric provides a quantitative complement. In detail, given an input sample $X \in \mathbb{R}^{N\times D}$, e.g. an image, a model f with parameters θ, an explainer expl, and a ground-truth annotation mask M, we quantify wr as

$$\mathrm{wr}(X,M) = \frac{\mathrm{sum}\big(b_\alpha(\mathrm{norm}_+(expl_\theta(X))) \circ M\big)}{\mathrm{sum}(M)}, \tag{5}$$

where ∘ is the Hadamard product, and $\mathrm{norm}_+$ normalizes the attribution values of the explanation to [0,1], taking only positive values into account by setting all negative values to zero. $b_\alpha$ binarizes the explanation ($expl_{ij} > \alpha \Rightarrow 1$, else 0), where the threshold α can be determined beforehand as the pixel-level mean over all explanation attribution maps in the test set. Depending on the explainer expl, it may be necessary to scale (down or up) the dimensions of the explanation to match the dimension D of the annotation mask M. In short, the metric calculates to what extent the wrong-reason area is activated. wr = 1 translates to 100% activation of the wrong-reason region, indicating that the model is fooled by the wrong reason and spuriously uses it as an informative feature; if this behavior co-occurs with high predictive performance, it implies Clever-Hans behavior, i.e. reasoning based on a wrong reason. Conversely, wr = 0 signals that 0% of the wrong-reason area is activated. Note, however, that one cannot, in principle, conclude that the model's reasoning is based on the right reasons merely because it does not rely on this known wrong reason. Comparing the wr scores of a Vanilla model with those of a XIL-extended model allows us to estimate the effectiveness of a specific XIL method: as one objective of XIL is to overcome the influence of the wrong-reason area, the wr score should at least be smaller than that of the Vanilla model.
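A minimal NumPy sketch of Eqn. (5) follows; the helper name `wr_score` is ours, and we assume the explanation has already been scaled to the mask's dimensions.

```python
import numpy as np

def wr_score(expl, M, alpha):
    """Sketch of the wr metric (Eqn. (5)).

    expl: attribution map with the same shape as M; M: binary mask where
    1 marks ground-truth wrong-reason pixels; alpha: binarization
    threshold, e.g. the pixel-level mean over all test-set attribution maps.
    """
    expl = np.clip(expl, 0.0, None)      # norm_+: zero out negative values ...
    if expl.max() > 0:
        expl = expl / expl.max()         # ... and normalize to [0, 1]
    b = (expl > alpha).astype(float)     # b_alpha: binarize the explanation
    return (b * M).sum() / M.sum()       # activated fraction of wrong-reason area
```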
### 4.2. Novel Benchmarking Tasks

In the following, we introduce further relevant benchmarking tasks for evaluating XIL methods.

#### Feedback robustness

An important aspect of the usability of a XIL approach is its robustness to variations in the completeness and quality of the user feedback. This task is vital, as user feedback in the real world is error-prone. In order to provide a benchmark that is comparable between different datasets and can be evaluated efficiently, we propose to simulate and model robustness via a proxy task for all dataset-model combinations. In the spirit of Doshi-Velez2017_eval, this task is therefore a functionally-grounded evaluation, with no human in the loop. Two compelling cases to examine are arbitrary and incomplete feedback. Arbitrary feedback can also be viewed as a sanity check of a XIL method, since it should not change the performance in any direction: a model should produce neither worse nor better predictive performance, as the feedback is neither useful nor detrimental. Incomplete feedback, on the other hand, imitates real-world feedback by providing only partially valuable feedback. For instance, in the case of DecoyMNIST (for details see the Experimental Setup), the two scenarios can be modeled as follows (a sketch of both mask types follows below):

1. Arbitrary feedback: a 5×3 rectangular pixel region in the middle sections of the top or bottom rows of an image, i.e. neither on relevant digit-feature regions nor on any part of the confounder squares, such that M ≠ C.
2. Incomplete feedback: the mask covers only a subregion S (here the top half) of the relevant components C, i.e. M marks only S ⊂ C.

A feedback mask is again denoted by M and the set of (ir)relevant components by C; in the case of fully correct user feedback, M = C. CE uses manipulated copies of the original images instead of binary masks. There are different CE strategies for the manipulation (we chose the CE-strategy randomize); the manipulated images are added to the training set. Further details and examples can be found in Appendix A.2.1 (*cf.* Fig. 6 and Fig. 7).
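To illustrate, below is a small sketch of how such proxy feedback masks could be constructed for 28×28 Decoy(F)MNIST images; the exact region placements are illustrative assumptions, not the paper's code.

```python
import numpy as np

def arbitrary_mask(h=28, w=28):
    """Sketch: a 5x3 region in the middle of the top rows, overlapping
    neither the digit nor the corner confounders (M != C)."""
    m = np.zeros((h, w))
    m[0:3, w // 2 - 2 : w // 2 + 3] = 1.0   # 3 rows x 5 columns
    return m

def incomplete_mask(correct_mask):
    """Sketch: keep only the top half of the correct feedback mask,
    i.e. a subregion S of the relevant components C."""
    m = correct_mask.copy()
    m[m.shape[0] // 2 :, :] = 0.0
    return m
```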
| Module | RRR | RRR-G | RBR | CDEP | HINT | CE |
| --- | --- | --- | --- | --- | --- | --- |
| Select | random | random | random | random | random | random |
| Explain | IG | GradCAM | IF | CD | GradCAM | LIME |
| Obtain | attribution mask | attribution mask | attribution mask | attribution mask | attribution mask | attribution mask |
| Revise | penalty | penalty | penalty | penalty | reward | dataset |

Table 1. Overview of the XIL methods' setup in our experiments (Select and Obtain are identical across all methods).

#### Interaction efficiency

In many previous applications and evaluations of XIL methods, every training sample was accompanied by corresponding explanatory feedback. Unfortunately, feedback, e.g. in the form of input masks, can be costly to obtain and is potentially only available to a limited extent. A very relevant evaluation, particularly for a method's practical usability, is therefore how many feedback interactions a human user needs in order to revise a model via a specific XIL method. In other words, we propose to investigate the interaction efficiency of a method as an additional benchmark task. To simulate a reduced feedback size, we randomly sample a subset of k annotations before training and evaluate each model with the different-sized feedback sets. Different values of k, i.e. the number of explanatory interactions, give broad insight into the capability of a XIL method to revise a model efficiently. The effect of the reduced set size is measured with accuracy. Thus, this evaluation task reduces the feedback set size and investigates its impact on the overall effectiveness of a XIL method.

#### Switch XIL on

A further benchmark task we propose is called switch XIL on, and it is motivated in two ways. First, it complements previous works, which usually simulated interaction with a model trained from scratch rather than with a strongly confounded model; the latter grants more insight into the effectiveness and function of XIL. Second, Algo. 1 shows that a model is usually fitted to the given data beforehand, and XIL is applied to the confounded model after, e.g., Clever-Hans behavior is detected. This contrasts with the other evaluation tasks, where a model is accompanied by a XIL method from scratch. Here we want to show the flexibility and real-world applicability of XIL, as completely retraining a model can be very costly or even infeasible, e.g. for large-scale pretrained models. Hence, it would be very valuable if a XIL method could successfully revise an already corrupted model. This evaluation task therefore targets correcting a pretrained, strongly corrupted model, i.e. a model already strongly biased towards Clever-Hans behavior. To this end, a Vanilla model is trained on the confounded train set for several epochs; subsequently, the XIL loss is switched on (for CE, the train set is augmented), as sketched below.
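A schematic training loop for this task could look as follows, assuming the `rrr_loss` sketch from Section 3.1 is in scope; the epoch counts, optimizer, and loader format (yielding example indices) are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def train_switch_xil_on(model, loader, masks, warmup_epochs=50, xil_epochs=50):
    """Sketch: first let the model overfit the confounder (Vanilla phase),
    then switch the XIL loss on to 'cure' the fooled model."""
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for epoch in range(warmup_epochs + xil_epochs):
        use_xil = epoch >= warmup_epochs        # XIL switched on after warm-up
        for X, y, idx in loader:                # loader also yields indices
            if use_xil:
                loss = rrr_loss(model, X, y, masks[idx])  # loss-augmented phase
            else:
                loss = F.cross_entropy(model(X), y)       # Vanilla phase
            opt.zero_grad()
            loss.backward()
            opt.step()
```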
5. No Free Lunch in XIL
-----------------------

We hypothesize that there is no single best XIL method: changing a module has costs, such that a modification may increase performance on one criterion at the expense of another. Hence, we investigate the different modules with various experiments to verify this hypothesis. Moreover, for the first time, we benchmark existing XIL methods and their modules based on our typology. By providing a comprehensive overview of these methods, we also reveal some as yet undiscovered limitations to encourage future research. To this end, we investigate the following questions: (Q1) How well do the methods revise a model? (Q2) Are they robust to feedback quality variations? (Q3) Does the revision still work with a reduced number of interactions? (Q4) Can XIL revise a strongly confounded model?

### 5.1. Experimental Setup

For our experiments, we use two different models: a simple CNN for the benchmark datasets and a VGG16 for the scientific dataset. Additionally, we summarize the setup of the XIL methods in Tab. 1. We use RRR with different explainer methods (IG, GradCAM, and IF) not only to compare different XIL methods but also to investigate the impact of different explainer methods on the same XIL method; for simplicity, we investigate only this one XIL method with different explainers. Further experimental details can be found in Appendix A.1.

We focus our evaluations on computer-vision datasets, where confounders are well-known and an active area of research (ZhongConfounder). In the relevant datasets, a confounder is a visual region in an image (e.g. a colored corner) that correlates with the image class but is not a causal factor for determining the true class. The confounder fools the model and constitutes a shortcut learning rule (Geirhos2020). In the standard setup, we train an ML model on a confounded train set and test on the non-confounded test set; our goal is to guide the model to ignore the confounder. To account for different facets of XIL, we chose two benchmark datasets, Decoy(F)MNIST, and one scientific dataset, ISIC19. For these datasets, a confounder is visual in the sense that it is spatially separable from the relevant features, providing a controlled environment for evaluation.

The DecoyMNIST dataset (Ross2017_RRR) is a modified version of the MNIST dataset whose train set introduces decoy squares. Specifically, train images contain 4×4 gray squares in randomly chosen corners, whose shades are functions of the digit classes; these gray-scale colors are randomized in the test set. The binary feedback masks M mark the confounders for the penalty strategy, while they mark the digits for the reward strategy. FashionMNIST (FMNIST) is an emendation of MNIST, which is overused in research and limited in complexity; FMNIST consists of images from ten fashion-article classes. The DecoyFMNIST dataset introduces the same confounding squares as DecoyMNIST (a construction sketch follows below).
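For illustration, decoy squares of this kind could be added roughly as follows; the exact shade mapping and corner choice are our assumptions, not necessarily those of Ross2017_RRR.

```python
import numpy as np

def add_decoy(img, label, train=True, rng=None, n_classes=10):
    """Sketch: paint a 4x4 gray square into a random corner of a 28x28
    image (values in [0, 1]). At train time the shade is a function of
    the class label (the confounder); at test time it is randomized."""
    rng = rng or np.random.default_rng(0)
    img = img.copy()
    shade = (label + 1) / n_classes if train else rng.uniform(0.1, 1.0)
    corners = [(0, 0), (0, 24), (24, 0), (24, 24)]
    r, c = corners[rng.integers(len(corners))]
    img[r : r + 4, c : c + 4] = shade
    return img
```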
The ISIC (International Skin Imaging Collaboration) Skin Cancer 2019 dataset (Codella2017; Combalia2019; Tschandl2018) consists of high-resolution dermoscopic images of skin lesions with either a benign or malignant cancer diagnosis. In contrast to the benchmark datasets, this dataset is significantly more complex and covers a real-world, high-stakes scenario. The main difference is that the confounders are not added artificially; we only know of one confounder, while unknown confounders may still exist. The known confounders are colored patches next to a skin lesion. We adjust the original test set, as it contains images with both known and unknown confounders: we exclude the images with the known confounder (the patches) to obtain a largely non-confounded test set, which is essential for measuring the confounder's influence. Note that the dataset only contains images of Europeans with lighter skin tones, reflecting the well-known skin-color problem, so the results cannot be generalized to other skin tones.

| XIL | DecoyMNIST train | DecoyMNIST test | DecoyFMNIST train | DecoyFMNIST test | ISIC19 train | ISIC19 test |
| --- | --- | --- | --- | --- | --- | --- |
| w/o decoy | 99.8 | 98.8 | 98.7 | 89.1 | – | – |
| Vanilla | 99.9 | 78.9 | 99.5 | 58.3 | 100.0 | 88.4 |
| RRR | 99.9 | 98.8 | 98.7 | **89.4** | 100.0 | 88.1 |
| RRR-G | 99.7 | 97.4 | 90.2 | 78.6 | 100.0 | **88.4** |
| RBR | 100.0 | **99.1** | 96.6 | 87.6 | 92.6 | 80.3 |
| CDEP | 99.3 | 97.1 | 89.8 | 76.7 | 100.0 | 87.9 |
| HINT | 97.6 | 96.6 | 99.0 | 58.2 | 100.0 | 87.7 |
| CE | 99.9 | 98.9 | 99.1 | 87.7 | 100.0 | 87.5 |

Table 2. Mean accuracy scores [%]. The first row shows the performance on the datasets without decoy squares (not available for ISIC19). The second row shows that the Vanilla model (no XIL) gets fooled, as indicated by its low test accuracy. Except for HINT on DecoyFMNIST, all methods recover the test accuracy; on ISIC19, no accuracy improvement can be observed. Best test values per dataset are bold; cross-validated over 5 runs (*cf.* Appendix A.6, Tab. 7 for standard deviations).

**DecoyMNIST**

| XIL | IG | GradCAM | LIME |
| --- | --- | --- | --- |
| Vanilla | 23.1±3.8 | 38.7±4.6 | 59.8±2.0 |
| RRR | **0.0±0.0** | 13.3±2.0 | **32.1±0.4** |
| RRR-G | 11.9±2.1 | **1.5±0.8** | 33.3±2.8 |
| RBR | 2.0±1.3 | 15.2±3.8 | 37.7±3.0 |
| CDEP | 15.0±1.5 | 27.8±3.8 | 37.9±3.7 |
| HINT | 11.9±3.1 | 46.8±1.1 | 53.8±2.0 |
| CE | 7.3±1.4 | 14.7±2.9 | 36.9±0.6 |

**DecoyFMNIST**

| XIL | IG | GradCAM | LIME |
| --- | --- | --- | --- |
| Vanilla | 25.0±1.9 | 34.8±1.4 | 57.6±0.8 |
| RRR | **0.0±0.0** | 24.2±4.1 | **27.4±0.7** |
| RRR-G | 2.1±0.4 | **4.6±0.9** | 38.1±4.5 |
| RBR | 5.97±1.4 | 16.0±4.8 | 34.9±1.4 |
| CDEP | 15.9±4.5 | 39.1±1.7 | 40.2±6.5 |
| HINT | 29.4±3.3 | 27.8±2.9 | 51.4±3.5 |
| CE | 8.1±0.4 | 24.4±0.9 | 31.1±0.6 |

**ISIC19**

| XIL | IG | GradCAM | LIME |
| --- | --- | --- | --- |
| Vanilla | 33.2±0.2 | 35.2±0.8 | 63.6±0.7 |
| RRR | 16.6±8.7 | 27.4±4.3 | 58.9±1.4 |
| RRR-G | **11.9±0.9** | **0.9±0.1** | **34.7±0.8** |
| RBR | 17.2±1.4 | 28.5±22.7 | 58.0±0.6 |
| CDEP | 25.5±0.2 | 5.4±0.2 | 67.4±0.0 |
| HINT | 31.1±0.1 | 22.0±0.2 | 60.9±0.0 |
| CE | 32.7±0.0 | 36.6±0.1 | 61.5±0.9 |

Table 3. Mean wr scores [%] per dataset; columns give the explainer used for evaluation. XIL reduces the wr scores on all datasets, even on ISIC19. Lower is better; best values are bold; cross-validated over 5 runs with standard deviations.

Figure 1. Qualitative inspection of explanations. The first column on each side shows the original image, the second column shows the Vanilla model's (no XIL) attribution maps, and the remaining columns show the attribution maps of a model revised with each XIL method. Each row represents an explanation method used to visualize the model prediction. The color bar indicates the activation of the attribution maps (yellow = strong activation). Left: results for DecoyMNIST; right: DecoyFMNIST.
### 5.2. Results

#### (Q1) Accuracy Revision

In order to investigate the general ability of a XIL method to revise a model (Revise), we evaluate the accuracy on each test set (Tab. 2). To give an impression of the confounder's impact, we provide a baseline by evaluating each model on the dataset without decoy squares; this is not available for ISIC19, as its confounders are not artificially added. The Vanilla model represents the performance of a model without revision via XIL: the confounder fools it, yielding low accuracy compared to the decoy-free baseline. In contrast, the XIL methods generally help a model overcome the confounder, as they recover the baseline test accuracy. RBR performs best on DecoyMNIST and RRR on DecoyFMNIST. On DecoyFMNIST, HINT achieves a low accuracy on par with the fooled Vanilla model, indicating that it cannot correct the Clever-Hans behavior here; we assume that its reward strategy does not suffice to overcome the confounder and, in turn, for XIL to function properly. For the ISIC19 dataset, no XIL method helps a model improve its accuracy on the test set. Therefore, we cannot answer (Q1) affirmatively for ISIC19 based purely on accuracy, which motivates metrics beyond accuracy. In summary, our experiments answer (Q1): the XIL methods have the general ability to revise a model but may struggle with increasing data complexity.

#### (Q1) Wrong Reason Revision

To assess the ability to revise wrong reasons, we conduct quantitative (Tab. 3) and qualitative (Fig. 1) experiments to inspect Explain. On the one hand, we have the quantitative wr score: it measures the activation in the confounder area and hence automates the visual inspection of explanations. The Vanilla model (without XIL) achieves high wr scores, i.e. high activation in the confounder region. Again, the XIL methods help a model lower the wr score, reducing the confounder's impact. The table further shows that the XIL methods overfit to the internally used explainer in terms of reducing attention to the confounding region (*cf.* RRR with IG explanations or RRR-G with GradCAM explanations).
This is expected, as a XIL method optimizes exactly for the explanation method it uses. Interestingly, a XIL method also reduces the wr score for explainers that are not used internally (*cf.* RRR with GradCAM explanations or RRR-G with IG explanations); consequently, XIL's impact reaches beyond its internally used explainer and is not restricted to it. However, LIME attribution maps remain highly activated in all cases, albeit reduced, as LIME is never used as the internal explainer. Furthermore, we can see that CDEP and HINT do not significantly reduce the wr score compared to the baseline. As HINT works by rewarding instead of penalizing and is thus not explicitly trained to avoid confounders, we do not necessarily expect it to score low wr values. CDEP, however, also fails to achieve low wr values and does not overcome the confounder, although it uses a penalty. We previously found that XIL could not improve on the test performance of the fooled Vanilla model on the ISIC19 dataset; surprisingly, however, the wr score indicates a clear reduction. So, although XIL cannot revise the model in terms of accuracy on ISIC19, it does change the explanations (it lowers the wr score), proving its function and usefulness. Notably, these findings showcase the importance of additional quantitative metrics, such as wr, for evaluating XIL methods.

![Exemplary masks for varying feedback types.](https://media.arxiv-vanity.com/render-output/7840230/LaTeX/Images/Feedback_example_.png)

Figure 2. Exemplary masks for varying feedback types.

**DecoyMNIST**

| feedback | RRR | RRR-G | RBR | CDEP | HINT | CE |
| --- | --- | --- | --- | --- | --- | --- |
| arbitrary ↓ | +3.3 | –4.2 | –22.1 | +17.8 | +4.1 | +0.3 |
| incomplete ↑ | +19.6 | +9.5 | +17.2 | +17.9 | +17.7 | +6.7 |
| correct ↑ | +19.9 | +18.5 | +20.2 | +18.2 | +17.7 | +20 |

**DecoyFMNIST**

| feedback | RRR | RRR-G | RBR | CDEP | HINT | CE |
| --- | --- | --- | --- | --- | --- | --- |
| arbitrary ↓ | +1.4 | +7.9 | –37.3 | +12.2 | –3.2 | –1.2 |
| incomplete ↑ | +24.2 | +12.4 | +16 | +16.8 | +21 | +3.4 |
| correct ↑ | +31.1 | +20.3 | +29.3 | +18.4 | –0.1 | +29.4 |

Table 4. Feedback-robustness evaluation on Decoy(F)MNIST for arbitrary and incomplete feedback, compared to correct feedback masks. Shown is the mean difference in test accuracy [%] relative to the confounded Vanilla model. For arbitrary feedback, lower is better; for incomplete and correct feedback, higher is better. Incomplete feedback is on par with correct feedback. Cross-validated over 5 runs (*cf.* Appendix A.6, Tab. 8 for standard deviations).

On the other hand, we manually inspected 200 randomly generated attribution maps for each method-explainer combination and illustrate the findings with one representative example per benchmark dataset. Fig. 1 shows explanation attribution maps for DecoyMNIST (left) and DecoyFMNIST (right). A high activation in or around the top-right corner indicates the model's Clever-Hans behavior with respect to the confounder. For the Vanilla model, each explanation method highlights the confounder region. The top row shows attribution maps generated with the IG explainer; here, we can see the previously found overfitting once again. For example, the RRR-revised model shows no activation in the confounding top-right corner, while RRR-G still has high activation around the corner.
Consequently, XIL only works reliably and well with the internally used explainer. The qualitative findings confirm the quantitative wr results and demonstrate that wr is a suitable metric for evaluating the performance of XIL methods. Beyond that, our inspection yields three observations. First, penalizing wrong reasons does not enforce predictions based on the right reasons (*cf.* RRR-G with IG attribution maps on DecoyMNIST). Second, attribution maps generated via GradCAM require upsampling and hence prevent a clear and precise interpretation; although the right reason is sometimes highlighted, some uncertainty about its precision remains. Third, the provided explanation methods visually contradict each other. The RRR column, for instance, indicates that XIL, and XAI in general, must be handled with care: the performance values may suggest that a confounder is overcome, while the visual explanations (attribution maps) indicate otherwise. Overall, our evaluation gives more insight into Explain and extends the previous findings for (Q1). Although, in terms of accuracy, the considered XIL methods may not remove all Clever-Hans behavior, we observed that the methods, and XIL in general, do in fact improve a model's explanations and can therefore effectively be used to revise a model. Due to our findings of possibly additional confounding factors in the ISIC19 dataset (*cf.* Fig. 5), we focused the further evaluations on the remaining datasets.

#### (Q2) Robustness to feedback quality variations

As previous research considered only correct feedback masks, we additionally provide insights into feedback quality and the robustness of a XIL method to quality changes. The objective of this experiment is to gather more knowledge about Obtain. Exemplary feedback masks are illustrated in Fig. 2, visualizing the feedback types for a penalty-based approach. Tab. 4 compares the impact of the different feedback types against a fooled Vanilla model; the values for correct feedback are taken from Tab. 2. Correct feedback demonstrates how XIL improves accuracy, i.e. removes the confounder's impact, compared to the Vanilla model. Moreover, we can clearly see that incomplete feedback is nearly on par with correct feedback for all methods in improving test accuracy. This emphasizes XIL's robustness to user feedback of varying quality and suggests real-world usability, considering that human feedback is prone to errors. Note that CE performs worse with incomplete feedback due to its dataset-augmentation strategy: the confounder's impact is not as easily removed by adding images in which only part of the confounding square is removed; our results indicate, however, that this still suffices for a limited revision. For arbitrary feedback, in contrast, robustness means that a method does not significantly change performance in either direction.
However, we see a significant increase for CDEP and a significant decrease for RBR, especially compared to the improvement under correct feedback. This suggests that CDEP improves performance regardless of the feedback quality. Consequently, we presume that CDEP does not pass this sanity check, leaving some concerns about its reliability: if it is irrelevant what the user feedback masks look like, the rationale behind the XIL method is questionable and its usage worrisome for users. For RBR, we presume that arbitrary feedback leads to a collapse of the model's learning process, i.e. random guessing, revealing a lack of robustness. All in all, however, the considered XIL methods prove generally robust to different feedback quality types, answering (Q2) affirmatively and providing evidence for XIL's effectiveness in more practical use cases.

#### (Q3) Interaction efficiency

Obtaining precise and correct feedback annotations is costly and time-consuming, making interaction efficiency a crucial property of XIL methods. Therefore, we examine how many explanatory interactions suffice to overcome a known confounder; a method that utilizes annotations more efficiently, i.e. requires fewer interactions to revise a model, is preferable. In the previous experiments, every training image was accompanied by its corresponding feedback mask. In contrast, we now randomly sample a subset of k annotations before training and evaluate each model with the different-sized feedback sets, i.e. numbers of explanatory interactions (a sketch of this subsampling follows below). In doing so, we target Select, as we investigate how the selection affects the model revision. Fig. 3 shows increasing test accuracy with an increasing number of available feedback masks for all XIL methods, i.e. the more feedback, the better. Moreover, the figure shows that only a tiny fraction of feedback masks is required to revise a model properly. Although the XIL methods differ significantly in their efficiency, our results illustrate that XIL utilizes feedback efficiently and can already cope with few feedback annotations. Note that the methods achieve different test accuracies with the full feedback set, so they do not all converge to the same level (*cf.* Tab. 2 for the test accuracy with the full feedback set). Interestingly, RRR, for example, needs only a few dozen interactions to overcome the confounder, while CE requires considerably more. In summary, the surveyed XIL methods can utilize user feedback efficiently, answering (Q3).
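The subsampling protocol itself is simple; a minimal sketch (with an illustrative helper name) follows.

```python
import numpy as np

def subsample_feedback(masks, k, rng=None):
    """Sketch: keep explanatory feedback for only k randomly chosen
    training examples; all other masks become empty (no interaction)."""
    rng = rng or np.random.default_rng(0)
    keep = rng.choice(len(masks), size=k, replace=False)
    reduced = np.zeros_like(masks)
    reduced[keep] = masks[keep]
    return reduced
```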
![Interaction efficiency.](https://media.arxiv-vanity.com/render-output/7840230/LaTeX/Images/IntEff.png)

Figure 3. Interaction efficiency. Test accuracy of the XIL methods with different numbers of used feedback interactions on Decoy(F)MNIST. The more interactions, the higher the performance; however, a small number of interactions already suffices.

![Switch XIL on.](https://media.arxiv-vanity.com/render-output/7840230/LaTeX/Images/XilOn.png)

Figure 4. Switch XIL on. Test accuracy over time after XIL is applied to an already fooled model. All methods except CDEP and RBR recover the test performance and overcome the confounder.

#### (Q4) Revising a strongly corrupted model

In order to further evaluate the real-world usability of XIL, we conduct the switch XIL on experiment, in which we turn a XIL method on after a model has already been fooled and shows strong Clever-Hans behavior. Fig. 4 shows the test performance of a model over the course of training. First, the model is fooled by the decoy squares; after 50 epochs, the XIL augmentation is switched on (i.e., either the loss or the dataset is augmented). As we can see, all methods except CDEP and RBR recover the test accuracy and overcome the confounder. RRR shows a distinctive curve, as the RR loss sharply increases, causing the accuracy to drop before it sharply rises again; most likely, more hyperparameter tuning is required to avoid this leap. For RBR, we assume the same, as its loss is difficult to tune accordingly. This experiment thus also answers (Q4) affirmatively: XIL can "cure" already confounded models, which is an important property for real-world applications.

6. Discussion
-------------

The previous sections demonstrated that modifying XIL modules is no free lunch, in the sense that modifying one module does not guarantee improvements on all criteria. In the following, we discuss some additional points.

![ISIC19 showcase.](https://media.arxiv-vanity.com/render-output/7840230/LaTeX/Images/isic_showcase.png)

Figure 5. (left) An ISIC19 image with a confounder (red patch). (middle) Explanation of an RRR-revised model and (right) of a HINT-revised model for this image, visualized with GradCAM. RRR helps disclose unknown confounders (dark corners), and HINT reveals the potential of the reward strategy.

As pointed out initially, it is often easier to state a wrong reason than a right reason (Schramowski2020_Plantphenotyping). However, penalizing wrong reasons may not be enough to revise a model to be right for all the right reasons: avoiding one wrong reason but using another wrong one instead does not fully revise a model. The attribution maps provided for ISIC19 (*cf.* Fig. 5) illustrate this trade-off. As we can observe, the reward strategy (HINT) visually outperforms the penalty strategy: the penalty method, exemplified here via RRR, does point towards the right reason to a certain degree, but not exclusively and not as reliably as rewarding via HINT. In general, however, a reward strategy cannot guarantee that confounders are avoided. Throughout our work, we further encountered ambiguities between different explainer methods.
When a XIL method is applied and a sample is visualized with different explainer methods, we find contradicting attribution maps (*cf.* the RRR column in Fig. 1). In fact, the analysis of attribution maps shows remarkable differences between IG, GradCAM, and LIME; in some cases, we even observe opposing explanations. This raises the concern of how reliable the explainer methods themselves are. For surveying XIL, we make the general assumption that the explanation methods are correct, which is still an open topic in the field (adebayo2018sanity; krishna2022disagreement). Although it is the explicit goal of XIL to improve explanations, this can only work if an explainer method does not inherently fail at producing meaningful and coherent explanations; in that case, the overall objective of increasing user trust is already undermined before XIL enters the game.

One of the main challenges for XIL is real-world application. Revising a model must be easy for an average ML practitioner or any human user; if the resource demand is too high, the methods are difficult to use. This is specifically a problem for state-of-the-art, large-scale pretrained models. One example is RBR, which uses IFs, i.e. second-order derivatives, to correct Clever-Hans behavior. In our evaluations, we found that IFs induce a huge resource demand, making XIL slower and more challenging to optimize; loss clipping was necessary to avoid exploding gradients. In terms of architecture choice and design, we also encountered several obstacles. Our framework description already pointed out that not every XIL method is applicable to every model or explainer method, e.g. GradCAM-based XIL methods can only be applied to CNNs. We argue that a flexible XIL method, compatible with various models and explainer methods, is preferable. From our experimental evaluations of the number of required interactions, we observed that CE, with its dataset-augmentation strategy, requires the largest amount of user feedback. Especially for large-scale models, the number of required interactions can be a limiting factor, as in practical use cases often only a limited amount of explanatory feedback is available. Trustworthiness is a further aspect here: a user may trust a model less if the feedback they provide is not directly incorporated by the model, even though it should suffice to revise it. Furthermore, we noticed that CE is less robust to incomplete feedback, possibly compromising this approach for real-world application.

A noteworthy potential of XIL could be observed in the qualitative evaluations of the ISIC19 attribution maps. By applying XIL to one confounder, we could disclose further, previously unknown confounders (shortcuts) to the user, in this case the dark corners found in the images (*cf.* Fig. 5, middle). These findings further demonstrate the importance of a strong level of human-machine interaction via explanations. Particularly in such a setting, each side can learn from the other in a bidirectional, discourse-like manner, going beyond the unidirectional communication provided by XAI alone.
To this end, we refer to the theory of embodied intelligence, in which interaction with and manipulation of the environment allow for information enrichment to obtain intelligent systems (Tan2007IntelligenceTI).

7. Conclusion
-------------

In summary, this work presents a comprehensive survey of the rising field of XIL and an empirical comparison of six currently proposed XIL methods. Since these methods emerged independently from different communities, we proposed the first XIL typology to unify terminology and categorize XIL methods concisely. Furthermore, we evaluated these methods on novel benchmarks and criteria, covering the performance in model revision, robustness under changing feedback quality, interaction efficiency, and real-world applicability. In addition, we introduced the novel wr metric to measure the average confounder activation in a model's explanations. Our typology and evaluations showed that XIL methods allow one to revise a model not only in terms of accuracy but also by improving its explanations. However, we also observed overfitting to the internally used explainer method; a mixture of explainers may account for uncertainty about the right reasons. Moreover, one should combine feedback on what the right explanation is with feedback on what it is not, rather than focusing on only one of these feedback semantics. Tuning hyperparameters during XIL also remains an exciting avenue for future work, as different meta-learning setups could lead to different results. Most importantly, existing XIL approaches simply follow a linear sequence of Select, Explain, Obtain, and Revise; as in daily human-to-human communication, machines should instead follow more flexible policies, such as an Explain & Obtain sub-loop, pushing towards what might be called explanatory cooperative AI (Dafoe2021nature).
Table of Contents
=================

- [Interview Information](#interview-information)
  - [Individually-selected](#individually-selected)
  - [NeurIPS-or-ICML](#neurips-or-icml)
- [Intended Script (all interviews)](#intended-script-all-interviews)
- [Post-Interview Resources Sent To Interviewees](#post-interview-resources-sent-to-interviewees)
  - [Master list of resources](#master-list-of-resources)
- [Informal Interview Notes](#informal-interview-notes)
  - [Thoughts from listening to myself doing these interviews](#thoughts-from-listening-to-myself-doing-these-interviews)
  - [Content analysis](#content-analysis)

Interview Information
=====================

These interviews are associated with the LessWrong post: [Transcripts of interviews with AI researchers](https://www.lesswrong.com/posts/LfHWhcfK92qh2nwku/transcripts-of-interviews-with-ai-researchers). (Please do not try to identify any interviewees from any remaining peripheral information.)

### Individually-selected

"Five of the interviews were with researchers who were informally categorized as 'particularly useful to talk to about their opinions about safety' (generally more senior researchers at specific organizations)."

- 7ujun
- zlzai
- 92iem
- 84py7
- w5cb5

### NeurIPS-or-ICML

"Six of the interviews were with researchers who had papers accepted at NeurIPS or ICML in 2021."

- a0nsf (this is the interview in which I most straightforwardly get through my questions)
- q243b
- 7oalk
- lgu5f
- cvgig
- bj9ne (language barriers, young)

Intended Script (all interviews)
================================

There was a fixed set of questions that I was attempting to walk people through, across all of the interviews. It's a sequence, so I generally didn't move on to the next core question until I had buy-in for the previous core question. The core questions were: "do you think we'll get AGI" (if yes, I moved on; if not, I interacted with the beliefs there, sometimes for the entire interview), "[alignment problem]", and "[instrumental incentives]". I was reacting to the researchers' mental models in all cases. I was trying to get to all of the core questions during the allotted time, but early disagreements often reappeared if the interviewee and I didn't manage to reach initial agreement. I prioritized the core questions and brought other questions up if they seemed relevant.

The questions (core questions are highlighted in bold):

- "What are you most excited about in AI, and what are you most worried about? (What are the biggest benefits or risks of AI?)"
- "In at least 50 years, what does the world look like?"
- **"When do you think we'll get AGI / capable / generalizable AI / have the cognitive capacities to have a CEO AI if we do?"**
  - Example dialogue: "All right, now I'm going to give a spiel. So, people talk about the promise of AI, which can mean many things, but one of them is getting very general capable systems, perhaps with the cognitive capabilities to replace all current human jobs so you could have a CEO AI or a scientist AI, etcetera. And I usually think about this in the frame of 2012: we have the deep learning revolution, we've got AlexNet, GPUs. 10 years later, here we are, and we've got systems like GPT-3 which have kind of weirdly emergent capabilities. They can do some text generation and some language translation and some code and some math.
And one could imagine that if we continue pouring in all the human investment that we're pouring into this, like money, competition between nations, human talent, so much talent and training all the young people up, and if we continue to have algorithmic improvements at the rate we've seen and continue to have hardware improvements, so maybe we get optical computing or quantum computing, then one could imagine that eventually this scales to more of quite general systems, or maybe we hit a limit and we have to do a paradigm shift in order to get to the highly capable AI stage. Regardless of how we get there, my question is, do you think this will ever happen, and if so when?"
- **"What do you think of the argument 'highly intelligent systems will fail to optimize exactly what their designers intended them to, and this is dangerous'?"**
  - Example dialogue: "Alright, so these next questions are about these highly intelligent systems. So imagine we have a CEO AI, and I'm like, 'Alright, CEO AI, I wish for you to maximize profit, and try not to exploit people, and don't run out of money, and try to avoid side effects.' And this might be problematic, because currently we're finding it technically challenging to translate human values, preferences, and intentions into mathematical formulations that can be optimized by systems, and this might continue to be a problem in the future. So what do you think of the argument 'Highly intelligent systems will fail to optimize exactly what their designers intended them to and this is dangerous'?"
- **"What do you think about the argument: 'highly intelligent systems will have an incentive to behave in ways to ensure that they are not shut off or limited in pursuing their goals, and this is dangerous'?"**
  - Example dialogue: "Alright, next question is, so we have a CEO AI and it's like optimizing for whatever I told it to, and it notices that at some point some of its plans are failing and it's like, 'Well, hmm, I noticed my plans are failing because I'm getting shut down. How about I make sure I don't get shut down? So if my loss function is something that needs human approval and then the humans want a one-page memo, then I can just give them a memo that doesn't have all the information, and that way I'm going to be better able to achieve my goal.' So not positing that the AI has a survival function in it, but as an instrumental incentive to being an agent that is optimizing for goals that are maybe not perfectly aligned, it would develop these instrumental incentives. So what do you think of the argument, 'Highly intelligent systems will have an incentive to behave in ways to ensure that they are not shut off or limited in pursuing their goals and this is dangerous'?"
- "Have you heard of the term 'AI safety'? And if you have or have not, what does that term mean for you?"
- "Have you heard of AI alignment?"
- "What would motivate you to work on alignment questions?"
- "If you could change your colleagues' perception of AI, what attitudes/beliefs of theirs would you like to change?"
- "What are your opinions about policy oriented around AI?"
I also had content prepared in case we got to the end of the interview, based on [Clarke et al. (2022)](https://www.alignmentforum.org/posts/WiXePTj7KeEycbiwK/survey-on-ai-existential-risk-scenarios), [RAAPs](https://www.lesswrong.com/posts/LpM3EAakwYdS6aRKf/what-multipolar-failure-looks-like-and-robust-agent-agnostic), some of Critch's content on pollution, and my general understanding of the space. My notes: "Scenarios here are about loss of control + correlated failures... can also think about misuse, or AI-assisted war. Also a scenario where the AI does recursive self-improvement, and ends up actually able to kill humans via e.g. synthetic biology or nanotechnology or whatever, pollution."

Post-Interview Resources Sent To Interviewees
=============================================

I sent most interviewees resources after the interviews.

- I usually floated the idea of sending them resources during the interview, and depending on their response, would send different amounts of resources.
- I did **not** send resources if the interviewee seemed like they would be annoyed by them.
- I only sent a couple of resources if they seemed not very open to the idea.
- For people who were very interested, I often sent them different content that was more specific to them getting involved. These were the people to whom I sometimes sent the EA / Rationalist material at the end -- I very rarely included EA/Rationalist-specific content in emails, only if they seemed like they'd be very receptive.

Here's my master list of notes, which I selected from for each person based on their interests. I sometimes sent along copies of Human Compatible, The Alignment Problem, or The Precipice.

### Master list of resources

Hello X,

Very nice to speak to you! As promised, some resources on AI alignment. I tried to include a bunch of stuff so you could look at whatever you found interesting. Happy to chat more about anything, and thanks again!

**Introduction to the ideas:**

- **The [Most Important Century](https://www.cold-takes.com/most-important-century/) series, and specifically "[Forecasting Transformative AI](https://www.cold-takes.com/transformative-ai-timelines-part-1-of-4-what-kind-of-ai/)", by Holden Karnofsky; blog series and podcast. Most recommended for its description of AI timelines**
- [Introduction](https://www.vox.com/future-perfect/2018/12/21/18126576/ai-artificial-intelligence-machine-learning-safety-alignment) piece by Kelsey Piper (Vox)
- A short [interview](https://www.vox.com/future-perfect/2019/10/26/20932289/ai-stuart-russell-human-compatible) with Prof. Stuart Russell (UC Berkeley) about his book, [Human Compatible](https://smile.amazon.com/Human-Compatible-Artificial-Intelligence-Problem-ebook/dp/B07N5J5FTS/ref=sr_1_1?dchild=1&keywords=human+compatible&qid=1635910751&s=digital-text&sr=1-1) (the other main book in the space is [The Alignment Problem](https://smile.amazon.com/Alignment-Problem-Machine-Learning-Values-ebook/dp/B085T55LGK/ref=sr_1_1?dchild=1&keywords=alignment+problem&qid=1635910676&s=digital-text&sr=1-1), by Brian Christian, which I actually like more!)
**Technical work on AI alignment:**

- Some [empirical work](https://deepmindsafetyresearch.medium.com/scalable-agent-alignment-via-reward-modeling-bf4ab06dfd84) by DeepMind's Safety team about the alignment problem
- [Empirical work](https://arxiv.org/pdf/2112.00861.pdf) by an organization called Anthropic (mostly OpenAI's old Safety team) on alignment solutions
- [Podcast (and transcript)](https://futureoflife.org/2021/11/01/rohin-shah-on-the-state-of-agi-safety-research-in-2021/) by Rohin Shah, describing the state of AI value alignment (probably want the first half or so)
- [Talk (and transcript)](https://forum.effectivealtruism.org/posts/63stBTw3WAW6k45dY/paul-christiano-current-work-in-ai-alignment) by Paul Christiano describing the AI alignment landscape in 2020
- [Alignment Newsletter](https://rohinshah.com/alignment-newsletter/) for alignment-related work
- A much more hands-on approach to [ML safety](https://arxiv.org/abs/2109.13916), focused on current systems
- Interpretability work aimed at long-term alignment: [Elhage (2021)](https://transformer-circuits.pub/2021/framework/index.html), by Anthropic, and [Olah (2020)](https://distill.pub/2020/circuits/zoom-in/)
- Ah, and one last report, which outlines one small research organization's ([Alignment Research Center](https://alignmentresearchcenter.org/)) [research direction](https://docs.google.com/document/d/1WwsnJQstPq91_Yh-Ch2XRL8H_EpsnjrC1dwZXR37PC8/edit) and offers prize money for solving it: <https://www.lesswrong.com/posts/QEYWkRoCn4fZxXQAY/prizes-for-elk-proposals>

**Introduction to large-scale, long-term risks to humanity -- including "existential risks" that would lead to the extinction of humanity:**

- The [first third of this book summary](https://ndpr.nd.edu/reviews/the-precipice-existential-risk-and-the-future-of-humanity/), or the book [The Precipice](https://www.amazon.com/Precipice-Existential-Risk-Future-Humanity/dp/031648492X/), by Toby Ord (not about AI particularly, more about long-term risks)

> Chapter 3 is on natural risks, including risks of asteroid and comet impacts, supervolcanic eruptions, and stellar explosions. Ord argues that we can appeal to the fact that we have already survived for 2,000 centuries as evidence that the total existential risk posed by these threats from nature is relatively low (less than one in 2,000 per century).
>
> Chapter 4 is on anthropogenic risks, including risks from nuclear war, climate change, and environmental damage. Ord estimates these risks as significantly higher, each posing about a one in 1,000 chance of existential catastrophe within the next 100 years. However, the odds are much higher that climate change will result in non-existential catastrophes, which could in turn make us more vulnerable to other existential risks.
>
> Chapter 5 is on future risks, including engineered pandemics and artificial intelligence. Worryingly, Ord puts the risk of engineered pandemics causing an existential catastrophe within the next 100 years at roughly one in thirty. With any luck the COVID-19 pandemic will serve as a "warning shot," making us better able to deal with future pandemics, whether engineered or not. Ord's discussion of artificial intelligence is more worrying still.
> The risk here stems from the possibility of developing an AI system that both exceeds every aspect of human intelligence and has goals that do not coincide with our flourishing. Drawing upon views held by many AI researchers, Ord estimates that the existential risk posed by AI over the next 100 years is an alarming one in ten.
>
> Chapter 6 turns to questions of quantifying particular existential risks (some of the probabilities cited above do not appear until this chapter) and of combining these into a single estimate of the total existential risk we face over the next 100 years. Ord's estimate of the latter is one in six.

- [How to Reduce Existential Risk](https://80000hours.org/articles/how-to-reduce-existential-risk/) by 80,000 Hours, or the "[Our current list of pressing world problems](https://80000hours.org/problem-profiles/)" blog post

**Governance:**

- [AI Governance: Opportunity and Theory of Impact](https://www.allandafoe.com/opportunity), by Allan Dafoe, and [GovAI](https://www.governance.ai/) generally
- [AI Governance: A Research Agenda](https://www.fhi.ox.ac.uk/wp-content/uploads/GovAI-Agenda.pdf), by Allan Dafoe and [GovAI](https://www.governance.ai/)
- [The longtermist AI governance landscape: a basic overview](https://forum.effectivealtruism.org/posts/ydpo7LcJWhrr2GJrx/the-longtermist-ai-governance-landscape-a-basic-overview) if you're interested in getting involved; also [more personal posts on how to get involved](https://forum.effectivealtruism.org/tag/governance-of-artificial-intelligence), including [Locke_USA - EA Forum](https://forum.effectivealtruism.org/users/locke_usa)
- [The case for building expertise to work on US AI policy, and how to do it](https://80000hours.org/articles/us-ai-policy/) by 80,000 Hours

**How AI could be an existential risk:**

- [AI alignment researchers disagree a weirdly high amount about how AI could constitute an existential risk](https://www.alignmentforum.org/posts/WiXePTj7KeEycbiwK/survey-on-ai-existential-risk-scenarios), so I hardly think the question is settled. Some plausible scenarios people are considering (from the paper):
  - "Superintelligence" - A single AI system with goals that are hostile to humanity quickly becomes sufficiently capable for complete world domination, and causes the future to contain very little of what we value, as described in "[Superintelligence](https://en.wikipedia.org/wiki/Superintelligence:_Paths,_Dangers,_Strategies)". (Note from Vael: Where the AI has an instrumental incentive to destroy humans and uses its planning capabilities to do so, for example via synthetic biology or nanotechnology.)
  - Part 2 of "[What failure looks like](https://www.alignmentforum.org/posts/HBxe6wdjxK239zajf/what-failure-looks-like)" - This involves multiple AIs accidentally being trained to seek influence, and then failing catastrophically once they are sufficiently capable, causing humans to become extinct or otherwise permanently lose all influence over the future.
    (Note from Vael: I think we might have to pair this with something like "and in this loss of control, the environment then becomes [uninhabitable to humans](https://www.lesswrong.com/posts/LpM3EAakwYdS6aRKf/what-multipolar-failure-looks-like-and-robust-agent-agnostic) through pollution or consumption of resources important for human survival.")
  - Part 1 of "[What failure looks like](https://www.alignmentforum.org/posts/HBxe6wdjxK239zajf/what-failure-looks-like)" - This involves AIs pursuing easy-to-measure goals, rather than the goals humans actually care about, causing us to permanently lose some influence over the future. (Note from Vael: I think we might have to pair this with something like "and in this loss of control, the environment then becomes [uninhabitable to humans](https://www.lesswrong.com/posts/LpM3EAakwYdS6aRKf/what-multipolar-failure-looks-like-and-robust-agent-agnostic) through pollution or consumption of resources important for human survival.")
  - War - Some kind of war between humans, exacerbated by developments in AI, causes an existential catastrophe. AI is a significant risk factor in the catastrophe, such that no catastrophe would have occurred without the developments in AI. The proximate cause of the catastrophe is the deliberate actions of humans, such as the use of AI-enabled, nuclear, or other weapons. See Dafoe ([2018](https://www.fhi.ox.ac.uk/wp-content/uploads/GovAIAgenda.pdf)) for more detail. (Note from Vael: Though there's a recent argument that it may be [unlikely for nuclear weapons to cause an extinction event](https://www.lesswrong.com/posts/sT6NxFxso6Z9xjS7o/nuclear-war-is-unlikely-to-cause-human-extinction), and instead it would just be catastrophically bad. One could still probably do it with synthetic biology, though, to get all of the remote people.)
  - Misuse - Intentional misuse of AI by one or more actors causes an existential catastrophe (excluding cases where the catastrophe was caused by misuse in a war that would not have occurred without developments in AI). See Karnofsky ([2016](https://www.openphilanthropy.org/blog/potential-risks-advanced-artificial-intelligence-philanthropic-opportunity)) for more detail.
  - Other

**Off-switch game and corrigibility:**

- The [off-switch game](https://arxiv.org/abs/1611.08219) and [corrigibility](https://intelligence.org/files/Corrigibility.pdf) papers, about an AI's incentives to allow itself to be shut down. This article from DeepMind about "[specification gaming](https://deepmindsafetyresearch.medium.com/specification-gaming-the-flip-side-of-ai-ingenuity-c85bdb0deeb4)" isn't about off-switches, but also makes me feel like there's currently a tradeoff in task specification, where building more generalizability into a system will result in novel solutions but less control. Their [follow-up paper](https://deepmindsafetyresearch.medium.com/scalable-agent-alignment-via-reward-modeling-bf4ab06dfd84), where they outline a possible research direction for this problem, makes me feel (as does all of the other discussion in AI alignment) like encoding human preferences is going to be quite hard, though we don't know how hard the alignment problem will be.

**There's also a growing community working on AI alignment:**

- The strongest academic center is probably UC Berkeley's [Center for Human-Compatible AI](https://humancompatible.ai/about/).
  Mostly there are researchers distributed at different institutions, e.g. [Dylan Hadfield-Menell](https://scholar.google.com/citations?hl=en&user=4mVPFQ8AAAAJ&view_op=list_works&sortby=pubdate) at MIT, [Jaime Fisac](https://scholar.google.com/citations?hl=en&user=HvjirogAAAAJ&view_op=list_works&sortby=pubdate) at Princeton, [David Krueger](https://twitter.com/davidskrueger) in Oxford, Sam Bowman at NYU, Alex Turner at Oregon, etc. Also, a good portion of the work is done by industry / nonprofits: [Anthropic](https://www.anthropic.com/), [Redwood Research](https://www.redwoodresearch.org/), OpenAI's safety team, DeepMind's Safety team, [ARC](https://alignmentresearchcenter.org/), and independent researchers in various places.
- **There is money in the space! If you want to do AI alignment research, you can be funded by Open Philanthropy ([students](https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/the-open-phil-ai-fellowship), [faculty](https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/request-for-proposals-for-projects-in-ai-alignment-that-work-with-deep-learning-systems); one can also just email them directly instead of going through their grant programs), the [LTFF](https://funds.effectivealtruism.org/funds/far-future), or [FTX](https://ftxfuturefund.org/area-of-interest/). This is somewhat competitive and you do have to show good work, but it's less competitive than a lot of sources of funding in academia.**
- If you want to rapidly learn more about the theoretical technical AI alignment space, walking through this [curriculum](https://www.eacambridge.org/technical-alignment-curriculum) is one of the best resources. A lot of the interesting theoretical work is happening online, at [LessWrong](https://www.lesswrong.com/posts/P3Yt66Wh5g7SbkKuT/how-to-get-into-independent-research-on-alignment-agency) / the [Alignment Forum](https://www.alignmentforum.org/posts/Yp2vYb4zHXEeoTkJc/welcome-and-faq) (introductory [content](https://www.alignmentforum.org/s/mzgtmmTKKn5MuCzFJ)), since this field is still pretty pre-paradigmatic and people are still working through a lot of the ideas.

**There are also two related communities who care about these issues, who you might find interesting:**

- The [Effective Altruism](https://www.effectivealtruism.org/articles/introduction-to-effective-altruism) community, whose strong internet presence is on the [EA Forum](https://forum.effectivealtruism.org/posts/fd3iQRkmCKCCL289u/new-start-here-useful-links). [Longtermism](https://80000hours.org/articles/future-generations/) is a concept they care a lot about, and you can schedule a one-on-one [coaching](https://80000hours.org/speak-with-us/) call here.
- The [Rationalist](https://www.lesswrong.com/tag/rationalist-movement) community -- the best blog from this community is by Scott Alexander ([first blog](https://slatestarcodex.com/about/), [second blog](https://astralcodexten.substack.com/?sort=top)), and they're present on [LessWrong](https://www.lesswrong.com/bestoflesswrong). Amusingly, they also write fantastic fanfiction (e.g.
[Harry Potter and the Methods of Rationality](https://www.lesswrong.com/hpmor)), and I think some of their [nonfiction](https://mindingourway.com/guilt/) is fantastic.

Happy to chat more about anything, and good to speak to you!

Best,
Vael

Informal Interview Notes
========================

### Thoughts from listening to myself doing these interviews

- There are obviously strong differences in what content gets covered, based on the interviewees' opinions and where they're at. I didn't realize that another important factor in what content gets covered is the interviewee's general attitude towards people / me. Are they generally agreeable? Do they take time to think over my statement, find examples that match what I'm saying? Do they try to speak over me? Are they curious about my opinions? How much time do they have? Separate from rapport (and my rapport differs with different interviewees), there's a strong sense of spaciousness in some interviews, while many feel more like rapid-fire exchanges of ideas. I often end up talking more in the agreeable / spacious interviews.
- Participants differ in how much they want to talk in response to one question. I tended not to interrupt my interviewees, though I now think interrupting is a good thing to do. ("I'm sorry, this is very interesting, but we really need to get to the next question.") That meant that for participants who tended to deliver long answers, I had fewer chances to ask questions, which meant I often engaged less with their previous responses and tried to move them on to new topics more abruptly.
- I make a lot of agreeable sounds, and try to rephrase what people say. People differ in how many agreeable sounds they make during my speech as well, and in how much they're looking at the camera and looking for cues.
- I tended to adjust my talking speed to the interviewee somewhat, but usually ended up talking substantially more quickly. This made my speech harder to parse because of all the "like"s that get inserted while I'm thinking and talking at the same time. (I don't think I realized this at the time; it's more obvious when listening back through the interviews. I've removed a fair amount of the "like"s in the transcripts because they're harder to read than to hear.) Generally, I found it useful to try to insert technical vocabulary and understanding as early as possible, so researchers would explain more complicated concepts and be calibrated on my level of understanding. I did somewhat reduce speaking speed and vocabulary when speaking with interviewees whose grasp of English was obviously weaker, though in those cases I think it's maybe not worth having the interview, since I found it quite hard to communicate across a concept gap under time and communication constraints. (These concepts are complicated, and hard enough to cover in 40m-60m even without being substantially limited by language.)
- When I'm listening to these interviews, I'm often like: Vael, how did you completely fail to remember something that the interviewee said one paragraph up, what's up with your working memory? And I think that's mostly because there's a lot to track during interviews, so my working memory gets occupied. I often found my attention on several things:
  - Trying to take on the framework of their answer, and fit it into the framework of how I think about these issues. Some people had substantially different frames, so this took a lot of mental energy.
  - Trying to figure out what counterpoint to respond with when I disagreed with something, so -- fitting their answer into a mental category, flitting through my usual replies to that category, and then holding my usual replies in mind for when they were done, if there were multiple replies lined up.
  - Trying to figure out whether I should reply to their answer, or move on. One factor here was whether they tended to take up a lot of talking space, so that I needed to be very careful with what I used my conversational turn for. Another factor was how much agreement we'd reached on my previous question, so that I could move on to the next one. A third factor was tracking time -- I spent a lot of time tracking time in the interview, and holding in mind where we were in the argument tree, and where I thought we could get to.
  - If they'd said something that was actually surprising to me, and seemed true, rather than something I'd heard before and needed to reformulate my answer to, this often substantially derailed a lot of the above processing. I then needed to do original thinking while on a call, trying to evaluate whether something said in a different frame was true in my frames. In those cases I usually just got the interviewee to elaborate on their point, while throwing out unsophisticated, gut-level "but what if..." replies and seeing how they responded, which shifted the conversation towards more equality. I think thinking about these points afterwards (and many more things were new to me in the beginning of the interviews, compared to the end) was what made my later interviews better than my earlier ones.
  - Trying to build rapport / be responsive / engage with their points well / make eye contact with the camera / watch my body language / remember what was previously said and integrate it. This was mostly more of a background process.
- Conversations are quite different when you're both fighting for talking time than when you're not. Be ready for both, I think? I felt the need to think and talk substantially faster the more interruptions there were in a conversation. I expected my interviewees to find the faster-paced conversations aversive, but many seemed not to, and seemed to enjoy them. In conversations where the interviewee and I substantially disagreed, I actually often found faster-paced conversations more enjoyable than slower-paced ones. This was because it felt more like an energetic dialogue in the faster conversations, while I often had a kind of slow, sinking "we both know we disagree with each other but we're being restrained on purpose" feeling in the slower conversations.
- My skill as an interviewer at this point seems quite related to how well I know the arguments, which, like... I could definitely be better on that front. I do think this process is helpful for my own thinking, especially when I get stuck and ask people about points post-interview. But I do read these interviews and think: okay, but wouldn't this have been better if I had had a different or fuller understanding? How good is my thinking? It feels hard to tell.

### Content analysis

I have a lot to say about typical content in these types of interviews, but I think the above set of interviews is somewhat indicative of the spread. I'm hoping to have more information on these eventually, once I finish sorting through more of my data.

Spam in the discussion area

Although the moderators are doing a good job of removing it quickly, spam remains a considerable annoyance for those of us who follow LW Discussion through the RSS feed.

Phi-1: A 'Textbook' Model

The importance of the new Phi-1 model isn't just that it's small enough to be on a smartphone, set to be open-sourced, and capable of interview-level Python coding tasks. Its significance is also in what the model tells us about the future of language models and the timelines of our march to human-level intelligence. I spoke in depth with one of the authors of the paper, Ronen Eldan, to get you more insights, and I'm only going to cover the best bits, so let's start.

First thing to notice is how small this model is: 1.3 billion parameters. But what does that number mean? Well, for reference, that's about one percent the size of GPT-3, which was behind the original ChatGPT phenomenon, and if recent rumors are to be believed, it's about a thousand times smaller than the combined parameter count of GPT-4. So we're talking a tiny model here, one that could fit on my Samsung S23. We read that despite this small scale, Phi-1 attains a pass@1 accuracy -- that means passing first time -- of 50% on HumanEval, testing Python coding challenges. And Andrej Karpathy, of OpenAI and Tesla fame, said that we're probably going to see a lot more of this creative scaling-down work: prioritizing data quality and diversity over quantity, and using synthetic data to create small but highly capable expert models. The author I spoke to actually retweeted that and said: for skeptics, the model will be available on Hugging Face soon, give it a try.

Back to the paper, which says everyone knows about scaling laws -- adding more compute, adding more data -- but, following the footsteps of Eldan and Li in TinyStories, which I'll get to in a second, they explore the improvement that can be obtained along a different axis: the quality of the data. Of course, anyone familiar with my Orca video will know that data quality is super important, but let's get to this paper they mentioned; I'm going to give you the 30-second version of the paper co-authored by Ronen. They created a diverse and synthetic dataset of short stories using GPT-3.5 and GPT-4, and then they trained tiny 28-million-parameter models and smaller, which, as they say, are two orders of magnitude smaller than GPT-2, which was only 1.5 billion parameters. And by curating the synthetic data carefully, look at the difference in results: the ending of this story was so much better on the tiny model trained on the synthetic dataset, especially compared to GPT-2, which is so much bigger but says "the soup is too old" -- a terrible ending to the story.

So what did they do for Phi-1? Here is the short version. They filtered The Stack and Stack Overflow to only get the most teachable bits of code, consisting of about 6 billion tokens. They then created a synthetic textbook consisting of about 1 billion tokens of GPT-3.5-generated Python textbooks -- that's not even GPT-4. Then, quite crucially, they created a small synthetic exercises dataset consisting of only 180 million tokens of exercises and solutions. Now of course other people have used The Stack before, but as Ronen says: "I do think that from the data we do have, we are not even close to extracting everything from it." And look at the results of this tiny 1.3-billion-parameter model trained in this way. There have been only two models that have scored more than 50% on HumanEval pass@1: WizardCoder, and of course GPT-4. But those models are massively bigger and therefore much more expensive to train. And actually I find this chart perhaps the most interesting one in the entire paper; you can see so many trends in one
diagram. Let me try to pick a few of these out, and remember, the scores are the percentage accuracy on HumanEval -- think moderate-level coding challenges. First, look at the consistent increase from when you just train on the filtered Stack versus on the synthetic code textbook: from 11 to 16, 12 to 20, 17 to 29. This could be the synthetic data event horizon that Sam Altman talked about, and that code textbook was generated using GPT-3.5, not even GPT-4. Next, compare the parameter count of the models: 350 million on the left and in the center, and 1.3 billion on the right. This one isn't as big a surprise -- we knew that increasing the parameters yields better performance -- but nevertheless you can see it vividly in action.

Third, and I think this one is really fascinating: look at the difference between the left and the center charts. The only thing that really changed was the number of GPU hours, and of course the number of tokens went from 26 billion to 76 billion. But wait, I thought the dataset size was fixed at 7 billion -- what gives? Well, what's happening is that they're passing over the data multiple times. This is called training for more so-called epochs, or passes over the data, so these aren't new tokens; they're the same tokens being trained on more times. As Ronen said to me: "My personal impression is that many people in the community thought that we would never want to do more than like one or two epochs, because we'll start overfitting." And just for 20 seconds, I can't resist bringing in this paper that they referenced in the textbooks paper. It's essentially talking about how you can still scale language models even if you run out of data. Take a look at these two diagrams: they say training for up to four epochs, or passes, is almost as good as new data, and it's only when you get to around 40 epochs that repeating is worthless. Obviously we don't know about GPT-4, but GPT-3 seems to be trained on far less than that.

But there was one final trend from this amazing set of charts that I wanted to point out, and it's probably the most obvious one: look at the huge jump to the dark green bars. That's when they trained the model on those additional synthetic exercises with solutions. The authors note that one can only imagine how frustrating and inefficient it would be for a human learner to try to acquire coding skills from datasets like the unfiltered Stack, as they would have to deal with a lot of noise, ambiguity, and incompleteness in the data. "We hypothesize that these issues also affect the performance of language models, as they reduce the quality and quantity of the signal that maps natural language to code."

Let me quickly give you a bit more detail about how they filtered The Stack. They got about a hundred thousand samples of The Stack and Stack Overflow, and then prompted GPT-4 to determine each one's educational value for "a student whose goal is to learn basic coding concepts". They then used those annotations to train a random forest classifier that predicts the quality of a file using its output embedding -- essentially a basic searching mechanism to find out which parts of The Stack are the most educational. But at this point I want to pause, and imagine if they'd used a different prompt. Imagine a future paper looking across a different dataset: that paper could prompt GPT-4 to annotate the educational value for a student whose goal is to learn French, and then you could have an amazing French-speaking model. Or maybe they could get it to annotate which examples would be most educational for learning to predict the stock
market, and then maybe train it on a small synthetic textbook of successful previous examples of predicting the stock market. I'm just saying this seems to be an approach that could be applied elsewhere. And these annotations here were the only times they used GPT-4; the rest was GPT-3.5. As Ronen says, GPT-4 is not only great as something we can use directly for better productivity, but it's also a way to get much better other models. And that's one thing I want OpenAI, Anthropic, and Google to address: the capability of their models to train smaller models.

Here, by the way, is an example of the kind of exercises and solutions that the model was then fine-tuned on, created of course by GPT-3.5. And the authors note that, quite remarkably, the model after fine-tuning on those fewer than 200 million tokens of exercises and solutions also exhibits a substantial improvement in executing tasks that are not featured in the fine-tuning dataset. For example, fine-tuning on CodeExercises unexpectedly improves the model's ability to use external libraries such as Pygame, even though the exercises do not contain these libraries. This suggests that fine-tuning not only improves the tasks that were targeted, but also makes unrelated tasks easier to distill. It's this unexpectedness that I find really interesting. For example, before training GPT-4, did they expect the emergent ability to do self-repair, or reflection? According to this new paper, that ability is not found in GPT-3.5.

Going back to the Phi-1 paper, the authors admit that there remain a number of limitations of their model compared to larger models for code. Firstly, Phi-1 is specialized in Python coding, which restricts its versatility compared to multi-language models. Secondly, Phi-1 lacks the domain-specific knowledge of larger models, such as programming with specific APIs or using less common packages. It's a bit like the more classical narrow AI: good at only a few things. Furthermore, due to the structured nature of the datasets and the lack of diversity in terms of language and style, it's less robust to stylistic variations or errors in the prompt. It's quite funny: if you make a grammatical mistake in your prompt, it does a lot worse. But what about this: "We also believe that significant gains could be achieved by using GPT-4 to generate the synthetic data instead of GPT-3.5, as we noticed that GPT-3.5 data has a high error rate." I asked Ronen about that, speculating that it's because GPT-4 costs more, and he said: "Yeah, it costs more; also GPT-4 is much slower. But another reason is we wanted to demonstrate something here, that you don't even need a smart model like GPT-4 -- even GPT-3.5, which isn't that great at coding, is enough." So there you go: you could get even better results using GPT-4, but at the moment GPT-4 is a bit too slow.

Before I get to timelines, some of you might have noticed the WizardCoder results and wondered how that model did so well despite only being 16 billion parameters, which of course is 10 times bigger than Phi-1. Well, I read that paper too, as well as almost every paper referenced in the textbooks paper. The secret of WizardCoder seems to have been increasing the difficulty of the training data: fine-tune the model with more difficult examples, e.g. if the original problem can be solved with only a few logical steps, add more reasoning steps, complicate the input, deepen the question, or increase the reasoning involved. You can start to see the shared themes of Orca, WizardCoder, and Phi-1.
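An illustrative aside, not from the video: the filtering pipeline described a moment ago (GPT-4 rates a small sample of files for educational value, then a random forest trained on file embeddings extends those ratings to the whole corpus) can be sketched in a few lines. The `embed` stand-in and the 0.5 cutoff below are assumptions, not details from the paper:

```python
# Illustrative reconstruction of Phi-1's quality-filtering step, not the
# paper's actual code: GPT-4 labels a small sample of files as educational
# or not, then a random forest trained on file embeddings extends those
# labels to the full corpus. `embed` and the 0.5 cutoff are stand-ins.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def embed(code_file: str) -> np.ndarray:
    """Stand-in for whatever embedding model featurizes each file."""
    rng = np.random.default_rng(abs(hash(code_file)) % 2**32)
    return rng.normal(size=128)

# Mocked GPT-4 annotations: (file, is_educational) pairs.
labeled_files = ["def add(a, b):\n    return a + b", "x=1;y=2;z=x+y"]
labels = [1, 0]

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(np.stack([embed(f) for f in labeled_files]), labels)

# Keep only the corpus files the classifier judges educational.
corpus = ["def square(n):\n    return n * n"]
kept = [f for f in corpus if clf.predict_proba(embed(f)[None, :])[0, 1] > 0.5]
```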
These shared themes could be what Sarah Constantin was pointing to in the Asterisk magazine issue that I read yesterday (I'm not sponsored by them, but it was a great issue, so do check it out). She said: rather than a refutation of scaling laws or an acceleration of their slope, I think this is more like a move in a different direction altogether, towards a Cambrian explosion of little AIs used for different purposes, where getting good performance on a task depends on the quality of your task-specific dataset -- like Phi-1 for Python. That could be consistent with the state of the art continuing to progress steadily along scaling-law lines for quite some time, but it could also mean the economic incentive towards ever bigger models would diminish, and we'd enter an entirely new era where AI progress would not be driven primarily by semiconductor scaling or Moore's Law.

This relates directly to a tweet from the co-founder of Anthropic, Jack Clark. He said: a world where we can push a button and stop larger compute things being built, and all focus on safety for a while, is good. That is really interesting to hear from someone at the top of an AGI lab, but I do have some questions for this policy: if we froze compute, wouldn't that incentivize every company to just use algorithmic progress to get more out of the compute we do have? And on the safety front, I think it's far more effective public messaging to focus on concrete things that everyone can understand. For example, in this paper from Oxford this week: LLMs will in particular lower barriers to biological misuse, and biological design tools (BDTs) will expand the capabilities of sophisticated actors. Concretely, BDTs may enable the creation of pandemic pathogens substantially worse than anything seen to date, and could enable forms of more predictable and targeted biological weapons. I think this is something that everyone can get behind. And as the paper says, it's been hypothesized that, for evolutionary reasons, naturally emerging pathogens feature a trade-off between transmissibility (that's how much they spread) and virulence (that's how deadly they are). AI-based BDTs might generate design capabilities that are able to overcome this trade-off. Thus, for the first time, humanity might face a security threat from pathogens substantially worse than anything nature might create, including pathogens capable of posing an existential threat. That, to be honest, is my main safety concern.

But back to the paper and timelines. Here is another snippet of my conversation with Ronen. I said: I just feel like we are much closer to something really transformative than the public has quite realized, and people like OpenAI put out that in 10 years we will have something as powerful as a corporation; I say three to five years. Ronen replied: that depends on how much resources are actually spent on training bigger and bigger models; I have no idea what OpenAI and Google are doing, right? Definitely, if this is our main goal, I think it can easily be five years. I said: or less. Ronen replied: or less. I feel like the bottleneck is maybe the production of GPUs, and it's not just producing the GPUs -- you also have to build the data centers and connect them to electricity, etc. I think if you have all that, then yeah, I don't see the barrier. With more data, higher-quality data, synthetic data, better and better algorithms, and more and better GPUs and TPUs -- that's what we mean when we say we don't see a barrier. Of course, everyone has slightly different definitions of AGI, but almost everyone agrees that the next five to ten years
are going to be the most critical in seeing whether more data, better data, better algorithms, or just more and more compute will lead to AGI or superintelligence. I loved how Carl Shulman put it on the Dwarkesh Patel podcast: if you generate close to 10 million dollars a year out of the future version of the H100, and it costs tens of thousands of dollars -- with a huge profit margin now, and the profit margin could be reduced with larger-scale production -- that is a big difference: that chip pays for itself almost instantly. And so you could support paying 10 times as much to have these fabs constructed more rapidly. If AI is starting to be able to contribute, you could have AI contributing more of the skilled technical work that makes it hard for, say, Nvidia to suddenly find thousands upon thousands of top-quality engineering hires. Now, if AI hasn't reached that level of performance, then this is how you can have things stall out: a world where AI progress stalls out is one where you go to the 100 billion, and then over succeeding years trillion-dollar efforts, and software progress turns out to stall. You lose the gains that you were getting from moving researchers from other fields -- lots of physicists and people from other areas of computer science have been going into AI, but you sort of tap out those resources as AI becomes a larger proportion of the research field. And, okay, you've put in all of these inputs, but they just haven't yielded AGI yet. I think that set of inputs probably would yield the kind of AI capabilities needed for an intelligence explosion, but if it doesn't -- after we've exhausted this current scale-up of increasing the share of our economy that is trying to make AI -- if that's not enough, then after that you have to wait for the slow grind of things like general economic growth and population growth, and so think slow. And that results in my credences in this kind of advanced AI happening being relatively concentrated over the next 10 years, compared to the rest of the century, because we just can't keep going with this rapid redirection of resources into AI; that's a one-time thing.

Thank you so much for learning about Phi-1 with me, and as always, thank you so much for staying all the way to the end. Do try to have a wonderful day.
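A closing technical note, not from the video: the pass@1 figures quoted throughout are the k = 1 case of the standard unbiased pass@k estimator introduced alongside HumanEval (Chen et al., 2021). A minimal version:

```python
# Unbiased pass@k estimator from the HumanEval paper (Chen et al., 2021):
# with n generated samples per problem, of which c pass the unit tests,
# pass@k = 1 - C(n-c, k) / C(n, k).
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    if n - c < k:  # too few failing samples to fill a k-draw: always solved
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

print(pass_at_k(n=10, c=3, k=1))  # 0.3 -- expected pass@1 with 3/10 passing
```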

Jam is obsolete

Jam is a very natural way to preserve fruit. The fruit is ripe only a small fraction of the year, you'd like to be able to eat it all year long, so you cook it with enough sugar that it won't go bad for a long time. But now that we have freezers, jam is obsolete.

When you make jam, you change the taste and texture of the fruit in several ways:

* Squishing the fruit makes it into a more consistent texture, suitable for spreading.
* Adding sugar makes the jam sweeter, and also much less healthy.
* Heat changes the flavor of the fruit, with more volatile flavors evaporating.
* Heat evaporates some of the water, thickening it (reducing).
* Heat releases pectin, or you add additional pectin, which gels the jam.

Some people make freezer jam, which skips the heated steps, and gives you a flavor much closer to fresh fruit. Because it hasn't been cooked thoroughly, you need to store it in the freezer, but since freezers are a thing now, that's not a problem.

We can go a step further, though, and just use frozen fruit. Take frozen raspberries, defrost them, and squish them up with a fork: on eggy waffles, with nutella. This takes about a minute for defrosting, 10s for squishing, and is far superior to jam. Not so sweet, better flavor, and it's cheaper [1] and healthier as well!

Being able to store traditional jam in the refrigerator once you've opened it is an advantage, but I think it's a temporary one. Over time, I expect households will shift towards more freezer space relative to refrigerator space, as the quality of frozen food continues to rise. We got a chest freezer soon after we bought our house, and I'm glad we did!

[1] Our grocery store has raspberry jam for $2.75/lb, frozen raspberries for $4.39/lb, and jam is ~40% fruit. Though since we have the freezer space and like raspberries a lot, I've been buying them online, 30lb at $3.08/lb shipped. A shipment fills 6-8 one-gallon ziplocs depending on whether it ends up mostly whole or mostly pieces.
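A rough sanity check on that cost claim, using only the footnote's numbers and ignoring the value of the sugar: at ~40% fruit, the fruit in a pound of jam effectively costs $2.75 / 0.40 ≈ $6.88 per pound, versus $4.39/lb (or $3.08/lb in bulk) for frozen raspberries.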

Suggestion: LW Meditation Hall

Let's set up an LW Meditation Hall, analogous to the LW Study Hall.

Judging from my own (limited) experience, meditation might benefit even more than work does from being done together.

It should work more or less like the Study Hall, with one exception: sessions need to be scheduled in advance. (There is probably not enough interest in meditation to keep the hall filled 24/7, and people will want to take part from the beginning instead of stumbling in in the middle.)

Now THIS is forecasting: understanding Epoch’s Direct Approach

Happy May the 4th from Convergence Analysis! Cross-posted on the EA Forum.

As part of Convergence Analysis’s scenario research, we’ve been looking into how AI organisations, experts, and forecasters make predictions about the future of AI. In February 2023, the AI research institute Epoch published a report in which its authors use neural scaling laws to make quantitative predictions about when AI will reach human-level performance and become transformative. The report has a corresponding blog post, an interactive model, and a Python notebook.

We found this approach really interesting, but also hard to understand intuitively. While trying to follow how the authors derive a forecast from their assumptions, we wrote a breakdown that may be useful to others thinking about AI timelines and forecasting.

In what follows, we set out our interpretation of Epoch’s ‘Direct Approach’ to forecasting the arrival of transformative AI (TAI). We’re eager to see how closely our understanding of this matches others’. We’ve also fiddled with Epoch’s interactive model and include some findings on its sensitivity to plausible changes in parameters.

The Epoch team recently attempted to replicate DeepMind’s influential Chinchilla scaling law, an important quantitative input to Epoch’s forecasting model, but found inconsistencies in DeepMind’s presented data. We’ll summarise these findings and explore how an improved model might affect Epoch’s forecasting results.

We’ve accidentally filled this post with Star Wars references.

Disclaimer: we do not actually mean to suggest that the Direct Approach is quick or easy (though it is definitely seductive).

This is where the fun begins (the assumptions)

The goal of Epoch’s Direct Approach is to quantitatively predict the progress of AI capabilities. The approach is ‘direct’ in the sense that it uses observed scaling laws and empirical measurements to directly predict performance improvements as computing power increases. This stands in contrast to more indirect approaches to forecasting.
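Since the Chinchilla scaling law is the key quantitative input here, a minimal sketch of its parametric form may be useful. This is our illustration, not Epoch's code; the constants are the published Hoffmann et al. (2022) fits, which (per the replication issues noted above) should be treated as approximate:

```python
# Minimal sketch of the Chinchilla parametric scaling law (Hoffmann et al. 2022).
# Constants are the paper's published fits; Epoch's replication suggests they
# may be somewhat off, so treat the numbers as illustrative.

def chinchilla_loss(n_params: float, n_tokens: float) -> float:
    """Predicted training loss for a model with n_params trained on n_tokens."""
    E = 1.69                  # estimated irreducible loss of the data distribution
    A, alpha = 406.4, 0.34    # parameter-count term
    B, beta = 410.7, 0.28     # training-data term
    return E + A / n_params**alpha + B / n_tokens**beta

# Chinchilla itself: 70B parameters, 1.4T tokens
print(chinchilla_loss(70e9, 1.4e12))  # ≈ 1.94, approaching the E = 1.69 floor
```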

[AN #120]: Tracing the intellectual roots of AI and AI alignment

Alignment Newsletter is a weekly publication with recent content relevant to AI alignment around the world. Find all Alignment Newsletter resources here. In particular, you can look through this spreadsheet of all summaries that have ever been in the newsletter. Audio version here (may not be up yet).

HIGHLIGHTS

The Alignment Problem (Brian Christian) (summarized by Rohin): This book starts off with an explanation of machine learning and problems that we can currently see with it, including detailed stories and analysis of:

- The gorilla misclassification incident
- The faulty reward in CoastRunners
- The gender bias in language models
- The failure of facial recognition models on minorities
- The COMPAS controversy (leading up to impossibility results in fairness)
- The neural net that thought asthma reduced the risk of pneumonia

It then moves on to agency and reinforcement learning, covering from a more historical and academic perspective how we have arrived at such ideas as temporal difference learning, reward shaping, curriculum design, and curiosity, across the fields of machine learning, behavioral psychology, and neuroscience. While the connections aren't always explicit, a knowledgeable reader can connect the academic examples given in these chapters to the ideas of specification gaming (AN #97) and mesa optimization (AN #58) that we talk about frequently in this newsletter. Chapter 5 especially highlights that agent design is not just a matter of specifying a reward: often, rewards will do ~nothing, and the main requirement to get a competent agent is to provide good shaping rewards or a good curriculum. Just as in the previous part, Brian traces the intellectual history of these ideas, providing detailed stories of (for example):

- BF Skinner's experiments in training pigeons
- The invention of the perceptron
- The success of TD-Gammon, and later AlphaGo Zero

The final part, titled "Normativity", delves much more deeply into the alignment problem.

Why you can add moral value, and if an AI has moral weights for these moral values, those might be off

What you'll learn from reading this

1. Why the moral value of (thing 1 and thing 2) is the same as the moral value of (thing 1) + the moral value of (thing 2). (i.e., moral value is "linear".)
2. (some) People (or, I suppose, AIs) weight certain things by constants. (e.g., "dogs are 2x as important as cats", i.e., you think something happening to 2 cats is worth the same as something happening to 1 dog.)

Epistemic status

This is about as likely to be true as a typical un-peer-reviewed proof of something in math - that is, it might be off. (Let me know if you think this might be off so I can be more not-off!)

What's a utility function?

First, watch this video. (I'm going to assume you watched the above video when writing this, but you should be fine if you didn't watch it. It's 7 mins long, though, and it's a good video. You do you, though.)

For the sake of simplicity, here, we're going to be comparing the value of objects floating in space that don't interact with each other.[1]

Terminology

1. When I use [ ], you should interpret it as if the contents in the bracket are, grammatically, one word. For example, "A panda eats [shoots and leaves]" means that pandas eat both shoots and leaves, but "A panda eats, shoots, and leaves" means pandas don't pay the check at restaurants. This just makes [ ]'s more useful.
2. 📦 ~ 🎁 means you don't care whether [the world has a 📦 in it] or [the world has a 🎁 in it]
3. 📦 ≻ 🎁 means you prefer the [world where space has a 📦 in it] (which I'll write as 📦 for short) to [the world with a 🎁 in it] (which I'll write as 🎁 for short)
4. 📦 ≺ 🎁 means you prefer 🎁 to 📦.
5. 📦 + 🎁 refers to the world/universe (I'll use the words "world" and "universe" interchangeably in this article) with a 📦 and a 🎁 in it, where the two don't interact with each other. Maybe the two are far apart in space, or the two are in different universes labeled as one single universe. (A "duoverse", so to speak.)
6. P(📦 0.5) means there's a 0.5 chance of a world with a 📦 in it
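To preview claims (1) and (2) in symbols (a sketch in my own notation; the specific numbers are illustrative):

```latex
% Claim 1 (linearity): the value of two non-interacting things is the sum
% of their separate values.
U(\text{box} + \text{gift}) = U(\text{box}) + U(\text{gift})

% Claim 2 (constant weights), e.g. "dogs are 2x as important as cats":
U(\text{dog}) = 2\,U(\text{cat})
\quad\Longrightarrow\quad
U(1\text{ dog}) = U(2\text{ cats})
```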

Do any of the AI Risk evaluations focus on humans as the risk?

I am not up on much of the AI risk discussion, but to this outsider, most of the focus seems to be on the AI taking actions. I recall someone (here, I think) posting a comment about how a bio research AI initiative seeking to find beneficial things was asked if the tools could be used to find harmful things. They changed their search and apparently found a number of really bad things really quickly.

Does anyone look at, or have concerns or estimates about, risk in this area? Is it possible that the AI risk from the emergence of a very powerful AI is not as likely, since before that occurs some human with a less powerful AI ends the world first, or at least destroys modern human civilization so that we're back to a Stone Age hunter-gatherer world, before the AI gets powerful enough to do that for/to us?

Further considerations on the Evidentialist's Wager

Thanks to Caspar Oesterheld for the discussion on which this post is largely based.

In a previous post I presented an objection to The Evidentialist's Wager. Reading it is probably necessary to understand the following.

A counter-argument to my objection

In the post I broadly claim the following:

> Imagine we have absolutely no idea whether more acausally correlated agents in the universe are positively correlated to us/Good Twins (our one-boxing is evidence for them increasing our utility function) or negatively correlated to us/Evil Twins (our one-boxing is evidence for them decreasing our utility function). That is, our credence on the two events is 50%[1]. Then, when facing a Newcomb decision, the stakes for EDT and CDT are equal. This is because any evidence of additional utility gains provided by EDT will balance out in the expected value calculation, given our complete uncertainty about the interpretation of that evidence (whether one-boxing is evidence for a universal increase or decrease in our utility).

Shortly after writing the post, I discovered the following counter-argument: in that situation, EDT urges you to research further whether more positively or negatively correlated agents exist, to break the symmetry, and then to act accordingly. That is, it is plausible (it has non-zero probability) that dedicating more resources to studying this issue ends up breaking the symmetry, changing your credences away from 50-50 in one of the two possible directions. If that happens, then the Wager applies and the stakes for EDT will be higher (which will urge you either to one-box or to two-box, depending on how the symmetry was broken). And so, even in the described situation, the stakes of EDT are higher; if that's not immediately obvious, it's just because EDT recommends neither one-boxing nor two-boxing, but the third option of researching further into the nature of the multiverse.

But the above argument presents a flaw, related to other issues.
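To see the symmetry claim from the quoted passage numerically (a toy sketch; the payoff x and the exact 50/50 credence are illustrative):

```python
# Toy check of the symmetry claim: at exactly 50/50 credence on Good vs. Evil
# Twins, the acausal (evidential) component of one-boxing's expected value
# cancels. The utility x at stake is an arbitrary illustrative number.
p_good = 0.5
x = 100.0  # utility our correlates acausally add (Good) or subtract (Evil)

acausal_ev_of_one_boxing = p_good * x + (1 - p_good) * (-x)
print(acausal_ev_of_one_boxing)  # 0.0 -- so EDT and CDT stakes coincide here
```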

What is the impact of varying infectious dose of COVID-19?

Zvi Mowshowitz and Robin Hanson have both made pretty provocative proposals around deliberate variolation / low-dose exposure to COVID-19, with the goal of reducing total morbidity/mortality and increasing freedom/productivity. Interestingly, the same rough data has led to opposite recommendations:

* Hanson believes we should inoculate the young and healthy, with the goal of developing enough immune individuals that R0 falls below 1.
* Zvi believes that if low viral load is really so much safer, we should inoculate the least healthy first, to ensure they get the lowest initial dose possible.

Meanwhile, in the April Coronavirus Open Thread, Matthew Lewis suggests inoculating via the GI tract, which may lead to weaker symptoms than the same load in the respiratory system.

The wisdom of both of these proposals is really sensitive to exactly how much benefit stems from how much lower the initial dose is (or from the placement of the initial dose), and to the corresponding risks. With that in mind: what do we already know about the impact of varying infectious doses? What could we learn that would narrow the confidence intervals? How could we learn more?
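For reference on Hanson's "R0 falls below 1" condition, the standard herd-immunity arithmetic (the R0 value below is illustrative, not an estimate for COVID-19): with a fraction f of the population immune, the effective reproduction number is R0(1 − f), which drops below 1 once f exceeds 1 − 1/R0.

```python
# Standard herd-immunity threshold behind "R0 falls below 1".
# The R0 value is illustrative, not an estimate for COVID-19.
r0 = 2.5
f_threshold = 1 - 1 / r0
print(f_threshold)  # 0.6 -- with over 60% immune, effective R = r0*(1-f) < 1
```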

[Intro to brain-like-AGI safety] 4. The “short-term predictor”

4.1 Post summary / Table of contents
====================================

*Part of the* [*“Intro to brain-like-AGI safety” post series*](https://www.alignmentforum.org/s/HzcM2dkCq7fwXBej8)*.*

The previous two posts ([#2](https://www.alignmentforum.org/posts/wBHSYwqssBGCnwvHg/intro-to-brain-like-agi-safety-2-learning-from-scratch-in), [#3](https://www.alignmentforum.org/posts/hE56gYi5d68uux9oM/intro-to-brain-like-agi-safety-3-two-subsystems-learning-and)) presented a big picture of the brain, consisting of a Steering Subsystem (brainstem and hypothalamus) and Learning Subsystem (everything else), with the latter “learning from scratch” in a particular sense defined in [Post #2](https://www.alignmentforum.org/posts/wBHSYwqssBGCnwvHg/intro-to-brain-like-agi-safety-2-learning-from-scratch-in). I suggested that our explicit goals (e.g. “I want to be an astronaut!”) emerge from an interaction between these two subsystems, and that understanding that interaction is critical if we want to assess how to sculpt the motivations of a brain-like AGI, so that it winds up trying to do things that we want it to be trying to do, and thus avoid the kinds of catastrophic accidents I discussed in [Post #1](https://www.alignmentforum.org/posts/4basF9w9jaPZpoC8R/intro-to-brain-like-agi-safety-1-what-s-the-problem-and-why).

These next three posts (#4–[#6](https://www.alignmentforum.org/posts/qNZSBqLEh4qLRqgWW/intro-to-brain-like-agi-safety-6-big-picture-of-motivation)) are working our way up to that story. This post provides an ingredient that we’ll need: “the short-term predictor”. Short-term prediction is *one* of the things the Learning Subsystem does—I’ll talk about others in future posts. A short-term predictor has a supervisory signal (a.k.a. “ground truth”) from somewhere, and then uses a learning algorithm to build a predictive model that anticipates that signal a short time (e.g. a fraction of a second) in the future. This post will be a general discussion of how short-term predictors work and why they’re important. They will turn out to be a key building block of motivation and reinforcement learning, as we’ll see in the subsequent two posts.

*Teaser for the next couple posts:* The [next post (#5)](https://www.alignmentforum.org/posts/F759WQ8iKjqBncDki/intro-to-brain-like-agi-safety-5-the-long-term-predictor-and) will discuss how a certain kind of closed-loop circuit wrapped around a short-term predictor turns it into a “long-term predictor”, which has connections to the temporal difference (TD) learning algorithm. I will argue that the brain has a large number of these long-term predictors, built out of telencephalon-brainstem loops, one of which is akin to the “critic” part of [actor-critic reinforcement learning](https://www.freecodecamp.org/news/an-intro-to-advantage-actor-critic-methods-lets-play-sonic-the-hedgehog-86d6240171d/). The “actor” part is the subject of [Post #6](https://www.alignmentforum.org/posts/qNZSBqLEh4qLRqgWW/intro-to-brain-like-agi-safety-6-big-picture-of-motivation).

**Table of contents:**

* Section 4.2 gives a motivating example of flinching just before getting hit in the face. This can be formulated as a supervised learning problem, in the sense that there is a ground-truth signal to learn from. (If you just got hit in the face, then you should have flinched!) The resulting circuit is what I call a “short-term predictor”.
* Section 4.3 defines terminology: “context signals”, “output signals”, and “supervisory signals”. (In ML terminology, these correspond respectively to “trained model inputs”, “trained model outputs”, and “labels”.)
* Section 4.4 offers a sketch of an extremely simple short-term predictor that could be built out of biological neurons, just so you can have something concrete in mind.
* Section 4.5 discusses the benefits of short-term predictors compared to alternative approaches, including (in the flinching example) a hardwired circuit for deciding when to flinch, or a reinforcement learning (RL) agent that is rewarded for appropriate flinching. For the latter, a short-term predictor can learn faster than an RL agent because it gets an error gradient “for free” each query—or in simpler terms, when it screws up, it gets some indication of what it did wrong, e.g. whether the error is an overshoot vs. undershoot.
* Sections 4.6-4.8 cover various examples of short-term predictors in the human brain. None of these are especially important for AGI safety—the *really* important one is the topic of the [next post](https://www.alignmentforum.org/posts/F759WQ8iKjqBncDki/intro-to-brain-like-agi-safety-5-the-long-term-predictor-and)—but they come up sufficiently often that they warrant a brief discussion:
  + Section 4.6 covers the cerebellum, with my theory that it’s a collection of ≈300,000 short-term predictors, used to (in effect) reduce the latency on ≈300,000 signals traveling around the brain and body.
  + Section 4.7 covers predictive learning of sensory inputs in the cortex—i.e., you’re constantly predicting what you’re about to see, hear, feel, etc., and the corresponding prediction errors are used to update your internal models.
  + Section 4.8 briefly covers a few other neat random things that short-term predictor circuits can do for an animal.

4.2 Motivating example: flinching before getting hit in the face
================================================================

Suppose you have a job or hobby in which there’s a particular, recognizable sensory cue (e.g. someone [yelling “FORE!!!” in golf](https://en.wikipedia.org/wiki/Fore_(golf))), and then half a second after that cue you very often get whacked in the face. Your brain is going to *learn* to (involuntarily) flinch in response to the cue. There’s a learning algorithm inside your brain, commanding these flinches; it presumably evolved to protect your face. That learning algorithm is what I want to talk about in this post. I’m calling it a “short-term predictor”. It’s a “predictor” because the goal of the algorithm is to predict something in advance (i.e., an upcoming whack in the face). It’s “short-term” because we only need to predict what will happen a fraction of a second into the future. It’s more specifically a type of supervised learning algorithm, because there is a “ground truth” signal indicating what the prediction output *should* have been in hindsight.

4.3 Terminology: Context, Output, Supervisor
============================================

Our “short-term predictor” has three ingredients in its “API” (“application programming interface”—i.e., the channels through which other parts of the brain interact with the “short-term predictor” module):

![](https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/Y3bkJ59j4dciiLYyw/r6oaz4u3ynlcva8llrti)

* An **output signal** is the algorithm’s prediction.
  + In our example above, this would be a signal that triggers a flinch reaction.
* A **supervisory signal** provides “ground truth” (in hindsight) about what the algorithm’s output *should have* been.
  + In our example above, this would be a signal that indicates that I just got whacked in the face (and therefore, implicitly, I *should have* flinched).
  + In ML terminology, “supervisory signals” are often called “labels”.
  + In the actual implementation, the supervisor-type input to the short-term predictor does not *have* to be the ground truth. It could also be an error signal, or a negative error signal, etc. From my perspective, this is an unimportant low-level implementation detail.
* **Context signals** carry information about what’s going on.
  + In our example above, this might be a random assortment of signals (corresponding to [latent variables](https://en.wikipedia.org/wiki/Latent_variable)) coming from the visual cortex and auditory cortex. With luck, some of those signals might carry predictively-useful information: maybe one signal conveys the fact that I am on a golf course, and another signal conveys the fact that someone near me just yelled “FORE!”.
  + In ML terminology, “context signals” would instead be called “inputs to the trained model”. The context signals don’t *all* have to be relevant to the prediction task. We can just throw a whole bunch of crap in there, and the learning algorithm will automatically go searching for the context data that are useful for the prediction task, and ignore everything else.

4.4 Extremely simplified toy example of how this could work in biological neurons
==================================================================================

How might a short-term predictor work at a low level? Well, suppose we want an output signal that precedes the supervisor signal by 0.3 seconds—as above, for example, maybe we want to learn to flinch *before* getting hit. We grab a bunch of context data that might be relevant—for example, neurons carrying partially-processed visual information. We track which of those context lines is disproportionately likely to fire 0.3 seconds before the supervisor does. Then we wire up those context lines to the output. And we’re done! Easy peasy. In biology, this would look something like synaptic plasticity with a “three-factor learning rule”—i.e., the synapse gets stronger or weaker as a function of the activity of three different neurons (context, supervisor, output), and their relative timings.

![](https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/Y3bkJ59j4dciiLYyw/lokgreowt5lhsa24hrfm)

Black dots indicate adjustable-strength synapses

To be clear, a short-term predictor can be *much, much* more complicated than this. Making it more complicated can give better performance. To pick a fun example that I just learned about the other day, apparently the short-term predictors in the cerebellum (Section 4.6 below) have neurons that can somehow *store an adjustable time-delay parameter within the neuron itself* (!!) ([ref](https://www.pnas.org/content/111/41/14930)—it came up on [this podcast](https://braininspired.co/podcast/126/)).
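Before piling on more complications, here is the grossly-oversimplified version as a runnable sketch (illustrative only; the firing threshold, learning rate, and delta-rule update are stand-ins for whatever the real three-factor plasticity rule computes):

```python
import numpy as np

# Minimal illustrative sketch of the toy short-term predictor above. Each
# context line has an adjustable "synapse" weight; when the supervisor (a
# whack in the face) arrives, we nudge the weights of the context lines that
# were active 0.3 s earlier. Because the supervisor supplies a signed error,
# each update says exactly which way to adjust each weight.

N_CONTEXT = 1000                  # number of incoming context lines
weights = np.zeros(N_CONTEXT)     # adjustable synapse strengths
LEARNING_RATE = 0.05
THRESHOLD = 1.0                   # illustrative firing threshold

def predict(context: np.ndarray) -> bool:
    """Output signal (flinch?) from current context activity (0/1 vector)."""
    return bool(context @ weights > THRESHOLD)

def update(context_then: np.ndarray, supervisor_now: bool) -> None:
    """Delta-rule update pairing the supervisor's ground truth *now* with
    the context from 0.3 s ago (a three-factor-style learning rule)."""
    error = float(supervisor_now) - float(predict(context_then))
    weights[:] += LEARNING_RATE * error * context_then  # credit active lines only
```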
Other possible bells and whistles include pattern separation ([Post #2, Section 2.5.4](https://www.alignmentforum.org/posts/wBHSYwqssBGCnwvHg/intro-to-brain-like-agi-safety-2-learning-from-scratch-in#2_5_4_Locally_random_pattern_separation)), and training multiple outputs with the same supervisor and pooling them [(ref)](https://www.biorxiv.org/content/10.1101/2020.05.18.102376v2), or better yet training multiple outputs with the same supervisor but with different hyperparameters, in order to get a probability distribution ([original paper](https://www.nature.com/articles/s41586-019-1924-6?proof=t), [further discussion here](https://www.lesswrong.com/posts/F759WQ8iKjqBncDki/intro-to-brain-like-agi-safety-5-the-long-term-predictor-and#5_5_6_1_Aside__Distributional_predictor_outputs)), and so on. So this subsection is an oversimplification. But I won’t apologize. I think these kinds of grossly-oversimplified toy models are important to talk about and keep in mind. From a *conceptual* perspective, we get to feel like there’s probably no deep mystery hidden behind the curtain. From an *evolutionary* perspective, we get to feel like there’s a plausible story of how early animals can start with a very simple (but still useful) circuit, and the circuit can get gradually more complicated over many generations. So get used to it—many more grossly-oversimplified toy models are coming up in future posts!

4.5 Comparison to other algorithmic approaches
==============================================

4.5.1 “Short-term predictor” versus a hardwired circuit
-------------------------------------------------------

Let’s go back to the example above: flinching before getting whacked in the face. I suggested that a good way to decide when to flinch is with a “short-term predictor” learning algorithm. Here’s an alternative: we can *hardwire* a circuit that decides when to flinch. For example, if there’s a blob in the field-of-view whose size is rapidly increasing, then it’s probably a good time to flinch. A detector like that could plausibly be hardwired into the brain. How do those two solutions compare? Which is better? Answer: no need to decide! They’re complementary. We can have both. But still, it’s pedagogically helpful to spell out their comparative advantages and disadvantages.

The main (only?) advantage of the hardwired flinching system is that it works from birth. Ideally, you wouldn’t get whacked in the face even once. By contrast, the short-term predictor is a learning algorithm, and thus generally needs to “learn things the hard way”.

In the other direction, the short-term predictor has two powerful advantages over the hardwired solution—one obvious, one not-so-obvious. The obvious advantage is that a short-term predictor is powered by within-lifetime learning, not evolution, and therefore can learn cues for flinching that were rarely or never present in previous generations. If I tend to bonk my head whenever I walk into a certain cave, I’ll learn to flinch. There’s no chance that my ancestors evolved a reflex to flinch at *this* particular part of *this* particular cave. My ancestors might have never been to this cave. The cave might not have existed until last week!

The less obvious, but very important, advantage is that a short-term predictor can learn patterns that involve learned-from-scratch patterns ([Post #2](https://www.alignmentforum.org/posts/wBHSYwqssBGCnwvHg/intro-to-brain-like-agi-safety-2-learning-from-scratch-in)), whereas a hardwired flinching system can’t.
The argument here is the same as [Section 3.2.1 of the previous post](https://www.alignmentforum.org/posts/hE56gYi5d68uux9oM/intro-to-brain-like-agi-safety-3-two-subsystems-learning-and#3_2_1_Each_subsystem_generally_needs_its_own_sensory_processor): the genome cannot know *exactly* which neurons (if any) will store any particular learned-from-scratch pattern, and therefore cannot hardwire a connection to them.

The ability to leverage learned-from-scratch patterns is a big benefit. For example, there may well be good cues for flinching that depend on learned-from-scratch semantic patterns (e.g. the knowledge “I am playing golf right now”), learned-from-scratch visual patterns (e.g. the visual appearance of a person swinging a golf club), or learned-from-scratch location tags (e.g. “this particular room, which has a low ceiling”), and so on.

4.5.2 “Short-term predictor” versus an RL agent: Faster learning thanks to error gradients
-------------------------------------------------------------------------------------------

The short-term prediction circuit is a special case of *supervised learning*. Supervised learning is when you have a learning algorithm receiving a ground-truth signal like this: *“Hey learning algorithm: you messed up—you should have done thus-and-such instead.”*

Compare that to reinforcement learning (RL), where the learning algorithm gets a *much less helpful* ground-truth signal: *“Hey learning algorithm: you messed up.”* (a.k.a. negative reward).

Obviously, you can learn much faster with supervised learning than with reinforcement learning. The supervisory signals, at least in principle, tell you exactly what parameter settings to change, and how, if you want to do better next time you’re in a similar situation. Reinforcement learning doesn’t; instead you need trial-and-error. In technical ML terms, supervised learning provides a full error gradient “for free” on each query, whereas reinforcement learning does not.

Evolution can’t *always* use supervised learning. For example, if you’re a professional mathematician trying to prove a theorem, and your latest proof attempt didn’t work, there is no “ground truth” signal that says what to do differently next time—not in your brain, not out there in the world. Sorry! You’re in a very-high-dimensional space of possible things to do, with no real guideposts. At some level, trial-and-error is your only option. Tough luck.

But evolution *can sometimes* use supervised learning, as in the examples in this post. And my point is: if it *can*, it probably *will*.

4.6 “Short-term predictor” example #1: The cerebellum
=====================================================

I’ll jump right into what I think the cerebellum is for, and then I’ll talk about how my theory relates to other proposals in the literature.

4.6.1 My theory of the cerebellum
---------------------------------

My claim is that the cerebellum is housing lots of short-term prediction circuits.

![](https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/Y3bkJ59j4dciiLYyw/cdfehsenwyhvdf5ms9vn)*Relation of cerebellum neuroanatomy (red) with our diagram from above.*
As usual (see above), I’m leaving out lots of bells and whistles that make the short-term predictor more accurate, like [there’s an extra layer I’m not showing](https://www.biorxiv.org/content/10.1101/2020.05.18.102376v2), plus pattern separation ([Post #2, Section 2.5.4](https://www.alignmentforum.org/posts/wBHSYwqssBGCnwvHg/intro-to-brain-like-agi-safety-2-learning-from-scratch-in#2_5_4_Locally_random_pattern_separation)), etc.

How many short-term predictors? My best guess is: around 300,000 of them.[[1]](#fna8pt2x59qig)

What on earth?? Why oh why does your brain need 300,000 short-term predictors?

I have an opinion! I think the cerebellum sits there, watching lots of signals in the brain, and *it learns to preemptively send those same signals itself*. That’s it. That’s my whole theory of the cerebellum.

In other words, the cerebellum might discover the rule “Given the current context information, I predict that cortical output neuron #187238 is going to fire in 0.3 seconds”. Then the cerebellum goes ahead and sends a signal *right now*, to the same place. Or in the opposite direction, the cerebellum might discover the rule “Given the current context information, I predict that proprioceptive nerve #218502 is going to fire in 0.3 seconds”. Again, the cerebellum goes ahead and sends a signal *right now*, to the same place.

Some vaguely-analogous concepts:

* When the cerebellum is predicting-and-preempting the telencephalon, we can think of it as vaguely akin to [“memoization”](https://en.wikipedia.org/wiki/Memoization) in software engineering, or [“knowledge distillation”](https://en.wikipedia.org/wiki/Knowledge_distillation) in machine learning, or [this recent paper proposing (so-called) “neural surrogates”](https://dl.acm.org/doi/pdf/10.1145/3486607.3486748).
* When the cerebellum is predicting-and-preempting peripheral nerves, we can think of it as building a bunch of predictive models of the body, each narrowly-tailored to predict a different peripheral nerve signal. Then when the telencephalon is doing motor control, and needs peripheral feedback signals, it can use those predictive models as feedback, instead of the real thing.

Basically, I think the brain has these issues where the *throughput (a.k.a. bandwidth)* of a subsystem is adequate, but its *latency* is too high. In the peripheral nerve case, the latency is high because the signals need to travel a great distance. In the telencephalon case, the latency is high because the signals need to travel a shorter-but-still-substantial distance, and moreover need to pass through multiple sequential processing steps.

In any case, the cerebellum can magically reduce the latency, at the cost of occasional errors. The cerebellum sits in the middle of the action, always saying to itself “what signal is about to appear here?”, and then it preemptively sends it. And then a fraction of a second later, it sees whether its prediction was correct, and updates its models if it wasn’t. It’s like a little magical time-travel box—a [delay line](https://www.oxfordreference.com/view/10.1093/oi/authority.20110803095708310) whose delay is negative.

![](https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/Y3bkJ59j4dciiLYyw/lsocln6s5xppibjmnxqt)

And now we have our answer: why do we need ≈300,000 short-term predictors? Because there are lots of peripheral nerves, and there are lots of telencephalon output lines, and maybe other things too. And a great many of those signals can benefit from being predicted-and-preempted!
Heck, if I understand correctly, the cerebellum can even predict-and-preempt a signal *that goes from the telencephalon to a different part of the telencephalon*!

That’s my theory. I haven’t run simulations or anything; it’s just an idea. See [here](https://www.lesswrong.com/posts/AKBzhvnSLrbgAehLw/how-is-low-latency-phasic-dopamine-so-fast) and [here](https://www.lesswrong.com/posts/pfoZSkZ389gnz5nZm/the-intense-world-theory-of-autism) for two examples in which I’ve used this model to try to understand observations in neuroscience and psychology. Everything else I know about the cerebellum—neuroanatomy, how it’s connected to other parts of the brain, lesion and imaging studies, etc.—all seems to fit this theory really well, as far as I can tell. But really, this little section is almost the sum total of what I know about this topic.

4.6.2 How my cerebellum theory relates to others in the literature
------------------------------------------------------------------

(I’m not an expert here and am open to correction.)

I think it’s widely agreed that the cerebellum is involved in supervised learning. I believe that idea is called the Marr-Albus-Ito model, cf. [Marr 1969](http://doi.org/10.1113/jphysiol.1969.sp008820) or [Albus 1971](http://www.sciencedirect.com/science/article/pii/0025556471900514), or the fun [Brains Explained YouTube channel](https://www.youtube.com/watch?v=QUkwqAaSrUg). Recall from above that a short-term predictor is an example of a supervised learning algorithm, but supervised learning is a broader category. So the supervised learning part is *not* a distinguishing feature of my proposal above, and in particular that diagram above (with cerebellum neuroanatomy in red) is compatible with the usual Marr-Albus-Ito story.

Instead, the distinguishing aspect of my theory concerns what the ground truth signals are (or what the error signals are—which amounts to the same thing). I mentioned in [Post #2](https://www.alignmentforum.org/posts/wBHSYwqssBGCnwvHg/intro-to-brain-like-agi-safety-2-learning-from-scratch-in) that when I see a within-lifetime learning algorithm, my immediate question is: “What’s the ground truth that it’s learning from?” I also mentioned that usually, when I go looking for an answer in the literature, I wind up feeling confused and unsatisfied. The cerebellum literature is a perfect example. For example, I often hear something to the effect of “cerebellar synapses are updated when there’s a motor error”. But who’s to say what constitutes a motor error?

* If you’re trying to walk to school, then slipping on a banana peel is a motor error.
* If you’re trying to slip on a banana peel, then slipping on a banana peel is bang-on!

How is the cerebellum supposed to know? I don’t get it.

I’ve read a number of computational theories of the cerebellum. They tend to be *way* more complicated than mine. And they *still* leave me feeling like I don’t understand where the ground truth is coming from. To be clear, I haven’t carefully read every paper, and it remains possible that I’m missing something.

*(**Update July 2022:** Hooray, I found* [*this 2006 paper by Harri Valpola*](https://scholar.google.com/citations?view_op=view_citation&hl=en&user=1uT7-84AAAAJ&cstart=20&pagesize=80&sortby=pubdate&citation_for_view=1uT7-84AAAAJ:hMod-77fHWUC) *which suggests essentially the same cerebellum model as mine above.
Check it out for helpful discussion including further references to the literature.)*

*(**Update August 2023:** Ditto for* [*“Is the Cerebellum a Smith Predictor?” by Miall, Weir, Wolpert & Stein (1993)*](https://scholar.google.com/scholar?cluster=6722829273752659039&hl=en&as_sdt=0,22)*.)*

Well, whatever. It doesn’t really impact this series. As I mentioned earlier, you can be a functioning adult able to live independently, hold down a job, etc., [without a cerebellum at all](https://www.npr.org/sections/health-shots/2015/03/16/392789753/a-man-s-incomplete-brain-reveals-cerebellum-s-role-in-thought-and-emotion). So if I’m totally wrong about the cerebellum, it shouldn’t really impact the big picture.

4.7 “Short-term predictor” example #2: Predictive learning of sensory inputs in the cortex
===========================================================================================

Your cortex has a rich generative model of the world, including yourself. Every fraction of a second, your brain uses that model to predict incoming sensory inputs (sight, sound, touch, proprioception, interoception, etc.), and when its predictions are wrong, the model is updated on the error. Thus, for example, you can open your closet door, and know *immediately* that somebody oiled the hinge. You were predicting that it would sound and feel a certain way, and the prediction was falsified.

In my view, predictive learning of sensory inputs is the jumbo jet engine bringing information from the world into our cortical world-model. I endorse the Yann LeCun quote: “If intelligence is a cake, the bulk of the cake is [predictive learning of sensory inputs], the icing on the cake is [other types of] supervised learning, and the cherry on the cake is reinforcement learning.” The sheer number of bits of data we get from predictive learning of sensory inputs swamps everything else.

Predictive learning of sensory inputs—in the specific sense I’m using it here—is not a grand unified theory of cognition. The big problem occurs when it collides with “decisions” (what muscles to move, what to pay attention to, etc.). Consider the following: I can predict that I’ll sing, and then I sing, and my prediction was correct. Or I can predict that I’ll dance, and then I dance, and then *that* prediction was correct. Thus, predictive learning is at a loss; it can’t help me do the right thing here. That’s why we *also* need the Steering Subsystem [(Post #3)](https://www.alignmentforum.org/posts/hE56gYi5d68uux9oM/intro-to-brain-like-agi-safety-3-two-subsystems-learning-and) to send supervisory signals and RL reward signals. Those signals can promote good decisions in a way that predictive learning of sensory inputs cannot.

Nevertheless, predictive learning of sensory inputs is a very big deal for the brain, and there’s a lot to be said about it. However, I’ve come to see it as one of many topics that seem very directly important for *building* a brain-like AGI, but only slightly relevant for brain-like-AGI *safety*. So I’ll mention it from time to time, but if you’re looking for gory details, you’re on your own.

4.8 Other example applications of “short-term predictors”
=========================================================

These also won’t be important for this series, so I won’t say much about them, but just for fun, here are three more random things that I think Evolution can do with a short-term predictor circuit.
* Filtering—for example, my brain can make a short-term predictor of my audio input stream, with the constraint that its context inputs *only* carry information about my own jaw motion and my own vocal cord activity. The predictor should wind up with a model of purely the self-generated contribution to my audio input stream. That’s very useful because my brain can *subtract it off*, leaving only externally-generated sounds. (See the toy sketch below.)
* Input data compression—this is kinda a more extreme version of filtering. Instead of merely filtering out information that’s predictable from self-generated activity, we filter out information that’s predictable from *any information whatsoever that we already know*. By the way, this is how I’m tentatively thinking about the dorsal cochlear nucleus, a little structure in the auditory input processing chain that looks *suspiciously* like the cerebellum. See [here](https://www.lesswrong.com/posts/tnwmcSGDDRQh6wrX5/supervised-learning-in-the-brain-part-4-compression). Warning: It’s possible that this idea makes no sense; I go back and forth.
* Novelty detection—see discussion [here](https://www.lesswrong.com/posts/GnmLRerqNrP4CThn6/dopamine-supervised-learning-in-mammals-and-fruit-flies#1A__Novelty_detector_example_).

1. **[^](#fnrefa8pt2x59qig)** There are 15 million Purkinje cells ([ref](https://pubmed.ncbi.nlm.nih.gov/2613942/)), but [this paper](https://www.biorxiv.org/content/10.1101/2020.05.18.102376v2) says that one predictor consists of “a handful of” Purkinje cells with a single supervisor and a single (pooled) output. What does “handful” mean? The paper says “around 50”. Well, 50 in mice. I can’t immediately find the corresponding number for humans. I’m assuming it’s still 50, but that’s just a guess. Anyway, that’s how I wound up guessing that there are 300,000 predictors.
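As a tiny illustration of the “filtering” bullet above, here is another grossly-oversimplified toy sketch. Everything here is an illustrative assumption—a linear “vocal tract”, and a plain LMS / delta-rule adaptive filter standing in for the short-term predictor—not a model of actual auditory physiology:

```python
import numpy as np

rng = np.random.default_rng(1)
T = 20_000

# Efference copy of my own motor commands (jaw / vocal cord activity)...
motor = rng.standard_normal(T)
# ...which produces self-generated sound via some unknown fixed linear filter:
true_filter = np.array([0.8, 0.4, 0.2])
self_sound = np.convolve(motor, true_filter)[:T]
external_sound = 0.5 * np.sin(np.arange(T) / 30.0)  # the part worth hearing
audio_in = self_sound + external_sound

# Short-term predictor: predict the audio stream from motor context only,
# with a delta-rule (LMS) update, then subtract the prediction off.
n_taps, lr = 5, 0.01
weights = np.zeros(n_taps)
residual = np.zeros(T)
for t in range(n_taps - 1, T):
    ctx = motor[t - n_taps + 1 : t + 1][::-1]   # recent motor history
    predicted_self_sound = weights @ ctx
    residual[t] = audio_in[t] - predicted_self_sound
    weights += lr * residual[t] * ctx

# After training, the residual is approximately the external sound alone:
print(np.round(np.mean((residual[-2000:] - external_sound[-2000:]) ** 2), 4))
```

Because the external sound is uncorrelated with the motor signals, the learned weights converge toward the true “vocal tract” filter, and what’s left after subtraction is (approximately) only the externally-generated sound.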
January 2022 Time Tracking

Time tracking can be a good way to see how you've been spending your time and how it compares to how you would like to be spending it. It's something that Julia and I have done a few times (December 2017, June 2017, 2015, 2011) and I've generally found it pretty helpful.

This time we tracked two weeks, 2022-01-05 through 2022-01-11 and 2022-01-18 through 2022-01-24. These were pretty normal work weeks for the two of us these days:

* This is with three kids, 7y, 5y, and 7m.
* Paid childcare 9-5 each workday, with the exception of one day where our nanny called in sick.
* Nora was sick one of the days. Julia spent more time nursing her, including during the workday, because she was refusing the bottle.
* No travel or dance gigs.
* No big house projects.
* I took one afternoon off work to take the older kids sledding.

Categories:

* Sleep: Lights out to lights on. Julia tracked an average of 0:24/night of post-nursing insomnia that's included in the total here. I also had some insomnia after nighttime wakings, but wouldn't have been able to track it without making the insomnia worse. I subtracted 0:15 each night as a guess for how long Julia and I talk before falling asleep. Does not include night nursing; that's under "childcare".
* Work: The entire workday, including lunch, incidental personal stuff mixed in with working, pumping milk etc. Julia works from home; I worked 4/10 days from the office, and included the commute here (0:09 of the total; 0:35 per day I commuted).
* Housework: cooking, cleaning, tidying, fixing things, dealing with taxes.
* J+J: time together without any of the kids.
* Family: relaxing/playing with others in the family.
* Childcare: taking care of the kids. Julia's time includes 1:55/day of nursing (62% of her total). Nursing while hanging out with family is under "family", though.
* Personal: blogging, making a go board, exercising, reading things, showering, free time generally.

Some distinctions:

* When doing
Katja Grace: AI safety

---

*In the future, we may post a transcript for this EA Global: San Francisco 2017 talk, but we haven't created one yet. If you'd like to create a transcript for this talk, contact* [*Aaron Gertler*](mailto:aaron.gertler@centreforeffectivealtruism.org) *— he can help you get started.*
Open Research

This is a TED talk about open science. It starts with a description of a new math problem which is offered on a blog, and which eventually attracts enough mathematicians working on it to solve, not just the original problem, but a more difficult version of it. It was enough easier than the usual way of doing math that it was described as being like driving a car instead of pushing it.

Then the speaker talks about more ambitious projects-- like a wiki about quantum computing-- which get started, but no one is actually willing to do the work, so that the wiki lies all but vacant. He suggests that public science isn't what scientists get paid for nor what builds their careers, and has some ideas for pushing the standards of science to change. There's been at least one success involving publishing genomes.

Perhaps the reason the math project succeeded was because the problem was small enough that success was both well-defined and possible, not to mention that working on it was probably more fun than figuring out how to do tolerable and sensibly-linked wiki articles.

There may be a way to get publicly funded science to be open source. We've already got proof of concept for solving math problems if they're interesting enough, so I suggest going public if you've got a math problem people might like to work on.
Development of Compression Rate Method

Summary: This post provides a brief discussion of the traditional scientific method, and mentions some areas where the method cannot be directly applied. Then, through a series of thought experiments, a set of minor modifications to the traditional method are presented. The result is a refined version of the method, based on data compression.

Related to: Changing the Definition of Science, Einstein's Arrogance, The Dilemma: Science or Bayes?

ETA: For those who are familiar with notions such as Kolmogorov Complexity and MML, this piece may have a low ratio of novelty:words. The basic point is that one can compare scientific theories by instantiating them as compression programs, using them to compress a benchmark database of measurements related to a phenomenon of interest, and comparing the resulting codelengths (taking into account the length of the compressor itself).

Notes on Traditional Method

This post proposes a refined version of the scientific method which, it will be argued later, is more directly applicable to the problems of interest in artificial intelligence. Before doing so, it is worth briefly examining the traditional method and the circumstances in which it can be applied. The scientific method is not an exact procedure, but a qualitative statement of it goes roughly as follows:

1. Observe a natural phenomenon.
2. Develop a theory of that phenomenon.
3. Use the theory to make a prediction.
4. Test the prediction experimentally.

A full discussion of the philosophical significance of the scientific method is beyond the scope of this post, but some brief remarks are in order. The power of the scientific method is in the way it links theory with experimental observation; either one of these alone is worthless. The long checkered intellectual history of humanity clearly shows how rapidly pure theoretical speculation goes astray when it is not tightly constrained by an external guiding force. Pure experimental investigation, in contrast,
Encultured AI Pre-planning, Part 1: Enabling New Benchmarks

Also available on the EA Forum.

Followed by: Encultured AI, Part 2 (forthcoming)

Hi! In case you’re new to Encultured AI, we’re a for-profit start-up with a public benefit mission: developing technologies promoting the long-term survival and flourishing of humanity and other sentient life. However, we also realize that AI poses an existential risk to humanity if not developed with adequate safety precautions. Given this, our goal is to develop products and services that help humanity steer toward the benefits and away from the risks of advanced AI systems. Per the “Principles” section of our homepage:

> Our current main strategy involves building a platform usable for AI safety and alignment experiments, comprising a suite of environments, tasks, and tools for building more environments and tasks. The platform itself will be an interface to a number of consumer-facing products, so our researchers and collaborators will have back-end access to services with real-world users. Over the next decade or so, we expect an increasing number of researchers — both inside and outside our company — will transition to developing safety and alignment solutions for AI technology, and through our platform and products, we’re aiming to provide them with a rich and interesting testbed for increasingly challenging experiments and benchmarks.

In the following, we’ll describe the AI existential safety context that motivated us to found Encultured, and go into more detail about what we’re planning to do.

What’s trending in AI x-safety?

The technical areas below have begun to receive what we call “existential attention” from AI researchers, i.e., attention from professional AI researchers thinking explicitly about the impact of their work on existential safety:

* Trustworthiness & truthfulness — ensuring AI systems are telling us the truth and doing the things they and their creators say they’re going to do.
* Preference learning — enabling AI systems to learn what humans want.
graphpatch: a Python Library for Activation Patching

This post is an announcement for a software library. It is likely only relevant to those working, or looking to start working, in mechanistic interpretability.

----------------------------------------

What is graphpatch?

graphpatch is a Python library for activation patching on arbitrary PyTorch neural network models. It is designed to minimize the amount of boilerplate needed to run experiments making causal interventions on the intermediate activations in a model. It provides an intuitive API based on the structure of a torch.fx.Graph representation compiled automatically from the original model. For a somewhat silly example, I can make Llama play Taboo by zero-ablating its output for the token representing "Paris":

```python
with patchable_llama.patch(
    {"lm_head.output": ZeroPatch(slice=(slice(None), slice(None), 3681))}
):
    print(
        tokenizer.batch_decode(
            patchable_llama.generate(
                tokenizer(
                    "The Eiffel Tower, located in", return_tensors="pt"
                ).input_ids,
                max_length=20,
                use_cache=False,
            )
        )
    )
```

```
["<s> The Eiffel Tower, located in the heart of the French capital, is the most visited"]
```

Why is graphpatch?

graphpatch is a tool I wished had existed when I started my ~~descent into madness~~ entry into mechanistic interpretability with an attempt to replicate ROME on Llama. I hope that by reducing inconveniences (trivial and otherwise) I can both ease entry into the field and lower cognitive overhead for existing researchers. In particular, I want to make it easier to start running experiments on "off-the-shelf" models without the need to handle annoying setup—such as rewriting the model's Python code to expose intermediate activations—before even getting started. Thus, while graphpatch should work equally well on custom-built research models, I focused on integration with the Huggingface ecosystem with:

* Support for PreTrainedModels loaded on multiple devices or with CPU or disk offloading
* Suppor
IQ and Magnus Carlsen, Leo Messi and the Decathlon

[Epistemic Status: I suspect that this is at least partially wrong. But I don’t know why yet, and so I figured I’d write it up and let people tell me. First post on Less Wrong, for what that’s worth.]

First thesis: IQ is more akin to a composite measure of performance such as the decathlon than it is to a single characteristic such as height or speed.

Second thesis: When looking at extraordinary performance in any specific field, IQ will usually be highly correlated with success, but it will not fully explain or predict top-end performance, because extraordinary performance in a specific field is a result of extraordinary talent in a sub-category of intelligence (or even a sub-category of a sub-category), rather than truly top-end achievement in the composite metric.

Before we go too far, here are some of the things I’m not arguing:

* IQ is largely immutable (though perhaps not totally immutable).
* IQ is a heritable, polygenic trait.
* IQ is highly correlated with a variety of achievement measures, including academic performance, longevity, wealth, happiness, and health.
* That parenting and schooling matter far less than IQ in predicting performance.
* That IQ matters more than “grit” and “mindset” when explaining performance.
* Most extraordinary performers, from billionaire tech founders to chess prodigies, to writers and artists and musicians, will possess well-above-average IQ.[1]

Here is one area where I’m certain I’m in the minority:

* I believe that Spearman’s G is a reification. At least one smart person has also expressed this opinion, but most experts disagree with him (this ties in with the First Thesis).

Here is the issue where I’m not sure if my opinion is controversial, and thus why I’m writing to get feedback:

* While IQ is almost certainly highly correlated with high-end performance, IQ fails as a metric to explain or, more importantly, to predict top-end individual performance (the Second Thesis)
Park Toys

The park closest to our house has a lot of "park toys". These are toys that people brought to the park for everyone to play with. They're usually somewhat broken when they show up, and eventually get thrown out when they're so broken that there's not much interesting to do with them anymore (which is hard!) This isn't something I experienced growing up, and I see more of it at this park than other parks in the area. Walking around the park today, here's the current assortment.

A fire truck. It used to have a ladder, but that fell off a couple weeks ago:

A smaller fire truck. Also used to have a ladder. I think it used to have doors too:

A very small bike. Julia pumped the tires up a few weeks ago. Probably the nicest toy here? Mildly surprised no one has wandered off with it:

Some sort of standing trike? Maybe for teaching someone the leg motions before they're ready for a balance bike?

A Minnie Mouse themed riding airplane. Missing a wing and part of the flight controls. Nora still loves riding it, though!

A basketball that won't stay inflated (we've tried):

A tricycle. It's designed to have the option for an adult to push it, and the pedals freewheel when you're going faster than you're pedaling. The handlebars are loose and older kids like turning them around so it's a reverse-only trike, but today it's in its designed forward-only configuration:

A digger. I think this is a front-end loader, or at least it was before it lost its bucket:

A haul truck:

Another haul truck, with better build quality. Currently transporting fake icecream:

Less sure about this one. Maybe a skid-steer?

A car. The front wheels are clearly the least sturdy component. At one point Lily had a go at fixing one of them. Since then it broke again, someone replaced them, and it broke again. Bonus: a ladder from one of the fire trucks in the background:

These all get a lot of use, and I'm really glad we have a culture of peo
Opinions on Interpretable Machine Learning and 70 Summaries of Recent Papers

Peter Hase (UNC Chapel Hill) and Owen Shen (UC San Diego)

With thanks to Robert Kirk and Mohit Bansal for helpful feedback on this post.

Introduction

Model interpretability was a bullet point in Concrete Problems in AI Safety (2016). Since then, interpretability has come to comprise entire research directions in technical safety agendas (2020); model transparency appears throughout An overview of 11 proposals for building safe advanced AI (2020); and explainable AI has a Twitter hashtag, #XAI. (For more on how interpretability is relevant to AI safety, see here or here.) Interpretability is now a very popular area of research. The interpretability area was the most popular in terms of video views at ACL last year. Model interpretability is now so mainstream there are books on the topic and corporate services promising it.

So what's the state of research on this topic? What does progress in interpretability look like, and are we making progress?

What is this post?

This post summarizes 70 recent papers on model transparency, interpretability, and explainability, limited to a non-random subset of papers from the past 3 years or so. We also give opinions on several active areas of research, and collate another 90 papers that are not summarized.

How to read this post. If you want to see high-level opinions on several areas of interpretability research, just read the opinion section, which is organized according to our very ad-hoc set of topic areas. If you want to learn more about what work looks like in a particular area, you can read the summaries of papers in that area. For a quick glance at each area, we highlight one standout paper per area, so you can just check out that summary. If you want to see more work that has come out in an area, look at the non-summarized papers at the end of the post (organized with the same areas as the summarized papers). We assume readers are familiar with basic aspects of interpretability research, i.e. the kinds of concepts in The My
GPT-3-like models are now much easier to access and deploy than to develop

*This post is one part of the sequence* [*Understanding the diffusion of large language models*](https://forum.effectivealtruism.org/s/8rYkpiFhbb4HsbzFc)*. As context for this post, I strongly recommend reading at least the* [*5-minute summary*](https://forum.effectivealtruism.org/posts/nc3JFZbqnzWWAPkmz/understanding-the-diffusion-of-large-language-models-summary-1#5_minute_summary) *of the sequence.*

*EDIT 25-Feb-2023: I have made a big update from the claims in this post about deployment of large language models costing less than development in total. I now think that for the largest, most commercially successful LLMs, the total compute spent on deployment is much larger than in development. However, this doesn't significantly update my conclusion about the importance of focusing on development rather than deployment as a target of intervention (point 2c in the Key Takeaways). More information in* [*this comment*](https://forum.effectivealtruism.org/posts/foptmf8C25TzJuit6/gpt-3-like-models-are-now-much-easier-to-access-and-deploy?commentId=KaMu9dBdxHmWdbBob#comments)*.*

Key takeaways
=============

1. GPT-3 itself can be used and fine-tuned via an API. Despite this, there’s still demand for direct access to the model’s weights, and multiple similar models now exist that provide access to weights. OPT-175B is a GPT-3 replica; I estimate that its model weights can be downloaded by hundreds to thousands of ML researchers in academia, government, and industry labs, subject to approval. BLOOM is similar to GPT-3, but not a replica, and is publicly available for anyone to download. ([more](https://forum.effectivealtruism.org/posts/foptmf8C25TzJuit6/gpt-3-like-models-are-now-much-easier-to-access-and-deploy#Some_GPT_3_like_models_are_widely_available_for_download))
2. What resources are required to actually use these models? ([more](https://forum.effectivealtruism.org/posts/foptmf8C25TzJuit6/gpt-3-like-models-are-now-much-easier-to-access-and-deploy#What_resources_are_required_to_actually_use_GPT_3_like_models_))
   1. One relevant question is how much money and talent it would take to generate a large volume of text with about the same average usefulness as GPT-3’s outputs. ([more](https://forum.effectivealtruism.org/posts/foptmf8C25TzJuit6/gpt-3-like-models-are-now-much-easier-to-access-and-deploy#Even_running_a_GPT_3_like_model_independently_seems_feasible_for_thousands_of_individuals))
      1. Based on publicly listed pricing, running GPT-3 via the OpenAI API would generate approximately 150 million English words for $4000.[[1]](#fn1qegtjk14he)
      2. I estimate that a user could generate 150 million English words per day of similar usefulness to GPT-3’s outputs for as little as $240 per day by running one instance of BLOOM independently (i.e., downloading the model weights and running the model on a server that they either directly rent or own).
      3. I estimate there are 5000 people (90% CI: 100 to 45,000) that are capable of running BLOOM independently.[[2]](#fnbq01963lk39)
   2. What about for a very large-scale application of a GPT-3-like model—for example, generating text equivalent to 1% of global Twitter activity for one year, or assisting one million software developers with coding for one year? I estimate that deploying a model like BLOOM in these ways would be 20% of the cost of *developing* the model (90% CI: 10 to 68%), in terms of the dollar cost of compute alone.
   This means that deployment is most likely much less prohibitive than development. But it means I give a 5% chance that for the largest-scale applications, the cost of deploying the model is at least 68% of the cost of developing the model, which would make deployment similarly prohibitive. ([more](https://forum.effectivealtruism.org/posts/foptmf8C25TzJuit6/gpt-3-like-models-are-now-much-easier-to-access-and-deploy#Deploying_a_GPT_3_like_model_to_have_significant_impact_on_the_world_is_probably_one_order_of_magnitude_cheaper_than_training_the_model))
      1. I don’t think that labor costs change the overall conclusion here, though I didn’t directly account for it in this model. I estimated the labor cost of the coding assistant scenario to be $500K, which is only 1/5th of the median estimate of the compute cost.
   3. My analysis supports prioritizing interventions at the development stage rather than the deployment stage. Interventions targeting development seem generally more tractable, because they can take advantage of the larger compute and talent barriers involved. Another reason there is more leverage at the development stage is that the developers of models seem to be in the most convenient position to deploy those same models.[[3]](#fnkjs5iy5lao) ([more](https://forum.effectivealtruism.org/posts/foptmf8C25TzJuit6/gpt-3-like-models-are-now-much-easier-to-access-and-deploy#Upshot__focus_more_on_shaping_development_than_deployment))
   4. Some caveats to the above points:
      1. I still expect that most actors that use state-of-the-art AI systems for inference will do so via APIs provided by some other actor, rather than via developing a system themselves or downloading and independently running a system. Furthermore, when AI systems diffuse to the point of being publicly accessible, it seems very likely that someone will also set up and open-source a convenient way to use those systems in the form of an API or other software. ([more](https://forum.effectivealtruism.org/posts/foptmf8C25TzJuit6/gpt-3-like-models-are-now-much-easier-to-access-and-deploy#One_actor_can_build_a_model_API_and_other_software_tools_to_make_deployment_easy_for_other_actors))
      2. Some actors will be able to scale up to afford larger deployments, using a feedback loop of commercial deployment, which generates revenue, which in turn funds larger deployment. As such, expensive deployments may be ultimately accessible to more actors than one would initially think. ([more](https://forum.effectivealtruism.org/posts/foptmf8C25TzJuit6/gpt-3-like-models-are-now-much-easier-to-access-and-deploy#Barriers_to_deployment_will_decrease_over_time_for_actors_that_scale_up_commercially))

Some GPT-3-like models are widely available for download
========================================================

Here I overview two GPT-3-like models, OPT-175B and BLOOM. OPT-175B is a GPT-3 replica that can be downloaded by ML researchers in academia, government, and industry labs (after their request for access is approved). BLOOM is similar to GPT-3, but not a replica, and is available for anyone to download.
| | **OPT-175B** | **BLOOM** |
| --- | --- | --- |
| **Average task accuracy compared to GPT-3 (estimated based on the available evaluations)** | 2 percentage points worse (90% CI: 1–5) | 2 percentage points worse (90% CI: 1–10) |
| **Access type** | Approval-based | Open-source |
| **Access criteria** | People in academia, government, and industry labs that have published research involving language models | None |
| **API?** | [Yes](https://perma.cc/LJZ6-8A22) | [Yes](https://perma.cc/XJ6C-JFBN) (small-scale use)[[4]](#fn3zqtg8fn1l5) |
| **Estimate of who can deploy the model independently (i.e., not using an existing API)** | Anyone that is granted access (based on the access criteria) | Top 1% CS graduates that passed a course on natural language processing with deep learning, given 3 months of effort; or equivalent |

*Table 1: attributes of OPT-175B and BLOOM*

Note that **many other GPT-3-like models have been developed before OPT-175B and BLOOM became available.** However, I am 80% confident that before July 2022, no other GPT-3-like models had their trained weights widely available for download.[[5]](#fn8g8o6wcz6be) In terms of understanding the most important impacts of diffusion, I think which GPT-3-like models are currently accessible is less important than the timing, development, and release strategy of GPT-3-like models. I cover those characteristics in the other post in the sequence.

Despite the existence of APIs that provide access to model outputs and fine-tuning, there is demand for direct access to model weights
--------------------------------------------------------------------------------------------------------------------------------------

To date, the trained model weights for GPT-3 have not been made available.[[6]](#fnhdjpjnaw1zk) Having direct access to trained model weights would allow someone to (a) run the model independently on a computing cluster, (b) make copies of the model, (c) fine-tune the model for new tasks, and anything else that requires access to the values of weights (e.g., interpretability research).

Although OpenAI provides an API that allows access to GPT-3 [outputs](https://perma.cc/6U2D-R959) and [fine-tuning procedures](https://perma.cc/ZJ53-5NFX), this API places considerable limits on diffusion. As well as preventing direct access to model weights,[[7]](#fn3mtbx6r0t4f) the API limits the applications of the model through OpenAI’s monitoring and review process,[[8]](#fnwb1b7jy4t6b) and limits the speed at which model outputs can be accessed.[[9]](#fnt2yfyp3uo6) So while the OpenAI API may be satisfactory for many users to access GPT-3 capabilities, it does not allow as much breadth and freedom in the use of the model as direct access to trained model weights would have.

The lack of direct access to GPT-3 model weights appears to have created a demand for that access. The demand is strong enough that multiple actors have spent millions of dollars to make GPT-3-like models more widely and freely available.[[10]](#fnxpdvtjeoh5g) GPT-3 itself is almost certainly profitable for OpenAI given that its commercial API has not been discontinued. So I think the most obvious incentive for AI *companies* to create a GPT-3-like model is to develop their own products using their own model.[[11]](#fnka0rgwb2g5d) On the academic side, there seems to be growing interest in studying [foundation models](https://arxiv.org/abs/2108.07258)—GPT-3 is one such model.
The research collaboration that culminated in BLOOM involved more than one thousand researchers ([BigScience, 2022](https://bigscience.huggingface.co/blog/bloom)). Meanwhile, [Zhang et al. (2022, p. 1)](https://arxiv.org/abs/2205.01068) state: “Our aim in developing this suite of OPT models is to enable reproducible and responsible research at scale, and to bring more voices to the table in studying the impact of these LLMs.” Finally, [Black et al. (2022, p. 1)](https://arxiv.org/abs/2204.06745) state: “We make the models weights freely and openly available to the public through a permissive license, motivated by the belief that open access to LLMs is critical to advancing research in a wide range of areas—particularly in AI safety, mechanistic interpretability, and the study of how LLM capabilities scale.”[[12]](#fnznkhoujnbcg)

OPT-175B can be directly accessed by hundreds to thousands of AI researchers
----------------------------------------------------------------------------

OPT-175B is a 175-billion-parameter language model from Meta AI Research, announced in May 2022 ([Zhang et al., 2022](https://ai.facebook.com/blog/democratizing-access-to-large-scale-language-models-with-opt-175b/)). The primary goal of OPT-175B was as a replication of GPT-3.[[13]](#fn8jdrid0ibxg) The replication seems to have largely succeeded; however, the model performed an average of two percentage points worse than GPT-3 on 14 benchmarks across zero-shot and few-shot evaluation settings.[[14]](#fnabv30i83qj)

The trained OPT-175B model seems to be accessible by anyone that can demonstrate their status as a researcher affiliated with academia, government, civil society, or an industry lab, who has relevant publications.[[15]](#fn46reotjnpe9) Based on what it says in the paper and the application form, I estimate that 1000 (90% CI: 200–3000) people could be eligible, and all of these people could be granted access in the first year following release.[[16]](#fniv5i6v611u) This number depends on how quickly applications are processed and how relevant the applicant’s publications need to be.

Direct access to OPT-175B’s trained model weights is provided upon request.[[17]](#fnv2jg9bh28y) It therefore seems that anyone who currently has access could pass on the weights to someone else, or even make the weights publicly available for download. The potential for diffusion would thereby increase even further. It is not clear to me whether these actions would violate the terms of copyright stated in the license agreement for OPT-175B.[[18]](#fnlzs7xqmzzx)

I am not aware of any unilateral open-sourcing event like this occurring yet for OPT-175B. However, I am only 60% confident that this has not happened. My confidence is based on how little time has passed since OPT-175B was released, that I haven’t heard about it happening, and that searching DuckDuckGo for “download opt-175b model” does not have any confirming results on the first page.[[19]](#fnuotalwwzg4i)

At some point after OPT-175B’s release, this [API](https://perma.cc/LJZ6-8A22) for OPT-175B was released by a team at [Sky Lab, UC Berkeley](https://sky.cs.berkeley.edu/).[[20]](#fn11oz3h4km2mj)

BLOOM can be downloaded by anyone
---------------------------------

BLOOM is a 176-billion-parameter language model from the open research collaboration known as [BigScience](https://bigscience.huggingface.co/). BLOOM was released in July 2022.
The prior model that most influenced the design of BLOOM was apparently Megatron-LM ([Shoeybi et al., 2019](https://arxiv.org/abs/1909.08053)), which along with GPT-3 is heavily based on GPT-2 ([Radford et al., 2019](https://www.semanticscholar.org/paper/Language-Models-are-Unsupervised-Multitask-Learners-Radford-Wu/9405cc0d6169988371b2755e573cc28650d14dfe)).[[21]](#fn25xn3y462tv)[[22]](#fndf0f22c1dg) Despite its similar size and pretraining approach, unlike OPT-175B I don’t consider BLOOM to be an exact replication attempt. This is partly because it is not stated by the authors as an explicit replication attempt. It is also because the training data for BLOOM is much more multilingual, which was emphasized in [BigScience (2022)](https://bigscience.huggingface.co/blog/bloom).[[23]](#fnqb8ginf6y7m)

My best guess is that BLOOM is worse than GPT-3 on most tasks to a similar degree to OPT-175B.[[24]](#fnure9pd2wv3l) On the other hand, I expect that BLOOM’s more multilingual training data leads to a *wider spread* of capabilities across languages.

Unlike OPT-175B, BLOOM’s trained model weights are publicly [available](https://huggingface.co/bigscience/bloom/tree/main) to download from the HuggingFace website. HuggingFace is also [hosting](https://huggingface.co/bigscience/bloom) a public API for BLOOM. So access to BLOOM is even more open than for OPT-175B. Anyone with decent internet and 329 GB of storage can start downloading the weights immediately, without any request for access.[[25]](#fnut4rs0lybg8)

What resources are required to actually use GPT-3-like models?
==============================================================

| **Deployment scenario** | **Compute cost (USD)**[[26]](#fnjuf7452mik) | **Direct talent requirement**[[27]](#fnsnc5hx1hww) |
| --- | --- | --- |
| Generate 150 million English words by running one instance of the BLOOM model independently, for 24 hours. | 240 | One top 1% CS graduate that passed a course on natural language processing with deep learning, given three months of effort; or equivalent |
| Generate 150 million English words using GPT-3 via the OpenAI API. | 4000 | Negligible |
| Produce content equal in size to 1% of the average number of Tweets per day, for one year. Use instances of the BLOOM model running on cloud compute. | 160K (90% CI: 88K to 260K) | Five professional software developers that have worked with ML projects, and five ML engineers who know how to run language models over multiple GPUs. Labor cost: $250K |
| Use a hypothetical GPT-3-sized coding language model to improve one million software developers’ productivity by between 1% and 10%. | 2M (90% CI: 260K to 8.4M) | 15 professional software developers that have worked with ML projects, and five ML engineers who know how to run language models over multiple GPUs. Labor cost: $500K |
| Do the largest viable deployment of a GPT-3-like model (based on the above two scenarios, adjusted by other evidence).[[28]](#fno2cbsnyvm4) | 2.6M (90% CI: 950K to 6.2M) | [not estimated] |

*Table 2: Summary of deployment scenarios and the estimated requirements for them, explored in this section*

So far I have considered the ability to access (i.e., download and interact with) trained model weights of GPT-3-like models.
But if an actor has only downloaded some model weights, the cost and talent requirements to actually run inference with the model could be far from trivial.[[29]](#fnvlovv0t4nl8) Even further resources would be needed to deploy a model at a scale large enough to build a profitable business.

**An important question is: how large is the barrier to deploying a model impactfully, compared to training a model?** I measure how large the barrier is in terms of compute cost and talent requirements.

I will first consider this question in the case of a GPT-3-like model, BLOOM. My choice of BLOOM is not particularly special; it is just the only GPT-3-like model I was aware of (as of July 2022) that is open-source. Most of the analysis that follows depends merely on BLOOM’s basic Transformer architecture and number of parameters, so any similarly-sized language model can be substituted. My assumptions about what BLOOM is *capable* of are less defensible, but again, I think it serves as a basis for useful hypotheticals.

I think the most important version of the question is how large the barrier is to deploying *transformative* AI (TAI) systems. Nonetheless, asking this question of present-day AI systems still seems useful. In particular, **the answer to this question affects which actors will be incentivized to develop or gain access to AI systems, and to deploy those systems. In turn, some actors may gain insight and revenue from these systems, and thereby become more able to develop, access, and diffuse TAI in the future.**

One actor can build a model API and other software tools to make deployment easy for other actors
-------------------------------------------------------------------------------------------------

**I expect that most actors that use state-of-the-art AI systems for inference will do so via APIs provided by some other actor.** The AI system will be hosted by some organization, and users will make queries to the model via an API.[[30]](#fnrzcgtmf46b) The most prominent example of such an API today is the [OpenAI API](https://openai.com/api/).

If an actor merely wants to run inference with a model, there are strong incentives for using an API—the provider of the API handles all of the setup, running costs, and maintenance, while the user merely needs to make queries to the API to get outputs from the model.

**However, as I argued** [**above**](https://forum.effectivealtruism.org/posts/foptmf8C25TzJuit6/gpt-3-like-models-are-now-much-easier-to-access-and-deploy#One_actor_can_build_a_model_API_and_other_software_tools_to_make_deployment_easy_for_other_actors)**, there are still incentives to *not* use a particular model API.** An API inherently limits a user’s interaction with a model. A user can’t necessarily probe or fine-tune the model in any way they want, nor use the model for any application, due to content policies and content filtering.[[31]](#fnuw941nbc5ka) Furthermore, there are many models for which no API is released in the first place, but which some actors may want to replicate, such as the Chinchilla model from DeepMind ([Hoffmann et al., 2022](https://arxiv.org/abs/2203.15556)).

Suppose that the weights for some model become publicly available. This could happen via deliberate open publication by the original developer of the model, or by some other diffusion mechanism like replication or theft.
**Due to the incentives to use open-source models, it seems very likely that someone will also set up and open-source a convenient way to use that model.** There will very likely be enough actors in the world that at least one of them is motivated to do this. **At that point, many more people will be capable of running the model, either via an API or other tools that are open-sourced.**

BLOOM has an API with fully open access on this [page](https://huggingface.co/bigscience/bloom). I was able to create a HuggingFace account (only requiring an email address) and run inference with BLOOM immediately using the text box on the right side of the page. This interface is only a preview—HuggingFace has a more comprehensive and scalable [hosted inference API](https://huggingface.co/docs/api-inference/index) to make it easy to deploy the models that they host, including BLOOM. HuggingFace [offers](https://perma.cc/B4HU-FUVD) a $9/month plan that allows up to one million input characters for model inference per month. Again, given the convenience of APIs like this, I expect that most actors that use models will do so via APIs provided by other actors.

Even running a GPT-3-like model independently seems feasible for thousands of individuals
------------------------------------------------------------------------------------------

I have just argued that most actors that use models will do so via other actors’ APIs. However, **I still expect there to be cases of diffusion where some actors want to run a model independently.** By running “independently,” I mean that an actor downloads the model weights and runs the model on a server that they either directly rent or own. The incentive for independent deployment could arise because

1. There is no existing API for a model of interest, or
2. Existing APIs are not sufficient for some actors’ aims.

For (1), there might be no existing API because the model developer wants to limit diffusion of capabilities, or simply has no interest in providing wider access to the model. For (2), the actors in question could be startups that want to scale up to train and deploy their own models, or malicious actors that don’t want their malicious usage to be tracked (even if permitted) by an API.

For these reasons, **I think it’s useful to analyze who can run a model like BLOOM independently, even though there is in fact an openly accessible API for BLOOM. One can then apply a similar analysis to future models where there is a stronger incentive to run the model independently.**

By my calculations, the most compute-efficient way to run inference with BLOOM by renting compute from the cloud is to use one 8 x 80 GB A100 GPU node. Based on that, **running one instance of BLOOM independently would [cost](https://perma.cc/TTB9-B8TF) $10/hr** from the cloud provider Lambda, using a minimum three-month rent commitment. **A user could generate 150 million English words for $240 per day this way if running the model 24/7. For comparison, based on current [OpenAI API pricing](https://perma.cc/H79C-K3WP), using GPT-3 via the OpenAI API would generate this many words for $4000.**[[32]](#fnv1ysct0yoy)

I estimate that 5000 people in the world (90% CI: 100 to 45,000) have the talent required to run BLOOM independently this way. See [this appendix](https://forum.effectivealtruism.org/posts/foptmf8C25TzJuit6/gpt-3-like-models-are-now-much-easier-to-access-and-deploy#Appendix__Who_can_download_and_run_BLOOM_independently_) for the reasoning behind these estimates.
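As a sanity check on those headline numbers, here is the back-of-the-envelope arithmetic in a few lines of Python. All inputs are figures quoted above; nothing new is being estimated:

```python
WORDS_PER_DAY = 150e6  # words generated per day in both cases (from above)

# Self-hosted BLOOM: one 8x 80GB A100 node at $10/hr, running 24/7.
bloom_usd_per_day = 10 * 24
# GPT-3 via the OpenAI API: ~$4000 for the same number of words (from above).
openai_usd_per_day = 4000

for name, usd in [("BLOOM, self-hosted", bloom_usd_per_day),
                  ("GPT-3 via OpenAI API", openai_usd_per_day)]:
    print(f"{name}: ${usd:,.0f}/day = "
          f"${usd / WORDS_PER_DAY * 1e6:.2f} per million words")
# BLOOM, self-hosted: $240/day = $1.60 per million words
# GPT-3 via OpenAI API: $4,000/day = $26.67 per million words
```

So per word, the self-hosted route is roughly 17x cheaper—before accounting for the labor of setting up and maintaining the node.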
Deploying a GPT-3-like model to have significant impact on the world is probably one order of magnitude cheaper than training the model
----------------------------------------------------------------------------------------------------------------------------------------

I have just considered the constraints involved in running a single instance of BLOOM. But useful deployments of BLOOM could be much larger. To get a sense of how much the *largest* viable deployments of GPT-3-like models would cost, I consider two representative scenarios. The largest viable deployment is the deployment with the highest volume of model outputs that

1. Would be possible for at least one actor to do by now if they tried
2. Is worth the cost—not necessarily profit-wise, but in achieving the actor's goal. For example, a large-scale automated disinformation campaign may not generate revenue but may achieve a political goal.

### Scenario 1: automated text disinformation campaign (modeled as 1% of Twitter activity)

The first scenario is an automated text disinformation campaign.[[33]](#fnybgulc3axpq) A crude way to model this is producing content equal in size to 1% of the average number of Tweets per day, which I estimated as 670 million. Using BLOOM running on cloud compute to accomplish this, I estimate the cost would be $450 per day (90% CI: $240 to $720), or $160K per year (90% CI: $88K to $260K).[[34]](#fns64zwknfznl) For comparison, accomplishing this with GPT-3 via the OpenAI API (assuming it was allowed) would cost $5100 per day (90% CI: $3800 to $6600), or $1.9M per year (90% CI: $1.4M to $2.4M).

My best-guess estimate of the total compute cost to develop a GPT-3 replica from scratch was $13 million, with a 90% CI of $4 million to $36 million (see this [section](https://forum.effectivealtruism.org/posts/FANYsqzPM9Yht3KM2/the-replication-and-emulation-of-gpt-3#The_resources_required_to_produce_and_replicate_GPT_3)). So **even for a very large-scale sustained operation, such as generating text equivalent to 1% of global Twitter activity for one year, model training would be about a 100x larger constraint financially**. See [this appendix](https://forum.effectivealtruism.org/posts/foptmf8C25TzJuit6/gpt-3-like-models-are-now-much-easier-to-access-and-deploy#Appendix__Cost_of_producing_1__of_Twitter_activity_with_BLOOM) for supporting reasoning.

### Scenario 2: GPT-3-sized coding language model that is very commercially successful

The second scenario I consider is a GPT-3-sized coding language model that is very commercially successful. This means the model is used to improve one million software developers’ productivity by between 1% and 10%.[[35]](#fnu7x12fnuzh) I estimate the cost of this would be $2M per year (90% CI: $260K to $8.4M). So on my best guess, this cost is still an order of magnitude lower than the total development cost of the model at $13M. But this cost is *plausibly* close to the total development cost, given the upper bound is $8.4M. See [this appendix](https://forum.effectivealtruism.org/posts/foptmf8C25TzJuit6/gpt-3-like-models-are-now-much-easier-to-access-and-deploy#Appendix__Cost_to_run_a_GPT_3_size_coding_language_model_that_is_very_commercially_successful) for supporting reasoning.

### Talent requirements and labor cost

In terms of talent requirements, I’m confident that the level of talent required to train any machine learning model is basically sufficient to run inference with that model, because forward passes of the model are performed as part of the training process.
However, deploying a large language model at a commercially viable scale generally requires other areas of talent. These areas include more traditional software development skills to build APIs and host models on servers. That said, I expect that ML engineering talent is the bottleneck for deployment, because it is more scarce than other software engineering talent. Based on that, **my best guess is that large-scale deployment as in the above scenarios would approximately require**

1. **Five professional software developers that have worked with ML projects for the Twitter scenario**
2. **15 professional software developers that have worked with ML projects for the coding assistant scenario (because I imagine much more software infrastructure and maintenance is needed to serve 1 million users compared to just posting Tweets via bots)**
3. **Five ML engineers who know how to run language models over multiple GPUs (in both scenarios)**

I do not have a rigorous justification for these exact requirements; they are just the requirements that seem most intuitively plausible to me.[[36]](#fniv6ijc0o7gf)

Let's suppose this team is working on the project with a salary of $100k per person for the entire year. Then the one-year Twitter scenario above would cost the equivalent of 10 x 100k = $1M in talent, while the coding assistant scenario would cost $2M. However, I expect that the actual time spent on such a project would be closer to three months full-time equivalent. So **my final labor cost estimate is $250K for the Twitter scenario and $500K for the coding assistant scenario.**[[37]](#fng5qlgk5i2no)

### Combining the scenarios and accounting for other evidence to estimate the compute cost of the "largest viable deployment"

Combining the Twitter and coding assistant compute cost estimates in this [Guesstimate model](https://www.getguesstimate.com/models/21252?token=LQ4P39lWEmMO-H6-71bW9GjhhLeAKD9JceS7Fn9MTI8sgz49e-sxpb_8zZ6wKbxkbF4uz4ROy45yOjhuptWYlw), I get an overall estimate of $1.3M (90% CI: $230K to $5.2M). As a percentage of the GPT-3 replication cost estimate, this is 12% (90% CI: 1.3 to 56%).

As another line of evidence, I tried to find information on what percentage of the cost of machine learning applications as a whole is accounted for by inference rather than training. I found two sources estimating that 80-90% of the cost is for inference.[[38]](#fn8sgcxc3h8tk) However, those sources don't provide clear evidence or reasoning for those estimates, and they appear to be incentivized to give high estimates. Updating slightly for this evidence, **my overall estimate of the cost of the largest viable deployments of GPT-3-like models is 20% of the development cost (90% CI: 10 to 68%). Converting this back to dollars, I get $2.6M (90% CI: $1M to $6.6M).**[[39]](#fn3fhrtfo8uuk)

Putting this all together, **in my median scenario the development of a GPT-3-like model costs about 5 times more than the largest viable deployment of the model**.
But my confidence interval means there is a 5% chance that there are deployment scenarios which (a) cost more than 68% as much as developing the model, and (b) have a significant impact, such as improving one million software developers' productivity by a few percent. So **plausibly, for the highest-impact applications, the cost of deployment is almost as prohibitive as the cost of development.**

Barriers to deployment will decrease over time for actors that scale up commercially
------------------------------------------------------------------------------------

One consideration I haven't taken into account in the above analysis is the ability of actors to scale up via commercial revenue. Actors could deploy a model at a small but profitable scale, then use the resulting revenue to scale up, then deploy at a larger and more profitable scale, and so on in an amplifying feedback loop. This feedback loop can also have discontinuous jumps—if an actor has a moderately successful and promising application of AI, they might suddenly receive much more funding from investors. AI21 Labs is an example, reportedly raising $64M in funding in July 2022 and thereby almost doubling their total capital ([Wiggers, 2022](https://perma.cc/5LTP-Q44A)).

Having said that, the current leading AI developers can also set up this amplifying feedback loop, and have the biggest head start. So I think that leading developers are likely to maintain a steady (and perhaps an accelerating) lead this way. Because of this maintained lead, I think the number of actors that can afford to independently deploy future state-of-the-art models will most likely *not* increase significantly over time, even as smaller actors scale up.

Upshot: focus more on shaping development than deployment
---------------------------------------------------------

Above, I have argued that the development of GPT-3-like models is a much larger constraint than the deployment of models. I think there is more opportunity for the AI governance community to take advantage of the larger constraint on development than to make deployment more difficult. For example, diffusion can be limited by taking advantage of the large compute and talent requirements to train GPT-3-like models. Meanwhile, deployment seems much easier to do and more difficult to control. This is because the cost of even the largest viable deployments seems to be much smaller (about five times smaller, at my best guess).

Furthermore, the developers of models seem to be in the most convenient position to deploy those same models. This is because

1. There is significant overlap in the expertise required to develop and deploy.
2. The compute used for training the model is probably much more than is needed to deploy (again, based on my above analysis), so this compute can be reused.
3. The developer has the most control over the model initially, because they are the first to possess the model and can decide to keep it private.

For these reasons, **I think the AI governance community should prioritize limiting which actors can develop models over limiting which actors can deploy models.**

Appendix: Who can download and run BLOOM independently?
=======================================================

Cost: $240 for 150 million words
--------------------------------

For the number of words, see the "Actual throughput (tokens per day per GPU node)" cell in [this Guesstimate model](https://www.getguesstimate.com/models/21075), which estimates 200 million tokens per day.
Average words per token is 0.75 (see <https://perma.cc/T6M8-Q9BJ>), so 200M tokens corresponds to roughly 150M words. The cost per hour comes from the "Reserved pricing" for 8x NVIDIA A100 80GB GPUs from Lambda, listed here: <https://perma.cc/TTB9-B8TF>.

Most CS graduates could in principle afford the financial cost of $240 to run BLOOM for one day, but running BLOOM for a year (say) would then cost ~$90K, which would only be affordable for perhaps tens to hundreds of individuals.

Talent pool: Thousands of people
--------------------------------

Let's consider the minimum talent required to download BLOOM and run the model on a separate cloud compute server.[[40]](#fn7hr2s7ixjw4) I think this requirement is equivalent to a single top-one-percentile Computer Science graduate who has passed at least one course on natural language processing with deep learning, and who can spend three months full-time figuring out how to run the model. This is because a lot of the know-how to run the model is available on the internet, such that a strong machine learning background is *not* required to start with. For example, [EleutherAI's Discord server](https://discord.com/invite/zBGx3azzUn) would have a lot of relevant information and people willing to help. Tools such as [Hugging Face Accelerate](https://github.com/huggingface/accelerate) make it easier to use machine learning models with multiple GPUs (which seems to be required for models as big as BLOOM).

Besides that, I don't have special reasons to specify the requirement as a single top-one-percentile CS graduate with introductory machine learning experience spending three months trying. It is just a concrete-enough requirement that is intuitively plausible to me. I think that the people in this set are a useful indication of the actual set, because the two seem to overlap significantly. For instance, I'm confident that high-percentile CS graduates make up more than 20% of the actual set.

Reasoning for the calculation:

1. According to [datausa.io](https://datausa.io/profile/cip/computer-science-110701), there are currently about two million Computer Science graduates in the US workforce.
2. I'll assume that 1% of these graduates are in the top percentile, and that 1% of those top graduates have the requisite machine learning knowledge. Although machine learning seems to be a popular subject nowadays, many existing graduates would have graduated before machine learning was very popular, and fewer still would retain the knowledge through relevant work or continued study. My intuitive guess is that only 10% of the existing top graduates studied it, and only 10% of those have retained the requisite knowledge, hence 1%.
3. The US population is 335 million according to [Worldometer](https://www.worldometers.info/world-population/us-population/), while the [world](https://www.worldometers.info/world-population/) population is 8 billion.
4. So a crude estimate of people meeting the above talent requirement is: 2 million / 335 million \* 8 billion \* 1% \* 1% ~= 5000 (reproduced in the sketch after this list).
5. I think it's likely that any of these 5000 people would have good internet access and be able to rent at least one 8x A100 80 GB GPU server from a cloud provider such as [Lambda](https://perma.cc/TTB9-B8TF).
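The crude estimate in step 4 is easy to reproduce; here it is as a runnable sketch, using only the rough inputs from the list above:

```python
# Crude Fermi estimate of the worldwide pool of people able to run BLOOM
# independently, using the rough inputs from the reasoning above.

US_CS_GRADUATES = 2e6         # CS graduates in the US workforce (datausa.io)
US_POPULATION = 335e6         # Worldometer
WORLD_POPULATION = 8e9        # Worldometer
TOP_PERCENTILE = 0.01         # keep only top-one-percentile graduates
ML_KNOWLEDGE_FRACTION = 0.01  # 10% studied ML x 10% retained the knowledge

# Scale the US graduate count to the world population, then apply both filters.
talent_pool = (US_CS_GRADUATES / US_POPULATION) * WORLD_POPULATION \
    * TOP_PERCENTILE * ML_KNOWLEDGE_FRACTION

print(f"~{talent_pool:,.0f} people")  # -> ~4,776, i.e. roughly 5000
```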
One of these servers seems sufficient to run BLOOM because the server's total memory of 640 GB is much larger than the amount of memory taken up by the model weights of BLOOM, which is 329 GB.[[41]](#fnyt5dr9hynx9)

As a lower bound, it seems implausible that the number could be any lower than the total number of "infrastructure engineers" I counted in my case studies, which was 73 (see this [cell](https://docs.google.com/spreadsheets/d/1mYWA2V2aGe4PMRYfF2rwJ4sVcBteUzoyGl26S7LyM2s/edit#gid=605901592&range=B31) in the diffusion database). So I set an approximated lower bound at 100.

As an upper bound, it seems implausible that the number of people capable of running BLOOM exceeds the number of times the BLOOM repository (which includes the model weight files) has been downloaded. I could not find a total number of downloads, but the downloads in the past month (as of October 10, 2022) are reported at about 15,000 (see [repository page](https://perma.cc/XJ6C-JFBN)). Assuming the same number of downloads happened in the other two months since the repository was released in early July, that would make 45,000 downloads in total. The actual number may be higher because of a spike in interest in BLOOM in the first month after it was announced, but I find any significantly higher number too implausible given the technical difficulty of running a model as large as BLOOM. The number would also be close to this, at 50,000, if I instead chose 10% for one of the two 1% numbers in the "CS graduates" calculation above, which seems barely plausible.

So my overall estimate is 5000 with a 90% CI of 100 to 45,000.

Appendix: Cost of producing 1% of Twitter activity with BLOOM
=============================================================

See this [Guesstimate model](https://www.getguesstimate.com/models/21075) for calculations and reasoning.

[Buchanan et al. (2021, p. 58)](https://cset.georgetown.edu/wp-content/uploads/CSET-Truth-Lies-and-Automation.pdf) provide a point of comparison: "...creating enough content to equal in size to one percent of global Twitter activity would require hundreds of GPT-3s running 24/7 and would cost tens of millions of dollars per year." So my cloud-compute cost estimate ($160K) is about two orders of magnitude lower than theirs (~$10M). Their reasoning is not entirely clear, especially the calculation behind "hundreds of GPT-3s." However, they seem to make the following different assumptions:

1. The actor buys the hardware rather than renting the hardware from a cloud vendor.
2. A ~2x larger memory footprint for the model than in my estimate. This is likely based on using a FP32 number representation rather than the FP16 number representation which is now more common for large language models, including BLOOM (it says "Bf16 weights" at <https://huggingface.co/bigscience/bloom#speeds-sizes-times>, which refers to the bfloat16 number representation).
3. Using [V100](https://www.nvidia.com/en-gb/data-center/tesla-v100/) GPUs rather than the newer [A100](https://www.nvidia.com/en-us/data-center/a100/) GPUs
   1. V100 has ~3x lower peak throughput (125 teraflop/s vs. 312 teraflop/s)
   2. V100 has less than half the memory capacity (32 GB vs. 80 GB), therefore requiring ~2x the number of chips to fit the model in memory.

Based on the rough factors of difference in (2) and (3), I get 2 \* 3 \* 2 = 12x overall. So the two orders of magnitude difference seems mostly, but perhaps not entirely, explained by the difference in assumptions that I came up with.
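As a quick sanity check on that reconciliation, the rough factors above can be multiplied out and compared against the gap between the two estimates; framing the leftover part as a residual (e.g. buying rather than renting hardware) is my own gloss on the "perhaps not entirely explained" remark:

```python
# Reconciling my ~$160K/year cloud-compute estimate with Buchanan et al.'s
# ~$10M/year ballpark, using the rough factors of difference listed above.

my_estimate = 160e3      # USD/year: 1% of Twitter activity with BLOOM, rented A100s
their_estimate = 10e6    # USD/year: Buchanan et al. (2021) ballpark

fp32_vs_fp16 = 2         # ~2x memory footprint from FP32 weights
v100_vs_a100_speed = 3   # ~3x lower peak throughput on V100s
v100_vs_a100_memory = 2  # ~2x more chips needed to fit the model in memory

explained = fp32_vs_fp16 * v100_vs_a100_speed * v100_vs_a100_memory  # 12x
total_gap = their_estimate / my_estimate                             # ~62x

print(f"Explained by assumptions: {explained}x of a ~{total_gap:.0f}x gap")
print(f"Residual, e.g. owned vs. rented hardware: ~{total_gap / explained:.0f}x")
```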
Appendix: Cost to run a GPT-3-size coding language model that is very commercially successful
=============================================================================================

See this [Guesstimate model](https://www.getguesstimate.com/models/21076?token=xPXk-Zaytq5uQYzp3LMIfntKdhB2pUGGJvKXkLCABjaKifKwwShakBgnhF1OeCdy12xyRfhmNlJ-zNs9HpDwJQ) for calculations and reasoning.

Acknowledgements
================

![](https://res.cloudinary.com/cea/image/upload/v1672743172/mirroredImages/nc3JFZbqnzWWAPkmz/zlqaveyga1kcixrclwdt.png)

*This research is a project of* [*Rethink Priorities*](http://rethinkpriorities.org/)*. It was written by Ben Cottier. Thanks to Alexis Carlier, Amanda El-Dakhakhni, Ashwin Acharya, Ben Snodin, Bill Anderson-Samways, Erich Grunewald, Jack Clark, Jaime Sevilla, Jenny Xiao, Lennart Heim, Lewis Ho, Lucy Lim, Luke Muehlhauser, Markus Anderljung, Max Räuker, Micah Musser, Michael Aird, Miles Brundage, Oliver Guest, Onni Arne, Patrick Levermore, Peter Wildeford, Remco Zwetsloot, Renan Araújo, Shaun Ee, Tamay Besiroglu, and Toby Shevlane for helpful feedback. If you like our work, please consider* [*subscribing to our newsletter*](https://www.rethinkpriorities.org/newsletter)*. You can explore our completed public work* [*here*](https://www.rethinkpriorities.org/research)*.*

1. **[^](#fnref1qegtjk14he)**The 150 million words is somewhat arbitrary. The number came about in my estimate of how many tokens the BLOOM model could generate when running continuously on an 8x 80GB A100 GPU instance for 24 hours, at a typical hardware utilization rate.
2. **[^](#fnrefbq01963lk39)**That said, my intuition is that the number of people who will actually learn how to run and then use BLOOM independently for some research or application, at any point in time since BLOOM was released, is much lower. My 90% CI for that number is 10 to 1000. I expect that most people who use BLOOM will use an API rather than run it themselves.
3. **[^](#fnrefkjs5iy5lao)**Note that there are other (perhaps stronger) reasons to focus on the model development stage. Firstly, the forms of diffusion that help actors develop models push AI progress forward more than the forms of diffusion that help actors deploy models. Pushing AI progress forward is what shortens AI timelines and thereby increases AI existential risk. Secondly, a lot of AI existential risk comes from [misaligned power-seeking AI](https://arxiv.org/abs/2206.13353) rather than misuse by humans. I expect that reducing diffusion of deployment would have a smaller effect on this source of risk.
4. **[^](#fnref3zqtg8fn1l5)**The BLOOM announcement blog post states "We're finalizing an inference API for large-scale use even without dedicated hardware or engineering. In the meantime, for quick tests, prototyping, and lower-scale use, you can already [play with an early version](https://huggingface.co/bigscience/bloom) on the HF hub" ([BigScience, 2022](https://perma.cc/3GUR-TN3D)).
5. **[^](#fnref8g8o6wcz6be)**My confidence is based on (a) skimming the papers and/or blog posts for all GPT-3-like models in the diffusion [database](https://docs.google.com/spreadsheets/d/1akHMvcFcSrve9xwo-nRdedTzRz6Az_xxw5JZFAJvoig/edit?usp=sharing) for mention of model access; (b) the 20-billion parameter GPT-NeoX-20B model being "to the best of our knowledge, the largest dense autoregressive model that has publicly available weights at the time of submission" as of February 2022 ([Black et al., 2022](https://arxiv.org/pdf/2204.06745.pdf)); (c) none of the experts that I consulted with, nor papers that I looked at, mentioned other models that are both GPT-3-like and widely available for download. (I did not ask any experts about this directly, but several experts mentioned BLOOM and OPT, so it's likely that they would have also mentioned other widely-accessible models if they existed.) [YaLM](https://medium.com/yandex/yandex-publishes-yalm-100b-its-the-largest-gpt-like-neural-network-in-open-source-d1df53d0e9a6) from the Russian tech company Yandex is a possible exception (which was in fact known to me), but given that it has only 100 billion parameters, my guess is that it does not have comparable performance to GPT-3.
6. **[^](#fnrefhdjpjnaw1zk)**Throughout this sequence, "GPT-3" refers to the original 175-billion-parameter model that was first described in [Brown et al. (2020)](https://arxiv.org/abs/2005.14165) *unless* it is mentioned in the context of using the OpenAI API, which provides an updated version of the model.
7. **[^](#fnref3mtbx6r0t4f)**See [Shevlane (2022, p. 105)](https://www.governance.ai/research-paper/the-artefacts-of-intelligence-governing-scientists-contribution-to-ai-proliferation): a member of the OpenAI policy team told the author that "[researchers] can't make any changes to the underlying weights [of GPT-3]. They can't fine-tune it arbitrarily. They can't remove layers, they can't inspect the activations; they can't do all sorts of things."
8. **[^](#fnrefwb1b7jy4t6b)**See [Usage Guidelines](https://perma.cc/HSL5-QKFC) which describe the procedure for application review, and the content policy.
9. **[^](#fnreft2yfyp3uo6)**See [Shevlane (2022, p. 84)](https://www.governance.ai/research-paper/the-artefacts-of-intelligence-governing-scientists-contribution-to-ai-proliferation): "the [OpenAI] API is designed to prevent users from stealing GPT-3 [...] the API comes with usage quotas, which users must apply to increase."
10. **[^](#fnrefxpdvtjeoh5g)**The cost of millions of dollars is based on my training compute cost estimates for OPT-175B and BLOOM. See this [column](https://docs.google.com/spreadsheets/d/1mYWA2V2aGe4PMRYfF2rwJ4sVcBteUzoyGl26S7LyM2s/edit#gid=1986563325&range=AI2) in the diffusion database.
11. **[^](#fnrefka0rgwb2g5d)** An example of this is AI21 Labs with Jurassic-1-Jumbo, provided via AI21 Studio ([AI21 Labs, 2022](https://perma.cc/N8CS-ASAT)).
12. **[^](#fnrefznkhoujnbcg)**GPT-NeoX-20B does not meet my definition of a GPT-3-like model, but it still serves as an informative case study.
13. **[^](#fnref8jdrid0ibxg)**See [Zhang et al. (2022, p. 8)](https://arxiv.org/pdf/2205.01068.pdf): "Given our primary goal as a replication of GPT-3..."
14. **[^](#fnrefabv30i83qj)**See the note on [this cell](https://docs.google.com/spreadsheets/d/1mYWA2V2aGe4PMRYfF2rwJ4sVcBteUzoyGl26S7LyM2s/edit#gid=1986563325&range=BG11) in the diffusion database.
I have not investigated whether the lower performance is significant in terms of how useful the model is, and I lack the intuition to judge this at face value. [Zhang et al. (2022, p. 8)](https://arxiv.org/pdf/2205.01068.pdf) claim a "parity in performance for standard evaluation datasets used in the GPT-3 models," but I didn't find a clear statistical basis for this claim in the paper.
15. **[^](#fnref46reotjnpe9)**See [Zhang et al. (2022)](https://arxiv.org/pdf/2205.01068.pdf), Introduction, p. 1: "We are releasing all of our models between 125M and 66B parameters, and will provide full research access to OPT-175B upon request. Access will be granted to academic researchers; those affiliated with organizations in government, civil society, and academia; and those in industry research laboratories." The [form](https://docs.google.com/forms/d/e/1FAIpQLSe4IP4N6JkCEMpCP-yY71dIUPHngVReuOmQKDEI1oHFUaVg7w/viewform) to request model access includes fields for "Organization / Affiliation," "Intended Use," and "Previous related publications."
16. **[^](#fnrefiv5i6v611u)**Based on my estimated number of natural language processing researchers at top universities. I also estimate this number is less than the estimated number of applications that can be processed in one year. See this [Guesstimate model](https://www.getguesstimate.com/models/21061) for further details.
17. **[^](#fnrefv2jg9bh28y)**See [Zhang et al. (2022)](https://arxiv.org/pdf/2205.01068.pdf), Introduction, p. 1: "will provide full research access to OPT-175B upon request." I interpret this as making the OPT-175B trained model weight file(s) available for download to the requester.
18. **[^](#fnreflzs7xqmzzx)**See [access request form](https://docs.google.com/forms/d/e/1FAIpQLSe4IP4N6JkCEMpCP-yY71dIUPHngVReuOmQKDEI1oHFUaVg7w/viewform): "Subject to your compliance with the Documentation and Sections 2, 3, and 5, Meta grants you a non-exclusive, worldwide, non-transferable, non-sublicensable, revocable, royalty free and limited license under Meta's copyright interests to reproduce, distribute, and create derivative works of the Software solely for your non-commercial research purposes." Section 2 places restrictions on copying for certain purposes or copying without including the copyright, but it is not a total restriction.
19. **[^](#fnrefuotalwwzg4i)**The search can be roughly replicated at this [link](https://duckduckgo.com/?q=download+opt-175b+model&kp=1&t=ffab&df=2022-05-01..2022-10-05&ia=web), but I failed to obtain a working archived copy of the search.
20. **[^](#fnref11oz3h4km2mj)**I have not figured out when the API was released, but I only became aware of it in October 2022.
21. **[^](#fnref25xn3y462tv)**For the influence of Megatron-LM on BLOOM, see <https://huggingface.co/bigscience/bloom#model-architecture-and-objective>: the BLOOM model architecture is "Modified from Megatron-LM GPT2 (see paper, BLOOM Megatron code)". The BLOOM Megatron code (<https://github.com/bigscience-workshop/Megatron-DeepSpeed>) is "a detached fork of <https://github.com/microsoft/Megatron-DeepSpeed>, which itself is a fork of <https://github.com/NVIDIA/Megatron-LM>." The original Megatron-LM code was open-sourced to accompany [Shoeybi et al. (2019)](https://arxiv.org/pdf/1909.08053.pdf).
22. **[^](#fnrefdf0f22c1dg)**For the influence of GPT-2 on Megatron-LM, see [Shoeybi et al. (2019)](https://arxiv.org/pdf/1909.08053.pdf), Abstract, p.1: "...we train an 8.3 billion parameter transformer language model similar to GPT-2."
23. **[^](#fnrefqb8ginf6y7m)**For GPT-3 see the paper, p.14: "Although GPT-3's training data is still primarily English (93% by word count), it also includes 7% of text in other languages." For BLOOM, see [model card](https://huggingface.co/bigscience/bloom#languages): English is only 30.04% of the training data (presumably also measured by word count).
24. **[^](#fnrefure9pd2wv3l)**This is based on the following evidence. When I averaged the normalized accuracy on tasks that BigScience has evaluated for both BLOOM and OPT-175B, BLOOM and OPT-175B both achieved approximately 47% accuracy. OPT-175B, in turn, had 2% less accuracy on average compared to GPT-3, on the tasks that OPT-175B was evaluated on in [Zhang et al. (2022, p. 17)](https://arxiv.org/pdf/2205.01068.pdf). So this suggests that BLOOM is similarly worse than GPT-3 on those tasks. A big caveat to this is that the set of tasks that BigScience has evaluated for both BLOOM and OPT-175B seems far from comprehensive. See this Colab [notebook](https://colab.research.google.com/drive/1EVdI3iXXsfjMfdKYNPzk8XarY2qTJROt?usp=share_link) for the calculations and further explanation.
25. **[^](#fnrefut4rs0lybg8)**The 329 GB size was listed under "Checkpoint size" at [https://huggingface.co/bigscience/bloom#speeds-sizes-times](https://perma.cc/XJ6C-JFBN)
26. **[^](#fnrefjuf7452mik)**Compute cost estimates are just based on cloud compute prices, and exclude the cost of other hardware such as a laptop to set up the cloud computing instance.
27. **[^](#fnrefsnc5hx1hww)**By "direct" I mean the people and skills that are required to set up the model and keep the model running in the deployment setting, excluding people that maintain software dependencies (e.g. PyTorch), or people that give advice on how to do deployment.
28. **[^](#fnrefo2cbsnyvm4)**This means the deployment with the highest volume of model outputs that (a) would be possible for at least one actor to do by now if they tried; (b) is worth the cost—not necessarily in terms of financial revenue, but in achieving the actor's goal. See this [Guesstimate model](https://www.getguesstimate.com/models/21252?token=LQ4P39lWEmMO-H6-71bW9GjhhLeAKD9JceS7Fn9MTI8sgz49e-sxpb_8zZ6wKbxkbF4uz4ROy45yOjhuptWYlw) for calculations (the method is also explained in the main text).
29. **[^](#fnrefvlovv0t4nl8)**Inference means passing data into the model and obtaining an output. This is also known as a "forward pass" of the model.
30. **[^](#fnrefrzcgtmf46b)**By "hosted" I mean that the organization stores the model on a server, and runs the model on hardware that is owned or rented by the organization.
31. **[^](#fnrefuw941nbc5ka)**See for example the OpenAI API [Usage Guidelines](https://perma.cc/HSL5-QKFC) which describe the procedure for application review, and the content policy.
32. **[^](#fnrefv1ysct0yoy)**The listed price for Davinci (which is presumably some version of the 175-billion parameter GPT-3 model) is $0.02 per 1000 tokens. 1000 tokens is roughly 750 English words based on this [page](https://perma.cc/T6M8-Q9BJ). Therefore 150,000,000 words requires 150e6 \* 0.02 / 750 ~= $4000.
33. **[^](#fnrefybgulc3axpq)**Credit to [Buchanan et al. (2021)](https://cset.georgetown.edu/wp-content/uploads/CSET-Truth-Lies-and-Automation.pdf) section 4 (starting p. 55) for the inspiration for this scenario.
34. **[^](#fnrefs64zwknfznl)**Note that I am glossing over the actual capability of BLOOM to automate disinformation effectively. On this point (but substituting GPT-3 for BLOOM), [Buchanan et al.
(2021)](https://cset.georgetown.edu/wp-content/uploads/CSET-Truth-Lies-and-Automation.pdf) concluded that "although GPT-3 will not replace all humans in disinformation operations, it is a tool that can help them to create moderate- to high-quality messages at a scale much greater than what has come before." As I explained earlier, BLOOM seems less capable overall than GPT-3, so the quality of messages would generally be lower, or a human operator would need to spend more time ensuring the messages are of high enough quality.
35. **[^](#fnrefu7x12fnuzh)**Again, I am not accounting for the *likelihood* of a GPT-3-size coding language model being able to improve ~1 million software developers' productivity by 1-10%. However, I think this is plausible given that OpenAI Codex is an existing 20-billion-parameter model that is already being marketed as a tool to improve developer productivity ([OpenAI, 2022](https://perma.cc/HRH7-DSC4)). Intuitively, I think that users wouldn't be willing to adopt Codex (or tools building on Codex) in the long term if they didn't expect to get an overall productivity improvement of 1% or more.
36. **[^](#fnrefiv6ijc0o7gf)**After I made these estimates, I obtained a reference class estimate. The reference class was the team working on [GitHub Copilot](https://github.com/features/copilot), GitHub's code suggestion tool powered by OpenAI Codex, which is a 20-billion parameter language model trained on code. I searched the term "GitHub copilot" on LinkedIn, filtered by "People", and then reviewed the first 4 pages of results for people that appeared to be currently working as engineers or developers for GitHub Copilot (after the 4th page, the results did not seem relevant enough to be worth continuing). I found 4 ML or Research Engineers, and 8 Software or Data Engineers, making 12 people in total. I think it's most likely that this LinkedIn search underestimates the true number of contributors, due to false negatives. This estimate is close to my intuitive estimate, but it should be taken as weak evidence due to being one case with a limited methodology. See this [document](https://docs.google.com/document/d/1EgBbmmA8cQB6LM58WijykJG6cmXVkpx8azni2RJLDbQ/edit?usp=sharing) for more details on the method. Due to time constraints, I did not use this evidence to update my final estimate.
37. **[^](#fnrefg5qlgk5i2no)**The three months is just an intuitive estimate based on project durations in my 1.5 years of experience in software engineering at a company that deployed ML models.
38. **[^](#fnref8sgcxc3h8tk)**See [Leopold (2019)](https://www.hpcwire.com/2019/03/19/aws-upgrades-its-gpu-backed-ai-inference-platform/) (reports 80-90%) and [Barr (2019)](https://aws.amazon.com/blogs/aws/amazon-ec2-update-inf1-instances-with-aws-inferentia-chips-for-high-performance-cost-effective-inferencing/) (reports "up to 90%").
39. **[^](#fnref3fhrtfo8uuk)**See [Guesstimate model](https://www.getguesstimate.com/models/21252?token=LQ4P39lWEmMO-H6-71bW9GjhhLeAKD9JceS7Fn9MTI8sgz49e-sxpb_8zZ6wKbxkbF4uz4ROy45yOjhuptWYlw) for calculations.
40. **[^](#fnref7hr2s7ixjw4)**Note: there is already an API to run inference with BLOOM [here](https://huggingface.co/bigscience/bloom), but I think it's useful to consider the general case where an actor deploys independently on a separate server, with fewer limits on usage.
41. **[^](#fnrefyt5dr9hynx9)**See the [BLOOM model card](https://perma.cc/XJ6C-JFBN)—"Speeds, Sizes, Times" section.
5506f5aa-f6d4-415f-83fd-585341f4f0fb
trentmkelly/LessWrong-43k
LessWrong
Stephen Wolfram on AI Alignment

Joe Walker has a general conversation with Wolfram about his work and things and stuff, but there are some remarks about AI alignment at the very end:

> WALKER: Okay, interesting. So moving finally to AI, many people worry about unaligned artificial general intelligence, and I think it's a risk we should take seriously. But computational irreducibility must imply that a mathematical definition of alignment is impossible, right?
>
> WOLFRAM: Yes. There isn't a mathematical definition of what we want AIs to be like. The minimal thing we might say about AIs, about their alignment, is: let's have them be like people are. And then people immediately say, "No, we don't want them to be like people. People have all kinds of problems. We want them to be like people aspire to be."
>
> And at that point, you've fallen off the cliff. Because, what do people aspire to be? Well, different people aspire to be different and different cultures aspire in different ways. And I think the concept that there will be a perfect mathematical aspiration is just completely wrongheaded. It's just the wrong type of answer.
>
> The question of how we should be is a question that is a reflection back on us. There is no "this is the way we should be" imposed by mathematics.
>
> Humans have ethical beliefs that are a reflection of humanity. One of the things I realised recently is one of the things that's confusing about ethics is if you're used to doing science, you say, "Well, I'm going to separate a piece of the system," and I'm going to say, "I'm going to study this particular subsystem. I'm going to figure out exactly what happens in the subsystem. Everything else is irrelevant."
>
> But in ethics, you can never do that. So you imagine you're doing one of these trolley problem things. You got to decide whether you're going to kill the three giraffes or the eighteen llamas. And which one is it going to be?
>
> Well, then you realise to really answer that question to the best ability of huma
957d1a9f-9995-4ed9-bfc9-6954d8aa3200
trentmkelly/LessWrong-43k
LessWrong
Is That Your True Rejection? by Eliezer Yudkowsky @ Cato Unbound

A response essay written by Eliezer Yudkowsky posted at Cato Unbound for the issue Brain, Belief, and Politics:

Is That Your True Rejection? by Eliezer Yudkowsky

> Eliezer Yudkowsky suggests that the partial mutability of human traits is an auxiliary reason at best for Michael Shermer's libertarianism. Take that fact away, and Shermer's politics probably wouldn't go with it. Yudkowsky says that his own small-l libertarian tendencies come from the long history of government incompetence, indifference, and outright malevolence. These, and not brain science, are the best reasons for libertarians to believe what they do.
>
> Moreover, we make a logical error when we infer shares of causality from shares of observed variance; the relationship between nature and nurture is cooperative, not zero-sum. One thing, however, is clear: Human genetic variance is tiny, as indeed it must be for human beings all to constitute a single species. Environmental manipulation can only achieve so much in part because of this universal human inheritance.

The lead essay has been written by Michael Shermer:

Liberty and Science by Michael Shermer

> Michael Shermer discusses scientific findings about belief formation. Beliefs, including political beliefs, are usually the result of automatic or intuitive moral judgments, not rational calculations. One cluster of those intuitions presumes that human nature is malleable; these usually produce a liberal politics. Another group of intuitions presumes that human nature is static; these tend to produce conservatism. But Shermer argues that humans really fall somewhere in between — malleable, within some important limits. He argues that this set of findings should produce a libertarian politics.
c5f2b39a-7d35-4189-aaa9-84034e519c1a
trentmkelly/LessWrong-43k
LessWrong
Protecting agent boundaries

If the preservation of an agent's boundary is necessary for that agent's safety, how can that boundary/membrane be protected?

How agent boundaries get violated

In order to protect boundaries, we must first understand how they get violated. Let's say there's a cat, and it gets stabbed by a sword. That's a boundary violation (a.k.a. membrane piercing). In order for that to have happened, three conditions must have been met:

1. There was a sword.
2. The cat and the sword collided.
3. The cat wasn't strong enough to resist penetration from the sword.

More generally, in order for any existing membrane to be pierced, three conditions must have all been met:

1. There was a potential threat. (E.g., a sword, or a person with a sword.)
2. The moral patient and the threat collided.
3. The victim failed to adequately defend itself. (Because if the cat was better at self-defense — if its skin was thicker or if it was able to dodge — then it would not have been successfully stabbed.)

Protecting agent boundaries

Each of these three conditions then implies ways of preventing boundary violations (a.k.a. membrane piercing):

1. There was a potential threat.
   * → Minimize potential threats
2. There was a collision.
   * → Minimize dangerous collisions
   * → Predict and prevent collisions before they occur.
   * → Prevent collisions by putting distance between threats and moral patients.
   * → Prevent premeditated collisions by pre-committing to retribution.
3. The victim failed to defend itself.
   * → Empower the membranes of humans and other moral patients to be better at self-defense.

How human societies already try to solve this problem

As a helpful analogy, here are some examples of how modern human societies try to solve this problem:

Minimize potential threats

* Restrict access to weapons (e.g., nukes, bioweapons, etc.)
* Minimize potential perpetrators (e.g., some fictional societies predict and eliminate potential psychopaths).

Minimize dangerous
5c9d58b0-bd34-4e8c-a0d7-4fb0a146255f
StampyAI/alignment-research-dataset/blogs
Blogs
December newsletter

[Machine Intelligence Research Institute](http://intelligence.org)

[MIRI's winter fundraising challenge](https://intelligence.org/2014/12/01/2014-winter-matching-challenge/) has begun! Every donation made to MIRI between now and January 10th will be matched dollar-for-dollar, up to a total of $100,000!

[**Donate now**](https://intelligence.org/donate/) to double your impact while helping us raise up to $200,000 (with matching) to fund our research program.

**Research Updates**

* We've published [a new guide to MIRI's research](https://intelligence.org/2014/11/06/new-guide-miris-research/).
* [Three misconceptions in Edge.org's conversation on "The Myth of AI."](https://intelligence.org/2014/11/18/misconceptions-edge-orgs-conversation-myth-ai/)
* Video and more from Nick Bostrom's *Superintelligence* talk at UC Berkeley is [now available](https://intelligence.org/2014/11/06/video-bostroms-talk-superintelligence-uc-berkeley/).
* "Exploratory Engineering in AI" is [now available](https://intelligence.org/2014/08/22/new-paper-exploratory-engineering-artificial-intelligence/) without needing a *CACM* subscription.

**News Updates**

* If you're reading *Superintelligence* or following [the online reading group](http://intelligence.org/2014/08/31/superintelligence-reading-group/), **[please take this short survey](https://docs.google.com/forms/d/1P53uNnZY_suE5IXW8WLQtXGuipAkUfQHj1yN_D121X0/viewform)** if you haven't already.

**Other Updates**

* Our friends at the Center for Effective Altruism will pay you $1,000 if you introduce them to somebody new that they end up hiring for one of their [five open positions](http://lesswrong.com/lw/laf/the_centre_for_effective_altruism_is_hiring_to/).

As always, please don't hesitate to let us know if you have any questions or comments.

Best, Luke Muehlhauser Executive Director

The post [December newsletter](https://intelligence.org/2014/12/01/december-newsletter/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).
f5f40598-a41e-4006-84dc-b6ecd4718234
trentmkelly/LessWrong-43k
LessWrong
Failing to update You understand Bayes' Theorem. You enter into a situation with an intuitive "common sense" prior. You observe the situation, and then you shut up and multiply. And then you go to update, you compute the desired behavior to maximize utility... and some cognitive module buried in your brain says "no".  Example: I realize I need a physical examination. I have no rational reason to fear going to the doctor. I am, in fact, acutely aware that my fear of going to the doctor is based on a fear that they will find something wrong that I can't afford to fix, but the truth is already so. So I bite the bullet, make an appointment, and then at the scheduled time I get in the car and drive to the doctor's office. And then I just keep driving past the doctor's office, turn around and go home. I tell myself that if I'm not going, I should call the doctor to avoid a $100 no-show fee, but I don't. And then I get home, and I tell myself that that was dumb, and that I need to update my behavior - and that physically punishing myself for not going to the doctor is not an efficient use of my energy. So I punch my hand through a mirror. NOW, finally, I have an excuse to go to the doctor - so I wrap my hand in bandages and go back to bed, instead.   What do you do when your computed probabilities and utility function have NO EFFECT WHATSOEVER on your actual behavior?
56926e30-7c5b-43d3-be31-351a4e4e81a7
trentmkelly/LessWrong-43k
LessWrong
The Genie in the Bottle: An Introduction to AI Alignment and Risk I'm writing an article about the alignment problem and AI risk, and I'm sharing a draft with the fine folks here on LessWrong to elicit some feedback. I suspect most frequent readers here will already be familiar with the contents and arguments presented below, so bear in mind that this isn't really written for you guys, but for a broader "lay" audience. As I'm learning about AI risk and alignment, I'd like feedback on whether or not there are any obvious mistakes or untruths I should correct, whether I've mischaracterized any of the arguments, whether I've missed something obviously important and whether the overall structure works for the purpose of the article. Feedback on style and tone is also welcome. It's quite a lengthy article, so skimming is fine ;).  Thank you! * A note on claims: Throughout the article, I’ve referred to various ideas and claims, but I haven’t provided sources for all of them. This is on purpose. ---------------------------------------- Reasons why AIs might one day kill us all and why you should at least be a little worried. Introduction As AI systems become more advanced, concerns about their alignment with human values and interests become more pressing. AI alignment refers to the challenge of ensuring that AI systems behave as we would *intend* and do not harm humans, either accidentally or intentionally. The premise is that AI systems are unlike regular computer programs in that they can exhibit non-obvious, novel behaviors and generate solutions that may not be what humans would have intended, given that they are intelligent. At the heart of the AI alignment and AI risk discussions is the idea of creating generally intelligent autonomous agents, or more specifically, power-seeking advanced intelligence(s) with the capabilities that being intelligent confers. Advanced AIs are unlike other technologies like nuclear weapons because intelligence is more fundamental than specific technologies. All inventions and technologies are
f458adcf-1eca-4eb5-89bf-00378913e1c2
StampyAI/alignment-research-dataset/eaforum
Effective Altruism Forum
Announcing Insights for Impact Hey all! Are you wanting to follow EA research, but finding papers and longform forum posts too dry? Late last year, Jenna Ong and I noticed a lack of research-focused EA video content and decided to do something about it. Today, we are excited to introduce [Insights for Impact](https://www.youtube.com/@insightsforimpact), a YouTube channel that’s all about communicating the key insights of EA-aligned research papers. In our first video, [How Science Misunderstands Power](https://www.youtube.com/watch?v=yZXCfGqHsiY), we explore why well-meaning scientists failed to prevent nuclear proliferation in the 20th century. Perhaps by examining the history of nuclear weapon development, we may be able to better manage other powerful technologies, like AI and genetic engineering. A 2018 paper by [Samo Burja](https://samoburja.com/) and Zachary Lerangis, [The Scientists, the Statesman, and the Bomb](https://www.bismarckanalysis.com/Nuclear_Weapons_Development_Case_Study.pdf), served as the basis for this video. However, we also drew inspiration from [HaydnBelfield](https://forum.effectivealtruism.org/users/haydnbelfield)’s [post](https://forum.effectivealtruism.org/posts/cXBznkfoPJAjacFoT/are-you-really-in-a-race-the-cautionary-tales-of-szilard-and), especially their idea that the current headspace of the AI Safety community closely resembles the “this is the most important thing” mindset of scientists throughout the mid 20th century. From these case studies, it seems that both social and technical factors are crucial in ensuring powerful technologies have a positive impact. In future videos, we want to explore a range of EA-relevant cause areas. We’d love to collaborate with researchers to ensure we accurately portray their work. So if you’re a researcher who wants to give your work a voice outside of the forum, please [get in touch](mailto:insightsimpactcontact@gmail.com)!
61e9143a-6369-4481-8bac-942d18ca364a
trentmkelly/LessWrong-43k
LessWrong
Open Thread, May 18 - May 24, 2015 If it's worth saying, but not worth its own post (even in Discussion), then it goes here. ---------------------------------------- Notes for future OT posters: 1. Please add the 'open_thread' tag. 2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.) 3. Open Threads should be posted in Discussion, and not Main. 4. Open Threads should start on Monday, and end on Sunday.
df360195-17a1-4171-8e5f-85e18afccf05
trentmkelly/LessWrong-43k
LessWrong
Theory of Causal Models with Dynamic Structure? I'm looking for any work done involving causal models with dynamic structure, i.e. some variables in the model determine the structure of other parts of the model. I know some probabilistic programming languages support dynamic structure (e.g. the pyro docs mention it directly at one point). And of course one can always just embed the dynamic structure in a static structure (i.e. a model of any general-purpose computer), although that's messy enough that I'd expect it to create other problems. I haven't found much by quick googling (too many irrelevant things use similar terms) so I'd appreciate any pointers at all. At this point I've found basically-zero directly relevant work other than PPLs, and I don't even know of any standard notation.
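For concreteness, here is a minimal sketch (not from the post) of what dynamic structure looks like in a PPL like Pyro: a sampled variable determines how many other variables the model contains, so each draw can have a different causal graph. The particular distributions and sizes are arbitrary illustrations:

```python
import torch
import pyro
import pyro.distributions as dist

def dynamic_model():
    # A discrete latent variable whose value determines the model's structure:
    # it controls how many upstream cause variables exist at all.
    n = pyro.sample("n", dist.Categorical(torch.ones(5)))  # n in {0, ..., 4}

    # The variables x_0, ..., x_{n-1} only exist conditional on the value of n.
    total = torch.tensor(0.0)
    for i in range(int(n)):
        total = total + pyro.sample(f"x_{i}", dist.Normal(0.0, 1.0))

    # A downstream effect whose set of parents depends on the sampled structure.
    return pyro.sample("y", dist.Normal(total, 1.0))

print(dynamic_model())  # each prior draw may come from a different graph
```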
1d9fb482-835c-4703-ad2f-51375d6f61cc
StampyAI/alignment-research-dataset/alignmentforum
Alignment Forum
A short critique of Vanessa Kosoy's PreDCA

*This critique is an addendum to* [*my distillation*](https://www.lesswrong.com/posts/EFrJdhKPZXa4MA3Gr/vanessa-kosoy-s-predca-distilled?_ga=2.165436411.531524530.1668251277-1637474216.1662379051) *for* [*application problem 3*](https://docs.google.com/document/d/1NVVtdsfz7HiseVFSk3jYly4sPG4dG03wFFDrD8rBXU0/edit) *for Nate Soares and Vivek Hebbar's* [*SERI MATS*](https://www.alignmentforum.org/posts/iR4kGzrWEJpXJ39ZB/seri-mats-program-winter-2022-cohort) *stream. Reading my distillation is not required if you are familiar with PreDCA.*

***Disclaimer:** This is just my model of how Vanessa thinks and I might be misrepresenting some views. Furthermore, as mentioned below, I'm sure Vanessa is well aware of these issues, and plans on trying to solve those which constitute real obstacles. [Edit: See her comments* [*below*](https://www.lesswrong.com/posts/FhKkFcojhKZt7nHzG/a-short-critique-of-vanessa-kosoy-s-predca-1?commentId=uPnCaiMf6w4xTddaa)*]*

Vanessa (and Diffractor)'s work generally **feels different to other Alignment theory**, and I have usually attributed this to its radical focus on foundations (shared by some other researchers) and the complexity of its technical mathematical results (shared by few). But upon momentarily coarsening these fine technical details, and [presenting PreDCA more conceptually](https://www.lesswrong.com/posts/EFrJdhKPZXa4MA3Gr/vanessa-kosoy-s-predca-distilled?_ga=2.260759113.531524530.1668251277-1637474216.1662379051) in a language similar to that of other proposals, it becomes clear that it really is fundamentally different to them. As a consequence of Vanessa's opinions and approach, PreDCA has different objectives and hopes to most proposals.

PreDCA is fundamentally concrete. Yes, it still includes some "throw every method you have at the problem" (as in *Classification*). But the truly principal idea involves **betting everything on a specific mathematical formalization** of some instructions. This concreteness is justified for Vanessa because nothing short of a watertight solution to agent foundations will get us through Alignment[[1]](#fnqwgpuu7b66). Some AGI failure modes are obviously problems humanity would have to deal with in the long run anyway (with or without AGI), but on a time trial. And Vanessa is pessimistic about the viability of some intuitively appealing Alignment approaches that try to delegate these decisions to future humanity. In short, she believes some lock-in is inevitable, and so tries to find the best lock-in possible by fundamentally understanding agency and preferences.

Now, if PreDCA works, we won't get a naively narrow lock-in: the user(s)'s utility function will care about leaving these decisions to future humanity. But it will, in other subtler but very real ways, [**leave an indelible fingerprint on the future**](https://www.lesswrong.com/posts/DJRe5obJd7kqCkvRr/don-t-leave-your-fingerprints-on-the-future?_ga=2.88333279.531524530.1668251277-1637474216.1662379051). I'm certain Vanessa is aware of the implications, and I suppose she's just willing to take yet another chance (on this lock-in being broadly beneficial), since otherwise we have no chance at all. But still, PreDCA seems to me to downplay the extent to which we wouldn't be pointing at a well-established and comprehended minimal base of values that will keep us alive, but at extremely messy patterns and behaviors (which presuppose a vast amount of uncertainties as true).
Of course, the idea is that the correct theoretical enunciation of preferences will do away with the unwanted details. But there might seem to be too many and too systematic biases for even a very capable framework to tell apart obvious consensus from civilizational quirks, and the latter could have awful consequences.

Another (maybe unfair) consequence of PreDCA's concreteness is we can more easily find weak links in the conjunctive plan. The protocol is still work in progress, and so what I'm about to say might end up getting fixed in some retrospectively obvious way, but I now present some worries with the technical aspects of the proposal (which Vanessa is certainly aware of).

The method to find which computations are running in the real world doesn't actually do so. Consider, for instance, a program trying to prove a formal system consistent (by finding a model of it), and another one trying to prove it inconsistent (by deriving a contradiction from it). The outputs of these two programs are acausally related: if the first halts, the second will not, and conversely[[2]](#fngcwf9mh9m8t). Our AGI will acknowledge this acausal relation (and much more complicated ones), as Vanessa expects, since otherwise it won't know basic math and won't produce a correct world model. But this seems to fundamentally mess with the setup. If only one of these programs is running in the universe, counterfactually changing the output of the other will still change the physical universe. And so, as Vanessa mentions [in conversation with Jack](https://www.youtube.com/watch?v=N-fscpxMIos&list=PLntAW0eILBdWcOWK_GmQiRGxrLjrUIKJy&index=12&ab_channel=JackParker), we're not really checking which computations run, but which computations' outputs the universe has information about. This is a problem to fix, since we actually care about the former. But this quirk seems to me too inherently natural in the Infra-Bayesian framework. The whole point of such a framework is finding a canonical grounding for these theoretical concepts, from which our common sense ideas and preferences follow naturally, and so changing this quirk ad hoc, against the framework's simplicity, won't cut it. So we need a theoretical breakthrough on this front, and I find it unlikely. That is, I think any accommodation for this quirk will change the framework substantially (but then again, this is very speculative given the unpredictable nature of theoretical breakthroughs).

On a similar note, as Vanessa mentions, the current framework only allows for our AGI to give positive value to computations (so that it will always choose for more computations to be run), which is completely counter to our intuitions. And again, I feel like fixing this ad hoc won't cut it and we need a breakthrough.

The framework has another quirk around computations: the number of instances of a concrete computation run in a universe isn't well defined. This is in fact a defining property of the framework, and is clearly deeply related to the above paragraph (that's in part why I feel the above issue won't get easily solved). Now, Vanessa has valid philosophical arguments in favor of this being the case in the real world. But even provided she's right, to me these examples point towards a more worrying general problem about how we're dealing with computations. We are clearly **confused about the role of computations** at a fundamental level. And yet PreDCA architecturally implements a specific understanding of them.
Even if this understanding is the right one (even if future humanity would arrive at the conclusion that the intuitive "number of computations" doesn't matter ethically), **the user(s) will retain many of our ethical intuitions about computations**, which will conflict with the architecture. In the ideal case, the theoretical framework will perfectly deal with this issue and extrapolate as well as future humanity. But I'm not certain it has the right tools to handle this correctly. On the contrary, I feel like this might be another potential source of errors in the utility function inference/extrapolation due to the nature of the framework.

Other concerns can be raised that are more standard. In particular, Vanessa is aware that more ideas are needed to ensure *Classification* works. For example, my intuition is that **implementing a model of human cognitive science won't help much**: the model would need to be very precise to defend against the vast space of acausal attackers, since these can get as similar to humans as the contingent details of our AGI's hypothesis search permit. Furthermore, we can only implement computational properties of human cognition, and not physical ones (this would require solving ontology identification, and open up a new proxy to Goodhart).

This is actually a particular case of a more general (and my main) worry: **we might not have eradicated the need to prune a massive search**, and this will remain the main source of danger. **Even if all mesa-optimizers are acausal attackers, does that leave us much better off**, when our best plan for avoiding those is applying various approximate pruning mechanisms (as already happens in many other Alignment proposals)? PreDCA helps clearly with other failure modes, but maybe this hard kernel remains. In particular, our AGI's hypothesis search needs to be massive for it to converge on the right ones, which is necessary for other aspects of the protocol. And since our AGI's actions are determined by its hypotheses, **we might be searching a space as big as that of possible code for our AGI, which is the original problem**. Maybe pruning the hypothesis search for acausal attackers is way more tractable for some reason, but I don't see why we should expect that to be the case.

1. **[^](#fnrefqwgpuu7b66)**It's not surprising that these MIRI-like concerns have led to what is probably the most serious shot at making something similar to Coherent Extrapolated Volition viable.
2. **[^](#fnrefgcwf9mh9m8t)**Even if we only deal with provably halting programs, this scenario can be replicated.
a8e375fc-c9d4-4c4b-8618-98b0b6b3cf01
trentmkelly/LessWrong-43k
LessWrong
Cognitive Work and AI Safety: A Thermodynamic Perspective

Introduces the idea of cognitive work as a parallel to physical work, and explains why concentrated sources of cognitive work may pose a risk to human safety.

Acknowledgements. Thanks to Echo Zhou and John Wentworth for feedback and suggestions. Some of these ideas were presented originally in a talk in November 2024 at the Australian AI Safety Forum, slides for which are here: Technical AI Safety (Aus Safety Forum 24) and the video is available on YouTube. This post is the "serious" half of a pair; for the fun version see Causal Undertow.

Introduction

This essay explores the idea of cognitive work, by which we mean directed changes in the information content of the world that are unlikely to occur by chance. Just as power plants together with machines are sources of physical work, so too datacenters together with AI models are sources of cognitive work: every time a model helps us to make a decision, or answers a question, it is doing work. The purpose of exploring this idea is to offer an alternative ontology with which to communicate about AI safety, on the theory that more variety may be useful. The author used the ideas in this way at the recent Australian AI Safety Forum with some success.

Pushing the World to Extremes

Consider a sequence of events that recurs throughout technological history:

1. Humans discover a powerful natural phenomenon
2. Using this phenomenon requires creating extreme conditions
3. These extreme conditions can harm humans
4. Therefore, human safety requires control mechanisms proportional to the extremity

Nuclear reactors provide a natural example. The phenomenon is atomic fission, the extreme conditions are the high temperatures and pressures in reactors, and the safety challenge is clear: human biology cannot withstand such conditions. Hence the need for reaction vessels and safety engineering to keep the reaction conditions inside the vessel walls. Modern AI presents another example: we've discovered scaling laws that l
Wizards and prophets of AI [draft for comment]

*[Written for a general audience. You can probably skip the first section. Posted for feedback/comment before publication on The Roots of Progress. Decided not to publish as-is, although parts of this have been or may be used in other essays.]*

Will AI kill us all? That question is being debated seriously by many smart people at the moment. Following [Charles Mann](https://www.amazon.com/Wizard-Prophet-Remarkable-Scientists-Tomorrows-ebook/dp/B071D3CZG4), I'll call them the wizards and the prophets: the prophets think that [the risk from AI is so great](https://thezvi.substack.com/p/response-to-tyler-cowens-existential) that we should actively [slow](https://futureoflife.org/open-letter/pause-giant-ai-experiments/) or [stop](https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough/) progress on it; the wizards [disagree](https://scottaaronson.blog/?p=7174).

**Why even discuss this?**
--------------------------

(If you are already very interested in this topic, you can skip this section.)

Some of my readers will be relieved that I am finally addressing AI risk. Others will think that an AI apocalypse is classic hysterical pessimist doomerism, and they will wonder why I am even dignifying it with a response, let alone taking it seriously. A few reasons:

### **It's important to take safety seriously**

Safety is a value. New technologies really do create risk, and the more powerful we get, the bigger the risk. [Making technology safer is a part of progress](https://rootsofprogress.org/towards-a-philosophy-of-safety), and we should celebrate it. Doomer pessimism is generally wrong, but so is complacent optimism. We should be [prescriptive, not descriptive optimists](https://rootsofprogress.org/descriptive-vs-prescriptive-optimism), embracing [solutionism over complacency](https://www.technologyreview.com/2021/07/13/1028295/proud-solutionist-history-technology-industry/).

### **We shouldn't dismiss arguments based on vibes**

Or [mood affiliation](https://marginalrevolution.com/marginalrevolution/2011/03/the-fallacy-of-mood-affiliation.html), or who is making the argument, or what kind of philosophy they seem to be coming from. Our goal is to see the truth clearly. And the fact that doomer arguments have always been wrong doesn't mean that this one is.

### **The AI prophets are not typical doomers**

They are generally pro-technology, pro-human, and not fatalistic. Nor are they prone to authoritarianism; many lean libertarian. And their arguments are intelligent and thoroughly thought-out.

### **Many of the arguments against them are bad**

Many people (not mentioned in this post) are *not* thinking clearly and are being fairly sloppy. So I want to address this.

**The argument**
----------------

I boil it down to three main claims:

### **AI will become a superintelligent agent**

It will be far smarter than any human being, quantitatively if not qualitatively. And some forms of the AI will have goal-directed behavior. This does not require computers to be conscious (merely that they [be able to do things that right now only conscious beings can do](https://rootsofprogress.org/can-submarines-swim-demystifying-chatgpt)). It does not require them to have a qualitatively different form of "intelligence": it could be enough for them to be as smart as a brilliant human, able to read everything ever written and have perfect recall of it, able to think 1000x faster, able to fork into teams that work on things simultaneously, etc.
### **The AI's goals will not be aligned with ours**

This is [the principal-agent problem](https://rootsofprogress.org/four-lenses-on-ai-risks#as-an-agent-with-unaligned-interests) again. Whatever it is aiming at will not be exactly what we want. We won't be able to give it perfect instructions. We will not be able to train it to obey the law. We won't even be able to train it to follow basic human morality, like "don't kill everyone." This does not require it to have free will to choose its goals, or otherwise to depart from following the training we have given it. Like a [genie](https://www.lesswrong.com/posts/4ARaTpNX62uaL86j6/the-hidden-complexity-of-wishes) or a [monkey's paw](https://en.wikipedia.org/wiki/The_Monkey%27s_Paw), it might do exactly what we ask for, in a way that is not at all what we wanted—following the letter of our instructions, but destroying the spirit.

### **All our prevention and countermeasures will fail**

If we test AI in a box before letting it out into the real world, our tests will miss crucial problems. If we try to keep it in a box forever, it will talk its way out (and by the way, [we're not even trying to do that](https://openai.com/blog/chatgpt-plugins)). If we try to limit the AI's power, it will evade those limitations. If we try to turn it off, it will stop us. If we try to use some AIs as police to watch the other AIs, they will instead collude with each other and conspire against us. In fact, it might anticipate all of the above and conclude that the easiest path is just to launch a sneak attack on humanity and kill us all to get us out of the way.

And whatever happens might happen so fast that we don't get a chance to learn from failure. There will be no Hindenburg or [Tacoma Narrows Bridge](https://en.wikipedia.org/wiki/Tacoma_Narrows_Bridge_(1940)) or Chernobyl as a cautionary example. There will be no warning shot, no failed robot uprising. The very first time AI takes action against us, it will wipe us all out.

### **Analogies**

In "[Four lenses on AI risks](https://rootsofprogress.org/four-lenses-on-ai-risks)", I gave the analogy that AI might be like expansionary Western empires when they clashed with other civilizations, or like humans when they arrived on the evolutionary scene, wiping out the Neanderthals and hunting many megafauna to extinction. A related argument is that if you would be worried about an advanced alien civilization coming to Earth, you should worry about AI.

**What's different this time**
------------------------------

People have always been worried that new technologies would cause catastrophe. But so far, technology has done [far more good than harm overall](https://rootsofprogress.org/smart-rich-and-free). What might be different this time? Relatedly, why worry about AI instead of an asteroid impact, an antibiotic-resistant superbug, etc.?

The crux is *the power of intelligence.* Humans have been able so far to overcome every challenge because of the power of our intelligence. We can beat natural disasters: drought and famine, storm and flood. We can beat wild animals. We can beat bacteria and viruses. We can make cars, planes, drugs, and X-rays safe. Nature is no match for us because intelligence trumps everything. [David Deutsch says](https://rootsofprogress.org/books/the-beginning-of-infinity) that "anything not forbidden by the laws of nature is achievable, given the right knowledge." If AI goes rogue, we are for the first time up against an intelligent adversary.
We're not mastering indifferent nature; we're potentially up against something that has a world-model, that can create and execute plans. Arguably, the *more* optimistic you are about the ability of humans to overcome any challenge, the *more* worried you should be about any non-human thing gaining that same ability.

**The crux is epistemic**
-------------------------

Why do smart people disagree so much on this? Eliezer is [certain we are doomed](https://www.lesswrong.com/posts/j9Q8bRmwCgXRYAgcJ/miri-announces-new-death-with-dignity-strategy). Zvi thinks it's [very likely](https://thezvi.substack.com/p/response-to-tyler-cowens-existential). Scott Alexander gives it a [33% chance](https://astralcodexten.substack.com/p/mr-tries-the-safe-uncertainty-fallacy) (which means we still have a 2/3 chance to survive!). On the other hand, Scott Aaronson [implies](https://scottaaronson.blog/?p=7042) that his probability is under 2%; Tyler Cowen says that [we just can't know](https://marginalrevolution.com/marginalrevolution/2023/03/existential-risk-and-the-turn-in-human-history.html); [Pinker](https://marginalrevolution.com/marginalrevolution/2023/03/steven-pinker-on-existential-risk.html) is dismissive of all the arguments.

I think the deepest crux here is epistemological: how well do we understand this issue, how much can we say about it, and what can we predict?

The prophets think that, based on the nature of intelligence, the entire argument above is obviously correct. Most of the argument can be boiled down to a simple syllogism: the superior intelligence is always in control; as soon as AI is more intelligent than we are, we are no longer in control.

The wizards think that we are more in a realm of [Knightian uncertainty](https://en.wikipedia.org/wiki/Knightian_uncertainty). There are too many unknown unknowns. We can't make *any* confident projections of what will happen. Any attempt to do so is highly speculative. If we were to give equal weight to all hypotheses with equal evidence, there would be an epistemically unmanageable combinatorial explosion of scenarios to consider.

There is then a further disagreement about how to *talk* about such scenarios. Adherents of Bayesian epistemology want to put a probability on everything, no matter how far removed from evidence. Neo-Popperians like David Deutsch think that even suggesting such probabilities is irrational, that attempting inferences beyond the "reach" of our best explanations is unwarranted—appropriately, the term Popper used for this was "prophecy."

Eliezer thinks that [this is like orbital mechanics](https://twitter.com/ESYudkowsky/status/1632615617810563072): we see an asteroid way out in the distance, we calculate its trajectory, we know from physics that it is going to destroy the Earth.

**Why I'm skeptical of the prophecy**
-------------------------------------

Orbital mechanics is very simple and well-understood. The situation with AI is complex and poorly understood.

What could a superintelligence really do? The prophets' answer seems to be "pretty much anything." Any sci-fi scenario you can imagine, like "[diamondoid bacteria that infect all humans, then simultaneously release botulinum toxin](https://twitter.com/ESYudkowsky/status/1438198189782290433)." In this view, as intelligence increases without limit, it approaches omnipotence. But this is not at all obvious to me. The same view is behind the argument that all our prevention and countermeasures will fail: the AI will outsmart you, manipulate you, outmaneuver you, etc.
As Scott Aaronson [points out](https://scottaaronson.blog/?p=6823), this is a "fully general counterargument" to anything that might work.

When we think about Western empires or alien invasions, what makes one side superior is not raw intelligence, but the results of that intelligence compounded over time, in the form of science, technology, infrastructure, and wealth. Similarly, an unaided human is no match for most animals. AI, no matter how intelligent, will not start out with a compounding advantage.

Similarly, will we really have *no* ability to learn from mistakes? One of the prophets' worries is "[fast takeoff](https://www.lesswrong.com/tag/ai-takeoff)", the idea that AI progress could go from ordinary to godlike literally overnight (perhaps through "[recursive self-improvement](https://www.lesswrong.com/tag/recursive-self-improvement)"). But in reality, we seem to be seeing a "slow takeoff," as some form of AI has arrived and we actually have time to talk and worry about it (even though Eliezer [claims that fast takeoff has not yet been invalidated](https://twitter.com/ESYudkowsky/status/1626612735529357314)).

If some rogue AI were to plot against us, would it actually succeed on the first try? Even genius humans generally don't succeed on the first try at everything they do. The prophets think that AI can *deduce* its way to victory—the same way they think they can deduce their way to predicting such outcomes.

**Proceed, with caution**
-------------------------

We always have to act, even in the face of uncertainty—even Knightian uncertainty. We also have to remember that the potential advantages of AI are as great as its risks. If it is as powerful as its worst critics fear, then it is also powerful enough to give us abundant clean energy, cheap manufacturing and construction, fast and safe transportation, and the cure for all disease. Remember that no matter what, we're all going to die eventually, until and unless we cure aging itself.

If we did see an alien fleet approaching us, would we try to hide? If they weren't even on course for us, but were going to pass us by, would we stay silent, or call out to them? Personally, I would want to meet them and to learn from them. And yes, without some evidence of hostile intent on their part, I would risk our civilization to not pass up that defining moment.

Scott Aaronson [defines](https://scottaaronson.blog/?p=7042) someone's "Faust parameter" as "the maximum probability they'd accept of an existential catastrophe in order that we should all learn the answers to all of humanity's greatest questions," adding "I confess that my Faust parameter might be as high as 0.02." I sympathize.

None of the above means "damn the torpedoes, full speed ahead." Testing and AI safety work are all valuable. It is good to occasionally hold an [Asilomar conference](https://en.wikipedia.org/wiki/Asilomar_Conference_on_Recombinant_DNA). It's good to think through the safety implications of new developments before even working on them, as [Kevin Esvelt did for the gene drive](http://rationallyspeakingpodcast.org/261-dangerous-biological-research-is-it-worth-it-kevin-esvelt/). We can do ["reform" vs. "orthodox" AI safety](https://scottaaronson.blog/?p=6821). (And note that OpenAI spent several months [testing GPT-4](https://cdn.openai.com/papers/gpt-4-system-card.pdf) before its release.)

So, proceed with caution. But proceed.
SCP Foundation - Antimemetics Division Hub

The fictional book and stories by qntm, such as "Introductory Antimemetics", are an example of a setting where the only thing you can trust or rely on is your own rationality and that of the people around you.
Making a Crowdaction platform

In 2017 Eliezer Yudkowsky wrote:

> Coordination isn't as simple as everyone jumping simultaneously every time one person shouts "Jump!" For coordinated action to be successful, you need to trust the institution that says what the action should be, and a majority of people have to trust that institution, and they have to know that other people trust the institution, so that everyone expects the coordinated action to occur at the critical time, so that it makes sense for them to act too.

> That's why we have policy prediction markets and… there doesn't seem to be a word in your language for the timed-collective-action-threshold-conditional-commitment…hold on, this cultural translator isn't making any sense. "Kickstarter"? You have the key concept, but you use it mainly for making video games?

I thought this was very clever and was happy he broadcast this idea to a community of capable, well-off computer nerds. "I can't wait to see what this community cooks up," I thought.

We are currently halfway into 2020 and the dream has not yet been realized. Let's see where we stand.

CollAction

I've searched the web, and the closest thing I could find is CollAction. These people have gone ahead and built a functional website with actual projects on it. The makers of this site call these kinds of websites 'crowdacting websites'. Since Inadequate Equilibria didn't name this idea, I shall be calling it that from here on out. The site allows you to upload your own projects (with their approval) and allows you to join other projects. The visually clean layout shows how many people have already joined a project and what the desired threshold is. The projects have clear deadlines and goals, and once they are reached they are closed forever. I encourage you to check it out for yourself, since it's a good way to tickle your imagination about what such a website may look like in the future. While the site is very pretty, it's not made by this community and as far as I can tell also not use
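For concreteness, here is a minimal sketch of the core mechanism such a crowdacting site implements: the timed-collective-action-threshold-conditional-commitment from the quote above. This is my own illustrative sketch, not CollAction's actual code; all names here are hypothetical.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class CrowdactionProject:
    """A timed, threshold-conditional commitment (the 'Kickstarter' pattern)."""
    goal: str
    threshold: int      # minimum number of pledgers for the action to trigger
    deadline: datetime
    pledgers: set = field(default_factory=set)

    def join(self, user_id: str, now: datetime) -> None:
        """Pledge to act, conditional on enough others pledging too."""
        if now >= self.deadline:
            raise ValueError("This project is closed.")
        self.pledgers.add(user_id)

    def resolve(self, now: datetime) -> bool:
        """After the deadline, everyone acts iff the threshold was met."""
        if now < self.deadline:
            raise ValueError("This project is still open.")
        return len(self.pledgers) >= self.threshold
```

The key design property is that no one is asked to act unless the threshold is publicly met, which is exactly what makes joining rational even when unilateral action would be pointless. A real platform additionally needs identity verification, a publicly visible count, and some enforcement or reputation mechanism; the sketch captures only the conditional-commitment logic itself.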
My Current Thoughts on the AI Strategic Landscape

I started working at AI Impacts slightly less than a year ago. Before then, I was not following developments in either AI or AI safety. I do not consider myself a rationalist and did not engage with LessWrong before starting this job. While I have mostly been working on historical case studies,[1] I have gotten a close look at the AI safety community and the arguments therein. I live in a rationalist group house and work out of an AI safety office. I think I have about as informed an opinion on AI safety as is possible without doing a bunch of technical alignment research or being involved in the community for years.

Here are my current opinions on AI safety. Some of them may be wrong: I endorse being wrong more often if the alternative is not saying things of consequence.

This is presented as an organized list of my thoughts. There are arguments in my head justifying most of these, but I will not be spelling them out in detail here. I will link to more detailed arguments when they are available. If something is in italics, then I wrote the argument at that link, or intend to write about it in the future. This should be readable at any level of the list. If you want a quick overview, you can just read the top-level points, in bold. Or you can read some details, but not others. Or you can read everything.

I am mostly unconvinced by the classic story of AI risk.

1. Currently, AI is not very significant in the global economy / human society. In order to become impactful, the capabilities of AI will have to increase.
   1. AI capabilities are increasing rapidly now.
   2. It is not clear how much AI capabilities will need to increase in order for AI to become very impactful.
   3. Several ways of operationalizing extremely capable AI include whole brain emulation (WBE), artificial general intelligence (AGI), and transformative artificial intelligence (TAI).
2. Whole brain emulation seems impossible on a (classical) computer.
   1. Neurons / synapses are not