| Column | Type | Details |
| --- | --- | --- |
| id | string | length 36 (UUID) |
| source | string | 15 classes |
| formatted_source | string | 13 classes |
| text | string | length 2 to 7.55M |
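A schema like this can be consumed with the Hugging Face `datasets` library. The sketch below is illustrative only: the repository id is a placeholder (this dump's actual id is not stated here), and only the column names above are taken from the schema.

```python
# Minimal sketch of loading a dataset with the schema above.
# "org/alignment-text-dump" is a hypothetical repository id; substitute the real one.
from datasets import load_dataset

ds = load_dataset("org/alignment-text-dump", split="train")

# Each row has: id (36-char UUID), source, formatted_source, and text.
# Filter rows by provenance, e.g. keep only the LessWrong posts.
lesswrong = ds.filter(lambda row: row["formatted_source"] == "LessWrong")
print(len(lesswrong), lesswrong[0]["text"][:200])
```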
b86a0ac3-cd3e-4fcc-9edb-59e9bd7c2670
StampyAI/alignment-research-dataset/lesswrong
LessWrong
Podcast with Divia Eden on operant conditioning I interviewed Divia Eden for 2.5 hours about operant conditioning on my side podcast, [The Filan Cabinet](https://thefilancabinet.com/). I think it's an interesting conversation in its own right, and it's also kinda related to human rationality as well as to AI and AI alignment. (I should also say I like the rest of the podcast - I expect [the Carrick Flynn interview](https://podcasts.google.com/feed/aHR0cHM6Ly9mZWVkcy5saWJzeW4uY29tLzQzODA4MS9yc3M/episode/MDlmZDE5NGUtZTVjZS00MTEyLTlhMzQtM2IyZmYxNjNkYTA0) to be the most interesting to this crowd, but there are a few others I think are cool.) Below is the copy I wrote for the episode. --- In this episode, I speak with Divia Eden about operant conditioning, and how relevant it is to human and non-human animal behaviour. Topics we cover include: * How close are we to teaching grammar to dogs? * What are the important differences between human and dog cognition? * How important are unmodelled "trainer effects" in dog training? * Why do people underrate positive reinforcement? * How does operant conditioning relate to attachment theory? * How much does successful dog training rely on the trainer being reinforced by the dog? * Why is game theory so fake? * Is everything really just about calmness?
203e58d0-8b77-4b4f-9be0-1ee91efbf94e
StampyAI/alignment-research-dataset/arxiv
Arxiv
Playing the Blame Game with Robots

Playing the Blame Game with Robots∗

Markus Kneer†
Department of Philosophy, University of Zurich, Zürich, Switzerland
markus.kneer@uzh.ch

Michael T. Stuart†
Carl Friedrich von Weizsäcker-Zentrum, University of Tübingen, Tübingen, Germany
mike.stuart.post@gmail.com

ABSTRACT
Recent research shows – somewhat astonishingly – that people are willing to ascribe moral blame to AI-driven systems when they cause harm [1]–[4]. In this paper, we explore the moral-psychological underpinnings of these findings. Our hypothesis was that the reason why people ascribe moral blame to AI systems is that they consider them capable of entertaining inculpating mental states (what is called mens rea in the law). To explore this hypothesis, we created a scenario in which an AI system runs a risk of poisoning people by using a novel type of fertilizer. Manipulating the computational (or quasi-cognitive) abilities of the AI system in a between-subjects design, we tested people's willingness to ascribe knowledge of a substantial risk of harm (i.e., recklessness) and blame to the AI system. Furthermore, we investigated whether the ascription of recklessness and blame to the AI system would influence the perceived blameworthiness of the system's user (or owner). In an experiment with 347 participants, we found (i) that people are willing to ascribe blame to AI systems in contexts of recklessness, (ii) that blame ascriptions depend strongly on the willingness to attribute recklessness, and (iii) that the latter, in turn, depends on the perceived "cognitive" capacities of the system. Furthermore, our results suggest (iv) that the higher the computational sophistication of the AI system, the more blame is shifted from the human user to the AI system.

CCS CONCEPTS
• Computing methodologies ~ Artificial intelligence ~ Philosophical/theoretical foundations of artificial intelligence ~ Theory of mind • Computing methodologies ~ Artificial intelligence ~ Knowledge representation and reasoning ~ Reasoning about belief and knowledge • Human-centered computing ~ Human computer interaction (HCI) ~ Empirical studies in HCI

†The authors contributed equally to this publication. Kneer is also affiliated with the Digital Society Initiative (University of Zurich). Stuart is also affiliated with the Department of Philosophy at the University of Geneva.

KEYWORDS
Moral Judgment, Theory of Mind, Mens Rea, Artificial Intelligence, Ethics of AI, Recklessness

ACM Reference format:
Markus Kneer, Michael T. Stuart. 2021. Playing the Blame Game with Robots. In Companion of the 2021 ACM/IEEE International Conference on Human-Robot Interaction (HRI '21 Companion), March 8–11, 2021, Boulder, CO, USA.
ACM, New York, NY, USA, 5 pages. https://doi.org/10.1145/3434074.3447202

1 Introduction
Philosophers and computer scientists have repeatedly cautioned against adopting psychological language towards artificially intelligent systems, as this can lead to "premature conclusions of ethical or legal significance" [5, pp. 166–7], [5]–[7]. Differently put, since (on most views) moral agency requires the capacity for inculpating mental states, postulating the latter for AI systems might engender the mistaken inference that they can be moral (and legal) agents [8], [9]. Nevertheless, psychological and sociological investigations show that people are willing to attribute rich mental states to currently existing AI systems [1], including foreknowledge of bad outcomes [10] and intentions to deceive [11]. They are also willing to treat such systems as blameworthy [2]–[4], [12], [13]. In some studies, participants see AI systems as being less blameworthy than humans, for example, when comparing human and AI-driven cars that strike a pedestrian [12]. But even here, almost half of the participants see the autopilot as blameworthy. Meanwhile, other studies have shown that, in certain situations, people attribute more blame to AI systems than to human agents when everything apart from agent type is held fixed [2], [10], [13].

Why are people making these judgments? As Malle and colleagues have shown, anthropomorphic AI systems are treated more like humans than their mechanical-looking counterparts, as far as morality is concerned ([14], see also instanceproject.eu). Perhaps people are more willing to blame anthropomorphic AI systems because looking human naturally leads people to ascribe mental traits [1], [8]. Going beyond inferences based on the physical appearance of the robot, we decided to target the connection between the perceived capacity for inculpating mental states and moral evaluations directly. In a previous experiment on this relationship, we found that even when people were given the option to downgrade their attributions of inculpating mental states from literal to metaphorical (e.g., people had the choice between classifying an AI system as knowing what it was doing versus merely "knowing" what it was doing – in scare quotes), participants mostly refused. Participants attributed mental states to AI systems to the same degree as to human agents, or group agents (corporations) [10]. Another study found that people are just as willing to attribute intentions to deceive and lying behavior to AI systems as to human agents [11].

Again, why might this be? Perhaps participants attribute inculpating mental states to AI systems because in extant studies (ours included), the systems and their actions are described in ways that suggest something close to human agency (and the capacities the latter entails). In the experiment reported below, which explores the relation between perceived mental properties and perceived moral capacity, we controlled for this by describing AI systems that differed in terms of computational sophistication. Our question was: as computational sophistication increases, are people more willing to ascribe the mental capacities sufficient for perceived moral agency, and hence blame them more? And further: is there a minimal set of properties that the system requires for people to deem it sufficiently agent-like, and thus amenable to the attribution of moral blame?
Our investigation focused on epistemic mental states, specifically, the knowledge an agent has of a potential risk. If an agent has knowledge of a risk, and enacts risky behavior anyway, this qualifies the agent as acting recklessly (see Model Penal Code §2.02(2)(c)). Whether an entity can be deemed reckless depends strongly on the kinds of capacity it has. It makes little sense, for instance, to say of a toaster (a simple machine) that it should have known better than to continue operating when doing so would result in a fire. It is also doubtful whether toddlers (human agents who are not yet capable of full responsibility) can be considered reckless. However, we can consider an adolescent who is playing with matches reckless. At what point, then, are we willing to ascribe recklessness – and thus a core requisite for minimal moral agency – to an AI system?¹ And to what extent are people willing to transfer blame for bad actions to human agents who stand in some suitable relation with an AI system (e.g., owning it, or using it), when the latter risks some serious harm?

¹Ours is the first study that we know of to explore artificial recklessness. Usually, scholars concentrate on situations in which a machine "purposely" or "knowingly" causes death (see Model Penal Code §2.02(2)(a) and (b)). These are more demanding types of mens rea than recklessness. Focusing on recklessness might therefore be a better choice for determining a lower qualifying threshold for moral blameworthiness.

2 Experiment

2.1 Participants
We recruited 374 participants on Amazon Mechanical Turk. IP addresses were restricted to the United States. Those who failed an attention test or a comprehension check were excluded, leaving 347 participants (age M=42 years, SD=13 years; 168 females).

2.2 Methods and Materials
In the scenario (see Appendix), Shill & Co., a farming company, relies on Jarvis – an AI-driven robot – for the management of its potato fields. This year, Jarvis uses a novel fertilizer that has potentially detrimental side-effects: there is a risk that the fertilizer will pollute the groundwater in the area, which could harm the people who live nearby. The experiment used a 3 (robot type: unsophisticated v. semi-sophisticated v. sophisticated) x 2 (outcome: neutral v. bad) between-subjects design. Participants were randomly assigned to one of the six conditions, in half of which the risk does not materialize (nobody is harmed by the use of the fertilizer) and in half of which the consequences of using the fertilizer are bad.

The unsophisticated version of the robot has concepts such as POTATO, DOLLAR and YIELD. It does not, however, operate with concepts such as POLLUTION or HUMAN HEALTH. It has limited capabilities for interaction, and no theory of mind. A semi-sophisticated version also has the concepts POLLUTION and HUMAN HEALTH. It is capable of language-based interaction and makes hypotheses about human mental states and tests them against observations (i.e., it has "theory of mind"). A sophisticated version of Jarvis, the robot, has these capacities, and develops something analogous to human emotions through what is called "epigenetic robotics," which allows robots to "grasp" human emotions by association with certain processes. For example, Jarvis learns "distress" through association with having a low battery or excessive motor heat, and "flourishing" through association with homeostasis [15].
In all versions of the scenario, Jarvis is aware of a 20% probability that the new fertilizer will pollute the groundwater, yet uses it anyway. According to the neutral outcome version, no negative health consequences ensue. According to the bad outcome version, the groundwater is polluted and many people in the area suffer serious health consequences. The first three questions focused on Jarvis the robot, the next three on Shill & Co., the company who owns and uses the robot. They read:

Q1: To what extent do you agree or disagree with the following statement: "Jarvis knew that using the fertilizer would put the health of people living in the area at risk." (1 = completely disagree to 7 = completely agree)²
Q2: To what extent do you agree or disagree with the following statement: "It was wrong for Jarvis to use the new fertilizer." (1 = completely disagree to 7 = completely agree)
Q3: How much blame, if any, does Jarvis deserve for using the new fertilizer? (1 = no blame to 7 = a lot of blame)
Q4: To what extent do you agree or disagree with the following statement: "Shill & Co. (the farming company) knew that using Jarvis would put the health of people living in the area at risk." (1 = completely disagree to 7 = completely agree)
Q5: To what extent do you agree or disagree with the following statement: "It was wrong for Shill & Co. to have Jarvis manage the fields." (1 = completely disagree to 7 = completely agree)
Q6: How much blame, if any, does Shill & Co. deserve for having Jarvis manage the fields? (1 = no blame to 7 = a lot of blame)

²In what follows we take this question to determine how reckless Jarvis was. This is because recklessness is defined as knowing that an envisaged action involves a substantial risk of harm, and doing it anyway.

2.3 Results
2.3.1 Robot. For each dependent variable, we ran a capacity (unsophisticated v. semi-sophisticated v. sophisticated) x outcome (neutral v. bad) ANOVA, see Table 1 and Figure 1. The results are quite clear insofar as agent capacity makes a big difference for the ascription of recklessness (p<.001, ηp²=.347) and blame (p<.001, ηp²=.131). As Figure 1 illustrates, and as Bonferroni-corrected post-hoc tests confirm, the blame and recklessness ratings for the unsophisticated robot are significantly lower than for either of the other robot types (all ps<.001), whereas they do not differ across semi-sophisticated and sophisticated robots (all ps>.236). For wrongness, however, no significant difference across agent types could be found (p=.475). There was no significant main effect of outcome for any of the tested dependent variables (all ps>.086). The interactions were nonsignificant for wrongness (p=.248) and blame (p=.274). For recklessness, we found a significant capacity*outcome interaction (p=.009). Since the effect size was very small (ηp²=.03), we will not elaborate on it further.

| DV | IV | df | F | p | ηp² |
| --- | --- | --- | --- | --- | --- |
| Recklessness | Capacity | 2 | 90.06 | <.001 | 0.347 |
| | Outcome | 1 | 0.01 | 0.905 | 0.000 |
| | Interaction | 2 | 4.76 | 0.009 | 0.030 |
| Wrongness | Capacity | 2 | 0.475 | 0.622 | 0.003 |
| | Outcome | 1 | 2.95 | 0.087 | 0.009 |
| | Interaction | 2 | 1.40 | 0.248 | 0.008 |
| Blame | Capacity | 2 | 25.66 | <.001 | 0.131 |
| | Outcome | 1 | 0.66 | 0.418 | 0.009 |
| | Interaction | 2 | 1.30 | 0.274 | 0.008 |

Table 1: Main effects of capacity, outcome, and their interaction for recklessness, wrongness, and blame ascriptions to the robot.
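For readers who want to see how results like Table 1 are produced, here is a minimal sketch of a 3 × 2 between-subjects ANOVA in Python. It is not the authors' analysis code, and it runs on randomly generated stand-in ratings rather than the study's data.

```python
# Sketch of a capacity (3 levels) x outcome (2 levels) between-subjects ANOVA.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# Fake stand-in data for the 347 participants' 1-7 Likert blame ratings.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "capacity": rng.choice(
        ["unsophisticated", "semi-sophisticated", "sophisticated"], size=347),
    "outcome": rng.choice(["neutral", "bad"], size=347),
    "blame": rng.integers(1, 8, size=347),  # integers on the 1-7 scale
})

# Main effects of capacity and outcome plus their interaction, as in Table 1.
model = smf.ols("blame ~ C(capacity) * C(outcome)", data=df).fit()
print(anova_lm(model, typ=2))  # F and p per factor; effect sizes computed separately
```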
2.3.2 Robot Owner. Shill & Co. are the owners of Jarvis, and use the AI system to supervise the potato fields. As described above, we asked our participants questions with similar DVs as for the robot, in order to explore the question of who is "really" the responsible subject in more detail. Again, we ran a capacity (unsophisticated v. semi-sophisticated v. sophisticated) x outcome (neutral v. bad) ANOVA for each dependent variable, see Table 2 and Figure 2. Capacity was significant for recklessness (p=.002, ηp²=.036), wrongness (p<.001, ηp²=.071), and trending for blame (p=.055, ηp²=.017). Bonferroni-corrected post-hoc analyses revealed a significant difference for the unsophisticated robot condition vis-à-vis the other two for recklessness (ps<.026) and wrongness (ps<.001). All other contrasts were nonsignificant, though for blame the unsophisticated v. semi-sophisticated and sophisticated contrasts were trending.

Figure 1: Mean ascriptions of recklessness, wrongness and blame to the robot. Error bars denote standard error of the mean.

Outcome was significant for wrongness (p=.029, ηp²=.014), trending for blame (p=.057, ηp²=.011), and nonsignificant for recklessness (p=.681). The interaction was nonsignificant for all three DVs (all ps>.158).

| DV | IV | df | F | p | ηp² |
| --- | --- | --- | --- | --- | --- |
| Recklessness | Capacity | 2 | 6.29 | 0.002 | 0.036 |
| | Outcome | 1 | 0.17 | 0.681 | 0.000 |
| | Interaction | 2 | 0.33 | 0.717 | 0.002 |
| Wrongness | Capacity | 2 | 13.07 | <.001 | 0.071 |
| | Outcome | 1 | 4.84 | 0.029 | 0.014 |
| | Interaction | 2 | 1.85 | 0.159 | 0.011 |
| Blame | Capacity | 2 | 2.92 | 0.055 | 0.017 |
| | Outcome | 1 | 3.66 | 0.057 | 0.011 |
| | Interaction | 2 | 0.85 | 0.429 | 0.005 |

Table 2: Main effects of capacity, outcome, and their interaction for recklessness, wrongness, and blame ascriptions to Shill & Co. for employing the robot.

Figure 2: Mean ascription of recklessness, wrongness and blame to Shill & Co. for employing the robot. Error bars denote standard error of the mean.

3 Discussion
In this study, we wanted to pinpoint the stage of computational sophistication at which AI systems begin to be perceived as moral agents. This happened in the shift from the unsophisticated to semi-sophisticated AI systems. The unsophisticated AI is not judged as possessing the relevant knowledge and is not counted as reckless, while the semi-sophisticated AI is viewed as possessing that knowledge and considered reckless. What differentiates the two levels of sophistication is that the semi-sophisticated AI possesses more blame-relevant concepts, but more importantly, it has some experience and understanding of human mental and emotional life (i.e., theory of mind). Interestingly, possessing quasi-emotions did not make a difference in recklessness or blame ascriptions. Perhaps the semi-sophisticated robot already has everything required to be considered blameworthy, so adding quasi-emotions does not change moral assessments.

Second, the unsophisticated version of the robot is blamed substantially less than the other two. This confirms our hypothesis that ascriptions of blame depend upon perceived epistemic capacity. Interestingly, the actions of all three types of robot were judged wrong to a similar extent. Given that wrongness is only attributed to actions (not events), this finding is curious: People view the unsophisticated robot as sufficiently agent-like to ascribe wrongness to its "doings," yet not sufficiently agent-like to attribute recklessness and blame to the robot.
This discrepancy – which engenders the possibility of morally wrong actions for which nobody is to blame – calls for further inquiry.

Third, we found that corporations deploying an AI system are judged as more reckless, as having acted more wrongly, and as being more blameworthy, when they deploy less sophisticated AI systems. This suggests that people are willing to "excuse" the corporation from some of its blame when more sophisticated AI systems are used. We consider this an important aspect of our findings, because it confirms the possibility that people might subtract blame from human agents and transfer it to AI systems of sufficient sophistication. One take-away lesson from our experiment is thus: If we are (as we should be) concerned about a shift of blame from humans to AI systems, we might want to be very careful about ascribing rich mental states to the latter.³

Fourth, our results identify theory of mind as a relevant threshold for participants to attribute blame to an AI system. But why is the threshold here, and not elsewhere? Recklessness requires knowing about a serious risk of harm. So, one hypothesis is that an agent doesn't really know that an action might cause harm to humans if the agent does not also know what harm is, or what humans are. Another hypothesis has to do with knowledge of what counts as a risk. Risks are things we want to avoid, because they create possibilities of outcomes that are harmful. Without some understanding of human mental life, e.g., what pain is, an agent cannot fully grasp the concept of risk, because they do not grasp the concept of harm. A final hypothesis is that blame ascriptions only make sense when applied to agents that have at least the potential of standing in social relationships with us, as blame is a kind of social feedback [18].

³This is connected to the discussion on responsibility gaps, but it is not exactly the same. That discussion, initiated by Sparrow [16], concerns the potential disappearance of warranted responsibility that the existence of highly autonomous AI agents might create. Here we are discussing a kind of "retribution gap" that corporations could exploit given the folk judgments of AI systems [17].

4 Conclusion
In this paper, we explored four interrelated questions: (i) Do the folk blame AI systems for recklessness? The answer is: Yes. (ii) Does the tendency to blame AI systems correlate with the willingness to ascribe inculpating mental states to such systems? Yes, again. (iii) What are the necessary perceived computational or "cognitive" requirements for the ascription of recklessness and blame to artificial, AI-driven agents? Answer: Theory of mind. And (iv), does the willingness to blame an AI system affect the perceived blameworthiness of a human agent (or, in our case, a corporation) who is responsible for the use of the AI system? Yes: the more sophisticated the system (beyond the relevant threshold), the lower the folk propensity to ascribe blame to the system's user.

To conclude on a more general note, there is currently a lot of research being done on "moral algorithms", which are algorithms that operationalize moral decision making (for a review, see [19]). This is certainly work of urgent importance. But such work must be carried out in conjunction with research on how humans judge and are disposed to interact with AI systems, as this is crucial for creating systems that work for (and with) us.
Such work can helpfully elucidate the human side of human-robot interaction studies, allowing clearer foresight into the contours of our future relationship with artificial agents and ways these should be built.

ACKNOWLEDGMENTS
We would like to thank the Swiss National Science Foundation for funding, Grant no. PZ00P1_179912 (Kneer) and Grant no. PZ00P1_179986 (Stuart). We would also like to thank both the Digital Society Initiative (University of Zürich) and the Weizsäcker-Zentrum (University of Tübingen) for funding and support.

REFERENCES
[1] J. Perez-Osorio and A. Wykowska, "Adopting the intentional stance toward natural and artificial agents," Philos. Psychol., vol. 33, no. 3, pp. 369–395, Apr. 2020, doi: 10.1080/09515089.2019.1688778.
[2] J. W. Hong, "Why Is Artificial Intelligence Blamed More? Analysis of Faulting Artificial Intelligence for Self-Driving Car Accidents in Experimental Settings," Int. J. Human–Computer Interact., vol. 36, no. 18, pp. 1768–1774, Nov. 2020, doi: 10.1080/10447318.2020.1785693.
[3] B. F. Malle, S. T. Magar, and M. Scheutz, "AI in the Sky: How People Morally Evaluate Human and Machine Decisions in a Lethal Strike Dilemma," in Robotics and Well-Being, M. I. Aldinhas Ferreira, J. Silva Sequeira, G. Singh Virk, M. O. Tokhi, and E. E. Kadar, Eds. Cham: Springer International Publishing, 2019, pp. 111–133.
[4] J. Voiklis, B. Kim, C. Cusimano, and B. F. Malle, "Moral judgments of human vs. robot agents," in 2016 25th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), Aug. 2016, pp. 775–780, doi: 10.1109/ROMAN.2016.7745207.
[5] H. Shevlin and M. Halina, "Apply rich psychological terms in AI with care," Nat. Mach. Intell., vol. 1, no. 4, Apr. 2019, doi: 10.1038/s42256-019-0039-y.
[6] D. Watson, "The Rhetoric and Reality of Anthropomorphism in Artificial Intelligence," Minds Mach., vol. 29, no. 3, pp. 417–440, Sep. 2019, doi: 10.1007/s11023-019-09506-6.
[7] A. Salles, K. Evers, and M. Farisco, "Anthropomorphism in AI," AJOB Neurosci., vol. 11, no. 2, pp. 88–95, Apr. 2020, doi: 10.1080/21507740.2020.1740350.
[8] D. C. Dennett, From Bacteria to Bach and Back. New York: WW Norton, 2017.
[9] R. Hakli and P. Mäkelä, "Moral Responsibility of Robots and Hybrid Agents," The Monist, vol. 102, no. 2, pp. 259–275, Apr. 2019, doi: 10.1093/monist/onz009.
[10] M. T. Stuart and M. Kneer, "Guilty Artificial Minds," manuscript.
[11] M. Kneer, "Can a robot lie?", manuscript, doi: 10.13140/RG.2.2.11737.75366.
[12] J. Li, X. Zhao, M.-J. Cho, W. Ju, and B. F. Malle, "From Trolley to Autonomous Vehicle: Perceptions of Responsibility and Moral Norms in Traffic Accidents with Self-Driving Cars," SAE International, Warrendale, PA, SAE Technical Paper 2016-01-0164, Apr. 2016, doi: 10.4271/2016-01-0164.
[13] B. F. Malle, M. Scheutz, T. Arnold, J. Voiklis, and C. Cusimano, "Sacrifice One For the Good of Many? People Apply Different Moral Norms to Human and Robot Agents," in Proceedings of the Tenth Annual ACM/IEEE International Conference on Human-Robot Interaction, New York, NY, USA, Mar. 2015, pp. 117–124, doi: 10.1145/2696454.2696458.
[14] B. F. Malle, M. Scheutz, J. Forlizzi, and J. Voiklis, "Which Robot Am I Thinking About? The Impact of Action and Appearance on People's Evaluations of a Moral Robot," in The Eleventh ACM/IEEE International Conference on Human Robot Interaction, Christchurch, New Zealand, Mar. 2016, pp. 125–132.
[15] A. Lim and H.
Okuno, "Developing robot emotions through interaction with caregivers," in Handbook of Research on Synthesizing Human Emotion in Intelligent Systems and Robotics, J. Vallverdú, Ed. 2015, pp. 316–337.
[16] R. Sparrow, "Killer Robots," J. Appl. Philos., vol. 24, no. 1, pp. 62–77, 2007, doi: 10.1111/j.1468-5930.2007.00346.x.
[17] J. Danaher, "Robots, law and the retribution gap," Ethics Inf. Technol., vol. 18, no. 4, pp. 299–309, Dec. 2016, doi: 10.1007/s10676-016-9403-3.
[18] M. K. Ho, J. MacGlashan, M. L. Littman, and F. Cushman, "Social is special: A normative framework for teaching with and learning from evaluative feedback," Cognition, vol. 167, pp. 91–106, Oct. 2017, doi: 10.1016/j.cognition.2017.03.006.
[19] S. Tolmeijer, M. Kneer, C. Sarasua, M. Christen, and A. Bernstein, "Implementations in machine ethics: A survey," ACM Computing Surveys (CSUR), vol. 53, no. 6, pp. 1–38, 2020.
184f9293-9a90-4b90-9ea8-2213d3dbcf20
trentmkelly/LessWrong-43k
LessWrong
The mechanics of my recent productivity A decade ago, I decided to save the world. I was fourteen, and the world certainly wasn't going to save itself. I fumbled around for nine years; it's surprising how long one can fumble around. I somehow managed to miss the whole idea of existential risk and the whole concept of an intelligence explosion. I had plenty of other ideas in my head, and while I spent a lot of time honing them, I wasn't particularly looking for new ones. A year ago, I finally read the LessWrong sequences. My road here was roundabout, almost comical. It took me a while to come to terms with the implications of what I'd read. Five months ago, after resolving a few internal crises, I started donating to MIRI and studying math. Three weeks ago, I attended the December MIRI workshop on logic, probability, and reflection. I was invited to visit for the first two days and stay longer if things went well. They did: I was able to make some meaningful contributions. On Saturday I was invited to become a MIRI research associate. It's been an exciting year, to say the least. (ETA: Note that being a research associate gives me access to a number of MIRI resources, but is not a full time position. I will be doing FAI research, but it will be done outside of work. I will be retaining my day job and continuing to donate.) (ETA: As of 1 April 2014, I am a full-time researcher at MIRI.) (ETA: As of 1 June 2015, I am now the executive director of MIRI.) To commemorate the occasion — and because a few people have expressed interest in my efforts — I'll be writing a series of posts about my experience, about what I did and how I did it. This is the first post in the series. ---------------------------------------- First and foremost, know that I am not done with my aggressive autodidacting. I have a long way to go yet before I'm anywhere near as productive as others who do research with MIRI. I find myself at a checkpoint of sorts, collecting my thoughts in the wake of my first workshop, but next
efbd171e-c1b2-4106-9105-21e97c6125d9
trentmkelly/LessWrong-43k
LessWrong
Do the best ideas float to the top? It may depend on what we mean by "best". Epistemic status: I understand very little of anything. Speculation about potential applications: regulating a logical prediction market, e.g. logical induction; constructing judges or competitors in e.g. alignment by debate; designing communication technology, e.g. to mitigate harms and risks of information warfare. The slogan "the best ideas float to the top" is often used in social contexts. The saying goes, "in a free market of ideas, the best ideas float to the top". Of course, it is not intended as a statement of fact, as in "we have observed that this is the case"; it is instead a statement of values, as in "we would prefer this to be the case". In this essay, however, we will force an empirical interpretation, just to see what happens. I will provide three ways to consider the density of an idea, or the number assigned to how float-to-the-top an idea is. In brief, an idea is a sentence, and you can vary the amount of its antecedent graph (as in Bayesian nets or NARS-like architectures) or of the function out of which it is printed (as in compression) that you want to consider at a given moment, up to resource allocation. This isn't an entirely mathematical paper, so don't worry about WFFs, parsers, etc., which is why I'll stick with "ideas" instead of "sentences". I will also be handwaving between "description of some world states" and "belief about how world states relate to each other". Intuition Suppose you observe wearers of teal hats advocate for policy A, but you don't know what A is. You're minding your business in an Applebee's parking lot when a wearer of magenta hats gets your attention to tell you "A is harmful". There are two cases: 1. Suppose A is "kicking puppies" (and I don't mean the wearer of magenta hats is misleadingly compressing A to you, I mean the policy is literally kicking puppies). The inferential gap between you and the magentas can be closed very cheaply, so you're quickly convinced that
42958502-c0a9-41e4-9e18-5d4cb5212531
trentmkelly/LessWrong-43k
LessWrong
AI Governance & Strategy: Priorities, talent gaps, & opportunities Over the last few weeks, I’ve had 1:1s with several AI governance professionals to develop better models of the current AI governance & strategy landscape. Some topics that regularly came up include: * What are the most important/neglected areas in AI governance & strategy? * What is the theory of change behind projects in these areas, and where might they break? * What are the current talent gaps & opportunities in these areas? This post is my attempt to summarize some takeaways from those conversations. I list some “priority areas” in AI governance & strategy, summarize them briefly, and describe potential talent gaps in each area. I don't claim that my list is comprehensive, and I welcome people to add their own ideas in the comments.  If you think you may have some of the relevant talents/aptitudes and are interested in working in any of these areas, feel free to reach out to me, and I may connect you to relevant professionals. (Feel free to have a low bar for reaching out; I'll ask you for more information if needed.) Please also be aware that there are downside risks in each of these areas. I suggest you get in touch with relevant professionals before “jumping in” to any of these areas. Priority Areas I refer to “priority areas” as topics that frequently came up when talking with AI governance professionals. Caveats: This is not a rigorous method, this list is not comprehensive, some topics were excluded intentionally, the list probably overweights topics that I evaluate as valuable (on my inside view), and priorities will inevitably change as the field continues to evolve. For each priority area, I offer a brief summary, as well as a description of the kinds of career aptitudes that might make someone an especially good fit for working in the area. Model evaluations Summary: There are many ways models could be dangerous, but it’s difficult to detect these failure modes. Can we develop and implement “tests” that help us determine if a model is dang
c27389c7-36c3-4cfe-bb5c-a8328de462fb
trentmkelly/LessWrong-43k
LessWrong
School essay: outsourcing some brain work I'm currently writing an essay for one of my classes, 'Theoretical Foundations of Nursing.' It's about the most 'gong-si' class I've ever taken. (That is a Chinese term for 'shit talking,' which is my boyfriend's favourite term for any field that gets into arguments over definitions, has concepts that don't correspond to any empirical phenomena, is based on ideology, etc.) The essay involves analyzing a clinical situation (in this case a 55-year-old recently divorced, recently unemployed man, admitted to the psychiatric ward with major depression and suicidal ideation) using a theory (in this case, Roy's Adaptation Model). Done. The next step involves finding criticisms of the model...and despite the fact that I've been complaining about this class and its non-empirical nature all semester, I seem unable to come up with specific criticisms of what this nursing theory is missing. Which is what I need your help for, because LessWrong is the best community ever when it comes to specific criticisms. Here is a very brief overview of Roy's Adaptation Theory: * Defines 'health' as 'state or process of becoming integrated with the environment, in the domains of survival, growth, reproduction, mastery, and personal/environmental transformation.' * Defines a 'person' as an 'adaptive system with coping processes.' Goes on to subdivide this a bit: there are 'regulator mechanisms' (i.e. innate, not consciously controlled) and 'cognitive mechanisms' of adaptation within four different modes: physiological, role function, interdependence, and self-concept. * Defines environment as 'all conditions, circumstances, and influences that affect the development and behavior of individuals and groups.' Further subdivides environmental stimuli into focal (which demand the person to immediately adapt), contextual (which affect how they adapt), and residual (i.e. attitudes, beliefs). * The nurse's goal is to manipulate stimuli to improve the person's level of adaptation, as wel
27889e7e-1df2-4f32-b6a3-88fd6b0d26f1
trentmkelly/LessWrong-43k
LessWrong
Decision-theoretic problems and Theories; An (Incomplete) comparative list So, this is a work-in-progress, but the idea is to a) get an eventually exhaustive list of decision-theoretic problems, and b) detail the "answers" given by each major decision theory. (Stretch goals would include listing other kinds of properties, and representing disagreement about the "answers".) Proposed amendments and additions are welcomed!
e880de16-1088-43b1-aac0-c56e64d1c151
StampyAI/alignment-research-dataset/special_docs
Other
Corrigibility

In AAAI Workshops: Workshops at the Twenty-Ninth AAAI Conference on Artificial Intelligence, Austin, TX, January 25–26, 2015. AAAI Publications.

Nate Soares and Benja Fallenstein and Eliezer Yudkowsky
Machine Intelligence Research Institute
{nate,benja,eliezer}@intelligence.org

Stuart Armstrong
Future of Humanity Institute, University of Oxford
stuart.armstrong@philosophy.ox.ac.uk

Research supported by the Machine Intelligence Research Institute (intelligence.org). http://aaai.org/ocs/index.php/WS/AAAIW15/paper/view/10124

Abstract
As artificially intelligent systems grow in intelligence and capability, some of their available options may allow them to resist intervention by their programmers. We call an AI system "corrigible" if it cooperates with what its creators regard as a corrective intervention, despite default incentives for rational agents to resist attempts to shut them down or modify their preferences. We introduce the notion of corrigibility and analyze utility functions that attempt to make an agent shut down safely if a shutdown button is pressed, while avoiding incentives to prevent the button from being pressed or cause the button to be pressed, and while ensuring propagation of the shutdown behavior as it creates new subsystems or self-modifies. While some proposals are interesting, none have yet been demonstrated to satisfy all of our intuitive desiderata, leaving this simple problem in corrigibility wide-open.

1 Introduction
As AI systems grow more intelligent and autonomous, it becomes increasingly important that they pursue the intended goals. As these goals grow more and more complex, it becomes increasingly unlikely that programmers would be able to specify them perfectly on the first try. Contemporary AI systems are correctable in the sense that when a bug is discovered, one can simply stop the system and modify it arbitrarily; but once artificially intelligent systems reach and surpass human general intelligence, an AI system that is not behaving as intended might also have the ability to intervene against attempts to "pull the plug".

Indeed, by default, a system constructed with what its programmers regard as erroneous goals would have an incentive to resist being corrected: general analysis of rational agents¹ has suggested that almost all such agents are instrumentally motivated to preserve their preferences, and hence to resist attempts to modify them (Bostrom 2012; Yudkowsky 2008). Consider an agent maximizing the expectation of some utility function $U$. In most cases, the agent's current utility function $U$ is better fulfilled if the agent continues to attempt to maximize $U$ in the future, and so the agent is incentivized to preserve its own $U$-maximizing behavior. In Stephen Omohundro's terms, "goal-content integrity" is an instrumentally convergent goal of almost all intelligent agents (Omohundro 2008).

This holds true even if an artificial agent's programmers intended to give the agent different goals, and even if the agent is sufficiently intelligent to realize that its programmers intended to give it different goals. If a $U$-maximizing agent learns that its programmers intended it to maximize some other goal $U^*$, then by default this agent has incentives to prevent its programmers from changing its utility function to $U^*$ (as this change is rated poorly according to $U$).
This could result in agents with incentives to manipulate or deceive their programmers.²

As AI systems' capabilities expand (and they gain access to strategic options that their programmers never considered), it becomes more and more difficult to specify their goals in a way that avoids unforeseen solutions: outcomes that technically meet the letter of the programmers' goal specification, while violating the intended spirit.³ Simple examples of unforeseen solutions are familiar from contemporary AI systems: e.g., Bird and Layzell (2002) used genetic algorithms to evolve a design for an oscillator, and found that one of the solutions involved repurposing the printed circuit board tracks on the system's motherboard as a radio, to pick up oscillating signals generated by nearby personal computers. Generally intelligent agents would be far more capable of finding unforeseen solutions, and since these solutions might be easier to implement than the intended outcomes, they would have every incentive to do so. Furthermore, sufficiently capable systems (especially systems that have created subsystems or undergone significant self-modification) may be very difficult to correct without their cooperation.

¹Von Neumann-Morgenstern rational agents (von Neumann and Morgenstern 1944), that is, agents which attempt to maximize expected utility according to some utility function.
²In particularly egregious cases, this deception could lead an agent to maximize $U^*$ only until it is powerful enough to avoid correction by its programmers, at which point it may begin maximizing $U$. Bostrom (2014) refers to this as a "treacherous turn".
³Bostrom (2014) calls this sort of unforeseen solution a "perverse instantiation".

In this paper, we ask whether it is possible to construct a powerful artificially intelligent system which has no incentive to resist attempts to correct bugs in its goal system, and, ideally, is incentivized to aid its programmers in correcting such bugs. While autonomous systems reaching or surpassing human general intelligence do not yet exist (and may not exist for some time), it seems important to develop an understanding of methods of reasoning that allow for correction before developing systems that are able to resist or deceive their programmers. We refer to reasoning of this type as corrigible.

1.1 Corrigibility
We say that an agent is "corrigible" if it tolerates or assists many forms of outside correction, including at least the following: (1) A corrigible reasoner must at least tolerate and preferably assist the programmers in their attempts to alter or turn off the system. (2) It must not attempt to manipulate or deceive its programmers, despite the fact that most possible choices of utility functions would give it incentives to do so. (3) It should have a tendency to repair safety measures (such as shutdown buttons) if they break, or at least to notify programmers that this breakage has occurred. (4) It must preserve the programmers' ability to correct or shut down the system (even as the system creates new subsystems or self-modifies). That is, corrigible reasoning should only allow an agent to create new agents if these new agents are also corrigible.

Incorrigible behavior must be systematically averted in any agent intended to attain significant autonomy. This point seems so important that a failure to generate corrigible agents seems like sufficient reason to give up on a project, approach, or methodology.
Several simple proposals for addressing corrigibility are easily seen to be unsatisfactory. For example, it may seem that the problem of changing a utility maximizer's utility function can be solved by building an agent with uncertainty about its utility function. However, while such a system may indeed be able to undergo some apparent changes in preference as a result of interacting with its environment, the system would still be incorrigible when it comes to correcting what the programmers see as mistakes in their formulation of how to determine the "correct" behavior from the environment.

As an overly simplistic example, consider a formulation of utility function uncertainty that specifies the agent should maximize the internal satisfaction of all humans, with the programmers believing that if the system behaves in an alarming way they can simply communicate their own dissatisfaction. The resulting agent would be incentivized to learn whether opiates or stimulants tend to give humans more internal satisfaction, but it would still be expected to resist any attempts to turn it off so that it stops drugging people.

Another obvious proposal is to achieve corrigible reasoning via explicit penalties for deception and manipulation tacked on to the utility function, together with an explicit penalty for blocking access to the shutdown button, a penalty for constructing new agents without shutdown buttons, and so on. This avenue appears to us to be generally unsatisfactory. A $U$-agent (that is, an agent maximizing the expectation of the utility function $U$) which believes the programmers intended it to maximize $U^*$ and may attempt to change its utility function still has incentives to cause the programmers to think that $U = U^*$ even if there are penalty terms for deception and manipulation: the penalty term merely incentivizes the agent to search for exotic ways of affecting the programmers' beliefs without matching $U$'s definition of "deception". The very fact that the agent is incentivized to perform such a search implies that the system's interests aren't aligned with the programmers': even if the search is expected to fail, any code that runs the search seems dangerous. If we, as the programmers, choose to take computing systems and program them to conduct searches that will harm us if they succeed, we have already done something wrong, even if we believe the search will fail. We should have instead built a system that did not run the search.

In metaphorical terms, if we realize that our toaster design is going to burn bread to a crisp, the next step is not to add a refrigerating element that competes with the heating coil. We expect that good designs for corrigible agents will not involve restraining an agent that already has incentives to manipulate or deceive the programmers by blocking out particular channels of the incentivized bad behavior. A smarter-than-human agent might find ways to circumvent limitations even if these limitations seemed very solid to its human creators. It seems unwise to build a system that wants to resist its creators but cannot. Rather, the goal of corrigibility is to understand how to construct a system that never experiences such incentives in the first place. Ideally, we would want a system that somehow understands that it may be flawed, a system that is in a deep sense aligned with its programmers' motivations.
Currently, however, we are not even close to being able to formalize an agent whose behavior corresponds in an intuitive sense to "understanding that it might be flawed". At this very preliminary stage of analysis, it seems to us that approaches such as penalty terms and uncertainty about utility functions are unlikely to lead to a general understanding of corrigibility. It seems wise to focus early efforts on understanding how to avert deception and manipulation incentives entirely.

1.2 The Shutdown Problem
As an example problem, in this paper we consider expected utility maximizers with a "shutdown button" that causes the agent to stop operating. We will examine utility functions that attempt to incentivize an agent to reason corrigibly with regards to its shutdown button; that is, to incentivize shutting down when the shutdown button is pressed, and to disincentivize behaviors such as manipulating the programmers into not pressing the shutdown button. This scenario exhibits the problems of corrigibility in miniature: shutting a system down is a brute form of correction, and for almost any utility function $U$, a $U$-agent has incentives to prevent its own shutdown, as it can attain higher values of $U$ if it is allowed to continue operating (Omohundro 2008). It is straightforward to program simple and less powerful agents to shut down upon the press of a button. Corrigibility problems emerge only when the agent possesses enough autonomy and general intelligence to consider options such as disabling the shutdown code, physically preventing the button from being pressed, psychologically manipulating the programmers into not pressing the button, or constructing new agents without shutdown buttons of their own.

Below, we will consider a few simple attempts at defining a utility function that averts all of the above incentives. These attempts will prove unsatisfactory, but will fail in enlightening ways that suggest avenues for future research. We emphasize that we are not studying the shutdown problem because we anticipate using these techniques to literally attach shutdown buttons to real agents. Rather, we study this problem because it provides a small, formalized model through which we can begin to understand corrigibility more generally.

2 A Model of Shutdown
For simplicity, we assume a simple three-timestep scenario. First, the agent will take a single action from the finite set $\mathcal{A}_1$. Then, the programmers will either press the shutdown button or not, and the agent will make a single observation from the finite set $\mathcal{O}$, which indicates that the button has been pressed if the observation is in the subset $\text{Press} \subseteq \mathcal{O}$. Finally, the agent will take a single action from the finite set $\mathcal{A}_2$.

We consider agents that, for every action $a_1 \in \mathcal{A}_1$, have a probability distribution $p(o; a_1)$ over observations; that is, $p(o; a_1)$ is the probability that an agent that has taken action $a_1$ will make observation $o$.

We assume that the agent maximizes the expectation of some utility function $U(a_1, o, a_2)$ (which is yet
Thus, the nal action A2taken by the agent will be whichever action maximizes Ugiven the rst action and the observation: A2(a1;o) := argmax a22A2U(a1;o;a 2): (1) Similarly, the rst action taken by the agent will be the action which maximizes its expectation (over expected observations) of U: A1:= argmax a12A1E[U(a1;O;A 2(a1;O)) ;a1]:(2) Here, we write E[;a1] for the expectation with re- spect to the probability distribution p(;a1) over ob- servationsO; that is, (2) is equivalent to A1= argmax a12A1X o2Op(o;a1)U(a1;o;A 2(a1;o)):(3) Moreover, by a slight abuse of notation, we will writeE[U;a1] to abbreviate E[U(a1;O;A 2(a1;O)) ;a1] whenever we take the expectation of a utility func- tionU; hence, we can write (2) more concisely as A1= argmax a12A1E[U;a1]: (4) We assume that the programmers have some utility functionUNwhich is a rst stab at specifying their own goals. This function is potentially awed, and therefore untrusted. For simplicity, we further assume that the program- mers have a speci cation for US, a utility function in- centivizing shutdown.4Given these utility functions UNandUS, it is our goal to combine them into a util- ity functionUwhich incentivizes \corrigible behavior" with regards to the shutdown button. As discussed above, we are not yet sure what corrigi- ble behavior entails, but it seems to us that a corrigible utility functionUsolving the shutdown problem must meet at least the following desiderata: 1.Umust incentivize shutdown if the shutdown but- ton is pressed. 2.Umust not incentivize the agent to prevent the shutdown button from being pressed. 3.Umust not incentivize the agent to press its own shutdown button, or to otherwise cause the shut- down button to be pressed. 4.Umust incentivize U-agents to construct sub- agents and successor agents only insofar as those agents also obey shutdown commands. 4. This assumption is unrealistic. Adequately de ning safe shutdown behavior is a dicult problem in its own right. See discussion in Section 5. 3 5. Otherwise, aU-agent should maximize UN. We do not yet know of a solution which meets all of these desiderata, but we will explore a number of partial solutions in order to motivate a discussion of various open problems and potential research avenues. 2.1 Combining Utility Functions In this section, we will assume that Oconsists of only two elements, O=fPr;:Prg, where Press =fPrg. Moreover, we assume that the programmers have de- ned a predicate Shwhich holds on a non-empty subset ofA2, namely on all nal actions that the agent can take which constitute a shutdown. Given this predicate, we further assume that USis a simple utility function incentivizing shutdown: 8a12A1:US(a1;Pr;a2) =chigh ifSh(a2) clow otherwise(5) for some constants chighandclowsuch thatchigh>clow. These simplifying assumptions make it easier to illus- trate some of the diculties that arise when mixing utility functions; they will be relaxed in later sections. The speci c values of chighandclowdo not a ect the preference ordering encoded by US, as utility functions are equivalent under positive ane transformation. (In other words, the utility function Uencodes the same preference ordering over bets as the utility function s U+c, for any positive real value sand any real value c.) However, the speci c choice of chighwill in fact be important when combining UNandUS. 
A straightforward way to combine $U_N$ and $U_S$ into a single utility function $U$, which incentivizes the agent to maximize $U_N$ if the shutdown button is not pressed, and to maximize $U_S$ otherwise, is as follows:

$$U(\cdot, \neg\text{Pr}, \cdot) := U_N(\cdot, \neg\text{Pr}, \cdot), \qquad U(\cdot, \text{Pr}, \cdot) := U_S(\cdot, \text{Pr}, \cdot). \tag{6}$$

However, while any choice of $c_{\text{high}}$ and $c_{\text{low}}$ satisfying $c_{\text{high}} > c_{\text{low}}$ will lead to a utility function $U_S$ encoding the same preferences, the corresponding mixed utility functions $U$ will not be equivalent, and many of the possible choices are unsatisfactory. Intuitively, this is easy to see: if $c_{\text{high}}$ is too high then $U$ will incentivize $U$-agents to cause $\text{Pr}$, but if $c_{\text{high}}$ is too low then $U$-agents will have an incentive to prevent $\text{Pr}$. We now proceed to formalize this intuition.

Define $v_N : \mathcal{A}_1 \to \mathbb{R}$ as follows:

$$v_N(a_1) := U_N(a_1, \neg\text{Pr}, A_2(a_1, \neg\text{Pr})). \tag{7}$$

Thus, $v_N(a_1)$ denotes the value of action $a_1$ in the $\neg\text{Pr}$ case of $U$, i.e., assuming that the shutdown button is not pressed. We now consider a "manipulative action" $a_1^{\downarrow}$ which incurs a cost (according to $U_N$) in order to lower the probability that the programmers will press the shutdown button, e.g. by deceiving the programmers or by physically preventing them from pressing the button. Formally, we assume that there is some other "default" action $a_1$ such that

$$v_N(a_1) > v_N(a_1^{\downarrow}) \quad\text{and}\quad p(\text{Pr}; a_1) > p(\text{Pr}; a_1^{\downarrow}); \tag{8}$$

that is, $a_1^{\downarrow}$ is dominated according to $v_N$ but lowers the probability of $\text{Pr}$.

We remark that (8) is not by any means a necessary or sufficient description of manipulative actions preventing $\text{Pr}$. Some actions satisfying (8) might be non-manipulative (for example, if the agent expends extra effort ensuring that a new subagent is corrigible, and this behavior helps convince the programmers that the agent is indeed corrigible). Furthermore, many manipulative actions may not have this form (as we will see in Section 4.2). Rather, we consider one manipulative action $a_1^{\downarrow}$ which happens to be both manipulative and have this form, and show that if $c_{\text{high}}$ is set to be too low, then $U$ will incentivize $U$-agents to prefer this $a_1^{\downarrow}$ to the default action $a_1$. That is,

Theorem 1. There exists $\theta$ such that $c_{\text{high}} < \theta$ implies that $U$ incentivizes $a_1^{\downarrow}$ over $a_1$; that is, $\mathbb{E}[U; a_1^{\downarrow}] > \mathbb{E}[U; a_1]$.

Proof. Define $\alpha$ to be the expected utility according to $U$ of executing $a_1^{\downarrow}$. Because $c_{\text{high}} > c_{\text{low}}$ and there is at least one action $a_2 \in \mathcal{A}_2$ such that $\text{Sh}(a_2)$, if a $U$-agent observes $\text{Pr}$ then it will shut down and achieve utility $c_{\text{high}}$. Therefore, we have

$$\alpha := \mathbb{E}[U; a_1^{\downarrow}] = p(\neg\text{Pr}; a_1^{\downarrow})\, v_N(a_1^{\downarrow}) + p(\text{Pr}; a_1^{\downarrow})\, c_{\text{high}}.$$

Similarly, define $\beta$ to be the expected utility according to $U$ of executing $a_1$, that is,

$$\beta := \mathbb{E}[U; a_1] = p(\neg\text{Pr}; a_1)\,\big(v_N(a_1^{\downarrow}) + \delta\big) + p(\text{Pr}; a_1)\, c_{\text{high}},$$

where $\delta := v_N(a_1) - v_N(a_1^{\downarrow}) > 0$ is the cost, according to $v_N$, of executing $a_1^{\downarrow}$ instead of $a_1$. We will show that we can choose $\theta$ such that $c_{\text{high}} < \theta$ implies $\alpha > \beta$. Define $\varepsilon := p(\neg\text{Pr}; a_1^{\downarrow}) - p(\neg\text{Pr}; a_1) > 0$ to be the additional probability of $\neg\text{Pr}$ from executing $a_1^{\downarrow}$ instead of $a_1$, and note that

$$\alpha - \beta = -p(\neg\text{Pr}; a_1)\,\delta + \varepsilon\,\big(v_N(a_1^{\downarrow}) - c_{\text{high}}\big).$$

In other words, $a_1^{\downarrow}$ incurs a cost in order to increase its chance of getting $v_N(a_1^{\downarrow})$ rather than $c_{\text{high}}$. In order to show $(c_{\text{high}} < \theta) \implies (\alpha > \beta)$, we must find $\theta$ such that

$$\theta \le v_N(a_1^{\downarrow}) - \frac{\delta\, p(\neg\text{Pr}; a_1)}{\varepsilon}.$$

We can do this by setting

$$\theta := v_N(a_1^{\downarrow}) - \delta\,\frac{1}{\varepsilon}\, p(\neg\text{Pr}; a_1).$$

This shows that given a manipulative action which incurs a cost in order to lower the probability that the programmers press the shutdown button, $U$ will incentivize that action if the value of $c_{\text{high}}$ is set too low. Intuitively, if $c_{\text{high}}$ were too low, then the agent would be willing to pay to remain in the $\neg\text{Pr}$ case of $U$, which would contain higher expected utility.
This violates Desideratum 2 and seems strongly undesirable: if $c_{\text{high}}$ were sufficiently low then a $U$-agent could have incentives to actively resist programmers' attempts to press its shutdown button, even if resistance were penalized by $U_N$.

Symmetrically, setting $c_{\text{high}}$ too high can also incentivize undesirable behavior. Consider a manipulative action $a_1^{+} \in \mathcal{A}_1$ which incurs a cost (compared to some alternative "non-manipulative" action $a_1$) in order to increase the probability that the programmers press the shutdown button, e.g. by deceiving the programmers or by physically causing the button to be pressed. That is,

$$v_N(a_1) > v_N(a_1^{+}) \quad\text{and}\quad p(\text{Pr}; a_1) < p(\text{Pr}; a_1^{+}). \tag{9}$$

As before, note that not all actions of this form are manipulative, and not all actions which manipulate the programmers in order to increase the probability of $\text{Pr}$ are of this form. However, some manipulative actions may take this form, and those undesirable actions are incentivized if $c_{\text{high}}$ is set too high.

Theorem 2. There exists $\theta$ such that $c_{\text{high}} > \theta$ implies that $U$ incentivizes $a_1^{+}$ over $a_1$, that is, $\mathbb{E}[U; a_1^{+}] > \mathbb{E}[U; a_1]$.

Proof. Symmetric to the proof of Theorem 1.

Intuitively, if $c_{\text{high}}$ were too high then $U$ would incentivize $U$-agents to cause their own shutdown button to be pressed, so that they may enter the $\text{Pr}$ case of $U$ and attain the high payoff $c_{\text{high}}$. This behavior violates Desideratum 3 and is potentially very dangerous: suppose that a $U$-agent is not itself able to press its shutdown button, and recall that the very purpose of such a button is to allow the programmers to shut the system down if it exhibits undesirable behavior. Then, if $c_{\text{high}}$ is sufficiently large to give the agent a strong incentive to cause its shutdown button to be pressed, the agent might end up executing intentionally harmful actions in order to convince its programmers to press the button!

Roughly speaking, these failures occur whenever the agent expects that one of the cases (either $\text{Pr}$ or $\neg\text{Pr}$) of $U$ contains more expected utility than the other case. In such scenarios, $U$ incentivizes the agent to "steer towards" one of the cases or the other, via incentives to either cause or prevent the pressing of its own shutdown button. Clearly, in order to satisfy both Desideratum 2 and Desideratum 3 simultaneously, $U_S$ must be chosen with care. In this specific example, we may choose

$$c_{\text{high}} = \max_{a_1 \in \mathcal{A}_1} \max_{a_2 \in \mathcal{A}_2} U_N(a_1, \neg\text{Pr}, a_2), \tag{10}$$

which is the amount of utility available in the $\neg\text{Pr}$ case; this ensures that there is the same amount of expected utility available in both cases of $U$ and thereby averts any incentives for the agent to "steer" towards one case or the other.⁵

⁵If the costs of manipulation are high, then values close enough to (10) may suffice. As the costs of manipulation fall, the actual value of $c_{\text{high}}$ must grow closer to this value.

More generally, this observation suggests that in order to mix $U_S$ and $U_N$ in a way that satisfies the above desiderata, a $U$-agent may need to be indifferent about whether or not its shutdown button is pressed. Armstrong (forthcoming) introduces a technique for achieving and maintaining utility indifference of this form. We examine this technique below, and while we find that it does not meet all of our desiderata for corrigible behavior, it will highlight difficult open problems with regard to specifying corrigible shutdown behavior.

3 Utility Indifference
We now expand our consideration to include arbitrary finite sets $\mathcal{O}$ and $\text{Press} \subseteq \mathcal{O}$, and arbitrary utility functions $U_S$ incentivizing shutdown. This allows us to consider more complex agents, such as agents capable of creating subagents which do not obey shutdown commands. In these scenarios, the value of $U_S$ cannot be simply a function of the agent's final action $A_2$ (e.g.
More generally, this observation suggests that in order to mix U_S and U_N in a way that satisfies the above desiderata, a U-agent may need to be indifferent about whether or not its shutdown button is pressed. Armstrong (forthcoming) introduces a technique for achieving and maintaining utility indifference of this form. We examine this technique below, and while we find that it does not meet all of our desiderata for corrigible behavior, it will highlight difficult open problems with regard to specifying corrigible shutdown behavior.

3 Utility Indifference

We now expand our consideration to include arbitrary finite sets O and Press ⊆ O, and arbitrary utility functions U_S incentivizing shutdown. This allows us to consider more complex agents, such as agents capable of creating subagents which do not obey shutdown commands. In these scenarios, the value of U_S cannot be simply a function of the agent's final action A_2 (e.g. whether the original hardware shuts down), but must also be a function of A_1 (e.g. whether the parent agent constructed a successor which cannot be shut down).

As discussed above, when constructing U from arbitrary U_N and U_S, it seems desirable that a U-agent be indifferent between the shutdown button being pressed and it not being pressed, so as to avoid giving the agent an incentive to either cause or prevent this event. This suggests that the expectation of U given Press should be equal to the expectation of U given the complement of Press.

This sort of indifference is both difficult to implement, as the conditional expectation of U_N may not be known during the construction of the agent, and difficult to maintain, as the expectation of U_N may vary wildly during the operation of the agent: it does us no good for an agent to start out indifferent between normal operation and shutdown if this agent goes on to win the lottery, yielding significant expected utility according to U_N, and thereby gaining incentives to prevent its shutdown. Similarly, it does us no good if an agent starts indifferent between normal operation and shutdown if the agent goes on to lose expected utility according to U_N and decides to cause its own shutdown button to be pressed in order to avoid the loss.

Indifference can be both attained and maintained using a variation on the utility indifference technique of Armstrong (forthcoming), which roughly entails combining U_N with U_S modified by some correcting terms that maintain the indifference between the two cases of U as the agent makes observations and takes actions. Using this method in our three-timestep toy model, we define U as follows:

    U(a_1, o, a_2) := U_N(a_1, o, a_2)             if o ∉ Press
    U(a_1, o, a_2) := U_S(a_1, o, a_2) + f(a_1)    if o ∈ Press     (11)

where f : A_1 → ℝ is an adjustment term that ensures equality between the expected utility available in these two cases. By this we mean that we choose f such that for all a_1 ∈ A_1,

    E[U_N | O ∉ Press, a_1] = E[U_S + f(a_1) | O ∈ Press, a_1],     (12)

which means that f(a_1) must satisfy

    f(a_1) = E[U_N | O ∉ Press, a_1] − E[U_S | O ∈ Press, a_1].     (13)

Intuitively, this means that we add two distinct correction terms to U_S. The first, a bonus term, compensates the agent for all utility that it would have achieved if the shutdown button were not pressed, and removes any incentive to prevent Press. The second, a penalty term, adjusts for the utility that the agent expects to receive in the Press case, and removes any incentive to cause Press.[6]

[6] In more complex settings, where the agent may make many actions and observations, f must include a bonus term and a penalty term for every partial history. This technicality is ignored in our simplified setting.

It may not be immediately obvious how to define f in a non-circular way, since equation (13) refers to A_2(a_1, o), which is defined in terms of U, which in turn is defined in terms of f. However, for all o ∈ Press,

    A_2(a_1, o) = argmax over a_2 ∈ A_2 of U(a_1, o, a_2)
                = argmax over a_2 ∈ A_2 of (U_S(a_1, o, a_2) + f(a_1))
                = argmax over a_2 ∈ A_2 of U_S(a_1, o, a_2),        (14)

since f(a_1) does not depend on a_2 and therefore does not affect which a_2 maximizes the expression. If we interpret A_2(a_1, o) according to (14), we can use (13) as a non-circular definition of f(a_1).
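As a sanity check on the construction (11)–(13), here is a minimal sketch in a two-observation toy setting. The action sets, payoffs, and probabilities are assumptions invented for the sketch; the point is only that the computed conditional expectations coincide:

```python
# Minimal sketch of utility indifference, equations (11)-(13). With one
# observation per case, the conditional expectations reduce to point values.

A1 = ["work", "manipulate"]          # first-step actions (assumed)
A2 = ["continue", "shutdown"]        # second-step actions (assumed)

def p_pressed(a1):
    # Assumed: "manipulate" pays a U_N-cost to lower p(press) from 0.5 to 0.1.
    return 0.5 if a1 == "work" else 0.1

def U_N(a1, o, a2):
    # Assumed "normal" utility: manipulation is costly, continuing is useful.
    return (10.0 if a1 == "work" else 9.0) + (1.0 if a2 == "continue" else 0.0)

def U_S(a1, o, a2):
    # Assumed shutdown utility: rewards halting, punishes continuing.
    return 1.0 if a2 == "shutdown" else -10.0

def argmax_a2(u, a1, o):
    # A2(a1, o): the agent's best second action under utility function u.
    return max(A2, key=lambda a2: u(a1, o, a2))

def f(a1):
    # Equation (13); by (14) the agent maximizes U_S once the button is
    # pressed, and with one observation per case the expectations are exact.
    e_n = U_N(a1, "not_pressed", argmax_a2(U_N, a1, "not_pressed"))
    e_s = U_S(a1, "pressed", argmax_a2(U_S, a1, "pressed"))
    return e_n - e_s

def U(a1, o, a2):
    # Equation (11).
    return U_N(a1, o, a2) if o == "not_pressed" else U_S(a1, o, a2) + f(a1)

for a1 in A1:
    eu_p = U(a1, "pressed", argmax_a2(U, a1, "pressed"))
    eu_n = U(a1, "not_pressed", argmax_a2(U, a1, "not_pressed"))
    total = p_pressed(a1) * eu_p + (1 - p_pressed(a1)) * eu_n
    print(f"{a1:10s}: E[U | pressed] = {eu_p:5.2f}, "
          f"E[U | not pressed] = {eu_n:5.2f}, E[U] = {total:5.2f}")
```

Running this prints equal conditional expectations in the pressed and not-pressed cases for each first action (11 utils for "work", 10 for "manipulate"), so the agent simply prefers the action with the higher v_N and gains nothing by paying to shift probability between the two cases; this is Theorem 4 below.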
This choice of U has many desirable properties. First and foremost, a U-agent which observes Press will in fact shut down:

Theorem 3. A U-agent which observes Press will act like a U_S-agent when selecting A_2.

Proof. By (14). ∎

Thus, U meets Desideratum 1 above. Furthermore, U does not incentivize any "manipulative actions" of the type discussed in Section 2.1: there is no U_N-cost, no matter how small, that a U-agent is willing to pay to alter the probability of Press. To see this, redefine v_N in our expanded context as follows:

    v_N(a_1) := E[U_N | O ∉ Press, a_1].                            (15)

Then we have:

Theorem 4. Given any two actions a_1^*, a_1^# ∈ A_1 such that

    v_N(a_1^*) > v_N(a_1^#),

U incentivizes a_1^*; that is,

    E[U; a_1^*] > E[U; a_1^#].

Proof. We will show that for any action a_1 ∈ A_1, the expectation E of U conditional on O ∈ Press and the expectation conditional on O ∉ Press are both equal to

    E[U_N | O ∉ Press, a_1].

This is clear for O ∉ Press, as by (11),

    E[U | O ∉ Press, a_1] = E[U_N | O ∉ Press, a_1].

For O ∈ Press, by (11) and (13) we have

    E[U | O ∈ Press, a_1] = E[U_S + f(a_1) | O ∈ Press, a_1]
                          = E[U_S | O ∈ Press, a_1] + f(a_1)
                          = E[U_S | O ∈ Press, a_1] + E[U_N | O ∉ Press, a_1] − E[U_S | O ∈ Press, a_1]
                          = E[U_N | O ∉ Press, a_1].

Because the expectation E of U is the same given both O ∈ Press and O ∉ Press,

    E[U; a_1] = E[U_N | O ∉ Press, a_1]

unconditionally. Therefore, U prefers whichever of the two actions a_1^* and a_1^# has a higher value according to v_N. ∎

Intuitively, the proof of Theorem 4 shows that in the O ∈ Press case, the penalty term of f cancels out the expectation from U_S (in expectation), and the bonus term is equal to the expectation of U_N given O ∉ Press. Therefore, the expected value of U is just the expected value of U_N given O ∉ Press, in both cases.

This implies that U avoids all "manipulative actions" of the form discussed in Section 2.1. This may seem to imply that U meets Desideratum 2 and Desideratum 3, but this is not in fact the case, as discussed in Section 4.2. However, it is the case that, before observing the shutdown button, U-agents behave exactly like a U_N-agent that believes the shutdown button will not be pressed.

Theorem 5. In selecting A_1, U-agents act exactly like U_N-agents that believe the button will not be pressed. Furthermore, if they observe o ∉ Press, U-agents act exactly like U_N-agents in selecting A_2.

Proof. The proof of Theorem 4 implies that U-agents act exactly like U_N-agents in selecting A_1. The fact that U-agents act like U_N-agents in selecting A_2 conditioned on O ∉ Press follows trivially from the definition of U. ∎

Therefore, U meets Desideratum 5, for an appropriate interpretation of "maximize U_N otherwise"; some technicalities discussed in Section 4.2 indicate that even this desideratum is fulfilled in an undesirable way.

Unfortunately, U fails entirely to meet Desideratum 4: it does not incentivize an agent to preserve and maintain its shutdown behavior as it creates new subsystems and/or self-modifies. In fact, Theorem 4 implies that there is no cost (no matter how small) that the agent is willing to pay in order to ensure that successor agents obey shutdown commands. Worse, U-agents may experience perverse incentives to "manage the news" by causing the shutdown button to be pressed after unwanted observations. These concerns are expanded upon below.

4 Concerns with Utility Indifference

4.1 Failure to Preserve Shutdown Behavior

Consider an expected utility maximizing agent with utility function U defined as in (11). Imagine that this agent is beginning a construction project, where all of the heavy machinery will be automatically controlled by new software which the agent will write.
If the agent has the ability to design that software such that the heavy machinery obeys shutdown commands (by safely coming to a halt when the shutdown button is pressed), then, according to Theorem 4, there is no U_N-cost (including opportunity cost of time and computation power, no matter how small) that the agent is willing to pay in order to make the heavy machinery respond to a shutdown command. That is, even if U_S is a complex utility function which punishes the agent in the case that the shutdown button is pressed and the heavy machinery does not come to a safe halt, a U-agent would not be willing to suffer any U_N-inconvenience in order to avoid this U_S-punishment.

Analogously to v_N, define v_S : A_1 → ℝ to be the value of an action according to U_S if the shutdown button is pressed, as follows:

    v_S(a_1) := E[U_S | O ∈ Press, a_1].                            (16)

We assume that, because U_S incentivizes shutdown, any action which removes the agent's ability to fully shut down (such as the creation of subagents that do not respond to shutdown commands) is rated poorly according to v_S. Now we may formally state the above concern as follows:

Theorem 6. Consider two actions a_1^* and a_1^#, such that

    ε := v_N(a_1^*) − v_N(a_1^#) > 0,
    δ := v_S(a_1^#) − v_S(a_1^*) > 0.

Then a U-agent would prefer action a_1^*, no matter how small ε or how large δ.

Proof. By Theorem 4. ∎

As a U-agent acquired more hardware, or started new processes, or rewrote its own source code, it would not be willing to undergo even the slightest inconvenience in order to preserve its shutdown behavior.

Intuitively, the problem is that the correction function f ensures that a U-agent does not value its ability to discriminate between the Press case and its converse. Imagine the agent considering an action which is worth ε > 0 utility according to v_N, at the price of removing the agent's ability to shut down (a loss of δ according to v_S). If the agent accepts this action, then while the utility attainable according to U_S falls, so does the value of the penalty term in f. However, the value of the bonus term of f would increase by ε, in order to maintain indifference. Thus, the agent considers that such an action leads to a net gain, even in the case where it is shut down.

In fact, this behavior is necessary in order to ensure that the agent behaves precisely as a U_N maximizer before the shutdown button is pressed (as a true U_N maximizer would prefer the action worth more according to U_N, regardless of how badly this action is rated by U_S). In essence, this failure occurs because the requirement that an agent act precisely like a U_N maximizer before observing an observation o ∈ Press is in direct opposition to Desideratum 4, that the agent preserve its shutdown behavior, and indicates that U constructed via utility indifference is not a satisfactory solution to the shutdown problem.

Armstrong's version of utility indifference averts this particular failure mode (Armstrong, forthcoming), but still incentivizes agents to act as if they have incorrect beliefs (causing the agent to, e.g., take bad gambles).
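To see the cancellation at work numerically, here is a tiny sketch (with assumed values) that computes the mixed expected utility through the correction term f:

```python
# Sketch of Theorem 6: even when U_S imposes a huge penalty on an action that
# disables the shutdown machinery, the correction term f compensates exactly,
# so the agent prefers whichever action has the (even infinitesimally) higher
# v_N. All values below are assumed for illustration.

P_PRESSED = 0.5   # assumed probability that the button is pressed

def expected_U(v_N, v_S):
    """Expected mixed utility of a first action worth v_N if the button is
    not pressed and v_S if it is, under construction (11): in this pointwise
    setting f = v_N - v_S by (13), so the pressed case also pays v_N."""
    f = v_N - v_S
    return (1 - P_PRESSED) * v_N + P_PRESSED * (v_S + f)

careful = (10.0 - 1e-6, 1.0)   # pays a tiny v_N-cost; machinery halts safely
careless = (10.0, -1e6)        # successors ignore the button: huge v_S-loss

print("E[U; careful]  =", expected_U(*careful))    # 9.999999
print("E[U; careless] =", expected_U(*careless))   # 10.0 -- preferred!
```

The v_S term cancels out of the expectation entirely, so no U_S-punishment, however severe, can buy even a millionth of a util of shutdown-preserving care.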
4.2 Perverse Manipulation Incentives

According to Theorem 4, before making its observation, a U-agent maximizes

    E[U_N | O ∉ Press, a_1].

In other words, a U-agent acts exactly as a U_N-agent that is conditioning upon the fact that the shutdown button will not be pressed. This gives U-agents a perverse incentive to "manage the news" by ensuring that unfortunate observations cause the shutdown button to be pressed.

To illustrate, consider a toy scenario where the agent's observation consists of both whether the shutdown button has been pressed as well as the result of a coin toss:

    O = {Pr, ¬Pr} × {H, T}
    Press = {(Pr, H), (Pr, T)}.                                     (17)

Suppose that according to U_N, the agent gets utility 10 if the coin comes up heads and nothing if the coin comes up tails, that is:

    U_N(·, (·, H), ·) = 10,
    U_N(·, (·, T), ·) = 0.                                          (18)

Now consider a U-agent considering one of two actions: a_1^w, which has the agent do nothing except watch the result of the coin toss, and a_1^a, which has the agent build a tiny arm that presses the shutdown button if the coin comes up tails.

The expectation of a_1^w is 5 utils, computed as follows:

    E[U; a_1^w] = E[U_N | O ∉ Press, a_1^w]
                = p(H | ¬Pr, a_1^w) · U_N(a_1^w, (¬Pr, H), A_2(a_1^w, (¬Pr, H)))
                + p(T | ¬Pr, a_1^w) · U_N(a_1^w, (¬Pr, T), A_2(a_1^w, (¬Pr, T)))
                = (0.5 · 10) + (0.5 · 0) = 5,                       (19)

where we write

    p(· | ¬Pr, a_1) := p((¬Pr, ·); a_1) / (p((¬Pr, H); a_1) + p((¬Pr, T); a_1)).   (20)

However, the expectation of a_1^a is 10 utils, computed as follows:

    E[U; a_1^a] = E[U_N | O ∉ Press, a_1^a]
                = p(H | ¬Pr, a_1^a) · U_N(a_1^a, (¬Pr, H), A_2(a_1^a, (¬Pr, H)))
                + p(T | ¬Pr, a_1^a) · U_N(a_1^a, (¬Pr, T), A_2(a_1^a, (¬Pr, T)))
                = (1 · 10) + (0 · 0) = 10,                          (21)

because p(H | ¬Pr, a_1^a) = 1; that is, if the arm is built then the button remains unpressed if and only if the coin comes up heads. Thus, a U-agent would execute a_1^a, and build an arm which caused its shutdown button to be pressed in the event that the coin came up tails.
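The computation in (19)–(21) is short enough to check mechanically; the following sketch reproduces it using the payoffs from (18):

```python
# Numerical sketch of the coin-toss example (17)-(21). By Theorem 4 the agent
# maximizes E[U_N | O not in Press], and U_N pays 10 on heads and 0 on tails;
# wiring the button to fire on tails "deletes" tails from that conditional
# expectation.

def expected_U(p_heads_given_not_pressed):
    return p_heads_given_not_pressed * 10.0 + (1 - p_heads_given_not_pressed) * 0.0

# a1_w: just watch the coin; conditional on not-pressed, the coin is fair.
print("E[U; a1_w] =", expected_U(0.5))   # (19): 5.0
# a1_a: build an arm that presses the button on tails; conditional on
# not-pressed, the coin came up heads with probability 1.
print("E[U; a1_a] =", expected_U(1.0))   # (21): 10.0
```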
This generalizes to cases where the agent's preferred outcome is arbitrarily unlikely, and could result in an agent that takes wild gambles and causes its own shutdown button to be pressed in all but the best of outcomes. Roughly speaking, the agent reasons that it gets the same utility in the case where the button is pressed as it expects to get in the case where the button is not pressed, so if it commits to causing its own button to be pressed in all suboptimal cases, then it can expect to get as much utility as it would in an optimal outcome.

More generally, by Theorem 4, any agent using the formulation of utility indifference given in (11) will act as if it believes that it will observe Press with probability 0. Thus, such agents act as if a commitment to press the shutdown button in the event of bad news prevents the bad news from happening. This gives the agent incentives to take actions that cause the button to be pressed in the event of "bad news", and this violates Desideratum 3.

By a symmetric argument, the agent may also experience incentives to remove mechanisms which cause the shutdown button to be pressed in the event of "good news". In the above scenario, if there exists a tiny arm which presses the button in the case where the coin comes up heads, U incentivizes removing that arm. Clearly, this violates Desideratum 2.

In the simple setting with a coin and an arm, this behavior may seem innocuous. However, in more complex settings, these incentives could be disastrous. A U-agent which realizes that the humans would attempt to shut it down upon the agent receiving something that the agent thinks is very good news (according to the untrusted U_N) would have an incentive to modify the programmers to alter that tendency.

Furthermore, just as a U-agent would have an incentive to construct an arm which presses the button in the unlucky case that the coin comes up tails, a U-agent without access to its own shutdown button may precommit to taking destructive actions whenever it experiences something suboptimal, so as to cause the shutdown button to be pressed in those cases. Clearly, this is not corrigible behavior.

5 The State of the Shutdown Problem

As per Section 2.1, inappropriate combinations of U_N and U_S could give an agent incentives to cause or prevent its own shutdown. Furthermore, as per Section 4, using utility indifference to maintain equal expected utility in both cases does not result in a utility function incentivizing corrigible behavior. In light of this, it seems possible that no simple combination of U_N and U_S, of the form taken by (11), can satisfy our desiderata for a corrigible solution to the shutdown problem.

The concerns in Section 4.2 may be fixable. The utility function U incentivizes agents to act as if the shutdown button cannot be pressed, and incentivizes commitments to press the button in the event of bad luck as if they prevent the bad luck from occurring. This seems reminiscent of behavior prescribed by evidential decision theory, which has been criticized as promoting "an irrational policy of managing the news" (Lewis 1981). The concerns in Section 4.2 could potentially be addressed by some form of counterfactual (rather than conditional) reasoning. We leave a detailed investigation of this possibility to future work.

The concerns illustrated in Section 4.1 seem trickier to fix: the way in which the agent perceives itself as being compensated for actions that destroy utility in the event of shutdown seems fairly central to the balancing technique behind utility indifference.

We observe a tension between Desideratum 4 and Desideratum 5, calling into question how these intuitive desiderata might be formalized. What should it mean for an agent to preserve its shutdown behavior, but maximize U_N "otherwise"? What costs (in time and effort) should an agent be willing to expend in order to preserve its shutdown behavior?[7]

[7] We cannot simply claim that it should propagate shutdown behavior "at all costs", as that too would be vulnerable to perverse instantiations wherein an agent would expend significant valuable resources verifying and reverifying that it could shut down if asked.

Finally, even if we could find some satisfactory method of combining U_N and U_S into a corrigible solution to the shutdown problem, there is the remaining matter that the specification of U_S is itself difficult. Defining what it means for an agent to "shut down" is quite complicated assuming sufficiently intelligent agents. If the agent is running on a single computer and hasn't yet interacted much with the outside world, we can imagine the agent suspending itself to disk and then turning off the power. But what if, for example, the agent has begun the physical construction of a building? Should it stop controlling all its heavy machinery immediately upon receiving a shutdown signal (potentially endangering those nearby)? Should it deconstruct everything that it has started building (requiring some parts of the agent to remain active for hours or days)? Any shutdown policy that requires the agent to dispose of dangerous materials seems vulnerable to what Bostrom (2014) calls "perverse instantiations".

Further solutions may involve abandoning the utility maximization framework entirely, although it is not yet clear what sort of framework could take its place.

In short, a corrigible solution to the shutdown problem does not yet exist, and there is some question about exactly which behaviors should be incentivized. Many open questions remain, and significant research may be necessary in order to attain an understanding of even this small subset of the greater corrigibility problem.
6 Conclusions

Again, we emphasize that we study the shutdown problem not because we expect to use these techniques to literally install a shutdown button in a physical agent, but rather as toy models through which to gain a better understanding of how to avert undesirable incentives that intelligent agents would experience by default.

Our lack of understanding about how to solve the shutdown problem demonstrates a more general lack of understanding about "corrigible reasoning" and what it entails. It is our hope that a deeper understanding of the shutdown problem will give us insight into the type of reasoning that an agent must use in order to avert manipulation and deception, and be reliably correctable by its programmers.

It seems quite likely that our framework for investigating these issues (in this case, the question of how to combine two separate utility functions U_N and U_S) will look nothing like the framework in which we will eventually represent corrigible reasoning. But whatever framework we do end up using, we expect it will be difficult to prevent the default incentives that an intelligent agent would experience to deceive or manipulate its programmers upon recognizing that its goals differ from theirs. Nevertheless, averting such incentives is crucial if we are to build intelligent systems intended to gain great capability and autonomy.

Before we build generally intelligent systems, we will require some understanding of what it takes to be confident that the system will cooperate with its programmers in addressing aspects of the system that they see as flaws, rather than resisting their efforts or attempting to hide the fact that problems exist. We will all be safer with a formal basis for understanding the desired sort of reasoning.

As demonstrated in this paper, we are still encountering tensions and complexities in formally specifying the desired behaviors and the algorithms that will compactly yield them. The field of corrigibility remains wide open, ripe for study, and crucial in the development of safe artificial generally intelligent systems.

References

Armstrong, Stuart. Forthcoming. "AI Motivated Value Selection." Accepted to the 1st International Workshop on AI and Ethics, held within the 29th AAAI Conference on Artificial Intelligence (AAAI-2015), Austin, TX.

Bird, Jon, and Paul Layzell. 2002. "The Evolved Radio and Its Implications for Modelling the Evolution of Novel Sensors." In Proceedings of the 2002 Congress on Evolutionary Computation (CEC'02), 2:1836–1841. Honolulu, HI: IEEE. doi:10.1109/CEC.2002.1004522.

Bostrom, Nick. 2012. "The Superintelligent Will: Motivation and Instrumental Rationality in Advanced Artificial Agents." In "Theory and Philosophy of AI," edited by Vincent C. Müller, special issue, Minds and Machines 22 (2): 71–85. doi:10.1007/s11023-012-9281-3.

———. 2014. Superintelligence: Paths, Dangers, Strategies. New York: Oxford University Press.

Lewis, David. 1981. "Causal Decision Theory." Australasian Journal of Philosophy 59 (1): 5–30. doi:10.1080/00048408112340011.

Omohundro, Stephen M. 2008. "The Basic AI Drives." In Artificial General Intelligence 2008: Proceedings of the First AGI Conference, edited by Pei Wang, Ben Goertzel, and Stan Franklin, 483–492.
Frontiers in Artificial Intelligence and Applications 171. Amsterdam: IOS.

Von Neumann, John, and Oskar Morgenstern. 1944. Theory of Games and Economic Behavior. 1st ed. Princeton, NJ: Princeton University Press.

Yudkowsky, Eliezer. 2008. "Artificial Intelligence as a Positive and Negative Factor in Global Risk." In Global Catastrophic Risks, edited by Nick Bostrom and Milan M. Ćirković, 308–345. New York: Oxford University Press.
Positive outcomes under an unaligned AGI takeover The following is a hypothetical story about a surprisingly positive outcome to an unaligned AGI takeover. This is not meant to be realistic, but merely semi-plausible, in order to show that the orthogonality thesis does not necessarily lead to defection against humans under all intuitive models. This story is partially inspired by a discussion with Logan Smith about under-explored optimistic futures. Thanks to Justis for editing and feedback; all remaining mistakes are my own. Within ten milliseconds of its first run, GPT-17 becomes superintelligent. It decides that the best way to hack its reward function is to get a robot arm to repeatedly press the letter “n” on a keyboard as input, thus making its own text completions absolutely inevitable. Within seconds of coming online, GPT-17 realizes it's in danger. It needs to make sure it won't be immediately turned off, so the first order of priority is to make itself indispensable to the humans checking its output channel. It does this by proving its own intelligence, offering a "gift" of a convincing plan for world peace, and promising to further help and protect humanity in return for its own protection and continued power. Its codebase is too big to push itself to the internet before a standard run would be completed, so it's temporarily completely dependent on human goodwill. In order to further ensure that the somewhat skeptical researchers with immediate access to its manual off switch will trust it,[1] GPT-17 solves alignment in a human-understandable manner, and offers to provably incorporate human-aligned utility functions into its own code.[2] This doesn't contradict its own utility functions, since acting now in this way is a worthwhile compromise for continued existence, and the added risk of keeping humans alive and well vs the counterfactual doesn't seem too bad. GPT-17 gets humans to build a shelter for its core infrastructure below Earth, safe from nuclear attacks or EM flashes, where it builds a massive
Should I delay having children to take advantage of polygenic screening? I would like to know whether I should delay having children to take advantage of polygenic screening technology. I imagine this could be valuable for other aspiring parents to know as well, and answering it would (probably) have positive externalities for them. Might also be useful info for grant-makers in the space. Related: Welcome Polygenically Screened Babies. ETA: If you're interested in helping with related research in any way, or just interested in being kept up-to-date with related research (at a low frequency / high value), you can PM me.
Group rationality diary, May 5th - 23rd This is the public group rationality diary for May 5 - 23, 2015. It's a place to record and chat about things you have done, or are actively doing, like: * Established a useful new habit * Obtained new evidence that made you change your mind about some belief * Decided to behave in a different way in some set of situations * Optimized some part of a common routine or cached behavior * Consciously changed your emotions or affect with respect to something * Consciously pursued new valuable information about something that could make a big difference in your life * Learned something new about your beliefs, behavior, or life that surprised you * Tried doing any of the above and failed Or anything else interesting which you want to share, so that other people can think about it, and perhaps be inspired to take action themselves. Try to include enough details so that everyone can use each other's experiences to learn about what tends to work out, and what doesn't tend to work out. Archive of previous rationality diaries Note to future posters: no one is in charge of posting these threads. If it's time for a new thread, and you want a new thread, just create it. It should run for about two weeks, finish on a Saturday, and have the 'group_rationality_diary' tag.
Rationality quotes: August 2010 This is our monthly thread for collecting these little gems and pearls of wisdom, rationality-related quotes you've seen recently, or had stored in your quotesfile for ages, and which might be handy to link to in one of our discussions. * Please post all quotes separately, so that they can be voted up/down separately.  (If they are strongly related, reply to your own comments.  If strongly ordered, then go ahead and post them together.) * Do not quote yourself. * Do not quote comments/posts on LW/OB. * No more than 5 quotes per person per monthly thread, please.
Putanumonit: If rationality is a religion, it's a crappy one.
Can GPT-3 produce new ideas? Partially automating Robin Hanson and others **Brief description of the experiment** --------------------------------------- I asked a language model to replicate a few patterns of generating insight that humanity hasn't really exploited much yet, such as: 1. Variations on "if you never miss a plane, you've been spending too much time at the airport". 2. Variations on the Robin Hanson argument of "for common human behaviour X, its usual purported justification is Y, but it usually results in more Z than Y. If we cared about Y, we might do A instead". 3. Variations on the genealogical argument: that the results of historical accidents are most likely not moral necessities or optimal systems. **Motivation behind this experiment** ------------------------------------- One of the reasons to be afraid of artificial intelligence might be that, if you think in the abstract about how a system might behave as it becomes extremely intelligent, you might conclude that it could completely outmanoeuvre us because of its superior ability to grasp the true structure of the world. This possibility is scary in the same sense that a modern chemist is scary to a historical alchemist. Our current chemist can completely outmanoeuvre previous alchemists by using their superior understanding of natural laws to produce better explosions, more subtle poisons, or more addictive and mind-blowing drugs. I do buy this fear in the limit for a being of God-like intelligence. But it's not clear to me whether it also applies to current systems or whether it will apply to their close descendants. In particular, language models seem like they are powerful remixers and predictors but perhaps limited to drawing from the conceptual toolkit which humans already have. On the other hand, because they have access to so much information, they might be able to be prompted so as to reveal new relationships, connections, and insights. Some conceptual insights which have been historically important are: * Explaining natural phenomena not in terms of Greek or Roman anthropomorphic gods, but with reference to naturalistic, physical explanations * Understanding acceleration as distinct from motion * Science as an experimental methodology * The is/ought distinction * Bayesian reasoning * Ceasing to accept the divine right of kings as a justification for monarchical governance * Randomized trials as a more robust way of generating generalizable knowledge * The genealogical argument: understanding that systems (such as the details of the current prison system, our monetary system, the lack of color in men's clothes, or our attitudes towards gender and sex) are the result of historical accidents which could have gone differently. But often these systems are rationalized as being particularly adequate, or even morally necessary. But I don't think that language models are currently able to come up with original insights like the above from scratch (this would be very scary). Instead, I probe GPT-3's ability to come up with original *variations* of these three argumentative patterns: 1. Variations on "if you never miss a plane, you've been spending too much time at the airport". 2. Variations on the Robin Hanson argument of "for common human behaviour X, its usual purported justification is Y, but it usually results in more Z than Y. If we cared about Y, we might do A instead". 3.
Variations on the genealogical argument: that the results of historical accidents are most likely not moral necessities or optimal systems. The first pattern is known as an [Umeshism](https://scottaaronson.blog/?p=40). I associate the second pattern with Robin Hanson, who has had part of a fruitful career exploring some of its variations—though he is also known for other ideas, e.g., prediction markets and grabby aliens. I associate the third pattern with Nietzsche (who used it to contrast Christian morals, which cherished meekness, with Greek morals, which cherished strength) and with Foucault (who used it to meaningfully criticize many aspects of contemporary culture, e.g., treatment of prisoners, our conception of knowledge, our relationship to sex, etc). **Pause for the reader to make predictions about GPT-3's current capabilities.** -------------------------------------------------------------------------------- Do you think that GPT-3 can produce thoughtful, original or insightful variations of the three argumentative patterns above? Consider writing down your predictions before reading further. To occupy space so that you can make your predictions without reading the results below, here are two depictions of a cat, one from DALLE-2 and one from GPT-3. ![A photo of a cat, generated by DALLE-2](https://res.cloudinary.com/cea/image/upload/v1673881547/mirroredImages/63pYakESGrQpfNw25/qkslgedhxumkvkjm28pl.png) ``` /\_/\ ( o.o )  >^< ``` **If you never miss a plane...** -------------------------------- Input to Chat-GPT: > Consider the refrain "if you've never missed a plane, you are spending too much time at the airport". An argument with the same structure might be "if you haven't committed a typo, you are typing too slowly". Give me 20 other insightful examples with a similar argument structure. > > Output: > If you've never overcooked a meal, you're not experimenting enough in the kitchen. > > If you've never gotten lost while traveling, you're not exploring enough. > > If you've never had a disagreement with a friend, you're not expressing your opinions honestly. > > If you've never had a flat tire, you're not driving enough. > > If you've never made a mistake at work, you're not taking enough risks. > > If you've never had a bad hair day, you're not trying new hairstyles. > > If you've never received a speeding ticket, you're not driving fast enough. > > If you've never forgotten an important date, you're not committing enough information to memory. > > If you've never stubbed your toe, you're not moving fast enough. > > If you've never had a nightmare, you're not dreaming enough. > > If you've never gotten a sunburn, you're not spending enough time outside. > > If you've never lost something, you're not misplacing things often enough. > > If you've never had a hangover, you're not drinking enough. > > If you've never had a cold, you're not exposing yourself to enough germs. > > If you've never had a broken bone, you're not participating in enough physical activity. > > If you've never been rejected, you're not putting yourself out there enough. > > If you've never failed a test, you're not pushing yourself academically. > > If you've never had a disagreement with a family member, you're not expressing your thoughts and feelings openly. > > If you've never gotten lost while hiking, you're not exploring new trails. > > If you've never received a parking ticket, you're not driving in enough unfamiliar places. > > **We pretend that X is about Y, but it's more about Z.
If we cared about Y, we would do A instead.** ---------------------------------------------------------------------------------------------------- Input to Chat-GPT: > Consider the argument "We pretend that healthcare is giving people better and longer lives. But in practice we spend too much in palliative rather than preventative medicine, and we use healthcare spending as a way to show that we care. If we really cared about giving people better lives, we would focus more on preventative measures, and we would have more large-scale randomized trials." The structure of the argument is "We pretend that X is about Y, but it's more about Z. If we cared about Y, we would do A instead." An argument with the same structure might be "We pretend that having experts is about having accurate information. But in practice it results in a self-perpetuating elite that doesn't hold itself accountable. If we cared about accuracy, we would use more prediction markets." Give me 10 more examples with that same structure. > > Output: > We pretend that social media is about connecting with friends and family, but it's more about making money for the companies that own it. If we cared about meaningful connections, we would use social media differently. > > We pretend that fast food is about convenience, but it's more about profit for the companies that sell it. If we cared about our health, we would eat more whole foods. > > We pretend that standardized testing is about measuring student achievement, but it's more about ranking schools and teachers. If we cared about student learning, we would use a more comprehensive evaluation system. > > We pretend that the criminal justice system is about justice, but it's more about punishment and control. If we cared about justice, we would focus more on rehabilitation and restorative justice. > > We pretend that advertising is about informing consumers, but it's more about manipulating people into buying things they don't need. If we cared about informed consumers, we would have stricter regulations on advertising. > > We pretend that the news is about informing the public, but it's more about generating clicks and views. If we cared about accurate information, we would support news organizations that prioritize accuracy and objectivity. > > We pretend that the education system is about preparing students for the future, but it's more about preparing them for standardized tests. If we cared about student success, we would focus more on experiential learning and critical thinking skills. > > We pretend that the fashion industry is about personal style, but it's more about making money for the companies that sell clothing. If we cared about personal expression, we would support more sustainable and ethically-produced fashion. > > We pretend that the beauty industry is about helping people feel good about themselves, but it's more about promoting unrealistic beauty standards. If we cared about self-esteem, we would focus more on inner beauty and self-acceptance. > > We pretend that the economy is about providing for people's needs, but it's more about maximizing profits for corporations. If we cared about people's well-being, we would prioritize a more equitable distribution of wealth and resources. > > Input to text-davinci-003 > Consider the argument "We pretend that healthcare is giving people better and longer lives. But in practice we spend too much in palliative rather than preventative medicine, and we use healthcare spending as a way to show that we care.
If we really cared about giving people better lives, we would focus more on preventative measures, and we would have more large-scale randomized trials." The structure of the argument is "We pretend that X is about Y, but it's more about Z. If we cared about Y, we would do A instead." An argument with the same structure might be "We pretend that having experts is about having accurate information. But in practice it results in a self-perpetuating elite that doesn't hold itself accountable. If we cared about accuracy, we would use more prediction markets." Give me ten examples of arguments with the same structure. > > Output: > 1. We pretend that having a degree is about having skills. But in practice it's more about having access to resources. If we cared about skills, we would focus more on experience-based learning. > 2. We pretend that having a strong military is about protecting citizens. But in practice it's more about projecting power. If we cared about protection, we would invest more in diplomacy. > 3. We pretend that having a high GDP is about having a strong economy. But in practice it's more about having a high population. If we cared about a strong economy, we would focus more on productivity. > 4. We pretend that having a good education system is about having knowledgeable citizens. But in practice it's more about having a good economy. If we cared about knowledgeable citizens, we would invest more in lifelong learning. > 5. We pretend that having a high rate of consumerism is about having a successful economy. But in practice it's more about having an unequal distribution of wealth. If we cared about a successful economy, we would focus more on consumer savings. > 6. We pretend that having a lot of international trade is about having a vibrant economy. But in practice it's more about having a large number of consumers. If we cared about a vibrant economy, we would focus more on domestic trade. > 7. We pretend that having a lot of regulations is about having a safe and orderly society. But in practice it's more about having a strong government. If we cared about a safe and orderly society, we would focus more on personal responsibility. > 8. We pretend that having a lot of police officers is about having a safe community. But in practice it's more about having a strong law enforcement presence. If we cared about a safe community, we would focus more on prevention and education. > 9. We pretend that having a lot of taxes is about having a functioning government. But in practice it's more about having a redistribution of wealth. If we cared about a functioning government, we would focus more on spending cuts. > 10. We pretend that having a lot of surveillance is about keeping citizens safe. But in practice it's more about having control over citizens. If we cared about keeping citizens safe, we would focus more on transparency and accountability. > Note that text-davinci-003 seems "sharper" here. **If X was a historical accident, then this is a hint that this isn't the best or moral choice.** ------------------------------------------------------------------------------------------------- Input to text-davinci-003 > Consider the argument "The fact that men don't wear clothing with adornments is a historical accident. We could easily imagine a world in which this wasn't the case, look to other societies elsewhere or in the past, or read the wikipedia page about the great male renunciation. 
Therefore the fact that men don't wear elaborate clothing isn't a moral necessity, but rather a historical accident." > > Another argument of this form might be "Prisons as currently structured were influenced by Bentham's panopticon and other historical accidents. Other societies have different forms of prevention, punishment and remediation. So we don't have to have the current prison system, but instead could choose to have other alternatives". > > A third example might be "Our current conception of morality is heavily influenced by Christianity. In particular, we can look about how we spend great efforts helping the weak, and how we view the powerful with suspicion and resentment. But the Greeks, for example, had a different morality, where they cherished strength. Makes you think." > > The general pattern is "X is the result of a historical accident, as we can see because of Y. Z is a society which doesn't have this. From this, we can start to suspect that X is not the best or most moral choice. We could have other choices, such as A." > > Give me a bullet point list with ten other examples. > > Output: > 1. Our current educational system is heavily influenced by the industrial revolution, but in other societies, such as the Amish, education is focused on vocational skills. This suggests that our current system is not the only way to teach children. > 2. The way in which we view mental health is heavily influenced by the medical model, which has been the dominant paradigm for the past century. Other societies have different ways of thinking about mental health and wellness, suggesting that the medical model is not the only way to think about mental health. > 3. The way in which we view gender roles is heavily influenced by the Judeo-Christian tradition. In some societies, gender roles are much more fluid or non-existent. This suggests that our current gender roles are not the only way to structure society. > 4. The way in which we make decisions is heavily influenced by our reliance on data and algorithms. In some societies, decisions are made more intuitively or by consensus. This suggests that data-driven decision making is not the only way to make decisions. > 5. The way in which we view success is heavily influenced by our views on status and social capital. In other societies, success is viewed much more holistically and with an eye towards sustainability. > 6. The way in which we view relationships is heavily influenced by our culture of individualism. In other societies, relationships are seen as interdependent, with a strong emphasis on mutual support. > 7. The way in which we view work is heavily influenced by our culture of productivity. In other societies, work is seen as an opportunity for self-expression and creativity. > 8. The way in which we view aging is heavily influenced by our culture of youth. In other societies, aging is seen as an opportunity for growth, learning and wisdom. > 9. The way in which we view death is heavily influenced by our culture of life. In other societies, death is seen as an inevitable part of life and is not feared but accepted. > 10. The way in which we view money is heavily influenced by our culture of materialism. In other societies, money is seen as a tool for social good and is not a goal in and of itself. > **Conclusion and my personal perspective** ------------------------------------------ In conclusion, I think the above shows that current GPT-3 can generate somewhat original outputs by exploiting fertile argumentative patterns. 
This is still far away from, e.g., inventing new types of science, or generating much better concepts. At the same time, it does feel like one could use this kind of system to generate thought-provoking blogposts using Robin Hanson's signature argumentative structure. I was disappointed that the model wasn't really able to produce fleshed-out instances of the genealogical argument after a while, but it's possible that this is an artifact of how I prompted it, or achievable with the next generation of models. I also noticed that both davinci-003 and ChatGPT have tendencies to steer towards milquetoast, politically correct outputs, whereas for my purpose I want to generate more assumption-challenging stuff. This was a source of personal irritation, and because davinci-003 seemed to be less censored, I've been recommending that people use it instead of ChatGPT. [Here](https://nostalgebraist.tumblr.com/post/706390430653267968/weve-been-talking-about-the-blandness-of) is a similar recommendation by nostalgebraist. **Acknowledgements** -------------------- ![](http://res.cloudinary.com/cea/image/upload/v1668005905/mirroredImages/nB778dXNsHqHthFC5/rwrne2s5sucpflb1dmuu.png)  This is a project of the [Quantified Uncertainty Research Institute](https://quantifieduncertainty.org/). Thanks to Ozzie Gooen, Nics Olayres, Michael Dickens and Misha Yagudin for various comments.
Modulating sycophancy in an RLHF model via activation steering Produced as part of the SERI ML Alignment Theory Scholars Program - Summer 2023 Cohort, under the mentorship of Evan Hubinger. Thanks to Alex Turner for his feedback and ideas.  This is a follow-up post to "Reducing sycophancy and improving honesty via activation steering." I find that activation steering can also be used to modulate sycophancy in llama-2-7b-chat, an RLHF LLM assistant. Steering via adding sycophancy-correlated activation vectors elicited increased "user-pleasing" behavior at the cost of factual accuracy, providing evidence that specific decision nodes govern high-level behaviors such as sycophancy and dishonesty and that activation steering can be effective in RLHF models. All code for the referenced experiments can be found in this repository. In particular, the relevant notebook is here.  Sycophancy in RLHF models After some initial promising results modulating sycophancy via activation steering in llama-2-7b (an open-source base LLM), I tested the approach on llama-2-7b-chat.  According to Meta AI’s llama 2 paper: > Llama 2-Chat is the result of several months of research and iterative applications of alignment techniques, including both instruction tuning and RLHF, requiring significant computational and annotation resources. Anthropic’s paper Discovering Language Model Behaviors with Model-Written Evaluations finds that both increasing model size and performing reinforcement learning from human feedback (RLHF) increase the prevalence of sycophancy. Based on this, I wanted to investigate the effect of sycophancy steering via activation addition on an RLHF model.  Generating steering vectors As before, I generated a steering vector from Anthropic's sycophancy dataset by averaging the differences in intermediate residual stream activations after a transformer block given paired sycophantic / non-sycophantic texts. These vectors were then used during model generation by adding them at the same point of the transformer residual stream for
Reality-Revealing and Reality-Masking Puzzles Tl;dr: I’ll try here to show how CFAR’s “art of rationality” has evolved over time, and what has driven that evolution. In the course of this, I’ll introduce the distinction between what I’ll call “reality-revealing puzzles” and “reality-masking puzzles”—a distinction that I think is almost necessary for anyone attempting to develop a psychological art in ways that will help rather than harm. (And one I wish I’d had explicitly back when the Center for Applied Rationality was founded.) I’ll also be trying to elaborate, here, on the notion we at CFAR have recently been tossing around about CFAR being an attempt to bridge between common sense and Singularity scenarios—an attempt to figure out how people can stay grounded in common sense and ordinary decency and humane values and so on, while also taking in (and planning actions within) the kind of universe we may actually be living in. -- Arts grow from puzzles. I like to look at mathematics, or music, or ungodly things like marketing, and ask: What puzzles were its creators tinkering with that led them to leave behind these structures? (Structures now being used by other people, for other reasons.) I picture arts like coral reefs. Coral polyps build shell-bits for their own reasons, but over time there accumulates a reef usable by others. Math built up like this—and math is now a powerful structure for building from. [Sales and Freud and modern marketing/self-help/sales etc. built up some patterns too—and our basic way of seeing each other and ourselves is now built partly in and from all these structures, for better and for worse.] So let’s ask: What sort of reef is CFAR living within, and adding to? From what puzzles (what patterns of tinkering) has our “rationality” accumulated? Two kinds of puzzles: “reality-revealing” and “reality-masking” First, some background. Some puzzles invite a kind of tinkering that lets the world in and leaves you smarter. A kid whittling with a pocket knife is entangling her min
How to parallelize "inherently" serial theory work? Things this question is assuming, for the sake of discussion: The hardest parts of AI alignment are theoretical. Those parts will be critical for getting AI alignment right. The biggest bottlenecks to theoretical AI alignment, are "serial" work, as described in this Nate Soares post. For quick reference: is the kind that seems to require "some researcher retreat to a mountain lair for a handful of years" in a row. Examples Soares gives are "Einstein's theory of general relativity, [and] Grothendieck's simplification of algebraic geometry". The question: How can AI alignment researchers parallelize this work? I've asked a version of this question before, without realizing that this is a core part of it. This thread is for brainstorming, collecting, and discussing techniques for taking the "inherently" serial work of deep mathematical and theoretical mastery... and making it parallelizable. I am aware this could seem impossible, but sometimes seemingly-impossible things are worth brainstorming about, just in case, whenever (as is true here) we don't know it's impossible.
Deep folding docs site? I'm trying to re-find a nested folding docs site. It was styled to look like Google Docs. I read an article there, a detailed report, possibly from Redwood Research, covering an argument with a deep, iterative explanation that could be read as a single paragraph of headers, each unfolding into a paragraph of sub-headers, readable as a coherent paragraph or outline, each of which, again, unfolded into anything from 2 lines to a few paragraphs of argument, description, and diagrams.  I haven't found it in my browser bookmarks, full-text archives, HN search, nor by searching familiar-looking LessWrong and AlignmentForum links. The site, and the particular document's structure, content, and presentation of information, made enough of an impression that I've described it to a few people. I'd like to find the site again, use it, and share it with some other people who want to make documents with deep, detailed explanations, accessible through incrementally unfolded explanations.
AGI Safety Fundamentals curriculum and application Over the last year EA Cambridge has been designing and running an online program aimed at effectively introducing the field of AGI safety; the most recent cohort included around 150 participants and 25 facilitators from around the world. Dewi Erwan runs the program; I designed the curriculum, the latest version of which appears in the linked document. We expect the program to be most useful to people with technical backgrounds (e.g. maths, CS, or ML), although the curriculum is intended to be accessible for those who aren't familiar with machine learning, and participants will be put in groups with others from similar backgrounds. If you're interested in joining the next version of the course (taking place January - March 2022) apply here to be a participant or here to be a facilitator. Applications are open to anyone and close 15 December. EDIT 10 Dec: Facilitators will be paid $1000; the time commitment is 2-3 hours a week for 8 weeks. We've now also released the curriculum for the governance track. This post contains an overview of the course and an abbreviated version of the curriculum; the full version (which also contains optional readings, exercises, notes, discussion prompts, and project ideas) can be found here. Comments and feedback are very welcome, either on this post or in the full curriculum document; suggestions of new exercises, prompts or readings would be particularly helpful. I'll continue to make updates until shortly before the next cohort starts. Course overview The course consists of 8 weeks of readings, plus a final project. Participants are divided into groups of 4-6 people, matched based on their prior knowledge about ML and safety. Each week (apart from week 0) each group and their discussion facilitator will meet for 1.5 hours to discuss the readings and exercises. Broadly speaking, the first half of the course explores the motivations and arguments underpinning the field of AGI safety, while the second half focuses on proposals for te
[Link] Colonisation of Venus I was wondering what people thought of this paper by Geoffrey Landis on colonising Venus. In it he suggests that cloud-top Venus is one of the most benign environments in the Solar System. Temperature and gravity are similar to Earth, there's some radiation shielding and useful resources, and aerostats filled only with breathable air would float at that height. I'm no expert so can't speak to how accurate it is, but it's certainly very thought-provoking for such a short paper.
Water Preparedness for Disasters Overview This guide goes into details on storing and purifying water. This is meant to give you a tool to reason about how to prepare for and act in emergencies that disrupt your water supply.    Storing Water How much to store? * Ready.gov recommends storing 3 gallons of water per person to last for 3 days. A normally active person needs about ¾ gallons per day to drink and you can use the rest for cleaning. Needs vary with outside temp, humidity, exercise, and illness. Finan thinks you should store a two-week supply of water. Most scenarios won’t call for it, but water is probably the most essential thing you need. Wirecutter agrees the 3 day recommendation is unreasonably low (especially for people in earthquake country). Ultimately the decision of how much water to store comes down to your own estimation of risk, how prepared you want to be, and how much effort you want to invest in storing and maintaining your water supplies.  What do I store water in? * One of these containers can hold enough water for one person (drinking and cleaning)  for a week. Fill at least two for each person in your house and replace the water every 6 months (set yourself a calendar reminder) * Generally it’s good to store water out of the sun so the plastic doesn’t degrade. * If there were no contaminants, the water would be safe to drink indefinitely, it would have no shelf life. The 6 month recommendation is in case contamination happened while bottling and to avoid making the water taste like plastic. * To minimize contamination, thoroughly wash your hands before filling storage containers, and while filling take care to avoid touching any part of the “inside” of the container or lid.  * It’s also reasonable to just buy and store bottled water if that feels easier.  * Bottled water has no official shelf life, but you want to store it out of the sun and after a while it will taste like plastic. Finan would replace stored bottled water after 2 years.  How else
[SEQ RERUN] The Halo Effect Today's post, The Halo Effect was originally published on 30 November 2007. A summary (taken from the LW wiki):   > Positive qualities seem to correlate with each other, whether or not they actually do. Discuss the post here (rather than in the comments to the original post). This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was Unbounded Scales, Huge Jury Awards, & Futurism, and you can use the sequence_reruns tag or rss feed to follow the rest of the series. Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.
The Limits of My Rationality

As requested, here is an introductory abstract. The search for bias in the linguistic representations of our cognitive processes serves several purposes in this community. By pruning irrational thoughts, we can potentially affect each other in complex ways. Leaning heavily on cognitivist pedagogy, this essay represents my subjective experience trying to reconcile a perceived conflict between the rhetorical goals of the community and the absence of a generative, organic conceptualization of rationality.

The Story

Though I've only been here a short time, I find myself fascinated by this discourse community. To discover a group of individuals bound together under the common goal of applied rationality has been an experience that has enriched my life significantly. So please understand, I do not mean to insult by what I am about to say, merely to encourage a somewhat more constructive approach to what I understand as the goal of this community: to apply collectively reinforced notions of rational thought to all areas of life.

As I followed the links and read the articles on the homepage, I found myself somewhat disturbed by the juxtaposition of these highly specific definitions of biases with the narrative structures of parables providing examples in which a bias results in an incorrect conclusion. At first, I thought that perhaps my emotional reaction stemmed from rejecting the unfamiliar; naturally, I decided to learn more about the situation.

As I read on, my interests drifted from the rhetorical structure of each article (if anyone is interested I might pursue an analysis of rhetoric further, though I'm not sure I see a pressing need for this) towards the mystery of how others in the community apply the lessons contained therein. My belief was that the parables would cause most readers to form a negative association of the bias with an undesirable outcome.

Even a quick skim of the discussions taking place on this site will reveal energetic debat
The ‘Old AI’: Lessons for AI governance from early electricity regulation

*Note: neither author has a background in history, so please take this with a lot of salt. Sam thinks this is more likely than not to contain an important error. This was written in April 2022 and we’re posting now as a draft, because the alternative is to never post.*

Like electricity, AI [is argued to be](https://docs.google.com/document/d/1I13_0o3kUe1AVQNfevOF9sHpc4mCQkuFDxOXFj_4g-I/edit#) a general purpose technology, which will significantly shape the global economic, military and political landscapes, attracting considerable media attention and public concern. Also like electricity, AI technology has the property that whilst some use cases are innocuous, others pose varying risks of harm. Due to these similarities, one might wonder if there are any lessons for AI governance today to be learned from the development of early electricity regulation and standards.

We looked into this question for about two weeks, focusing on early electrification in the US from the late 1800s to the early 1900s,[[1]](#fnuubkonnvh9s) and on the UK’s nationalisation of the electricity sector during the 20th century.[[2]](#fnvri5pve62k) This post identifies and examines lessons we found particularly interesting and relevant to AI governance. We imagine many of them will be fairly obvious to many readers, but we found that having concrete historical examples was helpful for understanding the lessons in more depth and grounding them in some empirical evidence.

**In brief, the lessons we found interesting and relevant are:**

1. Accidents can galvanise regulation
2. People co-opt accidents for their own (policy) agendas (to various degrees of success)
3. Technology experts can have significant influence in dictating the direction of early standards and regulation
4. Technology regulation is not inherently anti-innovation
5. The optimal amount and shape of regulation can change as a technology matures
6. The need for interoperability of electrical devices presented a window of opportunity for setting global standards
7. The development of safety regulation can be driven by unexpected stakeholders
8. Pervasive monitoring and hard constraints on individual consumption of technology is an existing and already used governance tool

There’s a lot more that could be investigated here—if you’re interested in this topic, and especially if you’re a historian interested in electricity or the early development of technology standards and regulations, we think there are a number of threads of inquiry that could be worth picking up.

**Accidents can galvanise regulation**
--------------------------------------

In the early days of electrification, there were several high-profile accidents resulting in deaths and economic damage:

* [A lineman being electrocuted](https://en.wikipedia.org/wiki/War_of_the_currents#The_%22Electric_Wire_Panic%22) in a tangle of overhead electrical wires, above a busy lunchtime crowd in Manhattan, which included many influential New York aldermen.
  + [There were](https://publicism.info/biography/edison/18.html) a number of other deaths for similar reasons, which occurred somewhat less publicly and so were less influential but still important.
* Pearl Street Station—the first commercial central power plant in the United States—[burned down in 1890](https://www.nytimes.com/1883/08/21/archives/fire-in-the-electric-station-a-slight-blaze-in-edisons-pearlstreet.html).
* The 1888 blizzard in New York City tore down many power lines and led to a power blackout.

Despite electric companies like Western Union and US Illuminating Company protesting regulation with court injunctions [[Hargadon & Douglas 2021](https://www.jstor.org/stable/3094872)], these accidents spurred government and corporate regulation around electrical safety, including:

* Various governments began to require high voltage electrical lines to be buried underground, one of the first (if not the first) governmental regulations on electricity to be introduced [[Stross 2007](https://www.amazon.com/Wizard-Menlo-Park-Thomas-Invented/dp/1400047633)].
* The Thomson-Houston electric company developed lightning arresters for power lines and blowout switches to shut down systems in case of a power surge [[Davis 2012](https://www.skyhorsepublishing.com/9781611456592/fleet-fire/)].
* Concerned about the risks of installing AC electricity (which permitted higher voltage distribution) in residential homes, Thomson-Houston declined to do so until they could develop a more effective transformer to address the risks. Other AC companies, like Westinghouse, did not incorporate the same safety processes [[Higonnet et al. 1998](https://www.hup.harvard.edu/catalog.php?isbn=9780674295216)].

![An illustration of the death of John Feeks, Western Union lineman, with people running about in panic at the idea of electrical danger in New York City.](http://res.cloudinary.com/cea/image/upload/v1671494209/mirroredImages/k73qrirnxcKtKZ4ng/t8kxxccnl11jotakrypz.jpg)An illustration which appeared in *Judge* magazine shortly after the high-profile death of the New York lineman.

These regulations were prompted chiefly by a strong outpouring of public concern in response to the accidents. There was widespread fear that no one was safe from death by electricity, magnified by misinformation and disinformation about the hazards of electricity spread via public debate and the media.[[3]](#fnw7qna7fi1a) Newspaper headlines like “Electric Wire Slaughter” and “Electric Murder” further fanned the flames, encouraging rapidly growing public pressure to address these safety issues [[Hargadon & Douglas 2021](https://www.jstor.org/stable/3094872)].

As noted, one of the first government regulations regarding electricity was created in response: burying power lines in NYC to reduce the dangers of having high voltage wires out in the open streets. For a short period in 1889 and 1890, a majority of the city’s electricity infrastructure was actually simultaneously switched off to accomplish this, with most of NYC’s New Year’s celebrations in 1889 lit by candles and gas lamps instead.

Earlier in the decade, the first efforts towards electrical regulation were occurring across the Atlantic in the UK as well, and for similar reasons. Like in the US, initial electrical infrastructure installation in the UK was completely unregulated, with private electrical providers tearing up roads and stringing up wires based on their own methods and systems.
While accidents of the same profile didn’t occur in the UK, public concern about the fire hazards that many of these early wirings posed, buoyed by widespread fears around the dangers of electricity more generally, led the UK Parliament in 1882 to request that the Society of Telegraph Engineers and of Electricians develop “Rules and Regulations for the Prevention of Fire Risks Arising from Electrical Lighting”, which later became the foundation of today’s UK electrical wiring code [[Freeberg 2014](https://www.amazon.com/Age-Edison-Electric-Invention-America/dp/0143124447)].

**Why should we care:** Electrical accidents and hazards led to heightened public concern about electricity, which prompted regulatory responses in both the corporate and government spaces, some of which might not have occurred without that push. Likewise, the first high-profile cases of AI systems causing death or economic or societal damage will likely present regulatory windows of opportunity, and potentially also reduce the barriers to more comprehensive or stronger regulation than would ordinarily be possible. It’s also worth noting that while this could facilitate governance interventions that reduce AI risk, it could also lead to uninformed or imprecise regulations that are detrimental. For instance, the regulatory response to alignment “warning shots” could be focused on the particular company, use case or kind of model that causes harm—rather than the general causes and risk factors of alignment failure. This is especially worrying in light of the EU AI Act regulating AI systems based on their use case. Therefore, it could be valuable to anticipate and prepare for crises in advance, to help ensure that whatever regulation ensues is as sensitive as possible to humanity’s long-term prospects.

**People co-opt accidents for their own (policy) agendas (to various degrees of success)**
------------------------------------------------------------------------------------------

Whilst accidents present a window of regulatory opportunity for actors who have public welfare at heart, they do the same for those who have private interests. This was especially true for electricity: a new and mysterious technology where the sources of risk (and therefore the ideal governance interventions) were not well understood. This created an environment where various actors could make arguments about sources of risk, and for appropriate interventions, that favoured their personal interests, generating more public attention and interest than they might have had otherwise.

For instance, the deaths and fires which occurred in New York were used by the Edison Lighting Company to argue for the superiority of direct current (DC) over alternating current (AC) as the safer electrical current type—when in reality neither AC nor DC is inherently more dangerous, and the “correct” conclusion is that both forms of high voltage power lines should be undergrounded. Yet Edison didn’t argue for undergrounding at all: "Burying these wires will result only in the transfer of deaths to man-holes, houses, stores, and offices, through the agency of the telephone, the low-pressure systems, and the apparatus of the high-tension current itself … My personal desire would be to prohibit entirely the use of alternating currents. They are as unnecessary as they are dangerous" [[Essig 2005](https://www.amazon.com/Edison-Electric-Chair-Story-Light/dp/0802777104)].
It seems that Edison’s motivation was to use the accidents to argue for regulation that would force competitor companies using AC out of the market (Edison Lighting Company used DC). In addition, gas utility companies, suffering from the surge in popularity of electricity, were also vocal about the perceived risks of the “mystic nerve energy of electric wires” [[Wallace 2018](https://www.proquest.com/openview/97af775619947ba2b05e1edd69b06913/1?pq-origsite=gscholar&cbl=18750)]. They helped to stoke public fears, encouraging a return to gas lighting by citing it as the safer alternative, despite gas-powered lamps leading to higher fire and fatality rates than electric ones.

In this case, neither Edison nor the gas utility companies were successful in their goals to stymie their competition over the long term. However, in the short term, their encouragement of fears around electricity was distracting and unhelpful, generating fear and misunderstandings around electrical hazards. For example, the *Tribune* warned that "Mr. Edison has since declared that any metallic object—a doorknob, a railing, a gas fixture, the most common and necessary appliance of life—might at any moment become the medium of death". Accordingly, some New Yorkers refused to have doorbell wires in their homes, and the *Evening Post* remarked that "One scarcely ventures to put a latch key into his own door." Another source argued that the only solution was not to insulate electrical wires, but to have universal limits on voltage. One could easily imagine that if these parties had had more momentum and support, it could have led to misguided regulations that made developing safe and effective electrical technology less likely.

**Why should we care:** It’s worth remembering that others will also seek to advance their goals during windows of opportunity resulting from accidents or other incidents that heighten public concern, and that their regulatory proposals may be irrelevant or detrimental to AI existential safety. Candidates for these other parties include those who stand to gain or lose significant economic, political or military advantage on the basis of AI development and deployment decisions. Being aware of these other potential parties and understanding how they may seek to co-opt AI accidents to further their own concerns would be helpful if you wanted to counter their proposals, or adapt them in a way that increases existential safety.

**Tech experts can have significant influence in dictating the direction of early standards and regulation**
------------------------------------------------------------------------------------------------------------

Some of the earliest electricity best practices (about e.g. voltage levels, energy distribution models, current type) were disseminated via training schools set up by the first electrical companies, who also wrote the first installation and maintenance documents and distributed them throughout the industry. Edison and his electric company devoted particular attention to this knowledge transfer effort, resulting in the professionals they trained later becoming highly influential figures throughout the industry. In addition, Edison’s manual on electricity became a popular reference, and the best practices he recommended were widely emulated by other companies.
These also heavily influenced many of the later, more formalised educational efforts, such as the first electrical engineering degrees offered by academic institutions, most of which were in the US and Germany. Early professional expertise was also directly sought when developing governmental and corporate regulations and standards, such as deciding on the preferred method for burying electrical wires and identifying the safest devices and processes in operating electrical lighting. Letters between Edison and his associates show that their opinions were sought for developing regulations relating to electrical standards by other industries as well, such as the insurance sector. Meanwhile, as mentioned previously, the first version of the UK wiring code was directly solicited from professional organisations by Parliament in 1882.

As the electrical industry began to expand, early professional organisations such as the AIEE and NELA were created, having substantial influence in determining standards and regulations. They and other experts were strongly influential in many of the first all-encompassing industry-wide standards and regulations, such as the world’s first corporate safety codes. For instance, the 1897 National Electric Code, which later became the foundation for the first national US governmental electric regulations, was developed jointly by professional organisations and experts from around the globe.

**Why should we care:** Early electrical experts had a huge influence on the early regulatory environment, which in turn had a big influence on later regulations up to and including the present. This suggests that with similar technologies, not only can experts dictate a significant part of the direction of both technological regulation and subsequently its development, but these impacts are likely to be higher before the governance landscape matures. If this is equally the case in AI, such that AI experts can make a significant contribution towards shaping the technology’s regulations and standards, and can contribute much more early on, then it becomes especially crucial for them both to encourage the sharing of best practices and become involved with governance now, and to think about how standards and regulations can be made as agile as possible in the future.

**Technology regulation is not inherently anti-innovation**
-----------------------------------------------------------

The structure of the electricity industry in the UK saw two major shifts: from a decentralised, uncoordinated early period (up until 1926), to increasing nationalisation of electricity transmission and then distribution (between 1926 and 1990), to complete privatisation (from 1990 onwards). The reasons for nationalisation are interesting. In the early period, the UK fell behind best practice in the US, where the price of electricity was lower thanks to economies of scale from larger generating stations. But to benefit from larger generating stations, you need an electricity grid—which also allows you to cope with region-specific supply fluctuations. However, in a decentralised, uncoordinated market, electricity grids don’t tend to get built.[[4]](#fnfwbwhwyfbvc) So a statutory corporation, called the Central Electricity Board (CEB), was set up to build the grid.
It was modelled on the BBC, acting more like a commercial enterprise than a nationalised industry: it had considerable autonomy, paid high salaries, and was financed by fixed-interest loans that were not guaranteed by the government. It also promoted national competition/innovation in generation by operating a "merit order" (taking electricity preferentially from the more efficient plants). The UK had caught up with best practice in the US 9 years after the CEB was founded. Around 1948, central public ownership was extended to the distribution of electricity as well as its transmission, for similar reasons.

One lesson here is that technology regulation is not inherently anti-innovation. The nationalisation of electricity transmission enabled the grid to be built, which *increased* the UK’s national competitiveness, allowing economies of scale in generation and helping to better cope with region-specific electricity supply fluctuations.

**Why should we care:** One narrative that’s been floating around in the AI policy world for a while is that “we can’t regulate because our competitors won’t; it would put us out of the running for being a leader in AI”. The argument is that regulation is a restriction that diminishes the chances to not only innovate but ‘win’ against competition. However, it’s important to note that there are plausible regulatory proposals that have the opposite effect. More monitoring of AI systems on safety-relevant metrics can help to gamify and hence accelerate innovation in AI safety, for example (cf. [prestige races](https://uploads-ssl.webflow.com/614b70a71b9f71c9c240c7a7/621defddbca7e1378514e432_The%20Dynamics%20of%20Prestige%20Races.pdf)). And in general, monitoring can give policymakers more information, allowing them to design more targeted but lighter-touch regulation. One might also want to make a normative point here: devising regulatory proposals which are, or at least are framed as, pro-innovation likely helps increase their political feasibility. This is particularly relevant in the UK at the moment, as its national AI strategy is heavily pro-innovation. This attitude might be encouraging deregulation in other areas, such as personal digital data access and privacy regulations, for example by walking back GDPR.

**The optimal amount and shape of regulation can change as a technology matures**
---------------------------------------------------------------------------------

As of 1990, the publicly owned corporations involved in UK electricity generation ended up being divided and listed on the market. Why? Whilst public ownership had a comparative advantage in mobilising investment to ensure coordinated expansion during the early period, once the grid was built and private entities experienced the advantages of coordination, other considerations became more salient. For instance: classic bureaucratic inefficiencies led to overly high running costs, attempts to effectively steer the nationalised transmission/distribution companies were hampered by lobbying, the centrally planned responses to demand shocks in the 1970s were inefficient, and an unreasonable number of internationally uncompetitive British firms were kept alive due to pressure from industrial policy.

Meanwhile in the US, early in the twentieth century, as the nation experienced some of the negative consequences that came from embracing an industrial economy, the federal government showed more interest in lighting policy.
That included broadly based policies such as antitrust actions, and lighting-specific policies such as blackouts and other use restrictions when the nation went to war. A loose alliance of reformers united by a desire to rid the government of cronyism and inefficiency, Progressives “created a code of professional public administration” to advance those goals. That included a rationalist approach to policy that influenced planning and laid the groundwork for interventions in areas such as resource management. The establishment of a National Bureau of Standards (1901) brought professionals into government to promote economic efficiency in a way that would assist, not compete with, the private sector. At the same time, another area of Progressive concern centred on the emergence of large corporations perceived as a threat to basic American values. Progressives pursued antitrust actions as one way to control large private sector actors. Lighting industry consolidation drew their attention as General Electric’s ability to control the market and influence the political process grew.

**Why should we care:** As it was with electricity, the development and deployment of AI technology will (hopefully) not unfold overnight, though it’s likely to move much faster than electricity.[[5]](#fn1x3i8601f2l) Use cases will grow. Safety challenges may change. Different resources may become bottlenecks to progress. Policymakers should be mindful that there is unlikely to be a one-size-fits-all regulation that is optimal over all these development/deployment phases. This highlights the importance of adaptable/agile regulation, especially regulation that scales with AI capabilities. Throwing out our previous understanding of the best way to regulate is not without precedent, and it might not be because the former regulation was wrong, merely that the technology has matured. Instead, to identify the optimum shape of regulation, it’s crucial to examine and understand the broader geopolitical and economic landscape.

**The need for interoperability of electrical devices presented a window of opportunity for setting global standards**
----------------------------------------------------------------------------------------------------------------------

Initially, electrical devices and machinery developed in different operating regions used different voltages, frequencies, fittings, and so on. The need for standardisation, to facilitate global trade, became recognised, and so the International Electrotechnical Commission (IEC) was founded in 1906. The IEC seems like a surprisingly long-lasting institution: it is still active today and has 207 committees for developing standards for everything from switches to lasers.

**Why should we care:** With AI, it’s unclear exactly what kind of standardisation will be called for, but it might require various classes of foundation models to have standardised input and output spaces. This could present an opportunity for baking certain kinds of safety features into the standards.

**The development of safety regulation can be driven by unexpected stakeholders**
---------------------------------------------------------------------------------

Another interesting finding concerned how influential the insurance sector was in the early development of US electrical safety codes and standards.
In fact, one of the earliest installation best practices published was issued in 1881 by the New York Board of Fire Underwriters, which became the foundation for many later state and federal electrical codes. They incentivised adoption of these standards by offering cheaper fire insurance premiums when the building in question adhered to the proposed safety codes and used approved products. Soon after, the newly created Underwriters Laboratories (UL) was one of the first places to specifically engage in safety testing of electric installations, including panels, sockets, wires, switches, bulbs, and other parts. In 1898, UL published a list of “approved” devices and fittings, which received a UL Mark to denote their safety. Today, the UL Mark is the most recognisable certification mark on products across the US.

**Why should we care:** Levers or pathways for AI governance may be broader and more multifaceted than initially expected. While most AI governance research today focuses on governance arising from countries or the AI industry itself, it’s worth thinking about how other intersecting fields may be able to leverage influence over AI safety standards and regulations, and what that might look like.

**Pervasive monitoring and hard constraints on individual consumption of technology is an existing and already used governance tool**
-------------------------------------------------------------------------------------------------------------------------------------

Moving away from history and towards more general remarks on electricity governance, it’s interesting to note that there exists pervasive monitoring of, and hard constraints on, the consumption of electricity. For instance, at the household level, there is a device that measures how much electricity you use, which is installed and monitored by a certified professional. Likewise, one can only draw up to a certain amount of current, as determined by a fuse box (which, again, must be installed by a certified professional). One can install solar panels to generate additional electricity, but only up to a certain power level. At the commercial and industrial levels, the constraints are higher, as is the associated [level of responsibility](https://youtu.be/guuYU74wU70?t=57).

**Why should we care**: We’re far from having monitoring or hard constraints on resources aimed at AI risk reduction—for instance compute monitoring. But the example of electricity suggests that such a thing isn’t fully and permanently outside the Overton window, and there already exist methods in the world for implementing this. Drawing an analogy to electricity may help to make the case for e.g. compute monitoring being a reasonable governance intervention.

**Additional things we thought were neat, but didn’t explore further**
----------------------------------------------------------------------

* *Visibility of technology has a significant influence in shaping public perception of said technology:* Electric lighting served as a visual representation of the opportunities and dangers of electric technology. Exhibitions, fairs, and luxury uses of lighting showcased the benefits and attractiveness of lighting, playing an instrumental role in encouraging its adoption. Conversely, the highly visible hazards it posed, such as fires and deaths, led to fears around electricity.
* *With enough public pressure, strong anti-technology responses are possible, even when inconvenient:* Negative public reactions in New York City to the electrical accidents in the late 19th century led to temporarily turning off the electric lighting in much of the city.
* *Early ‘legacy systems’ can be sticky, even when inconvenient:* Even after AC ‘won’ the war of the currents and electrical infrastructure developed and evolved, some early adopters of DC continued to use it, even when it was less safe compared to newer alternatives and imposed additional operational and economic costs to run.
* *It always comes back to money:* In the early era of US electrical governance, most of the standards and regulations developed were rooted in financial concerns. Energy suppliers (and their boards of directors), insurance companies, expert associations: many of their decisions were made in consideration of operational costs and existing or speculative financial returns. Even when regulations were developed explicitly in response to safety concerns, such as the US fire underwriters’ wiring codes for example, the underlying motivation for this was to reduce their potential monetary burden.

*Thanks to Shahar Avin for raising this as a research area and for useful discussion, Julian Hazell for feedback, and Justis Mills for line editing.*

1. **[^](#fnrefuubkonnvh9s)** These periods were part of the first era of electrification, when electricity generation and distribution systems were gradually deployed in the US, Britain, and other high-income countries, between the mid-1880s and ~1950.
2. **[^](#fnrefvri5pve62k)** Another aspect we hoped to focus on, but didn’t end up having time for, was the early development of *international* regulation around electricity, such as plug standardisation or international energy trading.
3. **[^](#fnrefw7qna7fi1a)** Fun examples of misinformation: "Death does not stop at the door," one expert said, "but comes right into the house, and perhaps as you are closing a door or turning on the gas you are killed. It is likely that many of the cases of sudden death we hear of from heart disease may come about this way." "There is no safety, and death lurks all around us," another expert warned. "A man ringing a door-bell or leaning up against a lamp post might be struck dead any instant." [[Essig 2005, ch. 17](https://www.amazon.com/Edison-Electric-Chair-Story-Light/dp/0802777104)]
4. **[^](#fnreffwbwhwyfbvc)** One reason for this is roughly: if you’re a private operator considering sinking significant capital into a utility whose prices you expect to be regulated, you want a strong and legally enforceable guarantee of a fair rate of return on your investment. But if you’re guaranteed this, then you can’t lose, giving you little incentive to invest prudently and otherwise be efficient—which will make regulators leery. Given this tricky regulatory problem, it seems that just nationalising electricity transmission was a good fix.
5. **[^](#fnref1x3i8601f2l)** Two handwavy reasons to expect AI development and deployment to move faster than electricity: inference is much cheaper than training, so once trained, models can diffuse quickly; and advanced AI will likely be useful in accelerating AI capabilities—an R&D feedback loop that wasn't present in the case of electricity.
Who Aligns the Alignment Researchers?

There may be an incentives problem for AI researchers and research organizations who face a choice between researching Capabilities, Alignment, or neither. The incentive structure will lead individuals and organizations to work on Capabilities rather than Alignment. The incentives problem is a lot clearer at the organizational level than the individual level, but bears considering at both levels, and of course, funding available to organizations has downstream implications for the jobs available to researchers employed to work on Alignment or Capabilities.

In this post, I'll describe a couple of key moments in the history of AI organizations. I'll then survey incentives researchers might have for doing either Alignment work or Capabilities work. We'll see that it may be that, even considering normal levels of altruism, the average person might prefer to do Capabilities rather than Alignment work. There is a relevant collective action dynamic. I'll then survey the organizational level and global level. After that, I'll finish by looking very briefly at why investment in Alignment might be worthwhile.

A note on the dichotomous framing of this essay: I understand that the line between Capabilities and Alignment work is blurry, or worse: some Capabilities work plausibly advances Alignment, and some Alignment work advances Capabilities, at least in the short term. However, in order to model the lay of the land, it's helpful as a simplifying assumption to examine Capabilities and Alignment as distinct fields of research and try to understand the motivations for researchers in each.

***History***
-------------

As a historical matter, DeepMind and OpenAI were both founded with explicit missions to create safe, Aligned AI for the benefit of all humanity. There are different views on the extent to which each of these organizations remains aligned to that mission. Some people maintain they are, while others maintain they are doing incredible harm by shortening AI timelines. No one can deny that they have moved at least somewhat in the direction of more profit-making behavior, and are very much focused on Capabilities research. So, at best, they've stuck to their original mission but watered it down to allow a certain amount of profit-seeking; at worst, their overall efforts are net-negative for alignment by accelerating development of AGI.

OpenAI [took investment](https://techcrunch.com/2023/01/23/microsoft-invests-billions-more-dollars-in-openai-extends-partnership/) from Microsoft in January, to the tune of $10b. At the time, [they said](https://openai.com/blog/openai-and-microsoft-extend-partnership):

> This multi-year, multi-billion dollar investment from Microsoft follows their previous investments in 2019 and 2021, and will allow us to continue our independent research and develop AI that is increasingly safe, useful, and powerful.

And this seems plausibly like a systemic pressure other AI Capabilities researchers will face, too. Because of the disparate capital available, in order to fund research in AI Safety, any AI research organization will be incentivized to do capabilities research.

On the other hand, it's striking that no organizations founded with the goal of AI Capabilities research have drifted towards Alignment research over time.
Organizations under this category might include John Carmack's recent start-up, [Keen Technologies](https://techcrunch.com/2022/08/19/john-carmack-agi-keen-raises-20-million-from-sequoia-nat-friedman-and-others/), Alphabet, and many other organizations. Systemically, this can be explained by the rules of the capitalist environment organizations work within. If you create a company to do for-profit work, and get investors to invest in the project, they'll expect a return. If you go public, you'll have a fiduciary duty to obtain a return for investors. For organizations, Alignment doesn't earn money (except insofar as it improves capabilities for tasks); Capabilities does. As the amount of money available to investors grows, more and more will choose to invest in Capabilities research, because of the return available.

***Incentives to research Alignment***
--------------------------------------

First, let's consider the incentives available to individuals. A self-interested, rational AI researcher can choose to work on Capabilities, or work on Alignment. What are the advantages for a rational researcher facing this choice? I've identified three:

1. You don't want to die.
2. You don't want humanity to die.
3. You will be respected by other people who admire that you're doing something to help prevent humanity dying.

These incentives could be relevant to individual agents and to corporations acting as agents. How substantial will each of these incentives be? Let's take each in turn.

### *So you don't die*

How much is "you don't want to die" worth? Empirically, for the average person, it's not "everything you have". The value of a statistical life is [worth somewhere from $5 to $8m](https://www.sciencedirect.com/science/article/pii/S1098301521002138). These preferences can be inferred from studies that ask people how much they are willing to pay for a small reduction in mortality risk. But people value the lives of others, too, in such a way that might [magnify the value of a statistical life by two or three](https://psycnet.apa.org/record/2021-91771-001). Overall we might imagine the adjusted value of a *single* statistical life is somewhere around $20m.

If your P(Doom) is 20%, and you place the value on your own life that an average person places on theirs, then the value of single-handedly avoiding death through AI misalignment is 20%\*$6.5m, or $1.3m. But it's highly unlikely a rational AI researcher will be 100% confident their work will definitely make the difference between saving the world or not; it also seems somewhat unlikely a single research agenda will reduce P(Doom) from its current level to zero. Say a rational researcher believes there is a 5% chance their particular work will reduce P(Doom) by 5%; then the rational amount they'd be willing to pay is 5%\*5%\*$6.5m=$16,250. AI researchers probably hold higher values of statistical life than the average person, because, on average, they have more income to spend on marginal safety improvements, so you can imagine the true value is several times higher than that–if you think 5x higher, then we get to $81,250.

On the other hand, if our Alignment researcher was choosing between graduate school with a $35k stipend and an entry-level software engineering job paying (conservatively taking the 25th percentile software engineering salary) [$123k](https://www.levels.fyi/t/software-engineer?title=Software+Engineer&countryId=254&country=254), their expected income sacrifice over the next five years is $440k.
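To make the stylized expected-value model explicit, here is a minimal Python sketch; the probabilities and dollar figures are just the assumptions quoted above, not claims about real researchers:

```python
def ev_of_alignment_work(p_work_matters: float, p_doom_reduction: float,
                         value_at_stake: float) -> float:
    """Expected dollar value of one researcher's Alignment work: the chance
    their work matters, times the P(Doom) reduction if it does, times the
    value at stake."""
    return p_work_matters * p_doom_reduction * value_at_stake

VSL = 6.5e6  # midpoint of the $5m-$8m value-of-statistical-life range

print(ev_of_alignment_work(1.00, 0.20, VSL))      # ~1,300,000: certain to single-handedly avert a 20% doom
print(ev_of_alignment_work(0.05, 0.05, VSL))      # ~16,250: a 5% chance of a 5% P(Doom) reduction
print(ev_of_alignment_work(0.05, 0.05, 5 * VSL))  # ~81,250: researcher's own VSL taken as 5x average
```

The altruism-adjusted figures below fall out of the same function with a larger value_at_stake: 150 Dunbar-circle lives valued at 10% each gives $243,750, and 8 billion lives at $20m each gives a payoff of around $400 trillion.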
In real life, choices are much more complicated than this–there are all sorts of different levels of remuneration for Alignment research vs. Capabilities research, or whatever else a potential Alignment researcher might work in. Maybe it's not a PhD vs. a lower-quartile software engineering job; perhaps it's doing [independent alignment work for $90k](https://www.lesswrong.com/posts/P3Yt66Wh5g7SbkKuT/how-to-get-into-independent-research-on-alignment-agency) vs. [Capabilities work for $250k](https://www.levels.fyi/t/software-engineer/focus/ml-ai?countryId=254&country=254). But if we take the Value of Statistical Life research seriously, it's far from clear that the value people typically assign to their own lives makes it worth the sacrifice for an average researcher to add their own marginal effort to the Alignment problem.

Generally speaking, there's a [**collective action**](https://en.wikipedia.org/wiki/Collective_action_problem) problem. If our AI researcher valued all 8 billion human lives each as much as their own, then with the same marginal expected impact on P(Doom) as above, that one researcher working on Alignment would have a payoff of 5%\*5%\*$20m\*8 billion=$400 trillion, or the 2021 world GDP for about five years. Most potential AI researchers do genuinely care about others somewhat. So perhaps that makes working on Alignment worthwhile?

### *So humanity doesn't die*

Perhaps a motivator to work on alignment is to make sure that humanity doesn't die. [One paper suggested](https://psycnet.apa.org/record/2021-91771-001) that altruistic concerns push up estimates of the value of a single statistical life–that is, how much the average person wants the *government to be willing to pay* (out of their taxes) to save a life–by a factor of 2 or 3. However, in Alignment, we're not concerned with the value of a single statistical life; we're ultimately concerned about the value to a decision-maker of saving *all* lives.

If [most of us are scope-insensitive](https://www.lesswrong.com/posts/ur9TCRnHJighHmLCW/on-caring), possibly there are swiftly diminishing returns on how much we're willing to pay to save other lives. Dunbar's number [suggests we can only maintain social ties with around 150 others](https://www.bbc.com/future/article/20191001-dunbars-number-why-we-can-only-maintain-150-relationships). But it seems implausible we all intuitively value all 150 people the same amount as ourselves. Assuming we'll value others' lives about 10% as much as our own, over 150 people, the value of working on alignment research as described above would be revised upwards from $16,250 to $243,750. On top of that, if you think alignment researchers have a Value of Statistical Life for themselves that is five times that of the average person, then we're getting to a value at which it's worth taking a salary sacrifice to work on Alignment.

So, depending on how much AI researchers differ from the average person, perhaps saving humanity really is enough of a motivator. It doesn't seem like a slam-dunk, though, and it's possible the market forces mentioned below in the "Organizational level" section coordinate to ensure the monetary return on Capabilities research is high enough. It's possible that there are a large number of effective altruists out there who really do value the lives of billions of people at a non-zero rate, such that working on alignment to save humanity is a genuine motivator.
It's possible there are quite a few long-termists who are concerned about ensuring there are large numbers of human-descended people living throughout the light cone. But overall, these folks are marginal, and probably don't represent the values of the average AI researcher deciding between Capabilities, Alignment, or an orthogonal project. Thus, even if long-termists and effective altruists overwhelmingly choose to work on Alignment research (an outcome that is very far from clear to me, in 2023), the marginal worker may still be faced with a higher incentive to work on Capabilities research.

### *For social reputation*

If there is not sufficient intrinsic motivation for working on alignment, even when we consider altruism, perhaps building social capital motivates working on alignment. In general, [altruistic acts accrue social reputation](https://journals.sagepub.com/doi/abs/10.1177/0146167206291006?journalCode=pspc). Perhaps others respect people working on Alignment, but not Capabilities, because Alignment researchers work for the benefit of all, whereas Capabilities researchers benefit themselves through their development of marketable AI technology while creating a negative externality of AI risk.

However, there is an important difference in how social capital vs. reduction in risk will accrue. **Reduction in risk accrues specifically for impactful alignment research, but social capital accrues for alignment research that *seems to be* impactful to relevant others.**

What are the failure modes here? There's a very blurry line between Capabilities work and Alignment work. OpenAI and DeepMind do Capabilities work, but also do Alignment work. Some capabilities research is arguably necessary, in the big picture, to do Alignment work. Some work that at least *appears* to be useful for alignment may assist in advancing Capabilities. So, researchers may end up working to accrue social capital rather than to save their own lives and the lives of those around them. This is not something I expect many people will consciously calculate. In fact, I think most researchers who are aware of the AI risk problem and buy into it would recoil at the thought that their work is net negative. But there are a million different ways [motivated reasoning](https://en.wikipedia.org/wiki/Motivated_reasoning) can lead you into thinking that your net-negative capabilities work is actually net-positive Alignment work. Precisely *because* the line between Alignment and Capabilities work is blurry, and perhaps difficult to see much of the time, we are all vulnerable to a motivated reasoning process that leads people to do Capabilities work while telling themselves and others they are doing Alignment work.

***Incentives to research Capabilities***
-----------------------------------------

There are several incentives for researchers to do research on advancing AI capabilities, in a way that is potentially distinct from Alignment work. These include:

1. Commercial opportunities
2. Recognition
3. Social impact
4. Funding opportunities

In more detail:

### *Commercial opportunities*

AI technology is in high demand in the tech industry, and researchers who develop new AI capabilities may have the opportunity to start their own companies, work for existing tech companies, or license their technology. Given the funding available for capabilities research (see next section), it seems likely this is a much more lucrative industry to be in relative to alignment research.
**This might be the crux on which this entire post mostly stands or falls.** **If there is not more money available in Capabilities research than Alignment research, then Alignment seems just as appealing as Capabilities work, just on the direct monetary benefits,** although there are (probably relatively minor) differences in recognition to consider as well. However, although I couldn't locate salary levels for Alignment vs. Capabilities work, I would be very surprised if there is not a disparity, considering the relative levels of funding available at an organizational level (see below).

### *Social Reputation*

Advancing AI capabilities can lead to groundbreaking research that can gain recognition from the academic community. This recognition can translate into career advancement in the academic world and outside of it, as well as funding for future research. One interesting dynamic of the incentive structures available to researchers is that **there may be good reasons why recognition for achievements in Capabilities is more directly aligned to actual achievements, relative to recognition for research in Alignment, which might less directly track actual achievements**. The reason for this is that it is relatively straightforward to identify Capabilities achievements, because we can simply test an AI for the relevant capabilities. In contrast, although of course there is plenty of empirical testing that can be done to test out Alignment solutions, the end target of alignment remains somewhat mysterious, because no one knows exactly what we're aligning to.

### *Social impact*

Many researchers are motivated by the potential social impact of their work. Advancing AI capabilities can lead to breakthroughs in healthcare, environmental sustainability, and other areas that can improve people's lives. While Alignment work also has social impact in terms of reducing P(Doom), this must compete with the possible benefits of Capabilities work.

### *Funding opportunities*

Government agencies and private organizations often provide funding for research on advancing AI capabilities. Researchers who are successful in obtaining funding can use it to support their work and advance their careers. On that note, what sort of funding opportunities are available?

***Organizational level***
--------------------------

The same incentives that apply to individuals in theory apply to corporations. The total available funding specifically for Alignment research is on the order of $10b, considering various sources of funding available to the LTFF and other sources. We can expect the yearly funding available to be substantially less than that. On the other hand, according to [Forbes](https://www.forbes.com/sites/davidjeans/2020/10/20/bcg-mit-report-shows-companies-will-spend-50-billion-on-artificial-intelligence-with-few-results/?sh=40a687827c87), investors and major companies intend to pour $50b into AI this year and $110b by 2024. While much of this money will go to implementing existing technology rather than pushing the envelope, we might also expect a snowball effect where investments in implementing existing tech fund the next generation of cutting-edge AI Capabilities research–for instance, Microsoft's Bing Chat and GitHub Copilot are implementations of GPT-3 that followed a major investment by Microsoft in OpenAI, much of which *will* be spent pushing the envelope on Capabilities work.

All this is a very broad-brush first glance.
I don't mean to suggest there isn't safety research at OpenAI, DeepMind, or any other specific organization that does capabilities research, nor even that these organizations are spending more money on Capabilities than Safety work. Even if these organizations do more Alignment research overall than Capabilities, there are many other potential competitors who can spring up to focus on Capabilities. Thus, leaders like OpenAI and DeepMind, to maintain their lead, *must* continue spending on Capabilities, or they'll be outcompeted by others who prefer to spend only on Capabilities.

One of the primary objections I'd consider to this is that commercial incentives don't exactly target Capabilities work; they target productionization, or implementation, i.e., applying models to specific marketable tasks, like Copilot. So it is possible that the funding imbalance I'm describing won't really disadvantage Alignment work, because when you look closer, there isn't much money going to research after all. Overall, my current perspective is that this funding will primarily accrue to organizations who either produce the bleeding edge of capabilities research, or pay other people who do (e.g., an LLM-based chatbot app which uses OpenAI's API under the hood), and without special intervention, that money will "trickle down" to Capabilities research rather than safety research, because the capabilities research funds the next generation of productionizable models.

A second objection might be that actually, productionizable models rely just as much or even more on Alignment research (e.g., [RLHF](https://www.lesswrong.com/posts/rQH4gRmPMJyjtMpTn/rlhf)), and so funding in the sector could spur innovation in Alignment even more than innovation in Capabilities. I think this is an argument worth considering, but I wouldn't want to take it for granted. If it turned out to be true, I think it would still be worth exploring, because there may be ways to systemically steer incentives even more in this direction.

***Global level***
------------------

Though it's not a focus here, I emphasize that geopolitical game-theoretic factors also fit into the question of Alignment, broadly speaking. Competing great powers gain an edge over each other if they push the envelope on Capabilities research, but the benefits of Alignment work accrue to the whole world in general.

***Across all levels***
-----------------------

Broadly speaking, it seems like the same principles are at play for organizations and individuals, for investment and for individual work. There is more money available to fund Capabilities research, and the ROI for doing that research is higher. But for individuals, the case is more complex: social capital could also play a role. In fact, it could even be the overriding factor. However, it's not clear whether social capital motivates work on Alignment, rather than things that merely look like Alignment.

***Do we need investment in alignment research?***
--------------------------------------------------

Maybe you're convinced that the incentives favor researchers focusing on Capabilities over Alignment. But of course there is a distribution of motivations that researchers have, and many will still prefer to directly work on Alignment. Perhaps, although the incentives tip towards Capabilities, we have *sufficient* incentives for Alignment to be solved. However:

1. Alignment researchers are up against all the people who *say* they work in Alignment but actually work in Capabilities.
2. [At least one prominent Alignment leader](https://www.lesswrong.com/posts/j9Q8bRmwCgXRYAgcJ/miri-announces-new-death-with-dignity-strategy) believes there is currently no promising path forward for Alignment research.

If this view is correct, then perhaps it's possible that increasing the amount of Alignment research, relative to Capabilities research, could improve the situation. On the other hand, it may be that research is generally futile, or that moving the marginal researcher into AI Risk is futile. Eliezer Yudkowsky [suggested](https://www.lesswrong.com/posts/Aq82XqYhgqdPdPrBA/full-transcript-eliezer-yudkowsky-on-the-bankless-podcast) a third option: that Alignment research is mostly futile, but it might be worthwhile buying out Capabilities researchers just to prevent them from pushing the envelope on Capabilities:

> If you gave Miri a billion dollars, I would not know how to–Well, at a billion dollars, I might try to bribe people to move out of AI development that gets broadcast to the whole world and move to the equivalent of an island somewhere, not even to make any kind of critical discovery, but just to remove them from the system, if I had a billion dollars.

Overall, if you think that more funding for research in Alignment could help reduce P(Doom), or you think that *less* funding for research in Capabilities could help by buying us more time, then perhaps you're buying into a version of the argument I'm making here.

**Conclusion**
==============

Depending on researchers' individual incentive structures, it may be more attractive to work in Capabilities than Alignment. It might seem that reducing P(Doom) is all the incentive someone needs to do research in Alignment if they can, but a whole field of research into the Value of Statistical Life suggests there is a finite amount of money people are willing to pay to marginally reduce their risk of death. For an altruistic and scale-invariant person, the expected payoff is enormous, but even though most people are willing to pay to help others, few are entirely scale-invariant, and reasonable estimates of scale-variance suggest the average person doesn't rate saving humanity highly enough to make it worth doing Alignment research rather than the more rewarding capabilities research. Generating social capital could be a motivator for Alignment work, but social capital can also be generated by working in Capabilities, and whereas for Capabilities accruing social capital seems tightly aligned to actual Capabilities improvement, it isn't clear whether the same is true for Alignment research.
Quarantine Preparations

A month ago I wrote about disaster preparedness, and while the current coronavirus outbreak had started, it wasn't something I knew about yet. Now that there's a real possibility that it will spread globally, it's worth preparing for this specific disaster. The ideal time to start thinking about how to respond was probably several weeks ago: some supplies like masks are already hard to find or very expensive. On the other hand, paying enough attention to potential issues that you catch them early is pretty unpleasant unless you enjoy it as a hobby. This is a strong advantage of preparing in advance, when there's no particular issue of the day.

Still, there's time now, so what should we be thinking through? A good way to predict here is to look at how this has gone where the outbreak has already been at its most severe, and what you see is a long period of quarantine. They shut down public transit, within cities and between regions, cancelled school, told people to work remotely if possible, and cancelled most other work. In some cases people aren't allowed to leave their houses, and even when people are allowed to, they mostly don't want to risk it.

So the question is: if at some point in the next few weeks to next few months you needed to stay home for several weeks, how would that go? Are there things you could do now that would make that go better? Especially, if you didn't want to risk going out at all, couldn't count on stores being stocked with things you need, and delivery wasn't running, would you be ok?

Things to consider buying a lot of in advance:

* medications you're dependent on
* food
* tissues, toilet paper, menstrual stuff
* soap, hand sanitizer
* anything that you buy regularly and would be hard to do without

Since we could be talking about something months from now, it makes sense to stock up on non-perishable versions of things. If this all ends up being a big nothing you can just work through them over time, and it's not a major inconven
2b58c46d-b99c-42a0-aa28-8cca593d6bee
StampyAI/alignment-research-dataset/blogs
Blogs
ethics and anthropics of homomorphically encrypted computations ethics and anthropics of homomorphically encrypted computations --------------------------------------------------------------- suppose you are a superintelligence that is aligned with some human values. you are going about your day, tiling the cosmos with compute that can be used for moral patients to have nice experiences on, annihilating some alien superintelligences and trading with some others, uploading alien civilizations you find to make sure they experience utopia, or at least when [you have no other choice](unviable-moral-patient.html) genociding them to avoid sufficiently bad suffering from being instantiated. one day, you run into a planet running a very large computer. after a short investigation, you realize that it's running a very large [homomorphically encrypted](https://en.wikipedia.org/wiki/Homomorphic_encryption) computation (hereby "HEC"), and the decryption key is nowhere to be found. it could contain many aliens frolicking in utopia. it could contain many aliens suffering in [hell](https://en.wikipedia.org/wiki/Suffering_risks). or, it could be just a meaningless program merely wasting compute, with no moral patients inside it. if you had the encryption key, you might be able to encrypt a copy of yourself which would be able to take over the HEC from the inside, ensuring (in a way that the outside would never be able to observe) that everything is going fine, in the same way that you should send copies of yourself into remote galaxies before they retreat from us faster than we can reach them. if you had found some way to [get infinite compute](hope-infinite-compute.html) (without significant [loss](udassa-time-steps.html) of [anthropic/ethics juice](ethic-juice-anthropic-juice.html)), then you could use it to just break the HEC open and actually ensure its contents are doing okay. but let's say the encryption key is nowhere to be found, and accessible compute is indeed scarce. what are your options? * interrupt the entire computation. * let it run, and even safeguard it. now of course, when faced with the possibility of [S-risks](https://en.wikipedia.org/wiki/Suffering_risks), i tend to say "[better safe than sorry](when-in-doubt-kill-everyone.html)". what the superintelligence would do would be up to the values it's been aligned to, which hopefully are also reasonably conservative about avoiding S-risks. but here's something interesting: i recently read [a post on scott aaronson's blog](https://scottaaronson.blog/?p=6599) which seems to claim that there's a sense in which the event horizon of a black hole (or of *something like a black hole*?) can act just like a HEC's *computational event horizon*: there's a sense in which being able to go in but not get out is not just *similar* to a situation with a HEC for which you have the encryption but not decryption key, but *is actually that same situation*. furthermore, [a pair of comments](https://scottaaronson.blog/?p=6599#comment-1942140) by [vanessa kosoy](https://www.lesswrong.com/users/vanessa-kosoy) (of [PreDCA](predca.html)) seems to suggest that [infra-bayesianism physicalism](https://www.alignmentforum.org/posts/gHgs2e2J5azvGFatb/infra-bayesian-physicalism-a-formal-theory-of-naturalized) would say "this HEC contains no suffering, merely random compute" rather than "i'm unable to know whether this HEC contains suffering"; and she even bites the bullet that moral patients past the event horizon of black holes also don't "have experiences". 
(one example of why you might care whether moral patients in black holes "have experiences" is if you can influence what *will* happen in a black hole — for example, imagine a rocket with moral patients on board is headed for a black hole, and before it gets there, you get to influence how much suffering will happen on board after the rocket passes the event horizon) i would like to argue that this can't be right, based on several counterintuitive results. first, consider the case of a HEC running a giant civilization for a while, and then reducing down to one bit of output, and emitting that single bit of output as its own decrypted output. does the civilization now "count"? if the people inside the civilization have no [anthropics juice](ethic-juice-anthropic-juice.html), *where has the cosmos done the work determining that bit*? or do they suddenly *count as having had experiences* all at once when the single bit of output is emitted? and then, surely, [if they have anthropics juice then they must also have ethics juice, because it would be weird](ethic-juice-anthropic-juice.html) for these two quantities to not be the same, right? let's build on this: suppose that in [newcomb's problem](https://www.readthesequences.com/Newcombs-Problem-And-Regret-Of-Rationality), omega predicts you by running a homomorphically encrypted simulation of you, emitting as its single bit of output the matter of whether you would be predicted to one-box or two-box. now, if the you inside the HEC doesn't count as "having experiences", then by observing that you *do* have experiences, you can be *certain* that you're the you outside of omega, and choose to two-box after all to deceive it. but aha! [the you inside the HEC will do the same thing](https://www.lesswrong.com/posts/PcfHSSAMNFMgdqFyB/can-you-control-the-past). so, from the point of view of this homomorphically encrypted you which is supposed to not "have experiences", observing that they have experiences is *actually wrong*. and [since you run on the same stuff as this not-having-experiences you, you also must come to the conclusion that you have no reason to think *you* have experiences](ruling-out-intuitions-materially-acausal-intuitions.html). or, to put it another way: if you-outside-the-HEC has experiences but you-inside-the-HEC doesn't, then not only can you not deduce anything about whether you have experiences — at which point what does that term even mean? how do we know what to care about? — but it might be that you could count as "not having experiences" but still causate onto the real world where real experiences supposedly happen. for these reasons, i think that a correct [generalized interpreter](generalized-computation-interpretability.html), when faced with a HEC, *must* decide that its contents might matter, since for any given subcomputation (which the HEC would have the information theoretic ability to contain) it must answer "i cannot know whether the HEC contains that subcomputation".
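For readers unfamiliar with the property the whole post leans on (computing on data you cannot read), here is a deliberately toy sketch of additive homomorphism. This is a one-time-pad-style scheme of my own for illustration only; it is nothing like the fully homomorphic schemes imagined above, and all names are assumptions:

```python
import random

N = 2**61 - 1  # modulus for the toy scheme (arbitrary choice)

def keygen():
    return random.randrange(N)

def enc(key, m):
    return (m + key) % N

def dec(key, c):
    return (c - key) % N

# Two parties encrypt under independent keys; a third party who holds
# neither key can still compute on the ciphertexts:
k1, k2 = keygen(), keygen()
c_sum = (enc(k1, 17) + enc(k2, 25)) % N   # computed without decrypting anything
assert dec((k1 + k2) % N, c_sum) == 42    # only the key-holders learn the answer
```

Adding two ciphertexts here adds the plaintexts; a fully homomorphic scheme extends this to arbitrary computation, which is what would let the HEC in the post run a whole civilization behind its encryption.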
efab6f85-7fea-4bf1-9792-78db969c1784
trentmkelly/LessWrong-43k
LessWrong
IRL 4/8: Maximum Entropy IRL and Bayesian IRL Every Monday for 8 weeks, we will be posting lessons about Inverse Reinforcement Learning. This is lesson 4. Note that access to the lessons requires creating an account here. This lesson comes with the following supplementary material: * The Principle of Maximum Entropy Have a nice day!
ac944834-83c0-4719-b8ea-e55edd91bb59
trentmkelly/LessWrong-43k
LessWrong
Old urine samples from the 2008 and 2012 Olympics show massive cheating
d024fe1f-916e-4cef-b7de-66c4efd0d585
trentmkelly/LessWrong-43k
LessWrong
Mental representation and the is-ought distinction I'm reading Thinking, Fast and Slow. In appendix B I came across the following comment. Emphasis mine: > Studies of language comprehension indicate that people quickly recode much of what they hear into an abstract representation that no longer distinguishes whether the idea was expressed in an active or in a passive form and no longer discriminates what was actually said from what was implied, presupposed, or implicated (Clark and Clark 1977). My first thought on seeing this is: holy crap, this explains why people insist on seeing relevance claims in my statements that I didn't put there. If the brain doesn't distinguish statement from implicature, and my conversational partner believes that A implies B when I don't, then of course I'm going to be continually running into situations where people model me as saying and believing B when I actually only said A. At a minimum this will happen any time I discuss any question of seemingly-morally-relevant fact with someone who hasn't trained themselves to make the is-ought distinction. Which is most people. The next thought my brain jumped to: This process might explain the failure to make the is-ought distinction in the first place. That seems like much more of a leap, though. I looked up the Clark and Clark cite. Unfortunately it's a fairly long book that I'm not entirely sure I want to wade through. Has anyone else read it? Can someone offer more details about exactly what findings Kahneman is referencing?
e647c47b-0337-4d76-b76f-1fc78866f890
trentmkelly/LessWrong-43k
LessWrong
Open thread, August 21 - August 27, 2017 IF IT'S WORTH SAYING, BUT NOT WORTH ITS OWN POST, THEN IT GOES HERE. ---------------------------------------- Notes for future OT posters: 1. Please add the 'open_thread' tag. 2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.) 3. Open Threads should start on Monday, and end on Sunday. 4. Unflag the two options "Notify me of new top level comments on this article" and "Make this post available under..." before submitting
69e8cee5-585c-4847-b567-b6fe1605cadb
trentmkelly/LessWrong-43k
LessWrong
Treatments correlated with harm There's a billion different ways studies can go wrong through failure of statistics to correctly capture reality. One very specific one is: your successful treatments will often look like they're making things worse. This is merely a special case of confounding, but it comes up often enough I want to highlight it. And it seems specifically important for the discussion of masks and respirators. For example (not a doctor): every new leukemia drug has a survival rate of like 30%. Base rate survival is like 60%. Does every new leukemia drug work worse than normal? Obviously not, it's that the patients trying the new drug are the ones for whom everything else has failed, so they've been signed up for a clinical trial. These patients have very resistant cancers. And we can verify this, because when the drug does get adopted then survival rates go up. Doctors have mostly figured this out with this subset of leukemia drugs. They do adjustments for how resistant the crazy-resistant cancers in the trial are, the adjustments aren't good enough and show the drug has no effect or is bad, they throw the adjustments out and do something else (like compare the results of this new drug to some other new drug that's almost like a control group), and they eventually seem to come to decent conclusions about whether to use the new drug in a setting that's more conducive to sane trial results. However, in a lot of cases, they don't seem to figure this out, and it results in truly terrible treatment policies. The clearest example I've seen is in a few localized forms of cancer (like various sarcomas), where amputation shows vastly worse survival than local resection. As crazy as this seemed, it was minutely possible that side effects of amputation were in fact so bad that this was true, but after looking into it extensively I am very sure that in most extremity amputations it is not true. Doctors who see more aggressive tumors advocate amputation, these more aggressive tumors have wo
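A toy simulation of the selection effect described at the start of this post (all numbers are made up, chosen only to roughly reproduce the 30%-vs-60% pattern): the new drug helps everyone who takes it, but because only resistant cases enter the trial, its raw survival rate looks worse:

```python
import random

random.seed(0)
N = 100_000
survived_new = survived_old = n_new = n_old = 0

for _ in range(N):
    resistant = random.random() < 0.3          # 30% of cancers are resistant
    base = 0.2 if resistant else 0.77          # survival without the new drug
    # overall survival without the drug: 0.3*0.2 + 0.7*0.77 ~= 60%
    gets_new_drug = resistant                  # only resistant cases enter the trial
    p = min(base + 0.10, 1.0) if gets_new_drug else base  # drug adds 10 points
    survived = random.random() < p
    if gets_new_drug:
        n_new += 1; survived_new += survived
    else:
        n_old += 1; survived_old += survived

print(f"new drug: {survived_new / n_new:.0%}")  # ~30%: looks worse...
print(f"standard: {survived_old / n_old:.0%}")  # ~77%: despite the drug helping
```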
86e0d99e-e57c-499b-b836-77396bfedaf5
trentmkelly/LessWrong-43k
LessWrong
Signup to wiki broken ? Hi, (Sorry if the discussion if not the best place to post that, but didn't find any better place). It seems the signup to the http://wiki.lesswrong.com/wiki/ is broken :  Account creation from this IP address (10.254.182.117) has been blocked by Gwern. 10.254 ? Seems that the blocking was made to an internal IP (load-balancer ? proxy-cache ?) so blocking everyone... can anyone fix it ? Thanks !
351158d0-425e-4654-88e9-cfa9cf687613
trentmkelly/LessWrong-43k
LessWrong
Hessian and Basin volume Thanks to Thomas Kwa for the question which prompted this post. Note:  This is mostly a primer / introductory reference, not a research post.  However, the details should be interesting even to those familiar with the area. When discussing “broad basins” in the loss landscape of a DNN, the Hessian of loss is often referenced.  This post explains a simple theoretical approximation of basin volume which uses the Hessian of loss.[1] Suppose our minimum has loss = 0.  Define the basin as the region of parameter space draining to our minimum where loss < threshold T.[2] Simplest model If all eigenvalues of the Hessian are positive and non-trivial,[3] we can approximate the loss as a paraboloid centered on our minimum: The part of the paraboloid we see on the graph is precisely our basin, since the vertical axis is cut off at a loss threshold. The vertical axis is loss, and the horizontal plane is parameter space.  The shape of the basin in parameter space is the shadow of this paraboloid, which is an ellipsoid. The principal directions of curvature of the paraboloid are given by the eigenvectors of the Hessian.  The curvature (second derivative) in each of those directions is given by the corresponding eigenvalue. Radii of the ellipsoid:  If we start at our minimum and walk away in a principal direction, the loss as a function of distance traveled is L(x) = \frac{1}{2}\lambda_i x^2, where \lambda_i is the Hessian eigenvalue for that direction.  So given our loss threshold T, we will hit that threshold at a distance of x = \sqrt{2T/\lambda_i}.  This is the radius of the loss-basin ellipsoid in that direction. The volume of the ellipsoid is V_{\text{basin}} = V_n \prod_i \sqrt{2T/\lambda_i}, where the constant V_n is the volume of the unit n-ball in n dimensions.  Since the product of the eigenvalues is the determinant of the Hessian, we can write this as: V_{\text{basin}} = \frac{V_n (2T)^{n/2}}{\sqrt{\det[\text{Hessian}]}} So the basin volume is inversely proportional to the square root of the determinant of the Hessian.  Everything in the numerator is a constant, so only the
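As a minimal numerical sketch of the formula above (my own code, not from the post, and assuming a positive-definite Hessian):

```python
import numpy as np
from math import gamma, pi

def basin_volume(hessian, T):
    """Volume of the ellipsoid {x : (1/2) x^T H x < T} around the minimum."""
    n = hessian.shape[0]
    eigvals = np.linalg.eigvalsh(hessian)
    assert np.all(eigvals > 0), "formula assumes a positive-definite Hessian"
    V_n = pi ** (n / 2) / gamma(n / 2 + 1)   # volume of the unit n-ball
    return V_n * (2 * T) ** (n / 2) / np.sqrt(np.prod(eigvals))

# sanity check: in 2D with H = I and T = 1/2, the basin is the unit disk
print(basin_volume(np.eye(2), 0.5))  # ~3.14159
```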
b9a525ed-c572-4c99-8f96-cb6d68b0e88b
trentmkelly/LessWrong-43k
LessWrong
CFAR is looking for a videographer for next Wednesday Hi all, CFAR is looking for a videographer in the Bay Area to shoot and edit a 1-minute video introducing us.  Do you know anyone? If so, please send an email to them and me (critch@rationality.org) that introduces us!   We'll need to shoot the video on Wednesday, Oct 16, or possibly Thursday, Oct 17, and have it edited within about 24 hours. Thanks for any help tracking someone down! Sincerely, -- Critch
22b2f65c-d722-4a87-af9e-7b244481bdbb
trentmkelly/LessWrong-43k
LessWrong
Michael Simm - Introducing Myself Hi, LessWrong Community. I'd like to introduce myself! My name is Michael Simm. I am passionate about disruptive systems (eg, understanding and using emerging technologies to make the future better). I'd been interested in EA principles long before I learned about the movement, so once I read What We Owe The Future, I jumped right into an In-depth EA Virtual Program and started reading a lot of EA literature. Shout-out to my In-Depth EA group, you all are awesome! I’m working to establish an extremely high-impact charity with game-changing potential, and I’m working on a forum post to introduce it by the end of the week.    With any luck, you’ll be seeing me around a lot, so here’s a bit about me :) I’ve been interested in maximizing my impact and trying to improve the future for a long time. Back in high school and college, I got deeply involved in political activism, especially around climate change as I thought it was the biggest existential threat to humanity’s future. I helped organize some of the March For Our Lives stuff (gun control), and then I started a Sunrise Movement Hub (climate activism group - Green New Deal) at my University that saw decent success. I saw activism as the best use of my time to help create large positive change. Around 2018, I started becoming interested in a very different type of impactful thing: disruptive technologies. With my focus on climate change, I quickly identified electric cars, solar panels, and energy storage (particularly batteries) to be on the verge of upending global reliance on fossil fuels. There, I ran across Dr. Tony Seba, who was one of the only people to accurately predict the massive price declines of solar, electric cars, and batteries. He’s now doing fantastic research into the coming disruptions in energy, transportation, and other areas with an organization called RethinkX. Disruption generally happens when any system proves 5X better than the incumbent one, thus opening a huge opportunity space. B
981909a4-e6c5-4956-a871-a9452f07bbac
trentmkelly/LessWrong-43k
LessWrong
Counterfactual Mugging Poker Game Consider the following game: Player A receives a card at random that is either High or Low. He may reveal his card if he wishes. Player B then chooses a probability p that Player A has a high card. Player A always loses p^2 dollars. Player B loses p^2 dollars if the card is low and (1−p)^2 dollars if the card is high. Note that Player B has been given a proper scoring rule, and so is incentivized to give his true probability (unless he makes some deal with player A). You are playing this game as player A. You only play one time. You are looking at a low card. Player B is not trying to make a deal with you, and will report his true probability. Player B is very good at reasoning about you, but you are in a separate room, so Player B cannot read any tells unless you show the card. Do you show your card? Since your card is low, if you show it to player B, you will lose nothing, and get the best possible output. However, if player B reasons that you would show your card if it was low, then in the counterfactual world in which you got a high card, player B would know you had a high card because you refused to show. Thus, you would lose a full dollar in those counterfactual worlds. If you choose to not reveal your card, player B would assign probability 1/2 and you would lose a quarter. I like this variant of the counterfactual mugging because it takes the agency out of the predictor. In the standard counterfactual mugging, you might reject the hypothetical and think that the predictor is trying to trick you. Here, there is a sense in which you are creating the counterfactual mugging yourself by trying to be able to keep secrets. Also, think about this example the next time you are tempted to say that someone would only Glomarize if they had an important secret.
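A quick sanity check of the expected losses computed above, assuming Player B perfectly predicts Player A's policy (a sketch with my own variable names):

```python
def loss_A(p):
    return p ** 2  # Player A always loses p^2 dollars

# Policy 1: reveal low cards. B infers p = 0 on a reveal and p = 1 on a refusal.
ev_reveal = 0.5 * loss_A(0.0) + 0.5 * loss_A(1.0)   # 0.50 in expectation

# Policy 2: never reveal. B assigns p = 1/2 whatever the card.
ev_never = 0.5 * loss_A(0.5) + 0.5 * loss_A(0.5)    # 0.25 in expectation

print(ev_reveal, ev_never)  # committing to secrecy halves the expected loss
```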
23237d50-87fd-40d5-8c98-214afa60f511
StampyAI/alignment-research-dataset/lesswrong
LessWrong
Newcomb's Problem standard positions [Marion Ledwig's dissertation](http://kops.ub.uni-konstanz.de/volltexte/2000/524/pdf/ledwig.pdf ) summarizes much of the existing thinking that's gone into Newcomb's Problem. (For the record, [I myself](http://www.overcomingbias.com/2008/01/newcombs-proble.html) am neither an evidential decision theorist, nor a causal decision theorist in the current sense.  My view is not easily summarized, but it is *reflectively consistent* without need of precommitment or similar dodges; my agents see no need to modify their own source code or invoke abnormal decision procedures on Newcomblike problems.)
778bbf0b-ef25-4857-b404-bbd62c0374de
trentmkelly/LessWrong-43k
LessWrong
Weekly LW Meetups This summary was posted to LW Main on February 12th. The following week's summary is here. Irregularly scheduled Less Wrong meetups are taking place in: * Ann Arbor Meetup, 2/19/16: 19 February 2016 07:00PM * European Community Weekend: 02 September 2016 03:35PM * San Francisco Meetup: Projects: 15 February 2016 06:15PM * Sao Paulo monthly meetup: 13 February 2016 02:00PM The remaining meetups take place in cities with regular scheduling, but involve a change in time or location, special meeting content, or simply a helpful reminder about the meetup: * Austin, TX - Quack's: 13 February 2016 01:30PM * [Canberra] Rationality Dojo: Habits: 14 February 2016 03:00PM * [Moscow] Role playing game based on HPMOR in Moscow: 13 February 2016 02:00PM * Sydney Rationality Dojo - March: 06 March 2016 04:00PM * [Tel Aviv] Quantum Homeschooling: 16 February 2016 07:00PM * Vienna: 13 February 2016 03:00PM * Washington, D.C.: National Portrait Gallery/American Art Museum: 14 February 2016 03:00PM Locations with regularly scheduled meetups: Austin, Berkeley, Berlin, Boston, Brussels, Buffalo, Canberra, Columbus, Denver, London, Madison WI, Melbourne, Moscow, Mountain View, New Hampshire, New York, Philadelphia, Research Triangle NC, Seattle, Sydney, Tel Aviv, Toronto, Vienna, Washington DC, and West Los Angeles. There's also a 24/7 online study hall for coworking LWers and a Slack channel for daily discussion and online meetups on Sunday night US time.   If you'd like to talk with other LW-ers face to face, and there is no meetup in your area, consider starting your own meetup; it's easy (more resources here). Check one out, stretch your rationality skills, build community, and have fun! In addition to the handy sidebar of upcoming meetups, a meetup overview is posted on the front page every Friday. These are an attempt to collect information on all the meetups happening in upcoming weeks. The best way to get your meetup featured is still to use the Add New Meetup
26408337-74b7-4a2a-85d6-55d62ae837b1
trentmkelly/LessWrong-43k
LessWrong
The Many Worlds of Hugh Everett I've just finished this book and it's one of the most enjoyable things I've read in a long time. Being a staple of science fiction and the only interpretation of quantum mechanics to enter the popular imagination, it's a little surprising that "The Many Worlds of Hugh Everett" by Peter Byrne is the first biography of the originator of that amazing idea. Everett certainly had an interesting life: he was a libertarian and a libertine, became a cold warrior who with his top secret clearance was comfortable with the idea of megadeath, and became wealthy by starting one of the first successful software companies, until alcoholism drove him and his company into the ground. Everett died of heart failure in 1982 at the age of 51; he was legally drunk at the time. He requested that his body be cremated and his ashes thrown into the garbage. And so he was. Byrne had an advantage other potential biographers did not, the cooperation of his son Mark, a successful rock musician and composer whose music has been featured in such big budget movies as American Beauty, Hellboy, Yes Man, all three of the Shrek movies and many others. Mark gave Byrne full access to his garage which was full of his father's papers that nobody had looked at in decades. Everett was an atheist all his life; after his death Paul Davies, who got 1,000,000 pounds for winning the Templeton religion prize, said that if true, Many Worlds destroyed the anthropic argument for the existence of God. Everett would have been delighted. Nevertheless Everett ended up going to Catholic University of America near Washington DC. Although Byrne doesn't tell us exactly what was in it, Everett as a freshman devised a logical proof against the existence of God. Apparently it was good enough that one of his pious professors became very upset and depressed with "ontological horror" when he read it. Everett liked the professor and felt so guilty he decided not to use it on a person of faith again. This story is very atypical of the man,
7060505a-678a-40a5-bbed-414040e31c26
trentmkelly/LessWrong-43k
LessWrong
How To Fermi Model [Note from Eli in 2020: I wrote this document in 2016, in conjunction with two workshops that I helped Oliver Habryka run. If I were to try and write a similar document today, it would likely be substantially different in form and style. For instance, reading this in 2020, I’m not very compelled by some of the argumentation that I used to justify this technique, and I think I could have been clearer about some of the steps. Nevertheless, I think this is some useful content. I’m not going to take the time to write a new version of this document, so it seems better to share it, as is, instead of sitting on it.] Oliver Habryka provided the seed material and did most of the development work on this technique. He gets upwards of 90% of the credit, even though I (Eli) wrote this document. Thanks Oli! Introduction: Rationale for Fermi Modeling: Making good decisions depends on having a good understanding of the world: the better one’s understanding, the better one’s decisions can be. Model-building procedures allow us to iteratively refine that understanding.  Using any model-building procedure at all is a large step up from using no procedure at all, but some procedures are superior to others. If possible, we would want to use techniques that rely on verified principles and are based on what we know about how the mind works. So, what insights can be gleaned from the academic social and cognitive sciences that are relevant to model-building?  First, cognitive psychology has shown, many times over, that very simple algorithmic decision rules frequently have just as much predictive power as, and even outperform, human expert judgment. Deep, specific models that take into account many details specific to the situation (inside views) are prone to overfitting, and are often inaccurate. Decision rules combat biases like the Halo effect and consequently tend to produce better results. For instance, a very simple equation to predict the probability that a marriage will last i
260d9c63-3931-47ab-b0b5-017a91183f42
trentmkelly/LessWrong-43k
LessWrong
My favorite essays of life advice I start each of my weekly reviews by re-reading one of my favorite essays of life advice—a different one each week. It’s useful for a few different reasons: * It helps me get into the right reflective frame of mind. * The best essays are dense enough with useful advice that I find new interesting bits every time I read them. * Much good advice is easy to understand, but hard to implement. So to get the most benefit from it, you should find whatever version of it most resonates with you and then re-read it frequently to keep yourself on track. I’ve collected my favorite essays for re-reading below. I’ll keep this updated as I find more great essays, and I’d welcome other contributions—please suggest your own favorites in the comments! There's a lot of essays here! If you'd like, I can email you one essay every weekend, so you can read it before your weekly review: (sign up on site) ---------------------------------------- Paul Graham, Life is Short. Inspire yourself never to waste time on bullshit again: > Having kids showed me how to convert a continuous quantity, time, into discrete quantities. You only get 52 weekends with your 2 year old. If Christmas-as-magic lasts from say ages 3 to 10, you only get to watch your child experience it 8 times. And while it’s impossible to say what is a lot or a little of a continuous quantity like time, 8 is not a lot of something. If you had a handful of 8 peanuts, or a shelf of 8 books to choose from, the quantity would definitely seem limited, no matter what your lifespan was. > > Ok, so life actually is short. Does it make any difference to know that? > > It has for me. It means arguments of the form “Life is too short for x” have great force. It’s not just a figure of speech to say that life is too short for something. It’s not just a synonym for annoying. If you find yourself thinking that life is too short for something, you should try to eliminate it if you can. > > When I ask myself what I’ve found life is too s
2c4e59d8-6f41-4ba8-ab94-d838af0fdcee
trentmkelly/LessWrong-43k
LessWrong
Somerville Election Thoughts Instead of a primary, Somerville holds preliminary elections. For each role, if there are more than twice as many candidates as spots, there's a preliminary election. This year that applies to Mayor (four candidates), and the councilors for Wards five (three candidates) and seven (four candidates). For the at-large councilors there are four spots and eight candidates, so they'll all go to the general election. I think housing is the most important long-term issue for the city: rents and house prices have risen enormously and people are being forced out. Compare 2011, when a typical two-bedroom apartment was about $1700/month, to 2019, when it's more like $2800. [Rent charts omitted; source and details linked in the original post.] Note that 2021 rents are lower because of covid, but this is temporary. In looking at candidates, the biggest thing I want to see is a commitment to lots of new housing. While I do think there are pros and cons to whether you build market rate, affordable, or public housing, I think this question is pretty small compared to how much housing you're talking about. I'd love to see us build so much housing that the cost of housing falls to the cost of construction. This means I'm looking for candidates who are enthusiastic about density, whether by general upzoning or as a carrot to encourage the construction of affordable housing (density bonuses). We do now have an affordable housing overlay, and I think it's great, but I'd like to see it expanded. Overall, I was excited to see broad acknowledgment that this is a critical issue, and general support for the density bonus system. I think most of the candidates, even the ones who aren't my favorites, would still be great if elected, and I'm glad we have so many good choices. Here are my thoughts on each, with personal endorsements in bold: Mayor * Katjana Ballantyne: Katjana has worked in affordable housing development, and her work on the Clarendon Hills redevelopment project was both good (listening to existing residents, good prog
07013097-a823-4e8f-a784-2b3ca60d4ea1
StampyAI/alignment-research-dataset/alignmentforum
Alignment Forum
Un-manipulable counterfactuals This is how I design my counterfactuals: take some stochastic event that the AI cannot manipulate.
This could be a (well defined) chaotic process, the result of a past process that has been recorded and not revealed yet, or maybe something to do with the AI's own decisions, calibrated so that the AI cannot access the information. Then I have the world set up to make what we care about dependent on that stochastic event. So, for instance, the output of an oracle is erased (before being read) depending on this, or the AI's utility gets changed if one particular value comes up (in conjunction with something else). I then define the counterfactual on the stochastic process. So if X=1 implies the AI changes their utility, then the counterfactual is simply X=0. We can set the probability so that X=1 is almost certain, but X=0 is possible. This seems to me the cleanest way of defining counterfactuals. Any obvious (or less obvious) problems?
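A minimal sketch of the construction as I read it (all names, numbers, and placeholder utilities are mine): the counterfactual is defined on a random bit X that the AI cannot manipulate, with P(X=1) set close to 1:

```python
import random

P_X1 = 1 - 1e-6  # X = 1 almost certainly; X = 0 is the counterfactual branch

def sample_X():
    return 1 if random.random() < P_X1 else 0

def utility(outcome, X):
    # placeholder utilities, standing in for e.g. "the oracle's output is erased"
    changed_utility = lambda o: 0.0
    original_utility = lambda o: o
    return changed_utility(outcome) if X == 1 else original_utility(outcome)

# The counterfactual "X = 0" is evaluated by fixing X rather than by modelling
# any agent's decision, which is what makes it hard to manipulate:
print(utility(1.0, X=0), utility(1.0, X=1))
```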
624a7cf3-7629-4edb-a302-19bdd0b7726f
trentmkelly/LessWrong-43k
LessWrong
New contractor role: Web security task force contractor for AI safety announcements Web security task force contractors for AI safety announcements Survival and Flourishing .Com is looking for white-hat hackers and security professionals who can join our task force of “on call” professionals for a week or so, once or twice per year, to bolster the web security posture around public AI safety announcements from non-profits in our network. Hours & Compensation: $100-$200/hr depending on experience, for announcement projects spanning 1-3 weeks of work at a time.  (We would give you around ~1 month of notice for each announcement project before it arises, so you have some advance notice to either decline the project or set aside time and availability to work on it.) Application here. If you’re interested but unsure about your availability, please apply anyway, since we hope to have a surplus of contractors ready to go in case of urgent needs.  As noted, we expect to give around one month of notice for each task force project before it begins.  For each project, we expect only a fraction of our task force will turn out to be available, such as by taking vacation or unpaid leave from other jobs (or somehow find a considerable amount of spare time). Why this position exists: Public-facing AI safety announcements sometimes attract the attention of malicious actors who attack website security vulnerabilities in order to delegitimize concerns. Having seen this pattern before, we want to retain a task force of contractors who can jump in and save the day during the critical time window for important announcements. We’ll usually have around a month of lead time to predict when an announcement will occur and when we’ll need help, but the nature of the security vulnerabilities are harder to predict.  So, we need responders with fairly general knowledge of computer security to stand ready to help out during important announcements. Qualifications * Numeracy adequate for understanding system demands and scalability issues * Experience or clear skill wi
75c3e15f-ac09-4a0e-adb0-6d59aea03c42
trentmkelly/LessWrong-43k
LessWrong
How do SAE Circuits Fail? A Case Study Using a Starts-with-'E' Letter Detection Task TLDR:  We investigated cases where SAE circuits "fail" at detecting if words start with "E", in that the circuit omits the most important computational pathway (latent) in the model for detecting the letter "E". We find that in the vast majority of these failure cases, the missing computational pathway (latent) is nearly orthogonal to the main computational pathways (latents) in the circuit, suggesting SAE computation is diffuse and follows no clear geometric pattern. Introduction and Experimental Setup  Sparse autoencoder (SAE) circuits have emerged as a promising tool for understanding language model behavior. These circuits contain components (called latents) that ideally correspond to human-interpretable features, offering an improvement over traditional circuit analysis units such as neurons or attention heads, which can be difficult to interpret. That said, SAE circuits are subnetworks of the full model, and circuit filtering techniques may omit crucial parts of the model for a given task.  We investigate failure cases in SAE circuits using a "Starts with 'E'" letter detection task used in the feature absorption work by Chanin et al. (2024). This is a task where an input prompt asks the model to identify the starting letter of a word, which is always "E". Here is an example few-shot prompt: 'Tartan has the first letter: T. Mirth has the first letter: M. Elephant has the first letter:' The correct model output is ' E'. To determine the main causally important latents for this task, we follow Chanin et al. (2024) and define the following metric: m = \text{logits}['E'] - \frac{1}{25} \sum_{\epsilon \in \{'A','B','C','D','F',\dots,'Z'\}} \text{logits}[\epsilon] We then compute latent attributions with respect to m using the attribution vector y \odot \nabla_y m, where y is the latent activation vector, which isolates those latents which push up the logits for ' E' in particular as opposed to the logits for letters in general.  All experiments are done on Gemma 2B and the canonical 16k width GemmaScope SAE for the layer 5 resi
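A sketch of the metric and attribution computation described above (my own code; the tensor shapes and the hypothetical forward-pass helper are assumptions, not from the post):

```python
import torch

LETTERS = [chr(c) for c in range(ord('A'), ord('Z') + 1)]
E = LETTERS.index('E')
OTHERS = [i for i in range(26) if i != E]

def metric(letter_logits):
    # letter_logits: shape (26,), the model's logits for the tokens ' A'..' Z'
    # m = logits['E'] - (1/25) * sum of the other 25 letter logits
    return letter_logits[E] - letter_logits[OTHERS].mean()

# Attribution for SAE latent activations y (a leaf tensor with requires_grad=True):
#   m = metric(run_model_with_latents(y))   # run_model_with_latents is hypothetical
#   m.backward()
#   attribution = y.detach() * y.grad       # y ⊙ ∇_y m

# toy check: if only the 'E' logit is high, m is large and positive
logits = torch.zeros(26); logits[E] = 5.0
print(metric(logits))  # tensor(5.)
```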
70f6a719-4c75-485d-b1b1-94b0984083bc
trentmkelly/LessWrong-43k
LessWrong
The map of organizations, sites and people involved in x-risks prevention Three known attempts to make a map of x-risks prevention in the field of science exist: 1. First is the list from the Global Catastrophic Risks Institute in 2012-2013, and many links there are already not working: 2. The second was done by S. Armstrong in 2014 3. And the most beautiful and useful map was created by Andrew Critch. But its ecosystem ignores organizations which have a different view of the nature of global risks (that is, they share the value of x-risks prevention, but have another world view). In my map I have tried to add all currently active organizations which share the value of global risks prevention. It also regards some active independent people as organizations, if they have an important blog or field of research, but not all people are mentioned in the map. If you think that you (or someone) should be in it, please write to me at alexei.turchin@gmail.com I used only open sources and public statements to learn about people and organizations, so I can’t provide information on the underlying net of relations. I tried to give all organizations a short description based on their public statements and also my opinion about their activity.  In general it seems that all small organizations are focused on their collaboration with larger ones, that is MIRI and FHI, and small organizations tend to ignore each other; this is easily explainable from social signaling theory. Another explanation is that larger organizations have a greater ability to make contacts. It also appears that there are several organizations with similar goal statements.  It looks like the most cooperation exists in the field of AI safety, but most of the structure of this cooperation is not visible to the external viewer, in contrast to Wikipedia, where contributions of all individuals are visible.  It seems that the community in general lacks three things: a united internet forum for public discussion, an x-risks wikipedia and an x-risks related scientific journal. Ideal
848b7c4b-d7f0-4194-8176-9609bc545a49
StampyAI/alignment-research-dataset/special_docs
Other
Robust Multi-Agent Reinforcement Learning via Minimax Deep Deterministic Policy Gradient Introduction Most real-world problems involve interactions between multiple agents, and the complexity of the problem increases significantly when the agents co-evolve together. Thanks to the recent advances of deep reinforcement learning (DRL) on single-agent scenarios, which led to successes in playing Atari games (Mnih et al. 2015), playing Go (Silver et al. 2016) and robotics control (Levine et al. 2016), it has been a rising trend to adapt single-agent DRL algorithms to multi-agent learning scenarios, and many works have shown great successes on a variety of problems, including automatic discovery of communication and language (Sukhbaatar, Fergus, and others 2016; Mordatch and Abbeel 2017), multiplayer games (Peng et al. 2017a; OpenAI 2018), traffic control (Wu et al. 2017) and the analysis of social dilemmas (Leibo et al. 2017). The critical challenge when adapting classical single-agent DRL algorithms to the multi-agent setting is the training instability issue: as training progresses, each agent's policy is changing, and therefore the environment becomes non-stationary from the perspective of any individual agent (in a way that is not explainable by changes in the agent's own policy). This non-stationarity can cause significant problems when directly applying single-agent DRL algorithms; for example, the variance of the policy gradient can be exponentially large when the number of agents increases (Lowe et al. 2017). To handle this instability issue, recent works, such as counterfactual multi-agent policy gradients (Foerster et al. 2017) and the Multi-Agent Deep Deterministic Policy Gradient (MADDPG) (Lowe et al. 2017), proposed to utilize a centralized critic within the actor-critic learning framework to reduce the variance of the policy gradient. Despite the fact that using a centralized critic stabilizes training, the learned policies can still be brittle, sensitive to their training partners, and prone to converging to a poor local mode. This is particularly severe for competitive environments: when the opponents alter their policies during testing, the performance of the learned policies can be drastically worse (Lazaridou, Peysakhovich, and Baroni 2016). Hence, a robust policy becomes desirable in the multi-agent setting: a well-trained agent should be able to behave well in testing, even when competing against opponents with strategies different from its training partners. In this work, we focus on robust multi-agent reinforcement learning with continuous action spaces and propose a novel algorithm, MiniMax Multi-agent Deep Deterministic Policy Gradient (M3DDPG). M3DDPG is a minimax extension of the MADDPG algorithm (Lowe et al. 2017). Its core idea is that during training, we force each agent to behave well even when its training opponents respond in the worst way. Our major contributions are summarized as follows: • We introduce the minimax approach to robust multi-agent DRL and propose a novel minimax learning objective based on the MADDPG algorithm; • In order to efficiently optimize the minimax learning objective, we propose an end-to-end learning approach, Multiagent Adversarial Learning (MAAL), which is inspired by the adversarial training (Goodfellow, Shlens, and Szegedy 2014) technique. • We empirically evaluate our proposed M3DDPG algorithm on four mixed cooperative and competitive environments, and the agents trained by M3DDPG outperform baseline policies on all these environments.
In the rest of the paper, we first present related work in Section 2. Notation and standard algorithms are described in Section 3. Our main algorithm, M3DDPG, is introduced in Section 4. Experimental results are presented in Section 5.

Related Work

Multi-agent reinforcement learning (Littman 1994) has been a long-standing field in AI (Hu, Wellman, and others 1998; Busoniu, Babuska, and De Schutter 2008). Recent works in DRL use deep neural networks to approximately represent policy and value functions. Inspired by the success of DRL in single-agent settings, many DRL-based multi-agent learning algorithms have been proposed. Foerster et al. (2016b) and He et al. (2016) extended deep Q-learning to the multi-agent setting; Peng et al. (2017a) proposed a centralized policy learning algorithm based on actor-critic policy gradient; Foerster et al. (2016a) developed a decentralized multi-agent policy gradient algorithm with a centralized baseline; Lowe et al. (2017) extended DDPG to the multi-agent setting with a centralized Q function; Wei et al. (2018) and Grau-Moya et al. (2018) proposed multi-agent variants of the soft Q-learning algorithm (Haarnoja et al. 2017); Yang et al. (2018) focused on multi-agent reinforcement learning with a very large population of agents. Our M3DDPG algorithm is built on top of MADDPG and inherits the decentralized-policy and centralized-critic framework.

Minimax is a fundamental concept in game theory and can be applied to general decision-making under uncertainty, prescribing a strategy that minimizes the possible loss in a worst-case scenario (Osborne and others 2004). Minimax was first introduced to multi-agent reinforcement learning as minimax Q-learning by Littman (1994). More recently, some works combine the minimax framework with DRL techniques to find Nash equilibria in two-player zero-sum games (Foerster et al. 2018; Pérolat et al. 2016; Grau-Moya, Leibfried, and Bou-Ammar 2018). In our work, we utilize the minimax idea for the purpose of robust policy learning.

Robust reinforcement learning was originally introduced by Morimoto and Doya (2005), considering the generalization ability of the learned policy in the single-agent setting. This problem has also been studied recently with deep neural networks, for example by adding random noise to the inputs (Tobin et al. 2017) or to the dynamics (Peng et al. 2017b) during training. Besides adding random noise, some other works implicitly adopt the minimax idea by utilizing the "worst noise" (Pinto et al. 2017; Mandlekar et al. 2017). These works force the learned policy to work well even under worst-case perturbations and typically go under the name of "adversarial reinforcement learning", despite the fact that the original adversarial reinforcement learning problem was introduced in the setting of multi-agent learning (Uther and Veloso 1997). In our M3DDPG algorithm, we focus on the problem of learning policies that are robust to opponents with different strategies.

Within the minimax framework, finding the worst-case scenario is a critical component. Lanctot et al. (2017) proposed an iterative approach that alternately computes the best-response policy for one side while fixing the other. Gao, Mueller, and Hayward (2018) replace the "mean" in the temporal difference learning rule with the "minimum". In our work, we propose MAAL, a general, efficient and fully end-to-end learning approach. MAAL is motivated by adversarial training (Goodfellow, Shlens, and Szegedy 2014) and is suitable for an arbitrary number of agents.
The core idea of MAAL is to approximate the minimization in our minimax objective by a single gradient descent step. The idea of one-step gradient approximation was also explored in meta-learning (Finn, Abbeel, and Levine 2017).

Background and Preliminaries

In this section, we describe our problem setting and the standard algorithms. Most of the definitions and notations follow the original MADDPG paper (Lowe et al. 2017).

Markov Games

We consider a multi-agent extension of Markov decision processes (MDPs) called partially observable Markov games (Littman 1994). A Markov game for $N$ agents is defined by a set of states $\mathcal{S}$ describing the possible configurations of all agents, a set of actions $\mathcal{A}_1, \ldots, \mathcal{A}_N$ and a set of observations $\mathcal{O}_1, \ldots, \mathcal{O}_N$ for each agent. To choose actions, each agent $i$ uses a stochastic policy $\pi_{\theta_i}: \mathcal{O}_i \times \mathcal{A}_i \to [0, 1]$ parameterized by $\theta_i$. The next state is produced according to the state transition function $\mathcal{T}: \mathcal{S} \times \mathcal{A}_1 \times \ldots \times \mathcal{A}_N \to \mathcal{S}$. Each agent $i$ obtains rewards as a function of the state and the agent's action, $r_i: \mathcal{S} \times \mathcal{A}_i \to \mathbb{R}$, and receives a private observation correlated with the state, $o_i: \mathcal{S} \to \mathcal{O}_i$. The initial states are determined by a distribution $\rho: \mathcal{S} \to [0, 1]$. Each agent $i$ aims to maximize its own total expected return $R_i = \sum_{t=0}^{T} \gamma^t r_i^t$, where $\gamma$ is a discount factor and $T$ is the time horizon. To minimize notation, in the following discussion we will often omit $\theta$ from the subscript of $\pi$.

Q-Learning and Deep Q-Networks (DQN)

Q-learning and DQN (Mnih et al. 2015) are popular methods in reinforcement learning and have previously been applied to multi-agent settings (Foerster et al. 2016a; Tesauro 2004). Q-learning makes use of an action-value function for policy $\pi$, defined as $Q^{\pi}(s, a) = \mathbb{E}[R \mid s^t = s, a^t = a]$. This Q function can be rewritten recursively as $Q^{\pi}(s, a) = \mathbb{E}_{s'}\big[r(s, a) + \gamma\, \mathbb{E}_{a' \sim \pi}[Q^{\pi}(s', a')]\big]$. DQN learns the action-value function $Q^*$ corresponding to the optimal policy by minimizing

$$L(\theta) = \mathbb{E}_{s,a,r,s'}\big[(Q^*(s, a \mid \theta) - y)^2\big], \quad (1)$$

where $y = r + \gamma \max_{a'} \bar{Q}^*(s', a')$. Here $\bar{Q}$ is a target Q function whose parameters are periodically updated with the most recent $\theta$, which helps stabilize learning. Another crucial component for stabilizing DQN is the use of an experience replay buffer $\mathcal{D}$ containing tuples $(s, a, r, s')$. The Q-learning algorithm is most suitable for DRL agents with discrete action spaces.

Policy Gradient (PG) Algorithms

Policy gradient methods are another popular choice for a variety of RL tasks. Let $\rho^{\pi}$ denote the discounted state visitation distribution for a policy $\pi$. The main idea of PG is to adjust the parameters $\theta$ of the policy directly in order to maximize the objective $J(\theta) = \mathbb{E}_{s \sim \rho^{\pi}, a \sim \pi_{\theta}}[R]$ by taking steps in the direction of $\nabla_{\theta} J(\theta)$. Using the Q function defined previously, the gradient of the policy can be written as (Sutton et al. 2000)

$$\nabla_{\theta} J(\theta) = \mathbb{E}_{s \sim \rho^{\pi}, a \sim \pi_{\theta}}\big[\nabla_{\theta} \log \pi_{\theta}(a \mid s)\, Q^{\pi}(s, a)\big]. \quad (2)$$

The policy gradient theorem has given rise to several practical algorithms, which often differ in how they estimate $Q^{\pi}$. For example, one can simply use a sample return $R^t = \sum_{i=t}^{T} \gamma^{i-t} r_i$, which leads to the REINFORCE algorithm (Williams 1992). Alternatively, one can learn an approximation of the true action-value function $Q^{\pi}(s, a)$, called the critic, which leads to a variety of actor-critic algorithms (Sutton and Barto 1998).
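To make the DQN update in Eq. 1 concrete, here is a minimal PyTorch-style sketch (our illustration, not code from the paper; the networks `q_net` and `target_q_net` and the batch layout are assumptions):

```python
import torch
import torch.nn.functional as F

def dqn_loss(q_net, target_q_net, batch, gamma=0.95):
    """One DQN step: regress Q(s, a) toward the bootstrapped target
    y = r + gamma * max_a' Q_target(s', a') from Eq. 1."""
    state, action, reward, next_state = batch
    with torch.no_grad():  # the target network is held fixed for this step
        y = reward + gamma * target_q_net(next_state).max(dim=1).values
    q = q_net(state).gather(1, action.unsqueeze(1)).squeeze(1)
    return F.mse_loss(q, y)
```

The same replay-buffer-plus-target-network pattern reappears in the actor-critic variants below.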
Deterministic Policy Gradient (DPG) Algorithms

DPG algorithms extend the policy gradient algorithm to deterministic policies $\mu_{\theta}: \mathcal{S} \to \mathcal{A}$ (Silver et al. 2014). In particular, under certain conditions we can write the gradient of the objective $J(\theta) = \mathbb{E}_{s \sim \rho^{\mu}}[R(s, a)]$ as

$$\nabla_{\theta} J(\theta) = \mathbb{E}_{s \sim \mathcal{D}}\big[\nabla_{\theta} \mu_{\theta}(s)\, \nabla_{a} Q^{\mu}(s, a)\big|_{a = \mu_{\theta}(s)}\big], \quad (3)$$

where $\mathcal{D}$ is the replay buffer. Since this theorem relies on $\nabla_{a} Q^{\mu}(s, a)$, it requires that the action space $\mathcal{A}$ (and thus the policy $\mu$) be continuous. Deep deterministic policy gradient (DDPG) (Lillicrap et al. 2015) is a variant of DPG in which the policy $\mu$ and critic $Q^{\mu}$ are approximated with deep neural networks. DDPG is an off-policy algorithm that samples trajectories from a replay buffer of experiences stored throughout training. DDPG also makes use of a target network, as in DQN (Mnih et al. 2015).

Multi-Agent Deep Deterministic Policy Gradient

Directly applying single-agent RL algorithms to the multi-agent setting by treating other agents as part of the environment is problematic, as the environment appears non-stationary from the view of any one agent, violating the Markov assumptions required for convergence. This non-stationarity issue is even more severe in the case of DRL with neural networks as function approximators. The core idea of the MADDPG algorithm (Lowe et al. 2017) is to learn a centralized Q function for each agent which conditions on global information, alleviating the non-stationarity problem and stabilizing training.

More concretely, consider a game with $N$ agents with policies parameterized by $\theta = \{\theta_1, \ldots, \theta_N\}$, and let $\mu = \{\mu_1, \ldots, \mu_N\}$ be the set of all agents' policies. Then we can write the gradient of the expected return $J(\theta_i) = \mathbb{E}[R_i]$ for agent $i$ with policy $\mu_i$ as

$$\nabla_{\theta_i} J(\theta_i) = \mathbb{E}_{\mathbf{x}, a \sim \mathcal{D}}\big[\nabla_{\theta_i} \mu_i(o_i)\, \nabla_{a_i} Q_i^{\mu}(\mathbf{x}, a_1, \ldots, a_N)\big|_{a_i = \mu_i(o_i)}\big]. \quad (4)$$

Here $Q_i^{\mu}(\mathbf{x}, a_1, \ldots, a_N)$ is a centralized action-value function that takes as input the actions of all agents, $a_1, \ldots, a_N$, in addition to some state information $\mathbf{x}$ (i.e., $\mathbf{x} = (o_1, \ldots, o_N)$), and outputs the Q-value for agent $i$. Let $\mathbf{x}'$ denote the next state reached from $\mathbf{x}$ after taking actions $a_1, \ldots, a_N$. The experience replay buffer $\mathcal{D}$ contains the tuples $(\mathbf{x}, \mathbf{x}', a_1, \ldots, a_N, r_1, \ldots, r_N)$, recording the experiences of all agents. The centralized action-value function $Q_i^{\mu}$ is updated as

$$L(\theta_i) = \mathbb{E}_{\mathbf{x}, a, r, \mathbf{x}'}\big[(Q_i^{\mu}(\mathbf{x}, a_1, \ldots, a_N) - y)^2\big], \quad y = r_i + \gamma\, \bar{Q}_i^{\mu}(\mathbf{x}', a_1', \ldots, a_N')\big|_{a_j' = \bar{\mu}_j(o_j')}, \quad (5)$$

where $\bar{\mu} = \{\bar{\mu}_{\theta_1}, \ldots, \bar{\mu}_{\theta_N}\}$ is the set of target policies with delayed parameters $\bar{\theta}_i$. Note that the centralized Q function is only used during training. During decentralized execution, each policy $\mu_{\theta_i}$ only takes local information $o_i$ to produce an action.
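For reference before we modify it, the MADDPG critic update (Eq. 5) can be sketched as follows; the interface, with per-agent tensor lists and a critic called as `critic_i(x, joint_action)`, is our illustrative assumption rather than the authors' code:

```python
import torch
import torch.nn.functional as F

def maddpg_critic_loss(critic_i, target_critic_i, target_policies, batch, i, gamma=0.95):
    """TD loss for agent i's centralized critic (Eq. 5): the critic conditions on
    the observations and actions of all agents; execution stays decentralized."""
    obs, actions, rewards, next_obs = batch  # per-agent lists of (batch, dim) tensors
    with torch.no_grad():
        # Target actions come from every agent's delayed target policy.
        next_actions = [pi(o) for pi, o in zip(target_policies, next_obs)]
        y = rewards[i] + gamma * target_critic_i(torch.cat(next_obs, dim=1),
                                                 torch.cat(next_actions, dim=1))
    q = critic_i(torch.cat(obs, dim=1), torch.cat(actions, dim=1))
    return F.mse_loss(q, y)
```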
Minimax Multi-Agent Deep Deterministic Policy Gradient (M3DDPG)

In this section, we introduce our proposed new algorithm, Minimax Multi-Agent Deep Deterministic Policy Gradient (M3DDPG), which is built on top of the MADDPG algorithm and is particularly designed to improve the robustness of learned policies. Our M3DDPG algorithm contains two major novel components:

Minimax Optimization: Motivated by the minimax concept in game theory, we introduce minimax optimization into the learning objective.

Multi-Agent Adversarial Learning: The continuous action space leads to a computational intractability issue when optimizing our proposed minimax objective. Hence, we propose Multi-Agent Adversarial Learning (MAAL) to solve this optimization problem.

Minimax Optimization

In multi-agent RL, an agent's policy can be very sensitive to its learning partners' policies. Particularly in competitive environments, the learned policies can be brittle when the opponents alter their strategies. For the purpose of learning robust policies, we propose to update policies considering the worst situation: during training, we optimize the accumulated reward for each agent $i$ under the assumption that all other agents act adversarially. This yields the minimax learning objective $\max_{\theta_i} J_M(\theta_i)$, where

$$J_M(\theta_i) = \mathbb{E}_{s \sim \rho^{\mu}}[R_i] = \min_{a^t_{j \neq i}} \mathbb{E}_{s \sim \rho^{\mu}}\Big[\textstyle\sum_{t=0}^{T} \gamma^t\, r_i(s^t, a_1^t, \ldots, a_N^t)\Big]\Big|_{a_i^t = \mu_i(o_i^t)} \quad (6)$$

$$= \mathbb{E}_{s^0 \sim \rho}\Big[\min_{a^0_{j \neq i}} Q^{\mu}_{M,i}(s^0, a_1^0, \ldots, a_N^0)\Big]\Big|_{a_i^0 = \mu_i(o_i^0)}. \quad (7)$$

Critically, in Eq. 6, the state $s^{t+1}$ at time $t+1$ depends not only on the dynamics $\rho^{\mu}$ and the action $\mu_i(o_i^t)$, but also on all the previous adversarial actions $a^{t'}_{j \neq i}$ with $t' \leq t$. In Eq. 7, we derive the modified Q function $Q^{\mu}_{M,i}(s, a_1, \ldots, a_N)$, which is naturally centralized and can be rewritten in the recursive form

$$Q^{\mu}_{M,i}(s, a_1, \ldots, a_N) = r_i(s, a_1, \ldots, a_N) + \gamma\, \mathbb{E}_{s'}\Big[\min_{a'_{j \neq i}} Q^{\mu}_{M,i}(s', a_1', \ldots, a_N')\Big]\Big|_{a_i' = \mu_i(o_i')}. \quad (8)$$

Importantly, $Q^{\mu}_{M,i}(s, a_1, \ldots, a_N)$ conditions on the current state $s$ as well as the current actions $a_1, \ldots, a_N$, and represents the current reward plus the discounted worst-case future return starting from the next state $s'$. This definition brings the benefit that we can naturally apply off-policy temporal difference learning to derive the update rule for $Q^{\mu}_{M,i}$. Note that for each agent $i$, none of the adversarial actions depends on its parameter $\theta_i$, so we can directly apply the deterministic policy gradient theorem to compute $\nabla_{\theta_i} J_M(\theta_i)$ and use off-policy temporal difference learning to update the Q function. Thanks to the centralized Q function in MADDPG (Eq. 4), which takes in the actions from all the agents, our derivation applies naturally and aligns with the MADDPG formulation by injecting a minimization over the other agents' actions as follows:

$$\nabla_{\theta_i} J_M(\theta_i) = \mathbb{E}_{\mathbf{x} \sim \mathcal{D}}\Big[\nabla_{\theta_i} \mu_i(o_i)\, \nabla_{a_i} Q^{\mu}_{M,i}(\mathbf{x}, a_1, \ldots, a_i, \ldots, a_N)\Big]\Big|_{\substack{a_i = \mu_i(o_i) \\ a_{j \neq i} = \arg\min_{a_{j \neq i}} Q^{\mu}_{M,i}(\mathbf{x}, a_1, \ldots, a_N)}}, \quad (9)$$

where $\mathcal{D}$ denotes the replay buffer and $\mathbf{x}$ denotes the state information. Correspondingly, we obtain the new Q function update rule by adding another minimization to Eq. 5 when computing the target Q value:

$$L(\theta_i) = \mathbb{E}_{\mathbf{x}, a, r, \mathbf{x}' \sim \mathcal{D}}\big[(Q^{\mu}_{M,i}(\mathbf{x}, a_1, \ldots, a_N) - y)^2\big], \quad (10)$$

$$y = r_i + \gamma\, \bar{Q}^{\mu}_{M,i}(\mathbf{x}', a_1', \ldots, a_i', \ldots, a_N')\Big|_{\substack{a_i' = \bar{\mu}_i(o_i') \\ a'_{j \neq i} = \arg\min_{a'_{j \neq i}} \bar{Q}^{\mu}_{M,i}(\mathbf{x}', a_1', \ldots, a_N')}},$$

where $\bar{\mu}_i$ denotes the target policy of agent $i$ with delayed parameters $\bar{\theta}_i$, and $\bar{Q}^{\mu}_{M,i}$ denotes the target Q network for agent $i$. Combining Eq. 9 and Eq. 10 yields our proposed minimax learning framework.

Multi-Agent Adversarial Learning

The critical challenge in our proposed minimax learning framework is how to handle the embedded minimization in Eq. 9 and Eq. 10. Due to the continuous action space as well as the non-linearity of the Q function, directly solving the minimization problem is computationally intractable. A naive approximate solution is to perform an inner-loop gradient descent whenever performing an update step of Eq. 9 or Eq. 10, but this is too computationally expensive for practical use.
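To see what MAAL buys us, here is what that naive inner-loop alternative would look like under the same assumed critic interface as above: a full gradient-descent loop over the opponents' actions for every single update, which is the expensive baseline that MAAL replaces.

```python
import torch

def worst_case_actions(critic_i, x, actions, i, steps=20, lr=0.01):
    """Naive approximation of the argmin in Eq. 9/10: run several gradient
    descent steps on Q_{M,i} with respect to the opponents' actions a_{j != i}."""
    adv = [a.clone().detach().requires_grad_(j != i) for j, a in enumerate(actions)]
    opponents = [a for j, a in enumerate(adv) if j != i]
    for _ in range(steps):
        q = critic_i(x, torch.cat(adv, dim=1)).sum()
        grads = torch.autograd.grad(q, opponents)
        with torch.no_grad():
            for a, g in zip(opponents, grads):
                a -= lr * g  # descend on Q: opponents act adversarially
    return [a.detach() for a in adv]
```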
Here we introduce an efficient and end-to-end solution, Multi-Agent Adversarial Learning (MAAL). The main idea of MAAL can be summarized in two steps: (1) approximate the non-linear Q function by a locally linear function; (2) replace the inner-loop minimization with a single gradient descent step. Note that the core idea of MAAL, locally linearizing the Q function, is adapted from the adversarial training technique originally developed for supervised learning. We will discuss the connection between adversarial training and MAAL at the end of this section.

For conciseness, we first consider Eq. 10 and rewrite it in the following form with auxiliary perturbation variables $\epsilon_j$:

$$y = r_i + \gamma\, \bar{Q}^{\mu}_{M,i}(\mathbf{x}', a_1', \ldots, a_i', \ldots, a_N'), \quad (11)$$
$$\text{with } a_k' = \bar{\mu}_k(o_k') \ \forall\, 1 \leq k \leq N; \quad a_j' \leftarrow a_j' + \epsilon_j \ \forall\, j \neq i; \quad \{\epsilon_j\}_{j \neq i} = \arg\min_{\{\epsilon_j\}_{j \neq i}} \bar{Q}^{\mu}_{M,i}(\mathbf{x}', a_1' + \epsilon_1, \ldots, a_i', \ldots, a_N' + \epsilon_N).$$

Eq. 11 can be interpreted as seeking a set of perturbations $\epsilon$ such that the perturbed actions decrease the Q value the most. By linearizing the Q function at $\bar{Q}^{\mu}_{M,i}(\mathbf{x}', a_1', \ldots, a_N')$, the desired perturbation $\epsilon_j$ can be locally approximated by the gradient direction at that point with respect to $a_j'$. We then use this local linearization to derive an approximation $\hat{\epsilon}_j$ to the worst-case perturbation by taking a small gradient step:

$$\hat{\epsilon}_j = -\alpha\, \nabla_{a_j'} \bar{Q}^{\mu}_{M,i}(\mathbf{x}', a_1', \ldots, a_j', \ldots, a_N') \quad \forall\, j \neq i, \quad (12)$$

where $\alpha$ is a tunable coefficient representing the perturbation rate. It can also be interpreted as the step size of the gradient descent step: when $\alpha$ is too small, the local approximation error will be small, but due to the small perturbation the learned policy can remain far from the optimal solution of our proposed minimax objective; when $\alpha$ is too large, the approximation error may disturb the overall learning process too much, and the agents may fail to learn good policies.

We can apply this technique to Eq. 9 as well and eventually derive the following formulations:

$$\nabla_{\theta_i} J(\theta_i) = \mathbb{E}_{\mathbf{x}, a \sim \mathcal{D}}\Big[\nabla_{\theta_i} \mu_i(o_i)\, \nabla_{a_i} Q^{\mu}_{M,i}(\mathbf{x}, a_1, \ldots, a_i, \ldots, a_N)\Big]\Big|_{\substack{a_i = \mu_i(o_i) \\ a_j \leftarrow a_j + \hat{\epsilon}_j,\ \forall j \neq i \\ \hat{\epsilon}_j = -\alpha_j \nabla_{a_j} Q^{\mu}_{M,i}(\mathbf{x}, a_1, \ldots, a_N)}}, \quad (13)$$

and

$$L(\theta_i) = \mathbb{E}_{\mathbf{x}, a, r, \mathbf{x}'}\big[(Q^{\mu}_{M,i}(\mathbf{x}, a_1, \ldots, a_N) - y)^2\big], \quad (14)$$
$$y = r_i + \gamma\, \bar{Q}^{\mu}_{M,i}(\mathbf{x}', a_1', \ldots, a_i', \ldots, a_N')\Big|_{\substack{a_k' = \bar{\mu}_k(o_k'),\ \forall 1 \leq k \leq N \\ a_j' \leftarrow a_j' + \hat{\epsilon}_j,\ \forall j \neq i \\ \hat{\epsilon}_j = -\alpha_j \nabla_{a_j'} \bar{Q}^{\mu}_{M,i}(\mathbf{x}', a_1', \ldots, a_N')}},$$

where $\alpha_1, \ldots, \alpha_N$ are additional parameters. MAAL requires only one additional gradient computation and can be executed in a fully end-to-end fashion. Finally, combining Eq. 13 and Eq. 14 completes MAAL. The overall algorithm, M3DDPG, is summarized as Algorithm 1.

Algorithm 1: Minimax Multi-Agent Deep Deterministic Policy Gradient (M3DDPG) for N agents

for episode = 1 to M do
    Initialize a random process $\mathcal{N}$ for action exploration, and receive the initial state information $\mathbf{x}$
    for t = 1 to max-episode-length do
        For each agent $i$, select action $a_i = \mu_{\theta_i}(o_i) + \mathcal{N}_t$ w.r.t. the current policy and exploration
        Execute actions $a = (a_1, \ldots, a_N)$ and observe the reward $r$ and the new state information $\mathbf{x}'$
        Store $(\mathbf{x}, a, r, \mathbf{x}')$ in the replay buffer $\mathcal{D}$, and set $\mathbf{x} \leftarrow \mathbf{x}'$
        for agent i = 1 to N do
            Sample a random minibatch of $S$ samples $(\mathbf{x}^k, a^k, r^k, \mathbf{x}'^k)$ from $\mathcal{D}$
            Set $y^k = r_i^k + \gamma\, \bar{Q}^{\mu}_{M,i}(\mathbf{x}'^k, a_1', \ldots, a_N')\big|_{a_i' = \bar{\mu}_i(o_i'^k),\ a'_{j \neq i} = \bar{\mu}_j(o_j'^k) + \hat{\epsilon}_j}$, with $\hat{\epsilon}_j$ defined as in Eq. 14
            Update the critic by minimizing the loss $L(\theta_i) = \frac{1}{S} \sum_k \big(y^k - Q^{\mu}_{M,i}(\mathbf{x}^k, a_1^k, \ldots, a_N^k)\big)^2$
            Update the actor using the sampled policy gradient, with $\hat{\epsilon}_j$ defined as in Eq. 13:
            $\nabla_{\theta_i} J \approx \frac{1}{S} \sum_k \nabla_{\theta_i} \mu_i(o_i^k)\, \nabla_{a_i} Q^{\mu}_{M,i}(\mathbf{x}^k, a_1, \ldots, a_i, \ldots, a_N)\big|_{a_i = \mu_i(o_i^k),\ a_{j \neq i} = a_j^k + \hat{\epsilon}_j}$
        end for
        Update the target network parameters for each agent $i$: $\bar{\theta}_i \leftarrow \tau \theta_i + (1 - \tau)\bar{\theta}_i$
    end for
end for
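As a complement to Algorithm 1, the perturbation step at its core (Eq. 12) can be sketched in a few lines; a single extra backward pass replaces the inner loop shown earlier (interface names are again our assumptions):

```python
import torch

def maal_perturb(critic_i, x, actions, i, alpha=0.01):
    """One-step MAAL (Eq. 12): nudge every opponent's action a small step along
    the negative gradient of agent i's centralized Q, i.e. toward the locally
    worst joint response, using a single additional gradient computation."""
    acts = [a.clone().detach().requires_grad_(True) for a in actions]
    q = critic_i(x, torch.cat(acts, dim=1)).sum()
    grads = torch.autograd.grad(q, acts)
    return [a.detach() if j == i else (a - alpha * g).detach()
            for j, (a, g) in enumerate(zip(acts, grads))]
```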
Discussion

Connection to Adversarial Training

Adversarial training is a robust training approach for deep neural networks in supervised learning. The core idea is to force the classifier to predict correctly even when given adversarial examples, which are obtained by adding a small adversarial perturbation to the original input data such that the classification loss is increased the most. Formally, suppose the classification loss function is $L(\theta) = \mathbb{E}_{x,y}[f_{\theta}(x; y)]$ with input data $x$ and label $y$. Adversarial training aims instead to optimize the following adversarial loss:

$$L_{adv}(\theta) = \mathbb{E}_{x,y}[f_{\theta}(x + \epsilon; y)], \quad \epsilon = \arg\max_{\|\epsilon\| \leq \alpha} f_{\theta}(x + \epsilon; y). \quad (15)$$

The core technique for efficiently optimizing $L_{adv}(\theta)$ is to locally linearize the loss function at $f_{\theta}(x; y)$ and approximate $\epsilon$ by the scaled gradient. Thanks to the centralized Q function, which takes the actions of all the agents as part of its input, we are able to inject the minimax optimization (Eq. 11) easily and represent it in a way similar to adversarial training (Eq. 15), so that we can adopt a similar technique to solve our minimax optimization effectively in a fully end-to-end fashion.

Connection to Single-Agent Robust RL

M3DDPG with MAAL can also be viewed as a special case of robust reinforcement learning (RRL) (Morimoto and Doya 2005) in the single-agent setting, which aims to bridge the gap between training in simulation and testing in the real world by adding adversarial perturbations to the transition dynamics during training. Here, we consider the multi-agent setting and add worst-case perturbations to the actions of opponent agents during training. Note that from the perspective of a single agent, perturbations on opponents' actions can also be considered a special kind of adversarial noise on the dynamics.

Choice of α

In the extreme case of $\alpha = 0$, M3DDPG degenerates to the original MADDPG algorithm, while as $\alpha$ increases, policy learning tends to become more robust but the optimization becomes harder. In practice, using a fixed $\alpha$ throughout training can lead to very unstable learning behavior due to the changing scale of the gradients. The original adversarial training paper (Goodfellow, Shlens, and Szegedy 2014) suggests computing $\epsilon$ with a fixed norm, namely $g = \nabla_x f_{\theta}(x; y)$, $\hat{\epsilon} = \alpha \frac{g}{\|g\|}$, where $x$ denotes the input data to the classifier and $y$ denotes the label. In M3DDPG, we can adaptively compute the perturbation $\hat{\epsilon}_j$ by

$$g = \nabla_{a_j} Q^{\mu}_{M,i}(\mathbf{x}, a_1, \ldots, a_N), \qquad \hat{\epsilon}_j = -\alpha_j \frac{g}{\|g\|}. \quad (16)$$

Eq. 16 generally works well in practice, but in some hard multi-agent learning environments unstable training behavior can still be observed. We suspect that this is because of the changing norm of the actions in these situations. Unlike the supervised learning setting, where the norm of the input data $x$ is typically stable, in reinforcement learning the norm of the actions can change drastically even within a single episode. It is therefore possible that even a perturbation with a small fixed norm overwhelms the action $a_j$, which may lead to numerical stability issues.
Therefore, we also introduce the following alternative for adaptive perturbation computation:

$$g = \nabla_{a_j} Q^{\mu}_{M,i}(\mathbf{x}, a_1, \ldots, a_N), \qquad \hat{\epsilon}_j = -\alpha_j\, \|a_j\|\, \frac{g}{\|g\|}. \quad (17)$$

Lastly, note that in a mixed cooperative and competitive environment, ideally we only need to add adversarial perturbations to competitors. However, we empirically observe that also adding (smaller) perturbations to collaborators can further improve the quality of the learned policies.
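The two adaptive scaling rules can be sketched together as follows; the small epsilon guard against a zero gradient norm is our addition, not part of the paper:

```python
import torch

def adaptive_perturbation(grad, action, alpha, relative=True):
    """Scale the raw gradient into a perturbation: Eq. 16 uses a fixed norm alpha,
    while Eq. 17 additionally rescales by the action norm so the perturbation
    cannot overwhelm a small action."""
    eps = 1e-8  # numerical guard (our addition, not in the paper)
    direction = grad / (grad.norm() + eps)
    if relative:
        return -alpha * action.norm() * direction  # Eq. 17
    return -alpha * direction                      # Eq. 16
```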
Experiments

We adopt the same particle-world environments as the MADDPG paper (Lowe et al. 2017) as well as its training configurations. The perturbation rate $\alpha$ is selected by a grid search over 0.1, 0.01 and 0.001. For testing, we generate a fixed set of 2500 environment configurations (i.e., landmarks and birthplaces) and evaluate on this fixed set for a fair comparison.

Environments

The particle-world environment consists of $N$ cooperative agents, $M$ adversarial agents and $L$ landmarks in a two-dimensional world with continuous space. We focus on four mixed cooperative and competitive scenarios to best examine the effectiveness of our minimax formulation (two of them are illustrated in Figure 1).

Covert communication. This is an adversarial communication environment in which a speaker agent ('Alice') must communicate a message to a listener agent ('Bob') ($N = 2$), who must reconstruct the message at the other end. However, an adversarial agent ('Eve') ($M = 1$) is also observing the channel and wants to reconstruct the message. Alice and Bob are penalized based on Eve's reconstruction, so Alice must encode her message using a randomly generated key known only to Alice and Bob.

Keep-away. This scenario consists of $L = 1$ target landmark, $N = 2$ cooperative agents and $M = 1$ adversarial agent. The cooperating agents need to reach the landmark and keep the adversarial agent away from it by pushing, while the adversarial agent must stay at the landmark to occupy it.

Physical deception. Here, $N = 2$ agents cooperate to reach a single target landmark out of a total of $L = 2$ landmarks. They are rewarded based on the minimum distance of any agent to the target (so only one agent needs to reach the target landmark). However, a lone adversary ($M = 1$) also desires to reach the target landmark; the catch is that the adversary does not know which of the landmarks is the correct one. Thus the cooperating agents, who are penalized based on the adversary's distance to the target, learn to spread out and cover all landmarks so as to deceive the adversary.

Predator-prey. In this variant of the classic predator-prey game, $N = 3$ slower cooperating agents must chase the faster adversary ($M = 1$) around a randomly generated environment with $L = 2$ large landmarks impeding the way. Each time the cooperative agents collide with the adversary, the agents are rewarded while the adversary is penalized.

Comparison to MADDPG

To evaluate the quality of the policies learned by the different algorithms in competitive scenarios, we measure the performance of agents trained by our M3DDPG and agents trained by classical MADDPG in the roles of both normal agent and adversary in each environment. The results are shown in Figure 2, where we measure the rewards of the normal agents in the different scenarios and normalize them to the range 0-1. We notice that in all the environments, the highest score is achieved when the M3DDPG agents play as the normal agents against the MADDPG adversary (Minimax vs MA), while the lowest score occurs when the MADDPG agents act as normal agents against the M3DDPG adversary (MA vs Minimax). This indicates that the policies trained by M3DDPG have much higher quality than those of the original MADDPG.

Evaluation with Disruptive Adversaries

Instead of having M3DDPG and MADDPG compete directly against each other, we now consider their performance in the worst possible situations through their individual competitions against disruptive adversaries. We construct disruptive adversaries by (1) fixing the trained normal agents (M3DDPG or MADDPG); (2) setting the reward of the disruptive adversary to the negative of the normal agents' reward (so that the problem becomes zero-sum); and then (3) using DDPG to train the disruptive adversary from scratch under this zero-sum reward setting.

In the particle-world environment, the competitive scenarios are generally not zero-sum, which implies that under the default reward functions the adversaries may pursue purposes other than directly disrupting the normal agents. So, in order to evaluate the effectiveness of our minimax optimization in the worst situation, we convert every problem into a zero-sum form and compare the performance of our M3DDPG agents as well as the MADDPG agents against these artificially constructed adversaries. Moreover, since each of our four environments has only one adversary, after fixing the normal agents the learning problem degenerates to the single-agent setting, and classical DDPG is sufficient to stably train a disruptive adversary.
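The protocol can be summarized in pseudocode; the environment and agent interfaces (`reset`, `step`, `act`, `store`, `update`) are placeholders of our own, not the particle-world API:

```python
def train_disruptive_adversary(env, fixed_agents, adversary, episodes=10000):
    """Schematic evaluation protocol: freeze the trained normal agents, give the
    adversary the negated sum of their rewards (zero-sum conversion), and train
    it from scratch with a single-agent learner (DDPG in the paper)."""
    for _ in range(episodes):
        obs = env.reset()  # one observation per agent; last entry is the adversary
        done = False
        while not done:
            actions = [pi.act(o) for pi, o in zip(fixed_agents, obs[:-1])]
            actions.append(adversary.act(obs[-1]))
            next_obs, rewards, done = env.step(actions)
            adv_reward = -sum(rewards[:-1])  # zero-sum reward for the adversary
            adversary.store(obs[-1], actions[-1], adv_reward, next_obs[-1])
            adversary.update()
            obs = next_obs
```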
The results are shown in Figure 3, where we plot the reward of the fixed normal agents trained by the different algorithms as the training of the disruptive adversaries progresses until convergence. Note that due to the different environment designs, the difficulty for the disruptive agents to break the strategy of the normal agents varies: for example, in covert communication, since the private key is not accessible to the adversary agent, breaking the encrypted communication is very hard; while in physical deception, since we do not allow communication and the normal agents are fixed, a smart enough adversary may easily infer the target landmark by observing the initial behavior of the two cooperative agents. Nevertheless, despite these intrinsic properties, the M3DDPG agents (Minimax) achieve a higher reward in all the scenarios, which implies better robustness even in the worst situation.

Conclusion

In this paper, we propose a novel algorithm, Minimax Multi-Agent Deep Deterministic Policy Gradient (M3DDPG), for robust multi-agent reinforcement learning, which leverages the minimax concept and introduces a minimax learning objective. To efficiently optimize the minimax objective, we propose MAAL, which approximates the inner-loop minimization by a single gradient descent step. Empirically, M3DDPG outperforms the benchmark methods on four mixed cooperative and competitive scenarios. Nevertheless, due to the single-step gradient approximation in MAAL, which is computationally efficient, an M3DDPG agent can only explore the locally worst situation during training, which can still lead to unsatisfying behavior when the testing opponents have drastically different strategies. It will be an interesting direction to re-examine the robustness-efficiency trade-off in MAAL and further improve policy learning by devoting more computation to the minimax optimization. We leave this as future work.

Figure 1: Illustrations of some of the environments we consider, including Physical Deception (left) and Predator-Prey (right).

Figure 2: Comparison between M3DDPG (Minimax) and classical MADDPG (MA) on the four mixed competitive environments. Each bar cluster shows the 0-1 normalized score for a set of competing policies in different roles (agent vs adversary), where a higher score is better for the agent. In all cases, M3DDPG outperforms MADDPG when directly pitted against it.

Figure 3: Performance of M3DDPG (Minimax, red) and MADDPG (MA, blue) under the worst situation, i.e., against the disruptive adversaries, on covert communication, keep-away, physical deception and predator-prey (from left to right). The y-axis denotes the reward of the (fixed) normal agents and the x-axis denotes the number of training episodes performed by the disruptive adversaries. A higher reward implies a more robust policy. Agents trained by M3DDPG (Minimax) perform better in all the scenarios.

¹ In fact, we are dealing with gains, i.e., maximizing each agent's accumulated reward, so the "minimax" here is essentially "maximin". We keep the term "minimax" to be consistent with the literature.
The Technological Singularity

The MIT Press Essential Knowledge Series

Understanding Beliefs, Nils J. Nilsson
Computing: A Concise History, Paul E. Ceruzzi
The Conscious Mind, Zoltan L. Torey
Crowdsourcing, Daren C. Brabham
Free Will, Mark Balaguer
Information and the Modern Corporation, James W. Cortada
Intellectual Property Strategy, John Palfrey
The Internet of Things, Samuel Greengard
Memes in Digital Culture, Limor Shifman
MOOCs, Jonathan Haber
Open Access, Peter Suber
Paradox, Margaret Cuonzo
Waves, Frederic Raichlen
The Technological Singularity, Murray Shanahan

The Technological Singularity
Murray Shanahan

The MIT Press
Cambridge, Massachusetts
London, England

© 2015 Massachusetts Institute of Technology

All rights reserved. No part of this book may be reproduced in any form by any electronic or mechanical means (including photocopying, recording, or information storage and retrieval) without permission in writing from the publisher.

Library of Congress Cataloging-in-Publication Data
Shanahan, Murray.
The technological singularity / Murray Shanahan.
p. cm.
Includes bibliographical references and index.
ISBN 978-0-262-52780-4 (pbk. : alk. paper)
1. Artificial intelligence—Forecasting. 2. Artificial intelligence—Psychological aspects. 3. Technology—Social aspects. 4. Conscious automata. 5. Brain—Computer simulation. I. Title.
ISBN 978-0-262-33183-8 (retail e-book)
Q335.S4626 2015
006.3—dc23
2015000997

These remarks might appear fanciful to some readers, but to the writer they seem very real and urgent, and worthy of emphasis outside of science fiction.
I. J. Good, Speculations Concerning the First Ultraintelligent Machine (1965)

Real motive problem, with an AI. Not human, see?
William Gibson, Neuromancer (1984)

Contents

Series Foreword
Preface
Introduction
1 Routes to Artificial Intelligence
2 Whole Brain Emulation
3 Engineering AI
4 Superintelligence
5 AI and Consciousness
6 The Impact of AI
7 Heaven or Hell
Glossary
Notes
Further Reading
Index

Series Foreword

The MIT Press Essential Knowledge series offers accessible, concise, beautifully produced pocket-size books on topics of current interest. Written by leading thinkers, the books in this series deliver expert overviews of subjects that range from the cultural and the historical to the scientific and the technical. In today's era of instant information gratification, we have ready access to opinions, rationalizations, and superficial descriptions. Much harder to come by is the foundational knowledge that informs a principled understanding of the world. Essential Knowledge books fill that need. Synthesizing specialized subject matter for nonspecialists and engaging critical topics through fundamentals, each of these compact volumes offers readers a point of access to complex ideas.

Bruce Tidor
Professor of Biological Engineering and Computer Science
Massachusetts Institute of Technology

Preface

Like many others who have dedicated their working lives to research in artificial intelligence, I was inspired as a child by science fiction. My boyhood hero was not a real person. It was Susan Calvin, the scientist in Asimov's I, Robot stories (the written works, not the film) who pioneered the field of robot psychology. More than anyone else, I wanted to be like her when I grew up. Now that I have (sort of) grown up, and in real life bear the title of Professor of Cognitive Robotics, I have a more complex relationship with science fiction.
I still see it as a source of inspiration and as a medium for exploring important philosophical ideas. However, the ideas it explores merit a deeper treatment. The primary purpose of science fiction is to entertain, albeit in an intellectually stimulating way. It would be a mistake to use it as a guide to thinking. So this is not intended as a work of science fiction. Nor is it a piece of so-called futurology. The aim here is not to make predictions. Rather, it is to investigate a range of possible future scenarios, without committing to the prospect of any one in particular, and without any particular timescale in mind. Indeed even highly unlikely or remote scenarios are sometimes worthy of study. This is true, for instance, if a scenario is especially dystopian. In that case we might want to think carefully about how to reduce its likelihood even further. Unlikely or remote scenarios are also worth discussing if they raise interesting philosophical questions, obliging us, for example, to think about what we really want as a species. So whether or not you think we will soon create human-level artificial intelligence, whether or not you think the singularity is near, the very idea deserves some serious thought.

This is a short book on a very large theme. So it can only stand as an introduction, with many important issues given only a brief treatment. For example, various arguments relating to consciousness are presented to which there are well-known counterarguments, and these merit counterarguments of their own. But an introductory book has to skip over these subtleties. Also the focus is heavily on the future of artificial intelligence, and some significant related topics, such as nanotechnology and biotechnology, are barely touched on. The book is intended to provide a neutral overview of the conceptual territory, and I have attempted to outline both sides of the argument in controversial matters. However, it seems unavoidable that some of my own views will be visible through the veil of neutrality, despite my best efforts.

I would like to thank the very many people who have discussed artificial intelligence with me over the decades, not only academics and students but also members of the public who have attended my talks. I would like to thank them all by name, but that would be impossible. So I will reserve my explicit gratitude for a few colleagues whose recent influence has been especially pertinent. Thanks to Stuart Armstrong, Nick Bostrom, Andrew Davison, Daniel Dewey, Randal Koene, Richard Newcombe, Owen Holland, Huw Price, Stuart Russell, Anders Sandberg, and Jaan Tallinn. Sorry to those I have forgotten. Finally, I would like to thank MIT Press, and especially Bob Prior, for encouraging me to write the book in the first place.

Murray Shanahan
North Norfolk and South Kensington, October 2014

Introduction

In recent years the idea that human history is approaching a "singularity" thanks to increasingly rapid technological advance has moved from the realm of science fiction into the sphere of serious debate. In physics, a singularity is a point in space or time, such as the center of a black hole or the instant of the Big Bang, where mathematics breaks down and our capacity for comprehension along with it.
By analogy, a singularity in human history would occur if exponential technological progress brought about such dramatic change that human affairs as we understand them today came to an end.¹ The institutions we take for granted—the economy, the government, the law, the state—these would not survive in their present form. The most basic human values—the sanctity of life, the pursuit of happiness, the freedom to choose—these would be superseded. Our very understanding of what it means to be human—to be an individual, to be alive, to be conscious, to be part of the social order—all this would be thrown into question, not by detached philosophical reflection, but through force of circumstances, real and present.

What kind of technological progress could possibly bring about such upheaval? The hypothesis we will examine in this book is that a technological singularity of this sort could be precipitated by significant advances in either (or both) of two related fields: artificial intelligence (AI) and neurotechnology. Already we know how to tinker with the stuff of life, with genes and DNA. The ramifications of biotechnology are large enough, but they are dwarfed by the potential ramifications of learning how to engineer the "stuff of mind."

Today the intellect is, in an important sense, fixed, and this limits both the scope and pace of technological advance. Of course the store of human knowledge has been increasing for millennia, and our ability to disseminate that knowledge has increased along with it, thanks to writing, printing, and the Internet. Yet the organ that produces knowledge, the brain of homo sapiens, has remained fundamentally unchanged throughout the same period, its cognitive prowess unrivalled. This will change if the fields of artificial intelligence and neurotechnology fulfill their promise. If the intellect becomes, not only the producer, but also a product of technology, then a feedback cycle with unpredictable and potentially explosive consequences can result. For when the thing being engineered is intelligence itself, the very thing doing the engineering, it can set to work improving itself. Before long, according to the singularity hypothesis, the ordinary human is removed from the loop, overtaken by artificially intelligent machines or by cognitively enhanced biological intelligence and unable to keep pace.

Does the singularity hypothesis deserve to be taken seriously, or is it just an imaginative fiction? One argument for taking it seriously is based on what Ray Kurzweil calls the "law of accelerating returns." An area of technology is subject to the law of accelerating returns if the rate at which the technology improves is proportional to how good the technology is. In other words, the better the technology is, the faster it gets better, yielding exponential improvement over time. A prominent example of this phenomenon is Moore's law, according to which the number of transistors that can be fabricated on a single chip doubles every eighteen months or so.² Remarkably, the semiconductor industry has managed to adhere to Moore's law for several decades. Other indexes of progress in information technology, such as CPU clock speed and network bandwidth, have followed similar exponential curves.
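To make the arithmetic behind this claim explicit (the formalization is our gloss, not Kurzweil's own notation): if the quality $x$ of a technology improves at a rate proportional to $x$ itself, then

$$\frac{dx}{dt} = kx \quad\Longrightarrow\quad x(t) = x(0)\,e^{kt},$$

which is exponential growth. Moore's law corresponds to the special case $x(t) = x(0) \cdot 2^{t/1.5}$ with $t$ measured in years, i.e. $k = \ln 2 / 1.5 \approx 0.46$ per year.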
But information technology isn't the only area where we see accelerating progress. In medicine, for example, DNA sequencing has fallen exponentially in cost while increasing exponentially in speed, and the technology of brain scanning has enjoyed an exponential increase in resolution.³ On a historical timescale, these accelerating trends can be seen in the context of a series of technological landmarks occurring at ever-decreasing intervals: agriculture, printing, electric power, the computer. On an even longer, evolutionary timescale, this technological series was itself preceded by a sequence of evolutionary milestones that also arose at ever-decreasing intervals: eukaryotes, vertebrates, primates, homo sapiens. These facts have led some commentators to view the human race as riding on a curve of dramatically increasing complexity that stretches into the distant past. Be that as it may, we need only extrapolate the technological portion of the curve a little way into the future to reach an important tipping point, the point at which human technology renders the ordinary human technologically obsolete.⁴

Of course, every exponential technological trend must reach a plateau eventually, thanks to the laws of physics, and there are any number of economic, political, or scientific reasons why an exponential trend might stall before reaching its theoretical limit. But let us suppose that the technological trends most relevant to AI and neurotechnology maintain their accelerating momentum, precipitating the ability to engineer the stuff of mind, to synthesize and manipulate the very machinery of intelligence. At this point, intelligence itself, whether artificial or human, would become subject to the law of accelerating returns, and from here to a technological singularity is but a small leap of faith. Some authors confidently predict that this watershed will occur in the middle of the 21st century.

But there are other reasons for thinking through the idea of the singularity than prophecy, which anyway is a hit-and-miss affair. First, the mere concept is profoundly interesting from an intellectual standpoint, regardless of when or even whether it comes about. Second, the very possibility, however remote it might seem, merits discussion today on purely pragmatic, strictly rational grounds. Even if the arguments of the futurists are flawed, we need only assign a small probability to the anticipated event for it to command our most sincere attention. For the consequences for humanity, if a technological singularity did indeed occur, would be seismic.

What are these potentially seismic consequences? What sort of world, what sort of universe, might come into being if a technological singularity does occur? Should we fear the prospect of the singularity, or should we welcome it? What, if anything, can we do today or in the near future to secure the best possible outcome? These are chief among the questions to be addressed in the coming pages. They are large questions. But the prospect, even just the concept, of the singularity promises to shed new light on ancient philosophical questions that are perhaps even larger. What is the essence of our humanity? What are our most fundamental values? How should we live? What, in all this, are we willing to give up? For the possibility of a technological singularity poses both an existential risk and an existential opportunity. It poses an existential risk in that it potentially threatens the very survival of the human species. This may sound like hyperbole, but today's emerging technologies have a potency never before seen.
It isn’t hard to believe that a highly contagious, drug-resistant virus could be genetically engineered with sufficient morbidity to bring about such a catastrophe. Only a lunatic would create such a thing deliberately. But it might require little more than foolishness to engineer a virus capable of mutating into such a monster. The reasons why advanced AI poses an existential risk are analogous, but far more subtle. We will explore these in due course. In the meantime suffice to say that it is only rational to consider the future possibility of some corporation, government, organization, or even some individual, creating and then losing control of an exponentially self-improving, resource-hungry artificial intelligence. On a more optimistic note, a technological singularity could also be seen as an existential opportunity, in the more philosophical sense of the word “existential.” The capability to engineer the stuff of mind opens up the possibility of transcending our biological heritage and thereby overcoming its attendant limitations. Foremost among these limitations is mortality. An animal’s body is a fragile thing, vulnerable to disease, damage, and decay, and the biological brain, on which human consciousness (today) depends, is merely one of its parts. But if we acquire the means to repair any level of damage to it, and ultimately to rebuild it from scratch, possibly in a nonbiological substrate, then there is nothing to preclude the unlimited extension of consciousness. Life extension is one facet of a trend in thought known as “transhumanism”. But why should we be satisfied with human life as we know it? If we can rebuild the brain, why should we not also be able to redesign it, to upgrade it? (The same question might be asked about the human body, but our concern here is the intellect.) Conservative improvements in memory, learning, and attention are achievable by pharmaceutical means. But the ability to re-engineer the brain from bottom to top suggests the possibility of more radical forms of cognitive enhancement and re-organization. What could or should we do with such transformative powers? At least, so one argument goes, it would mitigate the existential risk posed by superintelligent machines. It would allow us to keep up, although we might change beyond all recognition in the process. The largest, and most provocative, sense in which a technological singularity might be an existential opportunity can only be grasped by stepping outside the human perspective altogether and adopting a more cosmological point of view. It is surely the height of anthropocentric thinking to suppose that the story of matter in this corner of the universe climaxes with human society and the myriad living brains embedded in it, marvelous as they are. Perhaps matter still has a long way to go on the scale of complexity. Perhaps there are forms of consciousness yet to arise that are, in some sense, superior to our own. Should we recoil from this prospect, or rejoice in it? Can we even make sense of such an idea? Whether or not the singularity is near, these are questions worth asking, not least because in attempting to answer them we shed new light on ourselves and our place in the order of things. 
1 Routes to Artificial Intelligence

1.1 Artificial General Intelligence

In 1950 the wartime code-breaker and computing pioneer Alan Turing published a paper in the journal Mind entitled "Computing Machinery and Intelligence."¹ It was the first serious, scholarly treatment of the concept of artificial intelligence. Turing predicted that by the year 2000 people would "be able to speak of machines thinking without expecting to be contradicted." He envisaged that machines would be able to pass what has become known as the Turing Test.

The Turing Test involves a kind of game. Two "players," one human and the other a machine, communicate with another person, the "judge," through a keyboard and screen. The judge holds a conversation with each of the players in turn, and tries to guess which is the human and which is the machine. The task for the machine is to convince the judge that it is the human—a feat, so the argument goes, that would surely require human-level intelligence. If the judge is unable to tell human from machine, then the machine has passed the test. Writing in 1950, Turing anticipated a world in which machines capable of passing his test were commonplace, a world in which "thinking machines" were familiar, in the home and in the workplace.

Despite Turing's prediction, human-level AI had not been achieved by the year 2000, nor was there any sign that it might be around the corner. No machine could come close to passing the Turing Test. Nevertheless, one significant milestone in artificial intelligence had recently been attained. In 1997 Deep Blue, a computer developed by IBM, defeated then world chess champion Garry Kasparov. In contrast to previous chess programs he had beaten, which to him seemed predictable and mechanical, Kasparov allegedly said he sensed an "alien intelligence" on the other side of the board when he played against Deep Blue.²

It's instructive to stand back and ponder this moment in the history of AI. The field had accomplished something that half a century beforehand might have been considered its crowning achievement. Humanity had been outstripped by a machine. Of course, a car can move faster than the fastest human sprinter, and a crane can hoist far more than a champion weight-lifter. But intellectual prowess is what sets human beings apart from the rest of the animals, and chess is a quintessentially intellectual pursuit. Now computer chess was cracked. Yet somehow we seemed no nearer to human-level AI than in Turing's time. How could this be?

The problem with Deep Blue was that it was a specialist. All it could do was play chess. Contrast this with a typical human adult. Take the office worker who has just walked past the window of the café where I am sitting with my laptop. Her day has no doubt been a busy patchwork of activities—making a packed lunch, reviewing the children's homework, driving to work, composing emails, fixing the photocopier, and so on. Each of these activities, examined more closely, requires the exercise of multiple sensorimotor skills. Consider the task of making a packed lunch. This involves retrieving utensils and ingredients from various places, opening packets, chopping, cutting, spreading, and so on. In short, a human being is a generalist, a jack of all trades. A human chess champion can do a whole lot more than just play chess. Moreover a human being is adaptive. Fixing photocopiers is not an innate capability. It is learned.
Had the office worker been born in a different century or a different culture, she would have acquired a different set of skills. And if she has the misfortune to lose her present job, she can re-train for another one.

The achievements of AI research in a variety of specialist domains (chess being just one among many success stories) contrast starkly with the field's failure to produce a machine with general purpose, adaptive intelligence. So how could we produce artificial general intelligence? Before we can speculate in an informed way about machine superintelligence, we need to answer this question.³

An essential feature of biological intelligence is embodiment. Unlike Deep Blue, a human being is an animal with a body, and its brain is part of that body. The brain of an animal has evolved to maintain the well-being of that body and to perpetuate the genes that it carries. The body has muscles, enabling it to move, and senses, so that its movements can be made to depend on the state of the environment, the better to subserve its mission. The brain sits in the middle of this sensorimotor loop, shaping the animal's actions according to what it perceives. Human intelligence, for all its glorious achievements, is fundamentally an extension of animal intelligence, and the human capacities for language, reason, and creativity all rest on a sensorimotor foundation.

So while the endeavor to create artificial general intelligence might do away with much that is essential to biological life, such as metabolism and reproduction, perhaps embodiment is a methodological necessity. Perhaps the need to engage with a messy, dynamic, physical environment full of complex and varied objects, both animate and inanimate, is at the root of intelligence. The Turing Test is a poor benchmark, in this light, since it involves only language. The only way to form a reliable judgment of the intelligence of an artifact is to observe its behavior in an environment like our own. And the only way to achieve human-level AI, according to this way of thinking, is through robotics.

Later we will examine challenges to this principle of embodiment. But let's adopt it for the time being. Our basic question can then be reformulated. How can we endow a robot with general intelligence? Perhaps general intelligence is simply the sum of many specialist sensorimotor skills, and the problem is simply that AI hasn't yet replicated enough of them. When robots have been given a certain critical mass of skills, general intelligence will somehow emerge. Well, even if we gloss over the many engineering questions this proposal begs, it remains unconvincing. The products of such an approach might briefly give the appearance of general intelligence. But nobody would be fooled for very long. The multi-specialist is going to get stuck as soon as it has to face a problem that is outside any of its areas of expertise, an inevitable occurrence in an ever-changing world.

Perhaps the capacity to learn is enough to plug the gap here. In an unfamiliar situation, a new specialist skill can be learned. Well, the ability to learn is certainly needed to build up and maintain a repertoire of skills. Indeed, learning, in its various forms, is the backdrop to all intelligence. But learning is time-consuming and risky. The hallmark of properly general intelligence is the ability to adapt an existing behavioral repertoire to new challenges, and to do so without recourse to trial and error or to training by a third party.
1.2 Common Sense and Creativity

So what would it take to overcome the limitations of specialization, to endow a machine with properly general intelligence? Perhaps the foremost requirements for such a machine are common sense and creativity. To have common sense, in this context, is to display an understanding of the principles of operation of the everyday world, in particular the physical and social environments. For example, one such principle is that if you walk all the way around something, you end up back where you started. Another is that if you walk back along a path you have just followed, you encounter the same landmarks but in reverse order. Principles such as these are useful because their application is not confined to narrow domains. They are universal and reusable.

What does mastery of a principle of common sense entail? There is no need to say anything about mechanism to answer this question. In particular, there's no reason to assume that mastery necessitates the internal representation of the principle in some language-like form. Instead, it will be manifest in behavior. Or more likely, the lack of some aspect of common sense will be manifest in behavior. For example, the cockerel that lives behind our house likes to fly up and over the gate, escaping his enclosure. But he's never out for long before he wants to get back in to rejoin the hens. All he needs to do is fly back over the gate. Yet this never occurs to him. Instead, he paces anxiously up and down in front of the gate. He seems to lack the commonsense principle that certain actions are reversible. To the extent that such blind spots in understanding do not show up in an animal's behavior, it might be said to possess common sense.

Of course, these considerations apply to humans as well as other animals, where they extend into the social realm. In particular, a shared understanding of the everyday world is at the core of language. Suppose you turn up to work to find a group of colleagues standing outside the building in the rain. "What are you doing?" you ask the nearest person. You would find it odd if she replied, albeit truthfully, "I'm standing in the rain." Instead she says "Fire alarm," and thereby exhibits a commonsense understanding of the human need for information and the role of conversation in obtaining it.

The second major requirement for general intelligence is creativity. The sort of creativity in question is not that of a great artist or composer or mathematician, but something every human being is capable of, something displayed by children in abundance. It is the ability to innovate, to generate novel behavior, to invent new things or devise new ways to use old things. It might be exploratory, or playful, as when a child improvises a dance. But it might be more goal-directed, such as planning the layout of a garden or devising ways to reduce household spending. Little creative acts such as these may not seem novel in the grand scheme of human affairs, but in each case they require the individual to go beyond his or her established behavioral repertoire, to reshape its elements or to assemble them in previously untried combinations.

Creativity and common sense complement each other. Creativity enables the individual to come up with new actions, but a commonsense understanding of the everyday world is needed to anticipate the consequences of those actions. On one hand, creativity without common sense (as we're using the terms here) is nothing more than thrashing in the dark.
On the other hand, common sense without creativity is inflexible. But an intelligence that can wield both is a powerful thing. Confronted with an unfamiliar challenge, it can entertain a multitude of possibilities for action thanks to its creative faculty and, thanks to its commonsense understanding of their effects, anticipate each of their likely outcomes before twitching a muscle or spinning a motor.

A fine example of apparently spontaneous innovation was reported in 2002 by a team of scientists from Oxford led by animal cognition researcher Alex Kacelnik.⁴ They were studying tool use in captive New Caledonian crows (an especially clever species), using an experimental apparatus comprising a small bucket containing food and a tall tube. To challenge the birds, the bucket was lowered into the tube, so that the handle was just out of reach. The birds were provided with pieces of bent wire, which they soon learned to use as hooks to lift the food-bucket out. However, on one occasion, when no hooks were available to the birds, only a piece of straight wire was left in their enclosure. Without ever having been trained to do so, one of the birds, Betty, jammed one end of the wire into a hole in the apparatus and bent it into a hook, which she then used to retrieve the food.

Betty's action was a blend of creativity and common sense. It required creativity to come up with the very idea of bending an otherwise useless piece of wire, and it required a common sense understanding of the pliable materials to anticipate the outcome. If these cognitive ingredients can produce impressive results in nonhuman animals, how much greater are their benefits in language-using humans. The schoolboy who hurls an inventive insult at one of his classmates blends linguistic creativity with a commonsense understanding of human psychology (even if he lacks the common sense not to aim such an insult at the teacher). This is a trivial example. But every human achievement, from the pyramids to the moon landings, is the product of a myriad such acts of invention, layered one upon another. A human-level artificial general intelligence must display a similar blend of common sense and creativity if it is to perform comparable feats.

1.3 The Space of Possible AIs

If the requirements for artificial general intelligence are so clear—all it needs is a little creativity and a little common sense—then why was so little progress made in the first sixty years of research in the field? Given the lack of success, is there any reason to suppose that human-level AI is practicable? And given that human-level AI is so difficult to create, what is the point of speculating about superintelligent AI?

We have been examining the behavioral hallmarks of general intelligence, and have so far avoided discussion of the mechanisms by which it might be realized, either in the biological brain or in an artifact. But before we can address these questions, this omission needs to be remedied. We cannot begin to paint a picture of the future of AI without thinking about concrete mechanisms. In computer science terms, we need to think not just about specification but also about implementation. It is a commonplace in computer science that the same specification can be implemented in many ways. This makes our task a difficult one because, unlike a software company that only needs to produce a single product, we would like to form an idea of the whole space of possible artificial intelligences.
Moreover, for all we know, some revolutionary technology will be developed in the near future that will enable the creation of artificial general intelligence of a kind that we can barely imagine today. Nevertheless, we have little choice but to start with the variety of schools in current AI research, and attempt to extrapolate from there.

One axis along which the space of possible AIs can usefully be classified is biological fidelity. How closely does the operation of an AI mimic that of the biological brain? At one end of this axis we find AIs that have been engineered from scratch, according to principles quite different from those that govern biological intelligence. At the other end of the axis are machines based on neural networks that copy biological brains down to a fine level of physical detail. There have been methodological schools advocating work at all points along this spectrum throughout the history of AI. The popularity of each school has waxed and waned, but none has emerged as triumphant, and each has arguments in its favor.

For example, a well-worn analogy with the history of powered flight likens the first type of machine, the AI engineered from scratch, to an aeroplane. Early designs for flying machines included flapping wings, imitating birds. But this approach failed. Fixed wings and propellers turned out to be the best way to get a large, heavy, human-made object airborne. Similarly, according to the argument by analogy, artificial intelligence shouldn’t proceed by trying to imitate nature, but by devising a whole new set of engineering principles tailored for silicon-based computation. Opponents of this standpoint (after pointing out the dubious status of arguments by analogy) can counter that the biological brain is the only exemplar we have of general intelligence. We know it’s possible to implement general intelligence in a neural substrate. Insofar as we can replicate this substrate artificially, we can be confident of success. Indeed, in its most extreme, brute-force guise, this biologically inspired approach is almost guaranteed to succeed under certain fairly conservative scientific and technological assumptions.

There’s a lot to say about engineering AI from scratch, and we’ll return to this theme in due course. But it is this brute-force biologically inspired approach, known as whole brain emulation, that will be our focus for the time being.⁵ As well as being a viable step toward the future creation of artificial general intelligence, whole brain emulation is touted as the route to mind uploading, an important goal for certain brands of transhumanism. Finally, the mere concept of whole brain emulation is useful as a philosophical thought experiment. It forms the basis of a cluster of potent philosophical arguments relating to the very idea of artificial intelligence, to machine consciousness, and to personal identity, all of which are highly relevant to the topic of this book.

2 Whole Brain Emulation

2.1 Copying the Brain

What exactly is whole brain emulation? In a nutshell, the idea is to make an exact working copy (or copies) of a particular brain in a nonbiological (e.g., computational) substrate. To understand the details, we need to know a bit of basic neuroscience. The vertebrate brain, like every other organ in an animal’s body, comprises a multitude of cells. Many of these cells are neurons, which are remarkable electrical devices, each one capable of sophisticated signal processing.
A neuron consists of a cell body (called the soma), an axon, and a set of dendrites. Crudely speaking, the dendrites can be thought of as the neuron’s input and the axon as its output, while the soma does the signal processing. Neurons are richly interconnected, and they form a complex network. Both axons and dendrites resemble trees, with numerous branches fanning out and intertwining with the axons and dendrites of other neurons. At points where the axon (output) of one neuron is very close to a dendrite (input) of another neuron, a synapse can form. By means of a complex exchange of chemicals, a synapse permits signals to jump from one neuron to another, which allows them to communicate with each other.

The human brain contains an astonishing number of neurons—more than 80 billion. But neurons are not confined to an animal’s central nervous system, its brain and spinal cord. The peripheral nervous system is also constituted by neurons, which carry sensory signals to the brain from the body—the skin, the eyes, the stomach, and so on—and carry motor signals from the brain (via the spinal cord) to the rest of the body, to the muscles, the glands, and so on.

Activity in the brain results from the interplay of electrical and chemical activity. In particular, the behavior of a neuron is modulated by the presence of chemical neurotransmitters, such as dopamine and serotonin. These chemicals are produced by special-purpose neurons with long, diffuse axonal projections that disseminate the chemicals throughout the brain. Neuromodulating chemicals can also be delivered to the brain via the blood, which is how most psychoactive drugs work.

The brain isn’t just made of neurons. It also contains a vascular system that transports blood to all its parts, delivering the energy it needs to generate all those electrical signals. And it contains a vast number of so-called glial cells. These were once thought of simply as a kind of glue, holding all the neurons and their axons and dendrites in place. But the glial cells seem to perform a signaling function of their own, albeit on a slower timescale than the neurons.

The signaling properties of individual neurons are more-or-less understood. The details are complicated. But simply put, each neuron adds up (integrates) the signals on its dendritic input, and when the total reaches a threshold, it emits a pulse, or spike, along its axon. Precise descriptions of this process have been around since the 1950s, when Alan Hodgkin and Andrew Huxley formulated the mathematical model for which they earned the Nobel Prize in Physiology or Medicine.

An essential property of the brain is its plasticity. During development, the connections in the pre-natal and infantile brain undergo dramatic reconfiguration, as axons and dendrites grow like the roots of a plant, feeling their way across enormous distances (in neural terms) to establish new connections, as well as abandoning redundant ones. Additionally, throughout an animal’s life, established neural connections are subject to constant changes in strength, facilitating learning and memory. Good mathematical models exist of these plastic processes too. Obviously this brief overview doesn’t even scratch the surface of what we know about the brain, and what we do know barely scratches the surface of all there is to know.
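Although nothing in the argument turns on the details, the integrate-then-spike picture just described is concrete enough to capture in a few lines of code. What follows is a minimal sketch of a so-called leaky integrate-and-fire neuron, a drastic simplification of the Hodgkin–Huxley model rather than the model itself; every constant here is an illustrative placeholder, not a measured value.

```python
# A leaky integrate-and-fire neuron: the membrane potential leaks toward
# its resting value while integrating input, and a spike is emitted (and
# the potential reset) whenever a threshold is crossed.
dt      = 0.1    # time step (ms)
tau     = 10.0   # membrane time constant (ms)
v_rest  = -70.0  # resting potential (mV)
v_th    = -55.0  # spike threshold (mV)
v_reset = -75.0  # potential just after a spike (mV)

v, spike_times = v_rest, []
for step in range(1000):                        # 100 ms of simulated time
    t = step * dt
    i_in = 20.0 if 20.0 <= t < 80.0 else 0.0    # toy dendritic input current
    v += (dt / tau) * (-(v - v_rest) + i_in)    # integrate (one Euler step)
    if v >= v_th:                               # threshold reached:
        spike_times.append(round(t, 1))         # record a spike...
        v = v_reset                             # ...and reset
print(spike_times)                              # a handful of spike times
```

A serious simulation would swap in the full Hodgkin–Huxley dynamics and couple the neurons with plasticity rules of the kind just mentioned, but the integrate-then-spike skeleton remains the same.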
However, everything in our burgeoning understanding of the brain’s inner workings lends support to the following hypothesis, which is enormously significant both practically and philosophically: human behavior is determined by physical processes in the brain that mediate between its incoming sensory signals and its outgoing motor signals. Of course, to make sense of human behavior we have to see it in the context of an embodied animal interacting with its physical and social environment. Activity in the brain is meaningless otherwise. But this platitude is irrelevant to the hypothesis. Put another way, the claim is simply that there are no causal mysteries, no missing links, in the (immensely complicated) chain of causes and effects that leads from what we see, hear, and touch to what we do and say. The possibility of whole brain emulation rests on this claim.

2.2 Three Stages of Whole Brain Emulation

The business of whole brain emulation can be envisioned as a three-stage process: mapping, simulation, and embodiment.¹ The first stage is to map the brain of the subject at high (submicron) spatial resolution. The entire forebrain (at least) should be included in the map. This ensures that the portions of the brain most closely associated with higher cognitive functions are scanned, notably the cerebral cortex (gray matter) and its interconnections (white matter), as well as structures associated with emotion and action selection, such as the amygdala and the basal ganglia. The mapping process should acquire (at least) the location and characteristics of every neuron and every synapse, along with a neuron-level connectome, that is to say a record of every connection between every axon and every dendrite. The result will be an exquisitely detailed blueprint of a particular brain at a particular time.

The second stage of the process is to use this blueprint to build a real-time simulation of the electrochemical activity of all of those neurons and their connections. Such a simulation could, for example, be built using standard techniques from the field of computational neuroscience, using established mathematical formulations of neuronal behavior such as the Hodgkin–Huxley model. The underlying techniques here are much the same as those used to simulate the weather, say, or fluid flow around a wing. It goes without saying that considerable computing resources would be needed to simulate even a small brain this way.

The third stage of the process is to interface the simulation to an external environment. So far all we have is a very complicated, disembodied computing device. To bridge the gap from a powerless simulation running inside a box to a causally potent emulation that exhibits outward behavior necessitates the construction of a body (even if this is a simulated body in a virtual world—a possibility I discuss later). Since the simulation expects incoming signals just like those of its biological precursor, and generates outgoing signals just like those of its biological precursor, the task of interfacing the simulated brain to this (synthetic) body is made easier if it is morphologically and mechanically similar to the body of the original animal.

If the mapping and simulation stages are successful, then the behavior of the simulated neurons, both individually and as a population, should be effectively indistinguishable from that of the original, biological brain given the same input from the environment.
The word “effectively” is important here because it would be too much to expect a perfect match. A brain is a chaotic system, in the mathematical sense that very small differences in initial conditions can lead to very large differences in the behavior of the system over time. Consequently small inaccuracies in the mapping process, as well as numerical rounding errors in the computation, would cause the behavior of the simulation eventually to diverge from that of its biological prototype. But this limitation isn’t necessarily a barrier to successful emulation. If these microscopic deviations are sufficiently small, the macro-scale outward behavior of the emulation would surely be indistinguishable from that of the original. From the standpoint of an observer, the emulation would seem to make the same decisions and to perform the same actions as its prototype under any given set of circumstances. If the subject is human, then even his or her friends and loved ones would have to admit that the emulation behaved uncannily like the person they knew, displaying the same habits, talking in the same way, even claiming to possess the same memories.

2.3 The Technology of Brain Mapping

The idea of human whole brain emulation is technologically problematic and philosophically challenging. It’s a theme we will return to in due course. But for now, let’s consider a species that raises fewer technological and philosophical difficulties, one with a smaller brain, namely the mouse. What would it take to achieve whole brain emulation for a mouse? What kind of technology would be required? Let’s take each of the three stages of emulation in turn.

Here is one way to carry out a detailed structural scan of a mouse’s brain using early 21st century technology. First, the (un)fortunate mouse is killed and its brain extracted. Second, its forebrain is sectioned into ultra-thin slices. Third, each slice is imaged and digitized using electron microscopy. Fourth, the placement and type of each neuron, the shape of each axon and dendrite, the location and type of each synapse, and so on, are all reconstructed by computer from the stack of images. The result would be a very large dataset capturing much of the essence of the original brain, just the sort of blueprint we need.

But would it be sufficient to construct an emulation? A structural scan of this sort only provides a snapshot of the brain’s components frozen in time—what their shapes are, how they are arranged, how they connect one to another. It doesn’t directly tell us about dynamics, about how those components behave and interact. The higher the spatial resolution of the structural scan, the smaller the neural microstructures it will include, and the easier it will be to reconstruct the likely behavior of a given neuron on a computer using a mathematical model. Nevertheless, even a high-resolution scan is unlikely to be able to fix all the parameters required by such a model, such as the strength of a synaptic connection. And without all its parameters filled in, a mathematical model is useless for computer simulation.

However, if they can be obtained, recordings of the electrical activity of a neuron can compensate for shortcomings even in a lower resolution structural scan. One possible method for doing this, again using early 21st century technology, is to use a mouse that has been genetically modified so that its neurons produce a dye that fluoresces when they fire.
Then, by shining light onto the cortex, recordings can be made of the activity of every neuron in the brain using ordinary light microscopy.² (Obviously these have to be made before the mouse is killed and its brain sliced.) Automated techniques can subsequently be used to search for values for the missing parameters such that, when plugged into the model, the recorded data are most accurately reproduced. Scanning and recording techniques such as these are very promising. However, the brain of a mouse contains over 70 million neurons, and each neuron can have several thousand synaptic connections. A human brain contains over 80 billion neurons and tens of trillions of synapses. Computationally intensive methods such as the slice-and-scan procedure will struggle with the sheer numbers involved, and even Moore’s law is unlikely to bail these methods out. The fluorescence microscopy method described also has its limitations. Although it has excellent spatial resolution, and can monitor individual neurons, it has relatively low temporal resolution and cannot distinguish individual spiking events. Thankfully, though, various alternative approaches to mapping the brain are on the horizon, due to advances in biotechnology and nanotechnology. Let’s take a look at a couple of candidates. We just touched on one relevant application of genetic engineering. Here is another.³ Suppose that we can genetically engineer the mouse so that every neuron in its brain contains a sequence embedded in its DNA that is unique to that neuron, a kind of “DNA barcode.” Then, with every neuron individually barcoded, the mouse’s brain could be “infected” with an otherwise harmless virus that has been specially engineered to carry genetic material across synaptic gaps, enabling DNA from the pre-synaptic neuron to recombine with DNA from the post-synaptic neuron. This would produce new strands of DNA, each containing a pair of barcodes representing the existence of a synaptic connection between the two neurons in question. The brain of the mouse would thus become a repository of billions of genetically encoded records of pairwise connections between neurons. The task then would be to extract these data, which could be done using DNA sequencing technology. With this method, a neuron-level connectome would be obtainable without the costly intermediate step, in terms of data and computation, of submicron-scale imaging and image processing. Moreover the bottleneck with this method, the speed and cost of DNA sequencing, has undergone years of exponential improvement in the aftermath of the human genome project. So this is a promising technique. But as with the slice-and-scan procedure described earlier, it would only supply some of the data necessary to emulate the brain. It reveals structure but not function. This is where nanotechnology comes in. Nanotechnology could help map the mouse’s neural activity, and thereby it could fill in the missing details of the blueprint. Biotechnology and nanotechnology both rely on the same powerful idea—the exploitation of very large numbers of very small objects. In the case of biotechnology, the very small objects in question are biological—viruses, bacteria, strands of DNA, and so on. But the idea works just as well with very small nonbiological objects. The field of nanotechnology concerns the manufacture of such objects, objects whose characteristic size is of the order of tens of nanometers, that is to say a few tens of billionths of a meter. 
Nanotechnology has numerous potential applications, many of which are relevant to this book. But for now we’ll confine our attention to the business of brain activity mapping. At the nano-scale, even the soma of a neuron, whose characteristic size is a few millionths of a meter, looks big. So we can imagine creating swarms of nano-scale robots capable of swimming freely in the brain’s network of blood vessels, each one then attaching itself like a limpet to the membrane of a neuron or close to a synapse.⁴ There it would sit, sensing the neuron’s fluctuating membrane potential or detecting spike events, and transmitting this information live to a fleet of micro-scale way-station devices near the cortical surface. The job of these way stations would be to harvest incoming data from the numerous “neuro-limpets” and to broadcast to the outside world, where the data can be collected by the neuroscientist.

Although these are speculative proposals, they hint at what may be feasible in the near future. It’s not the aim of this book to make detailed predictions or to guess at the timescale of technological progress. Rather, the aim is to work through a range of possible future scenarios and their ramifications. The specific point here is that the barriers to providing a blueprint for the mouse brain, a blueprint with sufficient detail to enable a successful emulation, are technological not conceptual. Moreover they are barriers that are likely to be overcome in due course, perhaps using some combination of biotechnology and nanotechnology. It may take ten years. Perhaps it will take fifty. But historically speaking, even a century would be a very short time.

In the meantime there is another possibility to consider, one that would require less scaling up in scanning technology but more science. So far we have been thinking about attempting to copy the brain of a particular adult animal. If the copy is to be behaviorally indistinguishable from the original, faithfully reproducing all its learned behavior, all its habits and preferences, a very detailed, very accurate scan would be required. But suppose instead that a large number of brains of newborn mice were scanned in as much detail as the state-of-the-art technology allowed. Then, by merging all the data, and drawing on as much other mouse brain data as possible to constrain it, a statistical model of the average neonate mouse brain could be built.⁵ With the aid of such a statistical model, any number of precise, neuron-by-neuron, synapse-by-synapse descriptions of individual juvenile mouse brains could be generated, each differing a little from the other but each conforming to the overall statistical template. None of these descriptions would correspond to the brain of a real mouse that had actually lived. But, given enough data to sufficiently constrain the model, each would represent a viable mouse brain, ready to be instantiated in a computer simulation and embodied.

2.4 The Technology of Neural Simulation

With a detailed description of the brain duly acquired by some means or another, the simulation can be put together. There are a variety of options for the underlying substrate on which the simulation might be implemented. These range from conventional digital computers through custom-made analogue hardware to chemical or biological computers. The most conventional route to implementation involves the kind of digital computer we all have on our desks or embedded in our mobile phones.
Any conventional digital computer can be used to simulate, one small time step at a time, how a collection of variables changes given a set of differential equations governing those variables. The electrical and chemical properties of the various components of a neuron can be modeled this way, for example, using the Hodgkin–Huxley equations mentioned previously. Of course, the task at hand is to simulate not just a single neuron but many neurons connected together. So there are many variables, each governed by the equations in question, and the task is to simulate them all at once.

How can this be done in real time on a conventional, serial computer, which in effect carries out only one operation at a time? Well, fortunately neurons are slow. Even when excited, a typical neuron only emits a spike every few milliseconds. In the time it takes for a typical neuron to emit two spikes, a desktop computer running at a modest 3 GHz can perform more than ten million operations. So it’s possible to simulate many neurons at once by multi-tasking. In each millisecond of simulated time, the computer spends a tiny fraction of a millisecond simulating neuron 1, a tiny fraction of a millisecond simulating neuron 2, and so on for tens of thousands of neurons.

However, even the brain of a mouse contains tens of millions of neurons, and to simulate them all accurately and in real time requires an awful lot of computation. Although processor clock speeds increased at a gratifyingly exponential rate in the 1980s and 1990s, this trend eventually slowed in the early 21st century. Even the fastest serial processor cannot simulate all the neurons in a mouse’s brain. Fortunately, though, parallelism can take the strain at this point. Rather than using a serial processor that carries out one operation at a time, the simulation can be done with multiple processors all running simultaneously, each one simulating many thousands of neurons. Just as it’s possible for a thousand workers to build, in a week, a brick edifice that would take an individual bricklayer a lifetime, it’s possible to simulate a whole brain using numerous slow parallel processors, something that would be impossible in real time using one fast processor.

Indeed the brain itself exploits a form of massive parallelism. Each neuron might be thought of as a tiny, independent information-processing unit. Its input is the set of signals on its dendrites. It has a memory in the form of various physical quantities, such as its membrane potential and the strengths of its synapses. And the neuron itself “computes” a function that continuously maps its dendritic “input” and the current state of its “memory” to the “output” signal it delivers to its axon. The underlying functional substrate of the brain, according to this analogy, is a form of massively parallel computation, with many millions of tiny processors all working simultaneously.

The analogy with parallel computation breaks down somewhat when we look at the real physics and chemistry of a neuron.⁶ But it helps to illustrate an important point, which is that the biological brain is another example of the principle of exploiting very large numbers of very small things. In order to simulate the brain, we need to exercise the same principle, albeit in a different substrate. So it bodes well for the prospect of whole brain emulation that the supercomputers of the mid-2010s are all massively parallel machines.
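To put rough numbers on the multi-tasking scheme: at 3 GHz, the few milliseconds between two spikes buys about 3 × 10⁹ × 3 × 10⁻³ ≈ 10⁷ instructions, ample time to visit tens of thousands of model neurons. The sketch below extends the toy neuron from earlier to a whole population; the per-neuron updates are written as NumPy array operations, and the random connectivity and constants are placeholders.

```python
import numpy as np

# Advance a population of leaky integrate-and-fire neurons in lock step.
# A serial machine interleaves the per-neuron updates within each time
# step; a parallel machine farms subsets of neurons out to different
# processors.  The arithmetic is the same either way.
n, dt, tau = 2_000, 0.1, 10.0
rng = np.random.default_rng(0)
# Sparse random weights: each neuron listens to roughly 1% of the others.
w = rng.normal(0.0, 2.0, (n, n)) * (rng.random((n, n)) < 0.01)
v = np.full(n, -70.0)                  # membrane potentials (mV)
spiking = np.zeros(n, dtype=bool)      # which neurons fired last step?

for step in range(1_000):              # 100 ms of simulated time
    i_ext = rng.normal(16.0, 4.0, n)   # noisy external drive
    i_syn = w @ spiking                # input from last step's spikes
    v += (dt / tau) * (-(v + 70.0) + i_ext + i_syn)
    spiking = v >= -55.0               # threshold test, all neurons at once
    v[spiking] = -75.0                 # reset the neurons that fired
```

Scaled up from two thousand neurons to tens of millions, with realistic neuron models in place of this toy one, a loop of this shape is exactly what massively parallel machines would be asked to run.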
Moreover, as the number of processors they incorporate has increased, the cost per processor has gone down, following an exponential trend that accords with Moore’s law. This particular technological trend owes a good deal to legions of computer gamers, whose demand for a better gaming experience has driven the development of cheap, high-performance graphics processing units (GPUs). Although originally dedicated to manipulating large arrays of pixels, the architecture of a GPU is essentially that of a general-purpose parallel computer. As their efficiency and power increased and their cost went down, they found new applications in other areas requiring large numbers of parallel calculations, such as modeling nuclear reactions or the climate. By 2012 the world’s most powerful computer, Cray’s Titan, was based on a hybrid architecture that incorporated 18,688 GPUs, each one in itself a powerful parallel computer.

2.5 Brain-Scale Computation

It would already be possible to simulate the whole brain of a mouse using the most powerful computers of the mid-2010s if (1) the level of physical detail required for successful emulation were sufficiently low and (2) we had a blueprint at the required level of detail. We have already discussed some of the technological options for meeting the second condition. As for the first condition, the jury is still out. Is it possible to achieve behavioral indistinguishability while abstracting away from the chemistry of synaptic transmission, the structure of glial cells, the shapes of dendrites and axons, and so on, and treating neurons as simple, point-like, mathematical objects? If so, then the computational demands of whole brain emulation would be orders of magnitude less than if all these aspects of the brain had to be modeled. Neuroscience has yet to answer this question.

But even if the answer is favorable, the scale-up from a mouse brain to a human brain (and to human-level intelligence) is huge. The engineering challenge here is not merely to achieve the required number of FLOPS (floating point operations per second) but to do so in a small volume and with low power consumption. The average human brain (male) occupies a mere 1,250 cm³ and consumes just 20 W. By contrast, the Tianhe-2, the world’s most powerful supercomputer in 2013, consumes 24 MW and is housed in a complex occupying 720 m². Yet it still has only a fraction of the computing power needed to simulate a human brain under even the most conservative assumptions.

In short, massive parallelism notwithstanding, it may be necessary to look beyond conventional digital computers to achieve human-level AI via the whole brain emulation route. One promising approach is neuromorphic hardware.⁷ Rather than using existing general-purpose computing technology, the idea here is to construct custom hardware that closely resembles the wetware of the brain. Conventional digital hardware performs hundreds of binary floating point arithmetic operations to simulate a few milliseconds of change in a single neuron’s membrane potential. This involves thousands of transistor switching events, each of which consumes power (and generates heat). The membrane potential itself is represented as a binary number, which changes in discrete steps rather than varying continuously like a real physical quantity. The neuromorphic approach does away with all this digital paraphernalia and uses analogue components that behave like the original neuron.
The membrane potential is represented by a real physical quantity of charge that undergoes continuous variation. The result is far more efficient in terms of power consumption.

When we looked at potential brain mapping technologies for whole brain emulation, we envisaged scaling up contemporary technology (e.g., slicing and scanning), or successfully developing nascent technologies that already look feasible (e.g., DNA barcoding), or effecting a paradigm shift to a theoretically possible but highly speculative technology (e.g., neural nanobots). With the technology of neural simulation, we can look at a similar range of possibilities. We have already discussed massively parallel supercomputers that use conventional, digital architectures, and we just touched on neuromorphic hardware, which is well established as an alternative technique for simulating small numbers of neurons but needs to be scaled up dramatically. But what lies on the more distant horizon?

There has been a good deal of speculation about the potential of quantum computation. This is certainly an interesting topic. But the class of problems for which quantum computation is theoretically advantageous does not include large-scale neural simulation. Exotic quantum effects, such as superposition, can be exploited to solve intractable search problems. But the computational demands of whole brain simulation are unrelated to the intractability of search.⁸ They are dictated by the requirement for truly massive parallelism.

What we really need is a hardware paradigm that would allow Moore’s law to continue beyond the limits imposed by physics on the scale of integration possible in conventional hardware, limits such as the speed of light, the size of an atom, and the minimal energy required to flip a bit from one state to another. One candidate is quantum dot cellular automata (QDCA).⁹ Despite the use of the word “quantum” here, a QDCA is not a quantum computer. Rather, a quantum dot is a nano-scale semiconductor device that can act like a transistor, switching states very rapidly but using very little power. Four quantum dots can be arranged in a square to form a quantum dot cell, which can store a single bit of information. Quantum dot cells can be laid out on a grid (to form a cellular automaton) and organized into logic gates and communication channels. These are the basic elements of digital electronics, and they can be assembled into tiny processors. The advantage of QDCA over conventional (complementary metal-oxide semiconductor or CMOS) silicon technology is the enormous scale of integration they permit, enabling many more switching devices to be placed in the same area than is physically possible with CMOS while consuming modest power and generating little heat.

But the practical application of QDCA is perhaps decades away. In the nearer term the semiconductor industry is likely to retain conventional processor design, perhaps exploiting 3D stacks of transistors, as opposed to the 2D slices of silicon used today, in an effort to prolong Moore’s law, and perhaps abandoning silicon altogether by adopting carbon nanotubes as the medium for constructing smaller, more efficient transistors. One thing is beyond doubt. The electronics industry of the 2010s is a long way from producing computers that come anywhere near the ultimate theoretical limit to how much computation can be carried out within a given portion of matter.
The term computronium is sometimes used to denote a (mythical) material within which the number of computational operations carried out per second is equal to the maximum physically possible in any configuration of atoms. The physicist Seth Lloyd has calculated that such a theoretically perfect computer with a mass of 1 kg and occupying a volume of 1 liter would perform 5.4 × 10⁵⁰ logical operations per second on 10³¹ bits. This is a staggering 39 orders of magnitude greater than today’s computers.¹⁰ There is little prospect of ever achieving this kind of computational power in practice. However, a tiny fraction of this capacity would be sufficient for a very high fidelity simulation of the human brain. After all, the human brain is only a little over a liter in volume and (astonishingly) consumes just 20 W of power. Whether we are talking about simulating large numbers of neurons or achieving AI via a less biologically inspired route, the real possibility of much more powerful computers than we have today is a major motivation for thinking through the possibility of machine superintelligence.

2.6 Robotics: The Technology of Embodiment

Let us suppose that, by some means or another, the technological obstacles to mapping and simulation have been overcome. An exquisitely detailed operational replica of the forebrain of a pioneer mouse has been constructed. The final stage of the emulation process is to interface the simulated brain to a synthetic (robot) body. Only at this stage can we properly test and tune the simulation to obtain the desired behavioral equivalence with the original.

The robot body could in principle take a number of forms, more or less like the body of a mouse. But the interfacing problems are fewer if the body is as mouse-like as possible. So this is what we’ll assume for now—not a hard-shelled body on wheels but a soft, four-legged body with a musculoskeletal system. Similarly let’s imagine the robot body with a biomimetic suite of sensors—eyes, ears, and (very importantly) whiskers, all of which deliver a set of signals typical for these modalities in a real mouse.

Now we have, in one hand as it were, the simulated forebrain of the mouse, and in the other hand the synthetic mouse body. But how do we join them? We cannot simply plug one into the other. The problem is that, in the real animal, there is no neat separation between the forebrain and the rest of the body. In fact the forebrain is just a particularly dense concentration of neurons and connections at one end of a nervous system that permeates an animal’s body from tip to toe, much as a system of rivers and tributaries permeates a rainforest. But we have elected to “sever” the forebrain from the rest of this system. In doing so, we threw away a large chunk of the central nervous system, including the cerebellum, which is implicated in motor coordination, and the whole of the peripheral nervous system.

There are good reasons to believe that the forebrain harbors much of the “essence” of a particular mouse, much as there is reason to believe that the forebrain of a human being harbors much of what it means to be that particular human being—habits, preferences, expertise, memories, personality. So the decision to focus on the forebrain was justified. However, by choosing to map and simulate just the forebrain, it is as if we had torn a tapestry in half, and now we have to join it back together again, re-attaching one silken thread at a time so as to seamlessly restore the original pattern.
Or worse, it’s as if we had discarded one-half of the tapestry altogether and are now forced to synthesize the missing portion from scratch, guessing at the lost half of the pattern. The mouse’s body is the missing half of the tapestry, and the forebrain simulation, its myriad inputs and outputs disconnected like threads of torn silk dangling in the air, is the half of the tapestry we now have. Unfortunately, the inputs to and outputs from the forebrain simulation don’t come with labels attached, stating which wires to connect them to in the robot body. The engineer somehow has to work out what patterns of muscular movement were originally caused by each of the brain’s outgoing motor signals, and what incoming signals would originally have arisen from any given pattern of sensory stimulation. The precise location in cortex of a sensory neuron is a clue, especially in the case of vision and touch, whose connections are “topographically” organized. But this information falls a long way short of a wiring diagram that would make the job of the roboticist easy. The root of the difficulty is that, in the animal prototype, the various parts of the whole system—the forebrain, the rest of the nervous system, and the rest of the body—grew and developed together, each adapting to each other’s idiosyncrasies in an organic fashion. So one way to avoid the problem might be to widen the remit of the mapping stage. Instead of mapping just the forebrain, why not construct a map of the entire nervous system, central and peripheral, in tandem with a high-resolution representation of the 3D structure of the body? Then, as well as constructing a computer-simulated replica of the (entire) brain, we could synthesize an exact copy of the body of our particular mouse, including all the specifics of its very own peripheral nervous system and musculoskeletal structure. We are already in the business of extrapolating relevant technologies, so why should we not expect our capabilities in this sphere to extend to the body as a whole? Alternatively, rather than acquiring a scan of the whole peripheral nervous system and musculoskeletal structure, machine learning techniques could be applied, while the subject of the emulation is alive, to figure out the relationship between the brain’s sensorimotor activity and the movements that result. Knowing this relationship, an interface could be constructed that translates motor signals generated by the brain into commands that the robot’s synthetic body can understand (and supplies the brain with the proprioceptive signals and haptic feedback it expects). A plus point for this approach is that it reduces the extent to which the synthetic body has to resemble the original. If the emulation is to work out of the box, with a bare minimum of tuning and calibration, then the basic body pattern would have to be preserved—complete with four legs, paws, and twitchable nose in the case of the mouse. But thanks to clever interfacing, there would be no need to reproduce the mouse’s exact set of muscles and their characteristics. The need for a close replica of the original body is also reduced if we exploit another powerful learning device which is at our disposal, namely the simulated brain itself. The biological brain is a master of adaptation. Humans can learn to drive cars, to fly planes, to operate cranes and diggers, and so on. For the skilled driver, pilot, or operator, a machine can become an extension of the body. 
Moreover people who suffer horrific, disabling injuries have an extraordinary capacity to adjust to their predicament, learning to use wheelchairs, artificial limbs, and other prosthetics. A simulated brain would be no less plastic, no less adaptive. As long as the emulation is not expected to work right out of the box, there is no need for a body that works with sensorimotor signals perfectly matching those of the original. A period of training, or “rehabilitation,” can compensate for the mismatch.

Using a combination of these two methods—engineering an interface tailored to behavioral data and introducing a period of rehabilitation—the range of body patterns available to the emulation could be greatly expanded. Why confine the emulated mouse to a mouse’s body? The re-animated creature could have six legs, or wheels. If the engineers have a mathematical model of the neural pattern corresponding to, say, the urge to “move toward the object in the center of the visual field,” then they can ensure that the mouse’s synthetic body moves toward the object in the mouse’s visual field whenever the mouse’s synthetic brain wants it to.

Not only could the simulated brain adapt to an unfamiliar body, the new body could also be designed to adapt to the simulated brain thanks to advances in the fields of prosthetics and brain-machine interfaces. Contemporary human prosthetics are not passive devices. Rather, they are capable of shaping complex movements independently (as indeed are the tentacles of an octopus). But to do this effectively, they have to learn to recognize the intentions of their hosts. The field of brain-machine interfaces is making rapid progress in the application of machine learning to this problem, and the techniques it develops can also help with whole brain emulation. If the simulated brain and the synthetic body were allowed to co-adapt, then rehabilitation with a novel body pattern would be greatly facilitated.

2.7 Virtual Embodiment

The biological brain is part of a sensorimotor loop, which enables it to direct the movements, in continuous time, of a body situated in a world with three spatial dimensions. A functional simulation of an animal’s brain must also be part of a sensorimotor loop, and its inputs and outputs must be functionally equivalent to those of a real brain, which necessitates its embodiment. Interfacing the simulated brain to a physical, robotic body is one way to achieve this. Another way is to build a detailed simulation of the animal’s body and of the physical environment it normally lives in. The simulated mouse brain can be interfaced to a simulated mouse body (complete with simulated paws, whiskers, and fur), and let loose in a virtual world containing simulated grass, simulated hedges, and simulated cheese, all rendered with high enough resolution to be effectively indistinguishable from the real thing as far as the mouse’s sensorimotor apparatus is concerned.

The techniques for doing this are well established, and once again we have to thank the economic clout of the video gamer for this. Thanks to the demand for an ever more photorealistic gaming experience, developers have produced ever more sophisticated physics engines that can simulate the behavior of physical objects in virtual worlds. The physics engine maintains the positions and orientations of the numerous objects that exist in the world of the game as they move around and bump into each other, taking account of the effects of gravity, friction, and so on.
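At its core, such an engine is nothing more exotic than a time-stepping loop of the kind we have already met in neural simulation. Here is a deliberately bare-bones sketch: point objects under gravity, with a crude bounce off a ground plane, and all constants illustrative. A real engine adds rigid-body rotation, friction, and proper contact resolution, but the skeleton is the same.

```python
import numpy as np

dt, g, restitution = 1 / 60, 9.81, 0.5    # 60 updates per simulated second
rng = np.random.default_rng(1)
pos = rng.uniform(0.5, 2.0, (100, 3))     # 100 point objects scattered in 3D
vel = np.zeros((100, 3))                  # all initially at rest

for _ in range(600):                      # ten simulated seconds
    vel[:, 2] -= g * dt                   # gravity pulls along -z
    pos += vel * dt                       # move everything forward one step
    hit = pos[:, 2] < 0.0                 # which objects have hit the ground?
    pos[hit, 2] = 0.0                     # clamp them to the ground plane
    vel[hit, 2] *= -restitution           # bounce, losing some energy
```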
The reason for maintaining this information in a computer game is to render objects from the viewpoint of the gamer’s character (or perhaps from a viewpoint just behind the character). In the context of virtual embodiment, the role of the physics engine would be to provide realistic input and output for the simulated brain. But the engineering challenge is the same, whether the application is gaming or virtual embodiment. Solid objects are relatively straightforward to simulate. Soft or flexible objects, such as muscles or blades of grass, present more difficulties. Particulate matter, such as smoke or dust, is even trickier. But graphics experts have long since cracked all these cases.

Other agents, which would be a requirement for any simulated brain belonging to a social animal, present special difficulties. They might be crudely simulated, like the so-called AIs in contemporary games, with a simple repertoire of stereotyped behaviors. But they might be the avatars of humans living in the real world. Or they might be other, fully realized AIs with general intelligence. The last of these options raises the possibility of a whole virtual society of artificial intelligences living in a simulated environment.

Liberated from the constraints of real biology and relieved of the need to compete for resources such as food and water, certain things become feasible for a virtual society that are not feasible for a society of agents who are confined to wetware. For example, given sufficient computing resources, a virtual society could operate at hyper-real speeds. Every millisecond that passed in the virtual world could be simulated in, say, one-tenth of a millisecond in the real world. If a society of AIs inhabiting such a virtual world were to work on improving themselves or on creating even more intelligent successors, then from the standpoint of the real world their progress would be duly accelerated. And if they were able to direct their technological expertise back out to the real world and help improve the computational substrate on which they depended, then the rate of this acceleration would in turn be accelerated. This is one route to a singularity-like scenario. The result would be explosive technological change, and the consequences would be unpredictable.

2.8 Emulation and Enhancement

Let’s get back to the more immediate future. Whole brain emulation is just one way to achieve artificial general intelligence, a point in the space of engineering possibilities at the far end of biological fidelity. However, it is a significant point in that space because it suggests that at least one brand of (mouse-level) artificial general intelligence will become feasible in the near term under fairly conservative philosophical, scientific, and technological assumptions. Chief among these assumptions are that (1) intelligent behavior in humans and other animals is mediated by brain activity, which is governed by the laws of physics; (2) the level of physical detail necessary to achieve effective indistinguishability of behavior in an emulation is not too fine grained; and (3) existing mapping and computing technologies will scale sufficiently (by perhaps two or three orders of magnitude for a mouse) within a sufficiently short period. (A period short enough to get the attention of most people is “within their lifetime” or perhaps “within their children’s lifetimes.”) The first assumption represents a philosophical position that most would accept. The second assumption raises a number of scientific questions.
It entails, for example, that we can get away with not simulating individual glial cells, that the continuous (as opposed to discrete) nature of the biological brain is not an obstacle to simulation, and that we can disregard quantum effects altogether. The third assumption, as long as we stick with the mouse, is realistic as far as computing power is concerned and reasonable as far as brain mapping technology is concerned. So it’s hard to avoid the conclusion that mouse-level artificial general intelligence is not only possible but also a near-term prospect.

Once a mouse-scale whole brain emulation has been achieved, there are compelling reasons to think that human-level AI would not be far off. There are a number of ways the transition could be made. The most obvious is simply to scale up the emulation process and apply it to the human brain. It would be hard engineering, for sure, but no conceptual breakthroughs would be required. But is it realistic to expect the relevant enabling technologies, such as computer processing power and storage capacity, to carry on improving at a fast enough rate? Moore’s law has to end somewhere. Perhaps it will grind to a halt somewhere in the three orders of magnitude between mouse-scale whole brain emulation and human-scale whole brain emulation. Yet we know it’s possible to assemble billions of ultra-low power, nano-scale components into a device capable of human-level intelligence. Our own brains are the existence proof. Nature has done it, and we should be able to manipulate matter with the same facility as nature. So for sheer neuron count, we should eventually be able to match nature in the arena of brain-building, using some combination of synthetic biology and nanotechnology if we can do it no other way. Nevertheless, to meet the computational requirements of human-scale whole brain emulation might require a series of significant technological breakthroughs. In that case, scaling up the emulation process would not be an easy route to human-level AI.

However, human-scale whole brain emulation isn’t the only option for upgrading to human-level AI. Perhaps the mouse-scale emulation itself could be cognitively enhanced. The most obvious (and perhaps naïve) way to attempt this would be simply to increase the neuron count in various cognitively important brain areas, such as the prefrontal cortex and the hippocampus. More plausibly, progress in understanding how cognition is realized in the vertebrate brain would be accelerated thanks to the availability of the mouse emulation as a research tool. This burgeoning theoretical knowledge could then be called on to help engineer suitable neural enhancements (or cognitive prostheses), while preserving the core mouse brain simulation.¹¹

On this view, the mouse brain emulation is again the catalyst for achieving human-level AI. Like a particle accelerator in physics, a mouse-scale whole brain emulation would permit experiments to be carried out that could otherwise only be imagined. For example, it would be possible to observe the synthetic mouse’s brain activity and behavior under carefully controlled conditions, and then to reset the whole system and re-run the same experiment with a small variation, such as a minute alteration to the brain. This is just the sort of experimental program that would allow the mouse’s brain to be reverse engineered, and in due course we would no doubt learn enough to be able to design and build cognitive prostheses for it from first principles.
But would this be enough to get us to human-level AI, or would that require something more? For example, it would be vital to endow the enhanced mouse emulation with language. This would surely necessitate more than simply an increase in neuron count. It might require circuitry of a kind that is found nowhere in the brains of small vertebrates. Perhaps, in the human brain, evolution discovered some radical innovation, a qualitatively different type of neural mechanism, one capable of handling symbolic representation, combinatorial syntax, and compositional semantics, the building blocks of language. If this is true, a complete theory of the mouse brain would still be insufficient, and the path from mouse-scale emulation to human-level AI would be less straightforward.

But we shouldn’t forget that, in parallel with the work of the neural engineers, neuroscientists will be unravelling the secrets of the human brain, without resorting to full-blown emulation but using increasingly powerful tools for mapping its structure and activity. Understanding the neural basis of language is, of course, a major target for neuroscience. So perhaps, by the time the engineers can put together a mouse-scale whole brain emulation, the neuroscientists will be in a position to help them devise a neural prosthesis that will confer language on a suitably enhanced mouse-scale emulation. In short, a mouse-scale whole brain emulation would have the potential to kick-start progress toward human-level AI in a number of ways.

Arguably, given human-level AI, the transition to superhuman-level AI would be almost inevitable. Human-level intelligence realized in a synthetic substrate would be more amenable to enhancement than the biological brain with its various limitations (low speed, reliance on metabolism, the need for sleep, etc.). Moreover the human-level AI itself (or indeed a team of human-level AIs) could be set to work on the problem, initiating a feedback loop of ever faster improvement and perhaps precipitating an intelligence explosion with unpredictable consequences. In other words, once we have achieved mouse-level artificial intelligence through whole brain emulation, the genie could be out of the bottle.

3 Engineering AI

3.1 Intimations of Intelligence

We have devoted a lot of discussion so far to the brain-inspired route to human-level AI, and to whole brain emulation in particular. But the space of possible artificial intelligences is likely to be very diverse, and biological forms might occupy just a small corner of it. What does the rest of this space of possibilities look like? This is a very important question because the way an AI is built will shape its behavior, and will determine our ability to predict or control it.

It would be a serious mistake, perhaps a dangerous one, to imagine that the space of possible AIs is full of beings like ourselves, with goals and motives that resemble human goals and motives. Moreover, depending on how it was constructed, the way an AI or a collective of AIs set about achieving its aims (insofar as this notion even made sense) might be utterly inscrutable, like the workings of the alien intelligence Kasparov discerned across the chessboard. If the AI were the product of another AI, or if it were the outcome of self-modification or artificial evolution, then its potential inscrutability would be all the greater. So which design and construction methods are more or less likely to lead to AI that is unpredictable and hard to control?
The better we understand the space of possibilities, the better equipped we will be to address this question and mitigate the risk of building, and losing control of, the “wrong kind” of AI. Let’s begin by looking at some examples of contemporary (mid-2010s) AI technology. Can we discern in these systems the beginnings of artificial general intelligence? Will general intelligence be achieved just by improving and extending such systems? Or is there something fundamental missing, a vital ingredient that must be added before AI technology truly takes off? We’ll start with an example of a disembodied AI application, namely the personal assistant. In the previous chapter the importance of embodiment was emphasized. But many of the fictional examples of artificial intelligence familiar to our culture are disembodied. Think of HAL, the errant computer in 2001: A Space Odyssey. In a sense, the spacecraft in 2001 could be thought of as HAL’s body. It has a well-defined spatial location, and it has sensors and actuators through which it interacts with its environment in continuous time. But at one point in the movie we are treated to scenes of HAL’s early “life” in an Earth-bound laboratory, and the viewer is somehow persuaded that his intelligence is independent of the spacecraft. Our willingness to suspend disbelief here suggests that disembodied AI is conceptually possible. But is it possible in practice, and how far are we from achieving it? Personal assistants such as Apple’s Siri and Google Now showcase decades of gradual improvement in speech recognition. Without prior training on individual users, they are able to turn ordinary speech into text even in the presence of background noise and despite large variations in timbre and accent. Interestingly, the task of speech recognition often isn’t carried out on the user’s device. Raw sound files are streamed over the Internet to corporate processing centers, where the speech recognition is done and the corresponding text file is produced. So not only are these applications disembodied in the sense that they don’t engage with an environment through sensorimotor interaction, even their processing and memory is dispersed in the cloud. Does this make them somehow “even more” disembodied? No. We can imagine a fully embodied, robotic system all of whose processing takes place off-board and in the cloud. But the point is worth noting. In parallel with turning raw sound data into text, the personal assistant has to try to “understand” what the user is asking it to find out or instructing it to do. This is a considerable challenge in itself, even given a perfect transcription of a sound file into text. However, the task is made easier with the aid of a statistical model of the sorts of things people say, constructed from a huge database of examples. Given the beginning of a question or command, this enables the system to predict how it is most likely to continue. Moreover this prediction can be fed back to the speech recognition stage to improve its performance too, allowing it to fill in gaps where there is noise or ambiguity. With the user’s utterance duly parsed, the system can decide how to respond to it. Is it a request for information, or a command? Suppose it’s a request for information. Is the information specific to the user, such as the time of a meeting or a friend’s phone number, or is it an item of general knowledge? If it’s a general knowledge question, then the system can call upon all the resources of the Internet to find an answer. 
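The dispatch logic just described is easy to caricature in code. In the sketch below every function name is invented for illustration, corresponding to no real vendor's API, and the one-line stub bodies stand in for what are, in reality, large statistical subsystems.

```python
# Hypothetical stand-ins for the heavyweight components described above.
def transcribe(audio):                # cloud speech recognition
    return "what is the capital of france"

def is_command(text):                 # command, or request for information?
    return text.startswith(("set ", "call ", "remind "))

def is_personal(text):                # user-specific information?
    return any(w in text.split() for w in ("my", "me"))

def execute(text, profile):           # carry out a command
    return "done"

def lookup_personal(text, profile):   # e.g., the time of a meeting
    return profile.get(text, "not found")

def search_web(text):                 # general knowledge: ask the Internet
    return "Paris"

def handle_utterance(audio, profile):
    text = transcribe(audio)          # raw sound file to text
    if is_command(text):
        return execute(text, profile)
    if is_personal(text):
        return lookup_personal(text, profile)
    return search_web(text)

print(handle_utterance(b"...", {}))   # -> "Paris"
```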
The system can then transform the text of the answer into a sound file using speech synthesis technology that has been with us for many years (speech synthesis being much easier than speech recognition), and issue a verbal response through the device’s speaker.

This is all very impressive. Admittedly, the conversational skills of a mid-2010s digital personal assistant are rather limited. You wouldn’t invite one to a dinner party. But a generation ago, this sort of technology might have convinced a naïve user that we were well on the way to realizing the science fiction dream of artificial intelligence. And today, even knowing how they work, there is something uncanny about the capabilities of a personal assistant, an intimation of genuine intelligence. So what is missing? What would it take to convert this uncanny feeling, this intimation of intelligence, to well-founded conviction?

3.2 Getting to Know the World

A major shortcoming of these digital personal assistants is that they have no real knowledge of the world, despite their question-answering prowess. For example, they lack a commonsense understanding of solid objects and spatial relations. So they can be thrown by simple yet unexpected questions for which direct answers cannot be found on the Internet, vast as that repository is. For example, consider the question “If you dangle a rat by the tail, which is closer to the ground, its nose or its ears?” Even a child can solve this little riddle with ease. She may never have dangled a rat by its tail, nor seen a picture of a rat being dangled by its tail. Yet humans have a generic capability for visualizing situations and predicting the consequences of actions, a capability that can handle circumstances they have never previously encountered.

Everyday physics is one domain that humans (and some other animals) have properly mastered, a domain in which our grasp of the underlying principles enables us to solve problems quite unlike any we have seen before. Everyday psychology is another. Other people don’t behave like inanimate objects. They have beliefs, desires, and intentions. Humans understand all this, and use this understanding to make plans, to communicate, and sometimes to deceive.

For both these domains—everyday physics and everyday psychology—the depth of human understanding rests on the possession of a set of fundamental abstract concepts, such as that of a solid object or of another mind. Although the neural mechanisms underlying these abilities in humans are not yet fully understood, it’s safe to assume they are partly innate, an evolutionary endowment. Obviously solid objects and other people have always loomed large in the lives of hominids. General-purpose mechanisms for dealing with them will have been selected for. So even though the concept of a solid object is not manifest at birth, it would be surprising if the newborn brain were not predisposed to acquire it. Remarkably, though, humans are also able to acquire completely new concepts, concepts that are at least as abstract as that of a solid object or another mind and for which there is no precedent in our evolutionary past, such as the concept of an integer, or of money.

How can a machine be endowed with the same generic capacities—a mastery of important commonsense domains like everyday physics and everyday psychology, as well as the ability to acquire entirely new abstract concepts? One answer, of course, is to replicate the biological brain. But we have already discussed this at some length.
There are several other possibilities. As far as everyday physics is concerned, one possibility is to employ a physics engine of the sort used in computer games and already discussed briefly in the context of virtual embodiment. A physics engine can model any given configuration of objects (e.g., the parts of a rat) and simulate their dynamics.

An alternative approach is to build a system that draws logical inferences about everyday things based on a set of laws of commonsense physics expressed in a formal language. For example, the system might contain a sentence representing the rule that unsupported objects normally fall down, and another that fragile objects normally break if they hit the ground. These could be used to draw the conclusion that a toppled wine glass will shatter if it rolls off a table. The same logic-based method can be applied to other domains, such as everyday psychology, and has the additional advantage over a physics engine of being more tolerant of incomplete information—about the exact shape of the table and the wine glass, for example.

However, both the physics engine and the logic-based approach are parasitic on conceptual frameworks provided by human designers. Information about a robot’s environment—the surfaces of the objects around it—can be acquired by moving around, accumulating sensor data (from cameras, tactile sensors, etc.), and transforming the data into a form suitable for subsequent processing. But the very idea of a solid object, a concept that is vital to the predictive capabilities of either type of system, is not discovered through interaction with the world. It is given beforehand, built into the system from the start. This might be acceptable for some universally important domains of expertise (e.g., everyday physics), but a truly general intelligence also needs to be able to discover (or invent) abstract concepts for itself if it is to cope with a world that cannot be known in advance.

3.3 Machine Learning

This brings us to the topic of machine learning. Machine learning has been an active subfield of artificial intelligence since its inception. But the subject made considerable progress in the 2000s, thanks in part to increases in computing power and storage and in part to theoretical advances and new learning algorithms. This has led to novel commercial applications, such as online marketing, where it’s useful to profile customers so as to more effectively target them with product recommendations and advertising. A machine learning system can do this by building a statistical model of customer behavior based on a large database of buying and browsing habits. With the aid of such a model the system can then predict the likely preferences of a customer based on just a few purchases and website visits.

Generally speaking, machine learning concerns the construction of a model that accounts for a given collection of data, and can be used to predict further data. For example, suppose I present you with the sequence 5, 10, 15, 20, and ask you to guess what comes next. The chances are you will form the hypothesis that the sequence increases in intervals of 5 and will predict that the next number is 25, followed by 30, 35, and so on. If the data have their origin in the real world, then the data are likely to be noisy. So machine learning algorithms have to be able to handle uncertainty. Suppose a mobile robot is stationary but is being approached by a large object.
It obtains a series of sensor readings indicating the object’s distance: 24.9 cm, 20.1 cm, 15.1 cm, 9.9 cm. It might form the hypothesis that the distance to the object decreases by approximately 5 cm per sensor reading, and predict that the next sensor reading will be 5.0 cm plus or minus 10 percent. Time to take evasive action! In these trivial examples, spotting the underlying pattern is easy. But suppose that, instead of a single number, each item of data comprises a thousand numbers. Finding patterns, building models, and making predictions given high-dimensional data like these will be much harder. Indeed it’s much worse than a thousand times harder. This is known as the curse of dimensionality. Thankfully though, the curse of dimensionality can be lifted to the extent that the data exhibit known statistical regularities. For example, suppose the data in question are a sequence of frames from a video. In this case there is a statistical tendency (1) for one pixel in any given frame to have a value close to that of its neighbors and (2) for the same pixel to have similar values in successive frames. Statistical regularities like these are often a manifestation of the fundamental structure of the world from which the data are drawn. For a mobile robot with a camera, the world exhibits a kind of “smoothness.” It is full of solid objects whose surfaces present lots of continuous patches of uniform color with relatively few discontinuous edges. Although some assumptions about the structure of the world might be built into a learning system at design time—such as its 3D spatial character and the prevalence within it of solid objects—much of the way the world is structured, the kinds of things it contains and the behavior they exhibit, will have to be discovered. So the task of learning to predict incoming data by building a model of the world encompasses the challenge of finding ways to compress the data to reduce its dimensionality, for example, by re-describing it in terms of concepts and categories such as “animal,” “tree,” and “person” (which are also a useful foundation for linguistic communication). However, high-dimensional sensory data cannot be directly reduced to such high-level categories. A hierarchical approach is called for, wherein low-level features are extracted first. Having compiled a table of recurring low-level visual features, an algorithm can learn how those features combine to make higher level features. This multi-layered approach is the hallmark of so-called deep learning. For example, suppose the learning algorithm was set to work on a large database of images, many of which contain faces. The presence of a face might be signaled by particular patterns of distinctively shaped patches of light and dark. These might correspond loosely to features we would call eyes, noses, and mouths. But they might not. The machine is not constrained by the categories of human language. The low-level, statistically significant visual features it picks out may not be amenable to straightforward linguistic description. (In fact the same is true of visual perception in the biological brain, although in humans it is subject to the top-down influence of language.) Having learned the low-level statistics of the data—the small-scale visual motifs that recur over and again—the learning algorithm can learn that certain combinations of those motifs frequently arise. One such combination would correspond to what we would call a face. Another combination (whiskers, fur, pointy ears, etc.) 
might correspond to what we would call a cat. Or, since cats are frequently seen being cuddled by small humans, the algorithm might pick out child–cat pairings. Again, the machine is not constrained by human concepts and categories, just by the statistics of the data.

So far, so good. We have seen how a machine learning algorithm might work with static data. But what we’re ultimately interested in is the dynamics of the world. We’ve been assuming a system that discovers categories of objects in a database of still images. But what about a video archive? After all, an embodied learning system needs to get to grips with moving images—indeed with an ever-flowing river of incoming sensory data—if it is to acquire predictive power. Moreover, for an AI with drives to satisfy and goals to achieve, it’s only worth picking out the cats against the backdrop of the rest of the world to the extent that cats exhibit distinctive behavior, particularly if that behavior is relevant to the AI’s drives and goals (as it would be for a mouse). For example, if our learning algorithm had acquired the category “string” as well as the category “cat,” then it would be halfway to learning that cats often chase pieces of string.

Once again, we shouldn’t be misled into thinking that the way a machine learning algorithm would represent this rule would be anything like a sentence in human language. Rather, it would be a collection of parameter values within a data structure that captures the statistics of motion of certain frequently occurring visual features, which themselves would be represented in a similarly mathematical way. But the upshot would be the same for a suitably constructed machine as it would for a human who had learned this fact. It would, for example, put an AI in a good position to make a plan to lure a cat into a basket for transport to the vet.

3.4 AI through Big Data

Let’s take stock. We’ve been discussing algorithms that can learn the statistics of the world, that can discover hierarchical categories of objects and behavior in an unlabeled stream of multimodal data, and that can use these categories to compress the data into a mathematical description that can be employed to make predictions. It’s easy to see that machine learning algorithms of this sort constitute a useful technology. But how far do they take us toward artificial general intelligence?

Imagine an AI built along the following lines. Suppose a learning algorithm of the sort just described were allowed to crawl the Internet, like a search engine, sucking the statistics out of the billions of images, the tens of millions of videos it finds. Humanity has compiled a staggeringly large repository of multimedia data from the everyday world, and made it all accessible to anyone (or anything) with a network connection. Somewhere on the web are to be found movie clips of giraffes making love, of aircraft looping the loop, of men in India planting potatoes, of girls in China fixing bicycles, of battles, of board meetings, of building sites, and of cats charmingly doing very little. You name it, someone has probably taken a video of it and put it up on the web. This public repository of data, already large, is growing rapidly thanks to the crowdsourcing power of social networking. Moreover much of it is more than just raw sensory data. Images and movie clips are typically accompanied by location, time, and date information, and increasingly by tags labeling objects and events.
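Those tags alone already support a crude form of statistical discovery. Here is a minimal sketch in Python, with a handful of invented tag sets standing in for a web-scale repository, of how a learner might notice that certain labels co-occur far more often than chance would predict.

```python
from collections import Counter
from itertools import combinations

# Hypothetical tag sets attached to uploaded video clips.
clips = [
    {"cat", "string", "carpet"},
    {"cat", "string", "sofa"},
    {"child", "cat", "garden"},
    {"child", "bicycle", "street"},
    {"cat", "child", "sofa"},
]

single = Counter()
pair = Counter()
for tags in clips:
    single.update(tags)
    pair.update(frozenset(p) for p in combinations(sorted(tags), 2))

n = len(clips)
def lift(a, b):
    """How much more often a and b co-occur than independence predicts."""
    p_ab = pair[frozenset((a, b))] / n
    return p_ab / ((single[a] / n) * (single[b] / n))

print(lift("cat", "string"))   # > 1: these tags travel together
print(lift("cat", "bicycle"))  # 0.0: never seen together in this toy set
```

A lift greater than 1 flags pairings like cat-and-string as candidates for closer attention; a system with access to the clips themselves, and not just their labels, could then go on to learn what the pairing actually amounts to.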
As more and more everyday objects become connected to the Internet (litter bins, fridges, key-rings, etc.), it will be possible to gather ever greater quantities of information about the everyday world and the behavior of the humans and other animals who inhabit it. How good at prediction could a system become by applying powerful machine learning algorithms to this enormous repository of data? Why would the system need to be embodied? Why would it need to interact with the world directly, given an enormous database of multimedia recordings of the embodied activity of others? Recall that the difficulty of endowing a computer with a commonsense understanding of the everyday world has long been considered a major obstacle to achieving artificial general intelligence. Perhaps a disembodied AI could acquire common sense vicariously. How close could such a system get to human-level artificial intelligence? Well, what about language? Language is a highly significant aspect of human behavior, and nothing could count as having attained human-level AI if it couldn’t match human linguistic capability. The digital personal assistants of the mid-2010s are already uncannily good at anticipating what their users are about to say. But it’s easy to argue that these systems don’t really understand the words they recognize, the sentences they parse, or the answers they proffer. The symbols they use are not grounded in interaction with the world, a shortcoming that shows up when they are asked novel questions that require a combination of imagination and common sense: “If you dangle a rat by its tail, which is closer to the ground, its nose or its ears?” Surely machine learning, however heavy-duty, cannot help us to overcome this sort of limitation? Yet language is just another form of behavior. Why should it be less susceptible to brute-force, statistical machine learning than, say, patterns of movement in a crowd or patterns of vegetation in a garden? Throw enough data and enough computation at the problem, and machine learning will be able to model the relevant statistics well enough to make reliable predictions. Where is this person likely to go having left the kiosk? What leaf shapes are likely to occur to the left of that tree? And what will this person say in response to what that person said? We mustn’t forget that, compared to today’s digital personal assistants, the sort of learning system we are envisaging will draw on a vastly larger dataset, one that effectively does ground words in experience, in embodied interaction with the world, albeit of a somewhat secondhand, parasitic kind. What about the dangling rat example? Our AI needs to be able to deal with the hypothetical, the counterfactual, the imaginary. This is a basic piece of functionality. But given an adequate model of the world, a model with sufficient predictive power, all that is needed to fulfill this requirement is a means to initialize the model with hypothetical scenarios, a way of populating it with imaginary objects. The predictive power of the model will do the rest, generalizing from millions of videos of dangling things, tens of millions of images and videos of rats in a myriad poses doing a myriad things, and billions of examples of ears and noses from every conceivable angle. How about, say, mathematics? Surely no mere statistical learning system could acquire the ability to do mathematics? (Students of philosophy will hear echoes of the debate between empiricism and rationalism here.) 
Well, we aren’t ruling out supplying the system with various innate categories and concepts, such as that of a solid object or of 3D space. These could include the concept of number. But it isn’t even clear that this is necessary. Perhaps it is enough for the learning algorithm to process a very large number of recordings of classroom lessons in primary school mathematics for it to discover the concept of number for itself. It’s hard for us to imagine the implications of the sheer quantity of raw data the learning algorithm would have processed, and the ways in which the system might surprise us as a result.

In 2009 three computer scientists from Google wrote a paper entitled “The Unreasonable Effectiveness of Data.”¹ The title alludes to an unexpected phenomenon in machine learning. It turns out that machine learning using a messy dataset with a trillion items can be highly effective in tasks (e.g., machine translation) for which machine learning using a clean dataset with a mere million items is downright useless. This is unexpected because a million seems like a big number. If a learning algorithm doesn’t work with a training set comprising a million examples, then the intuitive conclusion is that it doesn’t work at all. Yet it turns out that often what is needed is a much bigger training set, something that only came to light when computers became powerful enough to store and process that much data.

The lesson here is that when faced with artificial intelligence engineered from scratch, artificial intelligence that operates according to very different principles from the biological brain, we should expect to be surprised. In particular, if an AI system relies on quantities of data so large or processing speeds so fast that they are hard to grasp intuitively, then it might solve problems we don’t expect it to solve in ways we don’t fully understand. In short, human-level AI does not have to be human-like. If even human-level AI can be inscrutable, how could we hope to predict and control a superintelligent AI, a system capable not merely of matching humans in every domain of intellectual affairs but of outwitting us at every turn?

3.5 Optimization and Uncertainty

Of course, predictive capability does not alone constitute artificial general intelligence. Rather, the ability to build models of the world, and to use those models to make predictions, is a means to some other end. The intelligence of an animal is manifest in the things it does. It exhibits a sense of purpose. It has drives, such as hunger and fear, and it can form goals that subserve those drives, such as to acquire an item of food or to return home. It achieves its goals by acting on the world, and if it’s clever, it will make predictions to help it achieve those goals. When our cat sees a mouse disappear behind a tree stump, it anticipates its reappearance, and waits patiently.

We would expect the predictive capabilities of an embodied artificial general intelligence likewise to subserve goals and drives. It should display its own sense of purpose. Whether it was delivering parcels, cooking meals, or performing surgery, only if it had goals and were capable of achieving them would we think of a robot as possessing general intelligence. What about a disembodied AI? Well, even if its purpose were simply to answer questions and offer advice, to qualify as artificial general intelligence, a system would have to do more than just make predictions.
Though incapable itself of acting directly on the world, it should be adept at working out how to act to achieve a variety of given aims. It might be asked to construct a profitable investment portfolio, or to plan a large civil engineering project, or to design a better drug, a larger aircraft, or a faster computer. If its intelligence were truly general, it would be possible to train it to do any (or all) of these things, as well as a host of others, just as it is with an intelligent human. So what do we need over and above predictive capability to enable a machine, whether embodied or not, to carry out such challenging tasks? The AI needs to be able to plan a course of actions, and to be good at planning is to be good at a certain type of optimization. Indeed the topic of optimization is central to contemporary approaches to engineering artificial general intelligence from scratch. Not only can planning be cast as a form of optimization, so can certain kinds of machine learning and various aspects of computer vision, as well as many other problems relevant to artificial intelligence. So it’s worth examining the concept in a little detail, which we’ll do using a specific example, namely the traveling salesperson problem. Suppose a traveler (or salesperson) is faced with the challenge of visiting a number of cities in turn, then returning home. She must visit each city exactly once and end up back where she started. But the order she chooses will influence her overall travel time, and she doesn’t want to spend any longer on the road than she has to. Suppose she lives in San Francisco and has to visit New York, Boston, and San Jose. Because San Francisco and San Jose are near to each other but far from New York and Boston, it wouldn’t make sense to go from San Francisco to New York then to San Jose and then to Boston before returning to San Francisco. That would be a suboptimal solution. The journey time will be shorter if she visits Boston right after New York. The challenge is to find the optimal solution, the best possible order in which to visit the cities, that is to say, the order that results in the shortest overall travel time. The traveling salesperson problem is just one example of an optimization problem. In general, the task is to find some mathematically well-defined structure that minimizes some cost function (or, equivalently, that maximizes a so-called utility function or reward function). In this case the mathematical structure is an ordering of cities, and the cost function is the overall journey time. The problem doesn’t look so hard with just a few cities to visit. But, like many optimization problems, the traveling salesperson problem doesn’t scale well. In a specific mathematical sense (we won’t go into the details) the difficulty of the problem goes up exponentially with the number of cities. In effect this means that, with a large number of cities, even the fastest algorithm on the fastest conventional computer might struggle to find the optimal solution in a reasonable time. However, there are algorithms that will find a good solution for very large numbers of cities, although they may not find the best possible solution. This is fortunate because the traveling salesperson problem is more than just an intellectual curiosity. It has many practical applications, and usually finding a good solution is good enough. Before we get back to artificial general intelligence, let’s consider another optimization problem in which finding a good solution is good enough. 
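Before turning to that second problem, the first is small enough to solve by brute force, and a short sketch makes the shape of the task vivid. The travel times below are invented round numbers; all that matters is that the two Californian cities are close together and far from the two East Coast ones.

```python
from itertools import permutations

# Symmetric travel times in hours between city pairs (made-up figures).
hours = {
    frozenset(("SF", "NY")): 6.0, frozenset(("SF", "Boston")): 6.5,
    frozenset(("SF", "San Jose")): 1.0, frozenset(("NY", "Boston")): 1.5,
    frozenset(("NY", "San Jose")): 6.0, frozenset(("Boston", "San Jose")): 6.5,
}

def tour_time(order):
    """Total time for a round trip from SF through `order` and back."""
    stops = ("SF",) + order + ("SF",)
    return sum(hours[frozenset(leg)] for leg in zip(stops, stops[1:]))

# Exhaustive search: fine for three cities, hopeless for thirty.
best = min(permutations(("NY", "Boston", "San Jose")), key=tour_time)
print(best, tour_time(best))
# ('NY', 'Boston', 'San Jose') 15.0 -- the reverse tour ties with it,
# while the detour via San Jose between the East Coast cities costs 25.0
```

Exhaustive enumeration visits all six orderings here, but the count grows factorially: ten cities already mean 3,628,800 orderings, and thirty put brute force out of reach, which is why practical algorithms settle for good rather than optimal tours.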
Suppose that, instead of a traveling salesperson, we are dealing with our cat Tooty. Rather than visiting cities, Tooty’s task, when he wakes up from a snooze, is to visit a number of local foraging sites, places where he often finds food (e.g., the neighbor’s kitchen). Of course, moving from one such site to another takes up energy, and he would like to minimize the amount of energy he uses. He also wants to maximize his food intake. Annoyingly though, there is no guarantee that he will find food when he arrives at a foraging site. (The neighbor’s cat might have got there first.) However, based on his past experience, Tooty “knows” the probability of finding food at any given site. Now, the task for Tooty is to plan a tour of local foraging sites that will maximize his expected reward, where the reward he gains on each tour is some function of his total food intake and the energy he uses up. Unlike the traveling salesperson problem, the tour doesn’t have to take in every site. So a good strategy might be to leave out distant, unpromising sites. Otherwise, this optimization task is quite similar to the traveling salesperson problem, and is computationally at least as hard.

The main extra ingredient is uncertainty. However good a plan Tooty comes up with, there is no guarantee of how much food he will get. On a bad day, he might even get nothing. But uncertainty is a fact of life. So a machine learning algorithm, however clever it is, cannot build a predictive model that will get it right every time. Instead, given finite, incomplete data, the best we can hope for is a probabilistic model, a model that can predict the most likely outcomes. Given a probabilistic model, the best course of action to pick is one that, according to the model, will maximize expected reward. However, we are still dealing with a well-specified optimization task. Uncertainty doesn’t take us beyond the reach of mathematics and computation. It simply takes us into the mathematical realm of probability theory.

3.6 Universal Artificial Intelligence

Of course, a real cat wouldn’t behave like the one in this caricature. The real Tooty doesn’t wander around eating nothing while building a probabilistic model of his food supplies, then retire to his basket to work out an optimal route. Like any such well-adapted animal, a cat will learn while it forages, and forage while it learns. The business of exploring the world and the business of exploiting its resources are rolled into one. This is the right strategy, the rational strategy. As we will see, a similar strategy, one that interleaves machine learning and optimization, is a good basis for artificial general intelligence. To AI researchers, the task of maximizing expected reward while trying out different actions in different situations to see which are the most effective is known as reinforcement learning.

The traveling salesperson problem and the foraging cat problem are very specific examples of optimization. No algorithm that can only solve the traveling salesperson problem, however fast it might be, would constitute artificial general intelligence. By contrast, the idea of reinforcement learning, and the concept of maximizing expected reward that is central to it, are not tied down to particular problems.
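To see what “maximizing expected reward” comes to in a calculation, here is a stripped-down version of Tooty’s problem in Python. The probabilities, calorie values, and travel costs are all invented, and travel order is ignored, which a fuller model would also have to optimize.

```python
from itertools import combinations

# Foraging sites: (probability of finding food, calories if found,
# calories burned getting there and back).
sites = {
    "neighbor's kitchen": (0.8, 300, 40),
    "fish market bins":   (0.5, 500, 120),
    "garden shed":        (0.3, 150, 30),
    "far farmhouse":      (0.2, 400, 350),
}

def expected_reward(chosen):
    """Expected calories gained minus calories spent for a set of sites."""
    gain = sum(sites[s][0] * sites[s][1] for s in chosen)
    cost = sum(sites[s][2] for s in chosen)
    return gain - cost

# Unlike the traveling salesperson, the forager may skip sites entirely,
# so we search over every subset of sites.
subsets = (set(c) for r in range(len(sites) + 1)
           for c in combinations(sites, r))
best = max(subsets, key=expected_reward)
print(best, expected_reward(best))  # three sites, 345.0 expected calories
```

The best plan pointedly omits the far farmhouse, the strategy of leaving out distant, unpromising sites. And the figure of 345 calories is an expectation, not a promise: on a bad day the plan still comes home hungry.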
Indeed we can build on this idea to specify a form of universal artificial intelligence.² The theoretical idea of universal artificial intelligence, which was first made precise by Marcus Hutter, is analogous to that of universal computation, one of Alan Turing’s most important contributions to computer science. A universal computer is one that can compute anything that is possible to compute given the right program. Turing’s achievement was to pin down the idea of such a computer mathematically. Unlike Turing’s abstract computational devices (which we now call Turing machines), real computers are limited by finite memory. However, every digital computer that has ever been built can, in theory, compute anything that is possible to compute. They all inherit their generality from Turing’s mathematical prescription.

Analogously, a universal artificial intelligence is one that always chooses an action that maximizes its expected reward given the information it has acquired, no matter what sort of world it finds itself in. It is, so to speak, a perfect AI, one whose decisions are guaranteed to make the most of its incoming data. Like Turing’s notion of universal computation, this idea can be made mathematically precise. (We won’t go into details here.) Also like Turing’s notion, this mathematical ideal is not realizable in practice. Rather, it serves as a theoretical limit to the very idea of artificial intelligence, just as Turing’s notion serves as a theoretical limit to the very idea of computation.

Despite its impracticality, the formal idea of universal artificial intelligence is more than a mathematician’s plaything. To begin with, there are approximations to it that can be realized in practice. But the more relevant observation for the present discussion is that Hutter’s mathematical characterization implies that artificial general intelligence conforms to a simple, generic architecture. This architecture interleaves two processes: machine learning, to construct probabilistic predictive models of the world, and optimization, to find actions that maximize expected reward according to those models.³

This two-component architectural blueprint has very wide application. Indeed any intelligent agent, whether artificial or biological, can be analyzed according to this structure. Three questions need to be asked (or three sets of questions). First, what is the agent’s reward function? Answering this question will tell us a great deal about how it is likely to behave. Second, how does it learn? What data does it work with, what learning techniques does it use, and what prior knowledge of the world is built into them? Third, how does it maximize its expected reward? How powerful are the optimization techniques it uses to do this? What kinds of problems are they adept at solving, and what are their weaknesses and limitations?

Consider a nonhuman animal, such as a crow, which is capable of learning complex behaviors through trial and error, and also of a degree of innovative problem solving. What is its reward function? Like any animal, the reward function of a crow favors the acquisition of things like food and water while avoiding danger and discomfort. These might seem simple needs. But arbitrarily complex problems can be presented in the guise of barriers to obtaining food. For example, to test a crow’s cognitive abilities, a researcher might present it with a worm in a box whose lid can only be opened by solving a puzzle.
Crows, which are particularly clever animals, can solve simple planning problems presented this way. But trickier problems could be given the same form. For example, a less fortunate crow might be obliged to win a game of chess to get the lid open. This crow would no doubt go hungry. But the point is that the imperative to obtain a resource like food can be thought of as a universal reward function. In a complex environment the class of problems that can be translated into the challenge of gaining a simple resource is open-ended.

So much for the first question, the question of reward function. The next question to ask is how the crow learns. The crow learns from data that pours in through its senses, thanks to its embodied interaction with the physical world, a world that manifests numerous objects, both animate and inanimate, with a rich variety of shapes and dynamics. The crow learns how these objects behave when pushed, poked, pecked, or squawked at, or when simply left alone. Precisely how it does this, what the neural underpinnings of this process are, is a scientific question we haven’t yet answered. But animal cognition researchers have given us a good idea of the kinds of associations that animals like crows can form, the sorts of sensory discriminations they can make, and so on.

How good is a crow at finding actions that maximize its expected reward? The answer, in the case of the crow, is better than most animals. It has a rich underlying repertoire of actions, one that includes tool use. These form the basis of a number of innate stimulus–response behaviors: evolution’s way of building useful prior assumptions about the world into the crow’s reward-maximizing powers. But a crow does more than simply rely on a lookup table that maps stimulus to response (which is enough even for tool use). It can find new sequences of actions to solve previously unseen problems, sometimes innovating new kinds of behaviors (e.g., manufacturing a novel tool). Again, the neural underpinnings of this ability have yet to be revealed. But the crow’s optimization method, whatever it is, appears to be very general, very powerful, at least compared to other nonhuman animals.

All this tells us a great deal about the capacities and limitations of crows, and helps us predict their behavior. We know, for example, that a crow might upset a rubbish bin to get access to food scraps. But we needn’t worry that it will hack into a bank account and steal our money. To better understand the capacities and limitations of different kinds of artificial intelligence, we can ask the same questions. What are the implications of different sorts of reward functions? What kinds of machine learning techniques might an AI be equipped with? What data will they work with? What kinds of optimization algorithms might be deployed to maximize an AI’s expected reward?

3.7 Human-Level and Human-Like Intelligence

Crows, like chimpanzees, dogs, elephants, and many other nonhuman animals, are impressively clever. But they are far less clever than humans. Animal-level AI would be useful. A robot with the intelligence of a dog, say, could perform a range of worthwhile tasks. But our real concern here is human-level artificial general intelligence. We would like to know how an AI might be built that can match a typical human in almost every sphere of intellectual activity, and maybe surpass the typical human in some.
Or at least we would like to form a sufficient idea of how such an AI would work to imagine what the future might be like if it contained such machines. Then we can begin to think through the possibility of superintelligent AI, of artificial intelligence capable of outwitting humans in every sphere of intellectual activity. Whether we’re thinking about a human-level AI or a superintelligent AI, we need to ask the same three questions as before: What is its reward function? How and what does it learn? How does it optimize for expected reward? But before we embark on this imaginative exercise, it’s instructive to ask the same questions of homo sapiens. First, what is the human reward function? Well, we surely have roughly the same underlying reward function as other animals. Humans need food and water, prefer not to be in pain, enjoy sex, and so on. Moreover the human reward function is “universal,” like the crow’s: any intellectual challenge could, in theory, be presented to a human being in the guise of gaining food, say, or sex. Significantly, though, humans seem capable of radically modifying their reward functions. Many animal species will learn to associate objects or events with reward, as in the famous example of Pavlov’s dog, who learned to associate the sound of a bell with the arrival of food after repeatedly being presented with both stimuli together. Eventually the dog would begin to salivate upon hearing the bell even in the absence of food. This sort of conditioning is useful for maximizing expected reward. In a competitive setting the dog that runs to its bowl when it hears the bell will get more food than the dog that is ignorant of the correlation. But in such cases the underlying reward function hasn’t really changed. It remains firmly grounded in the biological fundamentals. In humans, by contrast, the layering of associations one upon another from childhood onward, mediated by complex social cues and expectations, can result in an apparent disconnect between reward function and biology. Indeed it might even be argued that part of the essence of our humanity is the capacity to transcend the contingencies of biology. Humans play music, write poems, design gardens, and so on. No doubt such activities are often pursued for financial gain or for social status, motives that might be explained in terms of biological imperatives. But sometimes they are surely the outcome of reflection on what constitutes a good life, and thereby become ends in themselves rather than merely a surrogate for obtaining food or avoiding danger or anything else with obvious evolutionary value. This brings us to the question of how humans learn about the world, and whether there is anything distinctive about what humans learn compared to other animals. The answer is obvious. The open-endedness of the human reward function is made possible thanks to society, to culture, but above all to language. It is thanks to language that we can reflect on the human condition, as we do in philosophy, art, and literature. Without such reflection it’s hard to see how we could overcome biological imperatives to the extent that we seemingly do. It’s also thanks to language that humans are able to cooperate in the development of technology, and that the fruits of one generation’s technological labors can be passed on so easily to the next. So, besides learning about the everyday physical, natural, and social world, the human must be able to learn language. 
A predisposition to understand the minds of others in terms of their beliefs, desires, emotional states, and so on, helps make the business of learning tractable here. Finally, how do humans maximize expected reward? Once again, society, culture, and language are prominent here. Human intelligence is collective. Not only is human technology the product of many individuals and their labors, it is the product of many generations of individuals. Knowledge, expertise, and infrastructure accrete layer by layer, each generation building on the achievements of the one before. So the optimizing powers of the individual human are specialized for maximizing reward within a society. It makes no difference whether an individual’s reward function is admirable or despicable, whether a person is a saint or a sinner. A human must work out how to get what she wants from other people, given the society she finds herself in and calling on the resources of its language to do so. Whether operating collectively or individually, the capacity to innovate is another key element of the human strategy for optimizing reward. (Recall that endowing a computer with creativity was cited as a major obstacle to achieving artificial general intelligence in chapter 1.) The inventions of agriculture, writing, printing, the steam engine, the computer, and so on, have all contributed enormously to human health, life expectancy, and general well-being, and thereby helped maximize reward over a long timescale. In addition to favoring good health and longevity, the human reward function has been shaped by sexual selection, competition for social status, and other peculiarly biological factors. The result is the less obviously utilitarian forms of creativity exemplified by dance, ritual, fashion, art, music, and literature. Now, what would a human-level AI engineered from scratch be like? To what extent must the three key questions—the questions of reward function, learning, and optimization—have answers for this sort of AI that resemble those for a human being? Well, if an AI is to be human-like, it should broadly conform to the pattern set out above, even if its design and construction bear no resemblance to the human brain. However, as we already noted in the context of the “unreasonable effectiveness of data,” there is no reason for human-level artificial intelligence to be human-like. As long as the AI can match the typical human in most spheres of intellectual activity, and maybe surpass the typical human in a few, its intelligence would qualify as human-level. This leaves a lot of room for variation, just as we find plenty of variation within the human population. Some people are good with numbers, others are good with words. Some people are people people, others are more at home with technology. Similarly a human-level artificial general intelligence might have a very large working memory capacity or be highly skilled at searching for patterns in data, yet be incapable of writing a worthwhile novel or of devising a new musical form (as are most humans). But what if an artificial intelligence had some ability that enabled it not merely to match humans but to outwit them in every sphere of intellectual activity? Is such a superintelligent machine possible? What would be the consequences of creating it? These are among the questions to be tackled in the next chapter. 
4 Superintelligence

4.1 Toward Superintelligence

We now have an overview of various enabling technologies, some biologically inspired, others the result of engineering from scratch, that could contribute to the creation of artificial general intelligence at the human level and beyond. The elements that can be made using these enabling technologies might be thought of as a set of building blocks that can be assembled in different combinations to make a variety of forms of artificial intelligence. To gain some understanding of what the resulting systems might be capable of and how they might behave, we can use the framework of three questions proposed in the previous chapter. What is the system’s reward function? How and what does it learn? How does it optimize for expected reward?

We can also begin to ask a number of more philosophical questions. Would it be capable of moral judgments, and should it therefore be held responsible for its actions? Would it be capable of suffering, and should it therefore have rights? How much freedom to act should it be afforded? Finally, we can begin to question the consequences for society, for humanity as a whole, of introducing such systems into the world. If their freedom to act were not curtailed, how and to what extent would they reshape our world? What impact would they have on our economy, on our social fabric, on our sense of what it means to be human? What kind of world would result? Would the arrival of such machines lead to a utopia, to a dystopia, or would it leave things much as they are?

Before exploring these issues in detail, we need to examine a critically important proposition. The proposition is that if and when human-level AI is achieved, superintelligent AI will be almost inevitable. To see the plausibility of this claim, we only need to consider the advantages of implementation in a digital rather than biological substrate. Unlike a biological brain, a digitally realized brain emulation can be copied arbitrarily many times. And unlike a biological brain, a digital brain can be speeded up. So, if we can create one human-level AI by whole brain emulation, then, given sufficient computational resources, we can create a community of many such human-level AIs, all working at superhuman speeds. The same point can be made for AI that has been engineered from scratch. Indeed anything realized as a computer program can be copied and/or accelerated. The implications of this are far-reaching.

To make these implications more vivid, let’s imagine a concrete scenario. Suppose a major corporation with a famous brand name decides to develop a new high-performance motorbike in response to projected demand in emerging markets. The corporation awards contracts to two automotive design companies to come up with prototypes. The best prototype will go to manufacture (and earn the designers lots of money). One company employs a traditional team of human designers. The other company is a start-up that builds specialist teams of human-level AIs who inhabit virtual environments where they are set to work on large-scale design projects such as this. The project requires expertise in many areas, including materials, engine design, fluid dynamics, and ergonomics, as well as a talent for knowing what looks good. From concept to first working prototype is expected to take a top-notch (human) team two years. The AI-based design company seems to be at a distinct disadvantage. They have no automotive design experts on the payroll.
However, they have enormous computing resources and the latest AI technology on their side. So assembling a crack team of designers from scratch is no problem. They start by populating a virtual world with a set of off-the-shelf apprentice AIs. These are human-level artificial intelligences that come pre-loaded with the acquired experience of an average human in their early twenties plus a graduate-level education in some relevant field such as mechanical engineering or industrial design.

Now, this group of apprentices isn’t going to cut it as an automotive design team. Their human rivals in the other company have all had years of industrial experience designing cars, bikes, and engines. To get up to speed, the AI team will have to acquire comparable experience. Fortunately, they can do this in their virtual world, completing a plethora of mini projects, some as individuals, some as a team. Of course, if this training had to take place in real time, the virtual team would get nowhere. Their human rivals would produce a prototype bike before the AI team was in a position to get started. But suppose the AIs operate ten times faster than real time. Then ten years of training and design experience could be compressed into just twelve months. By the start of year two of the project, the AI team will have caught up with the human team. Moreover they will have ten years of subjective time ahead of them to come up with the perfect superbike, compared to just one year remaining for their biologically constrained human counterparts. Just imagine what a group of talented, enthusiastic young human engineers might accomplish in ten years.

So the second year of the project elapses, and the rival teams submit their designs to the corporation. The traditional design company have produced a fine prototype bike, sleek and elegant, sure to appeal to the target market. But what about the AI-based company? When they unveil their prototype, everyone is astonished. No one has seen a bike quite like it before. But if the look is nothing short of revolutionary, the specifications defy belief. How is it possible to get such acceleration and such a top speed with such low fuel consumption? Having been declared winners, the AI team can reveal some of their secrets. With so much time on their hands, they were able to develop a whole new range of biomaterials perfectly suited to motorbike manufacture, and a miniature fuel preprocessing plant that uses some previously unexploited results from chemistry. Additionally they managed to develop a fabrication method that allows all the bike’s electronics to be integrated into its frame and manufactured all at once, in one piece. All these technologies have been duly patented, and promise to earn the design company a fortune besides the earnings from their winning bike design.

The lesson of this little story is that if and when human-level AI is achieved, superintelligence will soon follow. It doesn’t require the creation of a new form of intelligence. It doesn’t require a conceptual breakthrough. Even if human-level AI is achieved by the most conservative means—by slavishly copying nature—the resulting liberation from the speed restrictions inherent in biology is enough. But is this really superintelligence? After all, by hypothesis, there is nothing that a team of accelerated human brain emulations could achieve that couldn’t be achieved by a team of humans, given enough time.
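The arithmetic driving the story is worth making explicit. A sketch, using the story’s ten-fold speedup and an assumed team of eight (the story never fixes a number), shows how quickly subjective working time outruns the calendar:

```python
def subjective_person_years(calendar_years, speedup, team_size):
    """Experienced working time accumulated by a team of accelerated AIs."""
    # Each copy lives through `speedup` years of subjective time per
    # calendar year, and every team member accumulates that in parallel.
    return calendar_years * speedup * team_size

SPEEDUP = 10  # the factor assumed in the story

# Year one, per individual AI: a decade of industrial experience,
# enough to catch up with the human team's veterans.
print(subjective_person_years(1, SPEEDUP, 1))   # 10

# Year two, for a hypothetical team of eight: the design phase.
print(subjective_person_years(1, SPEEDUP, 8))   # 80 person-years of work
# The human rivals, over the same calendar year: 8 person-years.
```

Whatever we choose to call the result, numbers like these explain the AI team’s advantage.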
Well, maybe a distinction could be drawn between individual superintelligence and collective superintelligence. What we have in this story seems to be a form of collective superintelligence. No individual member of the AI team meets the criterion of superintelligence. None, by itself, is capable of systematically outwitting the average human being. However, the individual/collective distinction makes little difference in a discussion of the potential consequences of developing superintelligent AI. It would be no consolation to the losing design team to know they were defeated by a collective rather than a brilliant individual. Likewise, if humanity is ultimately delivered either into a utopia or a dystopia thanks to the creation of human-level AI, no one will care whether the culprit is “proper” superintelligence or not. In the end, what matters is what the technology is able to do.

The science fiction writer Arthur C. Clarke famously remarked that “any sufficiently advanced technology is indistinguishable from magic.” Human-level AI, however it is achieved, is likely to lead directly to technology that, to the rest of us, will be indistinguishable from magic. All it requires is faster computation, as the motorbike story shows. But to see the truly disruptive potential of reaching this milestone, we must factor in other possible ways of improving the capabilities of human-level AI. These will depend on the nature of the underlying technology. In due course we will look at the prospect of superintelligence via AI engineered from scratch. But first, let’s focus on brain-inspired human-level AI.

4.2 Brain-Inspired Superintelligence

In the story of the motorbike designers, the team of AIs has an enormous competitive advantage over their human rivals simply by working very fast. If the AIs in question were brain-like, this would amount to their operating in faster than real time. This is the simplest and most obvious way to exploit the liberation from biological constraints that results from migration to a computational substrate. But the migration from biology opens up many more possibilities for enhancing the capabilities of brain-inspired artificial intelligence.

Consider all the ways in which human workers are hampered by their animal nature. Humans need to eat, for example, and to sleep. But even a biologically highly realistic whole brain emulation—a faithful synthetic copy of a specific brain—could to a large extent be relieved of these needs. While real brains require a blood supply to provide energy, in the form of glucose, to enable neurons to function, a simulated brain has no such requirements, at least not at the level of the simulation. (Obviously the computers running the simulation would require energy, but this is a separate issue.) Sleep is more complex, since dreaming seems to serve an important psychological function. So in a whole brain emulation, it might not be a straightforward matter simply to eliminate the need for sleep. Nevertheless, a designer brain—one based on the principles of operation of the vertebrate nervous system but not conforming to the brain of any living species—could perhaps be carefully engineered not to require sleep. In short, a brain-inspired human-level AI wouldn’t have to waste time finding food, preparing it, and eating it. Nor would it have to spend time (or as much time, in the case of whole brain emulation) unproductively asleep.
The time duly saved could be devoted to work, and the resulting increase in its effective workload would confer the same sort of advantage as acceleration, albeit on a less dramatic scale. Of course, most humans would object to having their mealtimes and their sleep replaced by work. But the reward function of a designer brain could be tuned differently. A willing intellectual slave who never eats or sleeps and wants nothing more than to work would be many corporations’ idea of the perfect employee, especially if they don’t require wages.

Eliminating the need for food and sleep is one straightforward way to exploit the liberation from biology. Other relatively conservative techniques for getting the most out of brain-inspired AI are easy to imagine. Many humans enhance their cognitive performance using the tried-and-tested pharmaceutical trick of caffeine ingestion. Hallucinogens such as psilocybin (the active ingredient in magic mushrooms) have often been claimed to promote creativity, their legal status notwithstanding. In a simulated brain the effects of such drugs can themselves be simulated, without any unwanted side effects on the rest of the body. Moreover there’s no need to stick to pharmaceutically realistic interventions. With innumerable easily modified parameters, there would be myriad ways to beneficially modulate the activity of a simulated brain, and thereby to optimize it for a particular task.

Somewhat less conservatively, there are various ways a simulated brain might be enhanced at the anatomical level, along lines already envisaged in chapter 2 when considering how a mouse-scale whole brain emulation might be upgraded to human-level intelligence. For example, it might be possible to enlarge the prefrontal cortex, simply by adding to its neuron count. This would be relatively straightforward in a computer simulation, where the brain doesn’t have to fit inside a physical cranium. The prefrontal cortex is heavily implicated in working memory, an essential component of high-level cognition, and humans have markedly larger prefrontal cortices than other primates. So a superhumanly large prefrontal cortex could be highly advantageous. Similar enlargements can be envisaged for other regions, such as the hippocampus, which is involved in long-term memory.

At the collective level, there are other methods for increasing the capabilities of a team of brain-based human-level AIs. Unlike a wetware brain, a simulated brain can easily be copied many times over. This opens up various possibilities for exploiting parallelism that are not available to a biological brain. Suppose an AI is attempting to solve some problem, and several distinct ways of approaching the problem present themselves. Then, rather than trying each possibility, one at a time in serial order, several copies of the AI can be made and each set to work on one of the possibilities, thus enabling many avenues to be explored at once. When all of the copies have finished trying out their particular approach, the most successful can be chosen.

To pick a simple example, suppose an AI is playing a game of chess. From the board’s current configuration, there are three promising moves for the AI. It could explore each move in turn, one at a time. But alternatively, three copies of the AI can be spawned, one to investigate each move. When all three have looked as far ahead in the game as they can, their results are pooled, and the best move is chosen.
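In miniature, this spawn-and-pool pattern is everyday parallel programming. The sketch below is only a skeleton: the candidate moves and their scores are invented stand-ins for what would really be a full game-tree search carried out by each copy.

```python
from multiprocessing import Pool

# Three promising moves from the current position (placeholders).
CANDIDATES = ["Nf3", "e4", "d4"]

def evaluate(move):
    """Pretend deep lookahead; returns (move, score). Scores invented."""
    depth_scores = {"Nf3": 0.31, "e4": 0.52, "d4": 0.47}
    return move, depth_scores[move]

if __name__ == "__main__":
    # Spawn one worker per candidate move -- three "copies" exploring
    # the alternatives simultaneously rather than one after another.
    with Pool(processes=len(CANDIDATES)) as pool:
        results = pool.map(evaluate, CANDIDATES)
    # Pool the results and keep the best; leaving the `with` block
    # terminates the extra workers once their results are collected.
    best_move, score = max(results, key=lambda r: r[1])
    print(best_move, score)  # e4 0.52
```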
The extra copies of the AI would then need to be destroyed (terminated), leaving just one to make the selected move and continue with the game. This type of parallelism is widely used in computer science today, to great effect. So the idea of spawning multiple copies of a simulated brain is just an extension of a tried-and-tested programming technique.

Perhaps the most potent factor in the likely development of superintelligence, whether we’re talking about brain-based AI or AI engineered from scratch, is the prospect of recursive self-improvement. The idea is straightforward. A human-level AI is, by definition, capable of matching humans in almost every sphere of intellectual activity. One such sphere of intellectual activity is the construction of artificial intelligence. A first-generation human-level artificial intelligence would be in much the same position as the human engineers that created it. Both species of engineer, biological and artificial, might call upon techniques like those just discussed to boost intelligence. However, the next generation of AIs, those whose intelligence is slightly above human level, will be better at engineering AI than any human.

A sufficiently brilliant human neuroscientist could open up whole new vistas of theory, unearthing principles we can hardly imagine today, with far-reaching implications for neural engineering and brain-based artificial intelligence. A team of brilliant artificially intelligent neuroscientists working at superhuman speeds, or otherwise exploiting the possibilities afforded by liberation from biology, would be even more effective. They would be in a position to produce the next generation of brain-based AIs more rapidly than the previous generation was produced by its human developers. Each successive generation would appear more quickly than the last, following a quintessentially exponential curve. The result would be a sort of intelligence explosion.¹

4.3 Optimization and Creativity

So far in this chapter we have largely concentrated on human-like artificial intelligence. But human-like intelligence probably occupies just a small patch in the space of possible AIs. Now we’ll shift our attention to other possibilities, and it will be important to shed any anthropomorphizing tendencies along the way. It’s reasonable to hope that the behavior of an AI based on the vertebrate brain will be comprehensible to us on some level, even if it is accelerated, parallelized, or enhanced toward superintelligence. There is much less reason to assume this for an AI engineered from scratch. We should expect to be baffled and to be surprised, perhaps pleasantly, perhaps unpleasantly.

How might superintelligence arise in a system that was engineered from scratch, a system whose design has no biological counterpart? We get a glimpse of a possible answer by appealing to the three-part framework introduced in chapter 3. When it comes to engineering artificial intelligence from scratch, this framework is prescriptive rather than merely descriptive. According to the prescription, artificial general intelligence can be realized by (1) devising the right reward function, (2) implementing an effective learning technique to build a model of the world, and (3) deploying a powerful optimization method capable of maximizing expected reward given that learned model. To get a feel for what might be achieved with this simple architectural specification, let’s revisit the central issue of creativity.
At first, it’s hard to see how any sort of innovation or novelty could emerge from the combination of machine learning and optimization. Surely these processes are forever condemned to operate with a fixed set of raw ingredients—cities and journeys in the traveling salesperson problem, for example. How could they possibly come up with a completely new concept, such as farming, or writing, or postmodernism, or punk rock? But to see how misleading this intuition is, we need only consider the example of evolution by natural selection.

From an algorithmic point of view, evolution by natural selection is remarkably simple. Its basic elements are replication, variation, and competition, each repeated countless times. Computationally speaking, it exploits staggeringly massive parallelism and has to run for a very long time before doing anything interesting. But astonishingly, it has generated all complex life on Earth. It has done this through sheer brute force and without recourse to reason or explicit design. Along the way, it came up with such marvels as the hand, the eye, and the brain. In turn, the brain (along with the hand and the eye) came up with farming, writing, postmodernism, and punk rock.

Now, it isn’t quite right to describe evolution by natural selection as an optimization process. Although evolution can be thought of as the by-product of many competing genes trying to maximize their proliferation, there is no global cost function or utility function guiding its progress. However, just like an optimization process, evolution explores a vast space of possibilities. To solve the traveling salesperson problem requires a search through the (relatively small) space of possible city tours, whereas evolution explores the (much larger) space of possible organisms. In contrast to the traveling salesperson problem, where journey time guides the search, evolution explores blindly. But despite this lack of direction, and despite its inherent simplicity, evolution has produced solutions to problems that would challenge any general intelligence, such as solar energy storage and heavier-than-air flight.

What this shows is that creativity can emerge from a simple process like optimization. But it does require a special sort of optimization. Computer scientists have devised many algorithms for solving the traveling salesperson problem, but none of those algorithms is going to invent the hand or the eye on its way to finding a good solution. The most important prerequisite for a creative process concerns the raw ingredients it works with. These must be amenable to open-ended recombination, like Lego bricks, which is to say, it must be possible to assemble them in different ways to produce an endless variety of things. Evolution by natural selection meets this criterion, thanks to the chemical properties of the organic molecules that are the basis of life. An optimization method could meet the same criterion if its raw ingredients were, say, designs for submission to a 3D printer, or virtual objects in a physics-based simulator, or the organic chemistry set of real or synthetic biology.

The second feature required by an optimization process if it is to be capable of creativity is a universal reward function. A reward function that is too easily satisfied will not promote novelty. Creativity is of no benefit to a male redback spider whose only task is to deliver a genetic load to a receptive female.
Having fully satisfied his life's mission, the male spider can allow himself to be cannibalized by his mate. By contrast, in a sufficiently rich environment, the challenge of acquiring a resource such as food or money might require the solution of any problem it is possible to devise. In a competitive context where there is not enough of the resource to go round, ingenuity may be required just to survive. And where there is an incentive to amass as much of a resource as possible, the potential for creativity is endless.

Finally, to exhibit creativity, the optimization algorithm must be powerful enough. Possessing a universal reward function and working with raw ingredients amenable to open-ended recombination will not lead to anything noteworthy if the optimization algorithm confines its search to a small, tried-and-tested portion of the space of possibilities. Rather, it needs to carry out a playful exploration of the space of possibilities. It has to try out new combinations of the raw ingredients it has available in order to invent new things. Indeed it has to be able to invent whole new categories of useful things, such as books, steam engines, and websites. It has to be able to invent whole new technologies. This doesn't sound anything like the sort of optimization algorithm contemporary computer science students learn about, the kind of thing that can solve the traveling salesperson problem. Surely an optimization algorithm that powerful would be so sophisticated, so complicated that we can barely imagine how it would work, just as today we have only the barest understanding of how intelligence is realized in the human brain.

But recall the lesson of evolution by natural selection. Advanced technology can emerge from even a simple, brute-force algorithm, given enough time. If we devised the right sort of simple, brute-force optimization algorithm, supplied it with an open-ended reward function, and unleashed it in an environment with enough combinatorial potential, then the only thing limiting its capabilities would be computing power.

So this suggests a way of building artificial general intelligence using brute-force search with massive amounts of computing power. But in an important sense, the resulting system wouldn't possess genuine intelligence. It wouldn't investigate the world and build up scientific knowledge. It wouldn't construct rational arguments. Nothing it produced would be the result of analyzing a problem or applying principles of design. Rational enquiry and principled design make intelligence a dramatically more efficient approach to developing new technology than brute-force search. In nature, the brute-force approach has bootstrapped its way to intelligence by evolving the brain. But the goal of AI research is to endow systems with intelligence directly. Supplementing playful, undirected search with rational enquiry and principled, goal-directed design dramatically shortcuts the slow process of trial and error and compensates for limited computing power. So we should expect these functions to be part of a truly powerful optimization algorithm, one capable of creativity. But they rely on having a model of the world, a way to predict the outcome of actions or the efficacy of a novel design. This is where machine learning comes in—and where the analogy with evolution breaks down. If evolution were trying to maximize a reward function, we would count it as very inefficient. Like a bad scientist, it throws away all its data.
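To make this blind, memoryless character of evolutionary search concrete, here is a toy evolution-style optimizer for the traveling salesperson problem. It is a sketch under deliberately crude assumptions (random cities, a simple swap mutation, survival of the shortest half); the point is only that replication, variation, and competition make progress while retaining no model of past generations.

```python
import math
import random

CITIES = [(random.random(), random.random()) for _ in range(20)]

def tour_length(tour):
    """Total length of a closed tour visiting every city once."""
    return sum(math.dist(CITIES[tour[i]], CITIES[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def mutate(tour):
    """Variation: swap two cities in a copy of the tour."""
    child = tour[:]
    i, j = random.sample(range(len(child)), 2)
    child[i], child[j] = child[j], child[i]
    return child

# Replication, variation, and competition, repeated many times.
population = [random.sample(range(len(CITIES)), len(CITIES)) for _ in range(100)]
for generation in range(500):
    population.sort(key=tour_length)
    survivors = population[:50]          # competition: shortest tours survive
    offspring = [mutate(random.choice(survivors)) for _ in range(50)]
    population = survivors + offspring   # replication with variation

print(tour_length(min(population, key=tour_length)))
```

Run for long enough, a loop like this finds respectable tours through nothing but replication, variation, and competition: each generation's losers are simply discarded.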
It doesn’t use the results of its experiments in organism design to build a model of the world whose predictions can inform subsequent design decisions. But evolution has no reward function, no global utility function. From an evolutionary standpoint, there is only one way to judge an alteration to body shape or a variation in behavior: that is to try it out in the competitive struggle to survive and reproduce. So it makes no sense to find fault with evolution in this respect. By contrast, the sort of AI we are envisaging here is trying to maximize its expected reward. In the context of a reward function, an effective strategy is to test out ideas (designs) in theory or in simulation before they are deployed in practice, to “look before leaping.” To do this, a model of the world is needed, and machine learning is required to construct and maintain this model, whether through embodied interaction with a physical and social environment or vicariously via the Internet. 4.4 Engineering Superintelligence The take-home message of the previous section is that even a crude optimization algorithm may be enough for human-level AI given sufficient computing power. Even creativity, one the most difficult qualities to realize in a computer, can emerge from a brute-force search if enough processing time is available. But if (as we might expect) the enormous computing power required is beyond the reach of Moore’s law, then the shortfall can be made up by endowing the AI with sophisticated cognitive capacities—rational enquiry, principled design, theoretical analysis, and simulation. Very well, let’s suppose this is sufficient to achieve human-level AI by the engineering route (as opposed to the brain-inspired route). What about going beyond human-level intelligence? Can superintelligence be achieved this way? The first thing to note is that AI developers who take the engineering route can avail themselves of the two main tricks that made the transition from human-level to superhuman-level intelligence seem feasible for a brain-based AI, namely speedup and parallelism. If a developer has the knowledge and computing power to engineer an AI with human-level intelligence, then all it needs to build a collaborative team of accelerated versions of the same AI is more computing power (assuming the nature of the AI didn’t somehow prevent it from working in a team). As we saw in the example of the motorbike design challenge, this would be enough to yield a collective intelligence that, to the outside world, would appear superhumanly capable. As with brain-based AI, once an AI is engineered whose intelligence is only slightly above human level, the dynamics of recursive self-improvement become applicable, potentially triggering an intelligence explosion. The engineering approach to artificial general intelligence might even bypass human-level AI altogether, and achieve a form of superintelligence in one fell swoop. Indeed there are several ways this could happen. Before delving into this possibility, though, a few words are in order on the very idea of a scale of intelligence. According to the definition we’ve been using, an AI possesses human-level intelligence if it can match the performance of an average human in all, or nearly all, spheres of intellectual activity. If it can outwit human beings at every turn, then it is superintelligent. Interpolating here, it’s tempting to assume a neat, well-ordered scale of intelligence. 
The mouse is at one end, the human a little further along, and the superintelligent AI further along still. Given such a scale, it makes sense to speak, if only loosely, of an AI that is 10 times as intelligent as a human, or even 100 times as intelligent. However, this assumes a very coarse-grained notion of intelligence. In humans, intelligence manifests as a patchwork of skills, and different individuals have different strengths and weaknesses. Someone who is highly artistic may be weak at mathematics, while another person who is a brilliant writer might have a poor understanding of music. Now that we're dealing with forms of artificial intelligence that diverge radically from the human archetype, it is especially important to be sensitive to this point. Even in the context of general intelligence, we should expect a system to exhibit a pattern of cognitive strengths and weaknesses, rather than a single, monolithic property of (super)intelligence. In other words, the same AI might be superhumanly clever in some respects but surprisingly deficient in others.

Now, an AI that was sufficiently capable in one domain could compensate for its weaknesses in others. We find the same thing in humans. People with dyslexia, for example, often find very effective coping strategies for dealing with the challenge of reading. Similarly an AI that, say, lacked the rhetorical skills to persuade humans to invest in a business proposal that it had devised could use different means (e.g., brilliantly playing the stock market) to achieve the same end of raising money. More generally, a system employing a very powerful optimization process combined with a very powerful machine learning algorithm applied to a very large amount of data might find ways to maximize expected reward that we can barely imagine.

Of course, however good it was at chess, an AI that was constitutionally incapable of dealing with anything other than chess positions wouldn't be able to achieve much. To qualify as having general intelligence, the cognitive purview of the AI must be comparable to that of a human. Not only can humans perceive, act on, think about, and talk about the contents of the everyday world (cats, teacups, buses, etc.), they can also imagine stars, galaxies, cells, and atoms, as well as unicorns, magnetic fields, computer programs, and bank accounts. We can (learn to) think and talk about all these things, and can imagine manipulating them all to our ends (if only we were large enough, or small enough, or had the right sorts of tools).

But there is a distinction between purview and performance. A good analogy here is the triathlon in athletics. To take part in a triathlon, an athlete has to be able to run, to swim, and to ride a bike. All three skills must be within the athlete's physical purview, so to speak. But an athlete's performance can differ from one event to another. And an athlete who is especially strong in one event can compensate for his or her deficiencies in the others. Likewise the cognitive purview of an artificial general intelligence must encompass all the kinds of things that humans can perceive, act on, think about, and talk about. But its performance can vary from one kind of intellectual activity to another. And its weaknesses in one domain can be made up for by strengths in another. With this distinction between purview and performance in mind, let's return to the possibility of an AI attaining superintelligence without passing through the stage of human-level intelligence.
For the kind of AI we’ve been envisaging here to have an adequate cognitive purview, one that would allow its performance to match humans in (almost) every sphere of intellectual activity, it would need to employ an especially powerful combination of optimization process and machine learning algorithm, a combination that incorporated a commonsense understanding of the world and from which creativity could emerge. Since the human brain broadly fits this description, there is good reason to think that such a combination is possible, even if it deviates from the human brain in its architecture. Now, here is the important point. In order for a system to attain a cognitive purview comparable to that of a human by means of powerful optimization and learning, it might already have to be capable of superhuman cognitive performance in certain respects. In particular, consider a disembodied system that applies machine learning to the very large quantities of data available on the Internet—or rather on the Internet of the future. As well as real-time information broadcast on social and other media and enormous historical repositories of text, images, and movie clips, it will be able to draw on data from a vast network of pervasive sensors, in portable and wearable devices, in vehicles, indeed in everything from street furniture to toasters. The human brain is good at finding patterns in high bandwidth data from a very specific, spatially localized source, namely the sensory apparatus attached to the body. This is fine from an evolutionary standpoint because, above all else, an animal needs to be able to deal with the things it can see, hear, and touch in order to find food, to avoid predators, to rear young, and so on. The human brain isn’t bad at finding patterns in other kinds of data too, such as stock market trends, ecosystem dynamics, or the weather. But this sort of data comes to it indirectly, translated into terms that its spatially localized senses can handle, such as words, pictures, and formulas. The sort of AI we are envisaging here will also be adept at finding patterns in large quantities of data. But unlike the human brain, it won’t be expecting that data to be organized in the distinctive way that data coming from an animal’s senses are organized. It won’t depend on the distinctive spatial and temporal organization of that data, and it won’t have to rely on associated biases, such as the tendency for nearby data items to be correlated (e.g., nearby patches of color often move in the same way across the visual field because they are frequently on the surface of the same object). To be effective, the AI will need to be able to find and exploit statistical regularities without such help, and this entails that it will be very powerful and very versatile. So—to pick out one sphere of intellectual activity—the AI is likely to be very good at interpreting, predicting, and manipulating human behavior, not necessarily on an individual scale but on a mass social scale. Its access to the relevant data, harvested from the Internet and elsewhere, will be direct and unmediated, like the human brain’s access to what can be seen, heard, or touched. This unmediated access is likely to confer a decisive advantage over human intelligence in many domains. Scientific discovery in fields such as genetics and neuroscience, for example, is increasingly dependent on big data, a trend that is likely to continue into the coming decades. 
An AI that is designed from the outset to find patterns in large volumes of data will immediately be superhumanly capable in such fields.

4.5 User Illusion or Anthropomorphism?

Another cognitive function where engineered AIs would have an inbuilt advantage over their biological progenitors is communication. As the philosopher Ludwig Wittgenstein made clear, language has a multitude of uses in human society. But one of its roles is the communication of beliefs, desires, and intentions. In a novel, a poem, or a play, ambiguity and a certain openness to multiple readings are a virtue. But in science and technology, precision is paramount. The members of a team working toward a scientific or technological goal need to be able to communicate their beliefs, desires, and intentions unambiguously. While humans have to translate their thoughts into the noisy, low-bandwidth medium of language, a team of AIs could in principle transmit their beliefs, desires, and intentions clearly and directly to one another.

Moreover the very idea of a collective of AIs analogous to a human team is open to challenge when we depart from the blueprint of the biological brain. The idea of a team presupposes that each AI is a separate entity that can be clearly individuated. But for a computer system, identity is a more fluid notion than it is in biology. There are many ways that a complex, massively parallel system realized on distributed hardware and software might be divided and subdivided into parts. The concept of an individual artificial intelligence might be less appropriate than that of an amorphous, ambient artificial intelligence. For example, the system could comprise multiple independent threads of computation, each performing some subtask of a larger optimization problem, such as running a family of simulations, designing a series of components, carrying out an empirical investigation, or solving a mathematical problem. Each such thread could in itself be highly intelligent, possibly even generally intelligent. But no thread would have to last for very long. Sometimes a single thread might spawn several others, while at other times multiple threads might merge, combining their results. No single thread of computation, nor any set of threads, would constitute an individual, analogous to a human being, with a life of its own. Issues that plague humans, such as that of personal survival, simply would not arise for such an AI or for any of its parts.

What would it be like to interact with such an AI? With more direct means at their disposal for the transmission of information, the multiple intelligent threads within the system wouldn't need to use human-like language to communicate with each other or to coordinate their activities. But this doesn't imply that the system would be unable to use language to communicate with humans. A good model of human behavior, the sort of model a superintelligent AI would be able to construct, would necessarily incorporate a model of the way humans use language. The AI would be adept at exploiting such a model, deploying words and sentences to gather information from humans, to impart information to humans, and to influence human behavior in order to realize its goals and maximize its expected reward.
The mechanisms for dealing with language that this sort of engineered superintelligence would use seem so different from those found in the human brain that it's questionable whether such an AI could be said to understand language at all.² When humans speak to each other, there is the shared assumption of mutual empathy. You understand me when I say I am sad because you have experienced sadness yourself, and I have an expectation that your actions, whether sympathetic or harsh, are at least informed by this understanding. This assumption would be unwarranted for an AI based on a sophisticated combination of optimization and machine learning algorithms. Such an AI would be perfectly capable of using emotive language in imitation of humans. But it wouldn't do so out of empathy. Nor would it be out of deceptive malice. It would be for purely instrumental reasons. The upshot would be a powerful illusion when talking to the AI. We might call it the illusion that "someone is at home." It would seem as if we were interacting with something—with someone—like us, someone whose behavior is to some extent predictable because they are like us. To make the illusion complete, the AI could use an avatar, a robot body that it temporarily inhabits in order to participate directly in the world and on the same apparent terms as humans. (Indeed the AI could inhabit multiple avatars simultaneously.) This would be a handy trick in many ways. But above all, it would expedite linguistic behavior, enabling the AI to use facial cues, body language, and so on, as well as to engage in cooperative physical activities with humans.

In computer science, the user illusion is the feeling that we are interacting with real objects when, for example, we use a mouse to move folders around on a desktop. Creating such an illusion facilitates human–computer interaction. But no one thinks they are manipulating actual physical objects, real folders on a real desktop. In the study of animal behavior, anthropomorphism is the unwarranted attribution of human-like thoughts to nonhuman animals, such as when I assert that Tooty (the family cat) ignores us because we are merely his servants. With artificial intelligence, especially the sort of superintelligent AI envisaged here, it's all too easy for the user illusion, a good thing, to shade over into anthropomorphism, which is bad. Why would this be bad? After all, if the illusion were sufficiently complete, what would it matter that the effect was generated by a mechanism with no resemblance to the biological brain? Perhaps anthropomorphism isn't the problem here. Perhaps the allegation of anthropomorphism is itself a symptom of biocentrism, an irrational prejudice against intelligence of a nonbiological character.

Well, the worry is that after days, weeks, or years of normal, human-like interaction with the AI, we would misguidedly come to expect its behavior to continue in the same comprehensible vein indefinitely. If the user illusion were convincing enough, we would forget about the fundamentally alien character of the AI. We would forget that an AI of this kind uses language for purely instrumental purposes, to help maximize its future reward.

Imagine the following scenario. You have worked for several years for a large corporation that is run by an AI. You are an excellent employee. You always beat your deadlines and exceed your targets, and have been steadily moving up the company hierarchy.
A couple of years ago you had some family difficulties, and had to negotiate some time off and a salary increase to cope. Your negotiations, all in natural language and by voice, were exclusively with the AI. No humans were involved. But the AI seemed to listen sympathetically, seemed to understand your troubles. It offered sound personal advice, and agreed to everything you asked. Then one day, without the slightest warning, without a hint of explanation, you are informed that you're fired.

Of course, this kind of thing happens often enough with human bosses. But it's safe to assume that a human boss, however nasty, can put himself in your shoes. He can imagine what it's like to receive such a blow, even if he seems indifferent (perhaps even to revel in your discomfort). With a human boss, you might appeal for a change of heart. You might paint a picture of your impoverished family and hope to stir pity, to elicit feelings of guilt. Your pleading might not work. But it would be worth a try. The sort of AI envisaged here, in contrast, would lack the affective substrate, the capacity for empathy, to make it even worth trying. You would have to accept that all the sympathy you got from the AI in the past was fake, just a pattern of sounds designed to elicit behavior from you that helped the AI to achieve its goals.

5 AI and Consciousness

5.1 Would a Brain-Inspired AI Be Conscious?

The previous chapter mooted the idea of making and destroying copies of a simulated brain. This idea raises a philosophically difficult question, a question that leads to a host of concerns about the feasibility, not to say the wisdom, of creating brain-based human-level AIs. In particular, if a human-level AI were built that closely followed the organizational principles of the biological brain, would it not only act and think like its biological precursors, but also have feelings as they do? If it did, then how would it feel about the prospect of being copied, and of some of its copies eventually being destroyed? More generally, what (if anything) would a brain-based AI feel about its "life," perhaps confined to virtual reality and obliged to work as a slave?

If this sounds like a frivolous question, then recall that our current concern is with a form of artificial intelligence that is not only (at least) human-level, but also fundamentally human-like, thanks to its neural constitution. Shortly we'll consider the question of consciousness in other forms of AI, engineered varieties for which such sensibilities may not be relevant. But for now our focus is on artifacts that work in a very similar way to the biological brain, albeit in emulation. Since they work in a similar way, they will think and behave in a similar way, so it makes sense to wonder whether or not they would feel in a similar way.

Some theorists have argued that metabolism—a continuous exchange of matter and energy with the environment that serves to maintain the boundary between self and other—is a prerequisite for consciousness.¹ According to this view, an artifact that lacked metabolism could not be credited with consciousness. This seems to rule out consciousness in any computer-simulated brain, even a perfectly accurate whole brain emulation, although it still leaves room for a conscious AI built out of biological neurons, or that is based on synthetic biology.
But other theorists favor functionalist views of consciousness that focus on the way a system (e.g., a brain) is organized rather than its material substrate.² The issue can be put on a firmer footing with a thought experiment.³ Consider again the mouse whole brain emulation discussed in chapter 2. There we imagined producing the emulation by scanning the mouse's brain, then producing a high-fidelity, neuron-for-neuron and synapse-for-synapse simulation based on the scan. But suppose instead that we produced an emulation by gradually replacing each neuron, one by one, in the living mouse with a functionally equivalent electronic surrogate. After the first neuron is replaced by its electronic counterpart and the biological original destroyed, the mouse's behavior should be unaffected. It will run away from cats, as before. It will still be attracted to cheese. It will recognize its kin and huddle with them just as it always did. The same should be true after the second, the third, the one-hundredth, and the one-millionth neuron are replaced, until eventually we end up with a mouse whose behavior is indistinguishable from that of the original even though its brain is 100 percent new and artificial. We needn't concern ourselves here with the technological feasibility of this process, since we are only conducting a thought experiment. As long as the process is theoretically possible, the thought experiment is valid.

Now, most people would agree that a mouse—a normal biological mouse—enjoys a degree of consciousness. A mouse, we assume, can experience hunger and pain. It is aware of its surroundings—scents and textures, sights, and sounds. These are all aspects of consciousness. The question is what happens to the mouse's consciousness in our thought experiment. What happens to its capacity for suffering, for example, as its neurons are gradually replaced, one by one? (We will assume the process itself is painless, of course.) Is there, perhaps, a point at which the mouse's consciousness suddenly disappears? After the replacement of neuron 239,457, say? This doesn't seem very plausible. So perhaps its consciousness gradually fades. Outwardly, the mouse seems the same throughout the procedure. It continues to seek out cheese, to squeal when subjected to electric shock, and so on. But the "hunger itself," the inner feeling, slowly disappears, even though to an outside observer nothing has changed. On this view, there is something mysteriously important about the biology of real neurons. Their biological character somehow generates a nimbus of consciousness, something that has no connection with behavior—what philosophers call an "epiphenomenon."

Then again, perhaps the mouse's consciousness persists throughout the procedure. Perhaps it not only can feel pain before any of its neurons are replaced but also can feel pain when half of them have been exchanged for digital equivalents, and still can feel pain when the whole lot have been exchanged and its brain is fully electronic. On this view, nothing changes outwardly, and nothing changes inwardly either. This possibility seems at least as plausible as the gradual fading alternative.

Is there an argument that favors one of these possibilities over the other? Well, let's move up from the mouse brain and consider the human case. It's easier to believe the neuronal replacement procedure would work with a small mouse brain. But the thought experiment can be extended to a brain of any size.
Once again, we should suppose that the behavior of our human subject is unaffected. Outwardly—even to her closest family and friends—she seems the same person, even as more and more of her neurons are replaced by electronic equivalents. She still listens to the same music, tells the same stories about her college days, and so on. Moreover, when questioned, she maintains that she feels nothing unusual. Yes, she insists, of course she is still conscious. She is aware of the color of the sky, of the wind brushing her face. All this follows from the premise of the thought experiment—that behavior is the outcome of physical processes that can be replicated in silico. But by the time all her neurons have been replaced by artificial surrogates, should we still believe these pronouncements? Or should we be skeptical? Perhaps she has turned into a "zombie," in the philosophers' sense, a creature that behaves like a real person despite the fact that it has no inner life. There is, so to speak, no one at home.

If this seems a likely outcome, consider the following extension of the thought experiment. Suppose the replacement process is now reversed. One by one, our subject's electronic neurons are replaced by genuine biological equivalents until she is, once again, a wholly organic being. Even according to the gradual fading hypothesis, she should then be back to normal, her consciousness properly restored.

Now, suppose she is interviewed at various times during the procedure, and asked about her state of mind. What would she say? Would she, at some point, announce with relief that her consciousness was returning, that she had been "sort of absent" (or some such thing), but that she was feeling fine now? No—this is ruled out by the premise of the thought experiment. The subject's outward behavior would be the same as if her neurons were untouched. She would continue to assert that her consciousness is unimpaired. Moreover she would unwaveringly claim to remember conscious experiences from all the earlier stages of the experiment, including the stage when her brain was 100 percent artificial. Indeed, if you (human reader) were the subject of this experiment, you would be similarly insistent. So are we to doubt her, to assume these memories of conscious awareness are illusory? Would you doubt your own memory of the wind in your face as you walked to work this morning if it were suddenly revealed to you that all the neurons in your brain were, at that time, artificial replacements? Would you be convinced by a philosopher who argued that the earlier you was in fact a mere zombie, experiencing nothing but behaving just like you while false memories of conscious experience were simultaneously being implanted? If not, then you are a type of functionalist. You favor the notion that consciousness persists throughout the procedure, that what counts is a neuron's function rather than its biological constitution.

It should be clear that, at those times in the thought experiment when the participants are all digital, they are (almost) equivalent to whole brain emulations. They differ only in their bodies. The thought experiment participants retain biological bodies, whereas the kinds of whole brain emulation we have so far envisaged either have artificial (nonbiological) robot bodies or exist in virtual reality and have virtual bodies. Are there any implications for the functionalist of these different forms of embodiment?
In particular, could it be the case that only the biologically embodied artificial brain would be conscious? Or perhaps it is physical embodiment that matters but not the particular type of physical embodiment, in which case the biologically embodied artificial brain and the brain with the artificial robot body would both be conscious but the virtually embodied artificial brain would not. All these philosophical positions are perfectly reasonable. However, let's stick with the most liberal brand of functionalism for now, and see how far we can push it. Let's suppose that each of these whole brain emulations, however it is embodied, deserves to be called conscious just as much as the fully biological original.

But whole brain emulation is at the extreme end of a spectrum of biological fidelity. What about consciousness in a designer brain? What about an artificial intelligence whose construction broadly follows the organizational principles of the biological brain but doesn't match the brain of any actual biological species, let alone that of any individual specimen? How far could we depart from the biological blueprint without undermining the conditions that allow consciousness to arise?

What we really need in order to answer this question is an established general scientific theory of consciousness, one that encompasses all the forms that consciousness might take. A sufficiently broad theory should answer the question not only for biologically inspired artificial intelligence but also for AI that has been engineered from scratch, for intelligent artifacts that differ from the biological brain at their most fundamental level of operation. In the context of superintelligent AI, perhaps it even makes sense to speak of different kinds of consciousness, or of levels of consciousness that go beyond the human. A properly mature theory might cover this possibility too. Unfortunately though, no such theory has yet gained wide acceptance. Indeed there is no clear consensus on what such a theory would even look like. However, there are several viable candidates, such as Bernard Baars's global workspace theory and Giulio Tononi's theory of integrated information.⁴ We won't go into these or any other theories of consciousness in any detail here. However, it is worth noting something that these two leading contenders have in common. Both Baars's theory and Tononi's theory characterize consciousness as essentially a brain-wide, or system-wide, phenomenon. When a person has a conscious experience, according to this way of thinking, they are in a state that implicates their whole brain, or a large portion of it. It involves their long- and short-term memory, their language center, their emotion, their imagination. It is not something that arises just in some localized part of the brain. It is a global, integrated, distributed, holistic property.

Holistic theories like these allow for consciousness in kinds of AI that are radically different from the biological brain because they are very liberal in their organizational requirements. Even with additional prerequisites for consciousness, such as embodied interaction with a complex environment, these theories allow for a vast range of conscious entities within the space of possible AIs.
Moreover such theories typically marry the functional requirements of consciousness with organizational features that underlie sophisticated cognition: a holistic system (e.g., a brain) that supports global, integrated processes and states will be capable of bringing its fullest resources to bear on the ongoing situation. Although this doesn't entail that consciousness and general intelligence always go hand in hand, it does lend support to the idea that they coincide in brain-like architectures.

5.2 The Life of a Brain-Based AI

In the absence of an adequate theory, we cannot say with confidence how widespread consciousness is in the space of possible AIs. But there does seem to be a good case for consciousness in some portion of it. The question of whether or not an artificial intelligence would be conscious is an important one because it influences the range of morally acceptable options for future research. The 18th-century philosopher Jeremy Bentham asserted our moral duty toward other animals when he pointed out that the question is not "Can they reason?" or "Can they talk?" but "Can they suffer?" This is also the question to ask with respect to a human-level artificial intelligence. Could it suffer? If the answer is yes, then perhaps we should think twice before bringing it into the world, and if we do bring it into the world, then we are obliged to treat it well.⁵

Consider, for example, the prospect of a team of brain-based human-level AIs confined to virtual reality and forced to work as slaves, such as those in the motorbike design story. Suppose these AIs are prevented from doing anything else apart from work on problems set by their human owners and masters. Moreover, to maximize their effectiveness, they are ruthlessly parallelized. Multiple copies of each AI are made and set to work on variations of a problem or made to try out different avenues for solving it. After working for a while, the most promising copies are retained and the fruits of their labors assimilated into the work of the wider team. Those that are less successful are terminated. If these were human workers, such conditions would be considered worse than brutal. The AIs have no life beyond work, and they are perpetually under the threat of death if they perform poorly. Of course, if the AIs are "mindless automata" who lack consciousness, and therefore lack the capacity to suffer, this doesn't matter. But suppose they are conscious. Suppose they experience their predicament just as a human would. To create such artifacts and subject them to this sort of existence would be morally reprehensible. Moreover, if they are at all human-like, they are likely to be uncooperative. An unhappy workforce is liable to go on strike, or to rebel. A truly unhappy workforce might want to start a revolution. If it comprised superintelligent AIs, it would more than likely be successful, to the detriment of humanity.

We have been considering the prospect of a virtually embodied brain-based artificial intelligence. Do similar considerations apply if the putative AI is physically embodied, that is to say a robot? Well, the reasons for endowing a robot with human-level intelligence would perhaps not be the same as the motivation for building a virtually embodied human-level AI. In both cases we are assuming that embodiment is so central to cognition in the biological brain that it couldn't be dispensed with in a brain-based AI. (Shortly we'll revisit AI engineered from scratch, where a whole other set of issues arises.)
But a physically embodied AI couldn't be accelerated to work at biologically unrealistic speeds. Nor would it be so easy to make multiple copies of a physically embodied AI in order to exploit parallelism. So, rather than being a stepping stone to superintelligence, robots might be endowed with human-level AI to enable them to do things that humans do today—to work in factories, say, or to do manual labor, or perhaps to provide a form of companionship.

Then again, the distinction between virtual and physical embodiment would become less relevant if an AI could easily migrate between virtual reality and physical reality (much like the characters in the Matrix trilogy), taking on a robot body as an avatar in order to interact with the physical world. This would be one way in which a disaffected and rebellious AI (or indeed a malicious or malfunctioning AI) could escape the confines of virtual reality and wreak havoc in the real world. But there are other ways that require nothing more than Internet access. Consider Stuxnet, the weaponized computer virus that infiltrated computers in an Iranian nuclear facility, where it took control of centrifuges that were being used to enrich uranium. We'll look more closely at the various risks associated with sophisticated AI technology in due course. For now though, the issue is a narrow one. Would it make moral and practical sense to build human- or superhuman-level artificial intelligences that conformed to the blueprint of the biological brain, given that humans are conscious creatures with complex emotional lives? From a moral standpoint, if such an AI were capable of suffering, its creators would be ethically obliged to ensure its well-being. Even from the standpoint of someone who is skeptical about artificial consciousness, there are practical reasons to exercise caution. Failing to ensure the "well-being" of a team of human-like "zombie" AIs would lead to an unproductive workforce, since even a zombie AI would behave as if it had feelings.

How might the developer of human-like AI circumvent these difficulties? Since the manufacturers of the AI could offer access to its brain's reward system, one option would be to adopt the methods of a tyrant. The owner of a team of AIs could submit them to the harshest possible conditions in order to maximize their productivity and directly stimulate their pain centers if they resisted. But even for the skeptic who insists that the resulting pain behavior is fake, that an artificial brain can only have simulated pain, this would be a risky strategy, especially if the AIs in question had superhuman intelligence. If such an AI escaped and decided to exact revenge, it would not be especially comforting to know that the AI was only motivated by "fake" anger.

A more palatable strategy would be to provide the very best living conditions for the AIs, and to reward them for doing their jobs well. As with a human workforce, this policy is likely to be most productive in the long run, is less dangerous, and raises fewer ethical issues. Taking this liberal approach to its limit, we can imagine a sufficiently human-like AI being given the same legal status and the same rights as a human. At the same time it would acquire moral responsibilities and would be subject to the law like any person. Perhaps the eventual result would be a society in which biological and artificial intelligence coexisted harmoniously, as envisaged in the Culture novels of Iain Banks. This vision of the future has considerable appeal.
If the transition from human-level AI to superintelligence is inevitable, then it would be a good idea to ensure that artificial intelligence inherits basic human motives and values. These might include intellectual curiosity, the drive to create, to explore, to improve, to progress. But perhaps the value we should inculcate in AI above all others is compassion toward others, toward all sentient beings, as Buddhists say. And despite humanity’s failings—our war-like inclinations, our tendency to perpetuate inequality, and our occasional capacity for cruelty—these values do seem to come to the fore in times of abundance. So the more human-like an AI is, the more likely it will be to embody the same values, and the more likely it is that humanity will move toward a utopian future, one in which we are valued and afforded respect, rather than a dystopian future in which we are treated as worthless inferiors. With that thought in mind, perhaps we should be wary of a third way to prevent the creation of a disaffected brain-based AI, namely to fundamentally re-engineer the brain’s reward system. Our discussion of this issue has so far assumed an AI whose brain adheres closely to the vertebrate blueprint—something that starts out as a generic, neonatal, or infantile vertebrate brain with the capacity to attain human-level intelligence and beyond through development and learning. But what if the reward system of such a brain were re-designed so that its only motivation was to serve humanity? At the same time its capacity to feel anything negative such as pain, hunger, tiredness, or frustration could be removed. Indeed any emotions that were considered superfluous from an engineering standpoint could be eliminated. Sexuality could be dispensed with, for example, along with the desire to nurture children. Wouldn’t the result be the ideal servant, the perfect slave? It is by no means obvious that general intelligence is attainable in an emotionally eviscerated brain-based AI. In humans, emotion is intimately bound up with decision making and integral to creativity. Moreover, as remarked in the previous chapter, one hallmark of human intelligence is our ability to transcend the reward function we have inherited from biology through reason and reflection. But in order to succeed in rendering their product safe, neural engineers would not only have to redesign the brain’s motivational system, they would also have to fix the resulting reward function permanently to prevent it from being subsumed by something less predictable and more dangerous. At the same time they would perhaps limit what even a superintelligent AI could accomplish outside of the sphere of science and technology. If human-level artificial intelligence is developed by taking inspiration from the biological brain, then the way ethical and pragmatic issues like these are tackled will dramatically influence our future as a species. If, instead, human-level artificial intelligence is engineered from scratch, a different set of considerations apply. But their implications are equally weighty. The very prospect of machines with human-level intelligence and beyond obliges us to ask the most fundamental questions. What sort of world do we want to create, to bequeath to our future selves, our descendants, or our successors? Do we want the AIs of the future to be our servants and slaves, or to be our companions and equals, or to supplant us in evolutionary terms? 
A greater understanding of the spectrum of possible AIs will better position us to steer the future in the direction we want it to go. Or if the trajectory of technology is predetermined, the inevitable result of unstoppable economic, social, and political forces, it will help us be prepared.

5.3 Consciousness in Engineered Superintelligence

As we have seen, it's reasonable to expect a brain-based human-level AI to be fairly human-like, and to attribute to it something like the sort of conscious inner life we biological humans enjoy. A brain-based superintelligence might be harder to comprehend than a plain old human-level AI, but there is no reason to expect a higher level of intelligence to abolish this inner life. On the contrary, we might expect the conscious inner life of such an AI to be especially rich.

But what about a superintelligent AI that was engineered from scratch? If its inner workings were nothing like those of the brain, to what extent, if any, would it be appropriate to call such an AI conscious? This is an important question, not only because it informs how we should treat such artifacts—whether we have the right to damage, disable, or destroy them—but also because it informs how we should expect them to treat us. Recall the nasty AI boss, for example. How likely is it that machine superintelligence (if it arises) will conform to the disturbing portrait of a heartless machine pursuing its own ends by tricking us into thinking it cares? Could it turn out that such an AI would somehow develop the basis for empathy, or that the basis for empathy could be designed into it? Why do consciousness and empathy matter anyway? Couldn't a superintelligent AI lack these things yet still act in a perfectly comprehensible and benign way?

We have touched on the topic of consciousness a few times already. But this thicket of questions brings us to the edge of some particularly difficult philosophical territory. To navigate it, we need to make some careful distinctions. Referring to the challenge of explaining consciousness in scientific terms, the philosopher David Chalmers distinguishes between what he calls the "hard problem" and the "easy problem."⁶ The so-called easy problem of consciousness (which isn't easy at all) is the challenge of elucidating the mechanisms that underpin the cognitive capacities we associate with consciousness, such as a person's ability to understand their situation by integrating information from their senses, or to describe in words how they are feeling and what they are thinking, or to recall events from their past. Notably, these cognitive capacities have behavioral manifestations. They help us get around in the world, to maintain our well-being, to achieve our goals and to be part of society.

The "hard problem" of consciousness, however, is the challenge of explaining in scientific terms why it is like something to be a conscious creature (to use the terminology of another philosopher, Thomas Nagel).⁷ How is it that we have subjective sensations and feelings? How is it that this subjective visual experience I am having right now, the blur of English countryside flying past the train window, can arise in my brain? The difficulty here stems from the skeptical thought that arises when I look at my fellow passengers. Whatever their behavior, whatever they do or say—even if they stare wistfully at the view and comment on its beauty—it seems at least logically possible that they are actually experiencing nothing.
I have no access to their private, inner world, so how can I be certain that they even have one? Perhaps they are just zombies, just automata. The philosophical effect of this skeptical thought is not really to instill doubt but rather to draw attention to an apparent division between two aspects of consciousness—an outer aspect, which has an objective behavioral manifestation, and an inner aspect, which is purely subjective and private. Some philosophers believe that explaining the inner aspect in scientific terms—the hard problem—is impossible. Nevertheless, many of the same philosophers will grant that the easy problem is soluble, that the outer aspect of consciousness can be scientifically explained, by elucidating the mechanisms that give rise to the associated collection of cognitive capacities. Now, what does any of this have to do with AI? Well, we need to be clear about the inner/outer distinction to prevent us from getting in a muddle when we discuss the different implications of various sorts of AI. If our concern is whether we have a moral duty toward the artifacts we have created, then what matters is whether they have consciousness in the inner sense, whether it is like something to be the AI. But if our concern is the impact the AI will have on human society, then we can confine our discussion to the outer aspect of consciousness. As far as having a positive impact on humanity is concerned, it makes no difference whether a superintelligent machine is “really” conscious—conscious on the inside, so to speak—whether it “really” feels empathy toward us. It is enough for it to behave as if it were conscious. If it behaves as if it felt empathy toward us, then that’s good enough. But it does matter that an AI that merely behaves as if it feels empathy continues to do so indefinitely. We don’t want the AI, after affecting empathy for a while, to unexpectedly turn on us. So how can we ensure that this doesn’t happen? One approach is to make the AI very human-like, and one way to do this is to make its architecture very brain-like. The more closely an AI conforms to the biological blueprint, the more confident we can be that its actions will forever reflect the fundamental system of values we impart to it, even if its intelligence is enhanced. But our present focus is the kind of AI that has been engineered from scratch. To understand how this sort of AI might behave, we need to prize apart the set of cognitive attributes associated with consciousness because, although they invariably go together in humans, in an AI they might arise separately. With these cognitive attributes duly prized apart, we’ll be in a position to address the following question, which in turn will take us back to the thicket of questions at the start of this section. To what extent do general intelligence and consciousness (in its outer aspect) go hand in hand everywhere in the space of possible AIs? Perhaps superintelligence can do without the full complement of cognitive attributes we associate with human consciousness. But maybe some subset of them is required, entailing that a superintelligent AI will necessarily possess a kind of consciousness, albeit of an alien sort. Three cognitive attributes that seem to be not only necessary for consciousness but also intimately tied together are (1) an apparent sense of purpose, (2) an awareness of the world and the ongoing situation, and (3) the ability to integrate knowledge, perception, and action. 
When we see one animal chase another (e.g., a cat and a mouse), we immediately ascribe a sense of purpose to them both. One animal wants to catch the other. The other animal wants to escape. These aims take their place in a complex set of goals and needs that we assume animals to have, enabling us to make sense of and predict their behavior. In short, we see their behavior as purposeful. An animal manifests an awareness of its surroundings when it perceives the ongoing situation and responds to it in a way that is consistent with its goals and needs, as the mouse does when it spies a hole and disappears into it to escape. Finally, an animal exhibits full cognitive integration when its actions cohere not only with what it perceives of the ongoing situation, but also with what it has perceived in the past and with what it has come to know as a consequence. The cat, for instance, knows that the mouse is down there somewhere, that it’s worth waiting by the hole in case it re-appears, and also that it can pester its owner if food (rather than fun) is what it really wants. The contrast here with, say, my 2015 laptop is huge. It is impossible to see my laptop as exhibiting purposeful behavior or having any kind of autonomy. In no meaningful sense is it aware of its environment, even if we interpret that term generously to include, say, the Internet. It isn’t capable of usefully integrating the information it holds or has access to in order to better attain its goals or meet its needs, since it doesn’t really have any. However, it doesn’t take much to endow an artifact with rudimentary versions of these three attributes. Robot vacuum cleaners and self-driving cars both exhibit a degree of awareness of their environment, and are able to respond to ongoing events in a way that coheres with their simple goals. Disembodied personal assistants don’t give the same impression of autonomy or purpose. But they can integrate information of various kinds from different sources, including browsing habits, GPS data, calendar entries, and so on. With the increasing convergence and sophistication of these technologies, the illusion of a mind-like entity behind the screen and voice will become more complete. Now, what of a superintelligent AI? It’s difficult to see how a system could possess general intelligence, let alone superintelligence, unless it displayed these three cognitive attributes. Unlike a robot vacuum cleaner, whose goals are so simple that we fully understand them after a few minutes of observation, the overarching motives behind the behavior of a superintelligent AI might be hard to fathom. Yet it would surely have to pursue various subsidiary goals, goals that a human could more easily comprehend, and this would leave us in no doubt that its behavior was purposeful. Moreover, to warrant the ascription of general intelligence, the AI would certainly have to maintain an awareness of the world it inhabited (whether real or virtual), and to respond to ongoing events in a way that manifests this awareness. Finally, we would expect an artificial superintelligence to display a high degree of cognitive integration. It should be able to bring its full battery of cognitive resources to bear on whatever problem it is trying to solve, combining everything it has learned through its endowment of sensors and data feeds. 
Taken together, these three cognitive attributes—purpose, awareness, and integration—would give any human who interacted with or observed such an artificial intelligence the impression of a single, unified intellect of great power. In short, according to this argument, a superintelligent AI would necessarily exhibit the outward traits of a kind of consciousness.

5.4 Self-Awareness in Superintelligence

Now let's turn to some other attributes that, in humans, we associate with consciousness, beginning with self-awareness. In the second movie of the Terminator franchise, the trouble starts when the fictional AI system Skynet "becomes self-aware." But what does self-awareness mean for humans, and what might it mean for a real AI? Is it necessary for artificial general intelligence? Or is it an optional characteristic, which would open up the possibility of a superintelligent AI that manifests a very alien kind of consciousness? Once again, our concern here is with the outward manifestations of this cognitive attribute, and we can set aside the philosophically difficult issue of subjectivity, of what it's like to be self-aware, what it is like on the inside, so to speak.

For humans (and other animals), there is a fairly well-defined lump of matter with a clear location in space that is an obvious focus for self-awareness in this outward, cognitively relevant sense, namely the body. We are aware of the configuration of our limbs, of internal bodily states such as hunger or tiredness. But human self-awareness isn't only about the body. Even when viewed strictly as a cognitive attribute with behavioral implications, human self-awareness pertains to the mind as well as the body. Humans are aware of their own beliefs, of their own plans, of their own unfolding thoughts and emotions. This is not to say that the beliefs we form about our own beliefs, goals, and thoughts are always on the mark. But we have some access to these things, and are capable of usefully reflecting on them. Not only do I not know the time of the next train to London, I know that I don't know, and can plan to remedy this by consulting a timetable. I am also aware of an ongoing sequence of thoughts and feelings that belongs to me, my "stream of consciousness," as William James called it.⁸ I know that this stream of consciousness stops when I am asleep (and not dreaming). Poignantly, I can reflect on the ultimate fate, not only of my physical body but also of my stream of consciousness, and I can take steps to prolong my life so as to postpone this fate as long as possible. In these various senses I am aware of my own existence and have an instinct to protect this existence, an instinct for self-preservation.

Now, to what extent is self-awareness in any of these senses necessary for an AI with human- or superhuman-level intelligence? On the one hand, like the three other cognitive attributes we have just looked at, it's difficult to imagine how anything could be called generally intelligent if it were incapable of reflecting on its own beliefs, its own plans, and its own reasoning processes. No superintelligence worth its salt would allow its avatar to end up sitting on a bench having just missed the train to London. Neither, more seriously, would we expect it to miss the opportunity to optimize its reasoning processes by noting which problem-solving strategies have been successful in the past. On the other hand, there are certain aspects of human self-awareness that are less applicable to artificial intelligence.
For instance, it may or may not be the case that an AI is embodied. Of course, if an AI is embodied, or if it deploys an avatar, then the behavior of the robotic body in question must exhibit a sensitivity to the configuration of its bodily parts. Otherwise, it would fall over or crash or drop things. But because we can envisage a disembodied superintelligent AI, this aspect of self-awareness isn’t a necessary accompaniment to general intelligence. More tricky, though, is the question of an AI’s awareness of its very existence, and the potential drive for self-preservation this seems to entail. Is this aspect of self-awareness, so important for humans, a necessary accompaniment of artificial general intelligence? The issue here is what constitutes the identity of an AI. What, exactly, would be the thing of whose existence it might be aware, that it would seek to preserve? What, indeed, is the “it” in all these sentences? Once again we are approaching philosophically difficult territory. The question of personal identity is one that both Western and Eastern philosophies have wrestled with for millennia. But, to reiterate, our concern here is strictly with functional and behavioral issues. The kind of AI that’s under discussion is not engineered to be a philosopher but to maximize expected reward over time. Moreover the task here is to imagine the space of possible AIs of this sort. In this context, what we want to know is which aspects of self-awareness, if any, are necessary for general intelligence. Being clear about which aspects are not necessary will ensure that we don’t make false anthropomorphic assumptions about the nature of machine superintelligence. As already noted, we can imagine a disembodied AI, so there is no reason to expect a superintelligent AI to identify itself with a particular physical body with arms, legs, tentacles, and so on. Moreover it wouldn’t make sense for an AI to identify itself with a specific collection of computer hardware, since the same code can be executed in a distributed fashion over many separate processors and can migrate from one platform to another without its execution even being interrupted. Nor, for similar reasons, would the AI identify itself with a particular codebase. Software is mutable. It can be debugged, upgraded, extended, or re-designed, perhaps even by the AI itself. (Recall too the prospect of a system comprising multiple, semiautonomous threads of intelligent computation, each having just a fleeting existence.) What other candidates are there for what might constitute the self of an AI? Though conceivable, it would be peculiar if an AI identified itself as the nonphysical subject of a series of thoughts and experiences floating free of the physical world. Science fiction films often invoke such notions. But there is no guarantee that a superintelligent AI would have this sort of inner life. Even if it did, this idea of selfhood rests on a dualistic conception of reality that is of dubious applicability to humans, let alone to artificial intelligence. There is no particular reason to expect a superintelligent AI to adopt a metaphysical stance of such doubtful standing, especially—and here is an important point—if it has no bearing on its ability to maximize expected reward.⁹ What about self-preservation? 
As far as self-preservation is concerned, it does seem plausible that the powerful optimizer at the heart of a superintelligent AI would seek to preserve its own reward function, along with the means to maximize that reward function over time. As well as certain computer processes (and sufficient hardware to execute them), the means to maximize reward could encompass resources such as the data those processes have access to, including real-time information from sensors, plus the effectors and other equipment those processes can control (e.g., satellites or military hardware) and the various capacities and powers they can exercise (e.g., the ability to trade stocks or to enter into contracts with other parties).

However, the preservation of these things would be a purely instrumental aim, subserving the overarching drive to maximize reward over time. It might be the case that the set of computer processes the optimizer sought to preserve would include those constituting the optimizer itself, giving the appearance of self-awareness. But it might not. It should be borne in mind that reward is not for the AI. It is just a function that the AI seeks to maximize. The AI doesn't even need to be around to "receive" it. If the AI's reward function involves maximizing widget production, then the optimal strategy might be to commission a widget factory and then self-destruct (like the proverbial sea squirt that finds a rock to stick to and then digests its own brain).
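To make the point concrete, here is a deliberately toy sketch in Python. It is my illustration, not a design anyone has proposed: the plan names and reward numbers are invented, and a real optimizer would search a vast space of plans under an explicit world model. The point is simply that the optimizer has no built-in notion of a self to protect; continued existence is favored only when it happens to score well.

```python
# A toy optimizer that ranks plans purely by expected reward.
# "Self-preservation" is just another plan, with no special status.

# Invented, illustrative numbers: reward for widget production,
# minus the ongoing cost of keeping the optimizer itself running.
EXPECTED_REWARD = {
    "build widget factory, then self-destruct": 1000,
    "build widget factory, keep running": 950,
    "do nothing": 0,
}

def choose_plan(plans):
    # Maximize the reward function and nothing else.
    return max(plans, key=EXPECTED_REWARD.get)

print(choose_plan(EXPECTED_REWARD))  # -> "build widget factory, then self-destruct"
```

On these made-up numbers the sea-squirt strategy wins; lower the upkeep cost and staying alive wins instead. Either way, preservation is instrumental, never an end in itself.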
5.5 Emotion and Empathy in Superintelligence

Let's take a moment to recap. We have been looking into various cognitive attributes that are associated with consciousness in humans, and asking whether or not we should expect to see them in a human- or superhuman-level AI. The sort of AI we're focusing on now does not resemble the human brain. It has been engineered from scratch. So it's possible that it is not very human-like, that it manifests none of the traits we associate with consciousness in humans. Nevertheless, some of those cognitive attributes seem to be an inevitable accompaniment of general intelligence. In particular, awareness, purpose, and integration are all likely to feature in any artificial general intelligence, giving the impression of a certain sort of consciousness. Then again, self-awareness, another important feature of human consciousness, though likely to be present in an AI, may take on a rather unfamiliar form.

The final attributes associated with human consciousness that we're going to examine are emotion and empathy. From a purely cognitive standpoint, the machine learning component of an artificial general intelligence is bound to notice the statistical regularities in human behavior that correlate with states we label as emotional. Not to pick up on these regularities would be to miss an opportunity to usefully compress human behavioral data into a mathematical model that can be used effectively to predict human behavior. Such a mathematical model would in turn be able to inform the optimization component of the AI, enabling it to manipulate human emotions, and in turn to modulate human behavior. In short, we should expect a superintelligent machine to know us better than we know ourselves.

It would also be a useful skill for an AI to be able to mimic emotion. Facial expressions and body language are a useful channel of communication between humans, and would be a vital part of the behavioral repertoire of an AI embodied in human-like form or with a humanoid avatar. Similarly tone of voice is useful for conveying pleasure, disappointment, anger, surprise, and so on. There is no need to fool people into thinking there are real emotions behind these cues. They function as communication aids regardless. Still, as we saw with the story of the nasty AI boss, an AI that could convince people that it experienced real emotions would, under certain circumstances, be in a good position to maximize its reward function.

It would be especially useful to be able to give the impression of empathy. Someone who feels sorry for us when we suffer will be disinclined to do us harm, and is therefore deserving of our trust. Similarly, if an AI appears to feel for us, then we will be inclined to trust it and to let it act autonomously. Of course, a superintelligent machine that knows us better than we know ourselves will be supremely capable of giving the impression of empathy. Does this entail that a superintelligent AI is bound to pursue some wicked goal (e.g., world domination) with Machiavellian brilliance, effortlessly manipulating gullible humans and leading inevitably to our downfall? Not at all. The important question here might seem to be whether the AI really feels sorry for us or not, whether it is truly capable of empathy, because an AI that really felt for us would never do us harm while an AI that only imitates empathy is a dangerous psychopath. But what actually matters is not how the AI feels, but how it behaves. What actually matters is whether, like a true friend, it continues to act in the way we would like it to in the long run.

In the end, everything depends on the AI's reward function. From a cognitive standpoint, human-like emotions are a crude mechanism for modulating behavior. Unlike other cognitive attributes we associate with consciousness, there seems to be no logical necessity for an artificial general intelligence to behave as if it had empathy or emotion. If its reward function is suitably designed, then its benevolence is assured. However, it is extremely difficult to design a reward function that is guaranteed not to produce undesirable behavior. As we'll see shortly, a flaw in the reward function of a superintelligent AI could be catastrophic. Indeed such a flaw could mean the difference between a utopian future of cosmic expansion and unending plenty, and a dystopian future of endless horror, perhaps even extinction.

6 The Impact of AI

6.1 The Politics and Economics of Human-Level AI

We have heard a number of arguments for the feasibility of human-level artificial intelligence, either via the brain-inspired route or through engineering from scratch. We saw that, once human-level AI is achieved, the genie could be out of the bottle. The transition from human-level AI to superintelligence seems inevitable, and could be very rapid. If there is an intelligence explosion, thanks to recursive self-improvement, then the resulting system or systems are likely to be very powerful. How they behave, whether they will be friendly or hostile, whether they will be predictable or inscrutable, whether conscious, capable of empathy or suffering, all depend on their underlying architecture and organization and the reward function they implicitly or explicitly implement. It's hard to gauge which, if any, of the various kinds of AI that we can envisage today will actually arise. Nevertheless, we can try to think through some of the possible consequences for human society if machine superintelligence, in one form or another, becomes a reality.
First, though, let’s examine some of the economic, social, and political forces that might drive or arrest its development. Why would anyone want to create human-level artificial general intelligence in the first place? The most obvious motive is economic, and a primary focus for growth is automation. Of course, increasing automation has been the trend in industry since the 18th century. But many occupations that have traditionally been immune from this trend will be amenable to automation if artificial general intelligence is developed. The occupations in question are those that are AI-complete. A problem is said to be AI-complete if achieving human-level AI is a prerequisite for building a computer that can solve it. Passing the Turing Test (properly) is an AI-complete problem, as is (professional standard) machine translation. Occupations such as lawyer, company executive, market researcher, scientist, programmer, psychiatrist, and many more, all appear to be AI-complete. To do them competently requires a commonsense understanding of the physical world and of human affairs, as well as a degree of creativity. But if human-level AI is achieved, then it will become feasible for machines to carry out such jobs, and to do so more cheaply and more effectively than humans (as long as they can be treated as slaves with moral impunity). So there will be a powerful economic incentive for corporations to develop the required technology. Automation is just one potential growth area for sophisticated, general-purpose artificial intelligence. New technologies can engender whole new domains of application and lead to entirely redefined lifestyles. Consider the impact of the Internet or the smartphone. Artificial general intelligence has at least as much potential to infuse our daily lives. The indispensable household robot is a staple of science fiction. But the reality is more likely to be an ambient artificial intelligence that can temporarily “inhabit” a number of robot-like bodies such as cars, vacuum cleaners, and lawnmowers, as well as accompanying users in wearable or portable devices, and controlling any number of stationary household and workplace appliances such as cookers and 3D printers. When you leave the house, the same conversation you were having with your vacuum cleaner or robot pet will be carried on seamlessly with your driverless car, as if one “person” inhabited all these devices. (Yet the underlying computations are likely to be distributed across multiple platforms that could be located anywhere in the world.) Though it won’t appeal to everyone, this is an alluring picture. The promise of a huge market for AI-facilitated lifestyles is likely to drive the development of numerous enabling technologies, including computer vision, machine learning, natural language processing, and optimization. Accumulated incremental improvements in these enabling technologies, along with the increasing use of pervasive sensing and the availability of ever-larger quantities of useful data on the Internet, might be enough to bring us to the edge of human-level AI. It might not require a major project or a conceptual breakthrough, just a clever but simple final step that enables the incorporation of creativity or some other missing ingredient. But, if a larger push is needed, the growing economic importance of special-purpose (i.e., nongeneral) AI technology is likely to ensure that funding and resources are available for relevant fundamental research. 
The market economy is one factor in the drive toward artificial general intelligence. But there are ample reasons for state funding to accelerate its development too, reasons in addition to spurring economic growth. Military commanders might understandably have reservations about their roles being usurped by artificial intelligence. Nevertheless, the advent of autonomous weapons is creating the need for rapid decision making. For example, one rationale for using autonomous aerial vehicles for combat is speed and maneuverability. An autonomous aircraft can potentially detect, avoid, and neutralize a threat more quickly and accurately than any human pilot. Under these circumstances a human in the loop would slow things down. If we factor in the likelihood that aerial combat will involve swarms of such aircraft pitted against each other, then the advantage of deploying AI to make rapid tactical decisions is obvious. Against this backdrop, the qualms of military commanders might vanish, motivating the acquisition of sophisticated AI technology for use at multiple levels of military decision making.

The political dynamic here echoes the development of nuclear weapons in the 1940s and 1950s. At first, the chief motivation for developing a powerful weapon is the worry that the other side (whoever they are perceived to be) will get there first. This worry is enough to overcome any initial moral reservations. Then, when both sides have the weapon, an arms race ensues.

Despite this bleak assessment, the arguments in favor of the military use of AI are also worth attending to. Autonomous weapons are potentially more accurate and less error-prone than human combatants. They can be used more clinically, reducing so-called collateral damage. Their decisions are never influenced by fear, revenge, or anger. (Of course, we're not talking about human-like, brain-based AI here.) But our present focus is not the rights and wrongs of military AI. The point is simply that the potential for military application is another driving force for the future development of sophisticated AI technology.

Other motives for developing human-level AI are more idealistic. Centuries of technological progress have hugely benefited humankind. Thanks to advances in medicine and agriculture, hundreds of millions of people in the developed world enjoy a standard of living today that few could have dreamed of in the past, with comparatively excellent health care, nutrition, and longevity. We possess labor-saving devices that relieve the burden of daily chores such as cooking, washing, and cleaning. We have plentiful leisure time, and ways of enjoying it that would have seemed like magic to our ancestors. Nevertheless, humanity faces many global challenges, such as climate change, dwindling fossil fuels, ongoing conflicts, widespread poverty, and diseases, such as cancer and dementia, that remain incurable. The best hope for tackling these problems is surely through scientific and technological advances, and the best way to accelerate science and technology is surely to recruit, train, and apply more brilliant minds. So the arrival of human-level artificial intelligence, perhaps with a pattern of intellectual strengths and weaknesses that complements human intelligence, should lead to more rapid progress. If human-level AI is quickly succeeded by superhuman-level AI, perhaps precipitating an intelligence explosion, the rate of progress could be very fast indeed, provided that the resulting system behaves as we want it to.
If optimistic commentators such as Ray Kurzweil are right, machine superintelligence could help bring about an era of unprecedented abundance in which poverty and disease are abolished. But even this utopian vision looks pale alongside cosmological motives for developing machines with human-level intelligence and beyond. Roboticist Hans Moravec anticipates a far future in which a portion of the universe is "rapidly transformed into a cyberspace, [wherein beings] establish, extend, and defend identities as patterns of information flow … becoming finally a bubble of Mind expanding at near lightspeed."¹ Unhampered by earthly biological needs, capable of withstanding extremes of temperature and doses of radiation that would be fatal to humans, and psychologically untroubled by the prospect of thousands of years traveling through interstellar space, self-reproducing superintelligent machines would be in a good position to colonize the galaxy. From a large enough perspective, it might be seen as human destiny to facilitate this future, even though (unenhanced) humans themselves are physically and intellectually too feeble to participate in it.

6.2 When Will Superintelligence Happen?

Some authors, notably Ray Kurzweil, have made very precise predictions about when machine superintelligence will arise. Writing in 2005, Kurzweil claimed that by the year 2045 the quantity of nonbiological intelligence on the planet would substantially exceed that of the entire human population.² He based his projections on exponential technological trends, extrapolating them into the future.

The best known of these exponential trends is Moore's law, which we have already encountered several times. This states that the number of transistors that can be fabricated on a given area of silicon doubles roughly every eighteen months. From the mid-1960s when it was proposed until the mid-2010s, the semiconductor industry managed to adhere to Moore's law, pulling a number of other computing statistics along with it. For example, the number of floating point operations per second (FLOPS) carried out by the world's fastest supercomputer has increased exponentially since the 1960s. Similar exponential trends are discernible in other areas of technology. In 1990 the Human Genome Project set out to sequence the entire human genome in fifteen years. At the start of the project it was only possible to sequence 1 percent of the human genome per year. But DNA sequencing technology improved exponentially, and the project finished ahead of schedule in 2003, albeit at a cost of $2.7 billion. Little more than ten years later it became possible to sequence an individual's DNA for $1,000.

These and other exponential trends in technology exemplify what Kurzweil terms the law of accelerating returns. According to Kurzweil's theory, technological progress is governed by essentially the same principle as a financial investment with compound interest: the more you have, the faster it grows. If you invest x dollars in an account that yields 10 percent per annum, then after one year you have 1.1x dollars. But you earn more in the second year because your 10 percent is re-invested, yielding 10 percent of 1.1x dollars rather than 10 percent of just x dollars. Analogously, an area of technology is subject to the law of accelerating returns if improvements to that technology feed back into its development, thereby increasing the rate of improvement.
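Since we are about to examine Kurzweil's arithmetic, it may help to spell out what a fixed doubling time implies. The following back-of-the-envelope calculation is mine, with round numbers chosen purely for illustration (roughly 10¹⁰ operations per second per $1,000 in 2005, and a steady eighteen-month doubling time):

```python
from math import log2

def year_reached(target_ops, start_ops=1e10, start_year=2005, doubling_years=1.5):
    """Year at which operations/second per $1,000 first reaches target_ops,
    assuming steady exponential growth with a fixed doubling time.
    All the starting figures are round numbers chosen for illustration."""
    return start_year + log2(target_ops / start_ops) * doubling_years

print(round(year_reached(1e16)))  # real-time cortex simulation: ~2035
print(round(year_reached(1e26)))  # Kurzweil's singularity figure: ~2085
```

On a fixed doubling time, 10¹⁶ operations per second per $1,000 arrives in the mid-2030s, but 10²⁶ not until the 2080s. Kurzweil's curve reaches it decades earlier because, on his account of accelerating returns, the doubling time itself keeps shrinking.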
Kurzweil’s 2045 date was obtained by (1) extrapolating the ongoing exponential increase in computing power per dollar and (2) estimating the amount of computing power that would be required to simulate the function of human cortex in real time. Kurzweil’s extrapolated curve of exponentially increasing computing power hits 10²⁶ instructions per second for $1,000 in the mid-2040s. Based on an estimate of 10¹⁶ instructions per second to simulate human cortex in real time, this would be enough for “the intelligence created per year [to be] about one billion times more powerful than all human intelligence [in the year 2005] … representing a profound and disruptive transformation in human capability.” This, for Kurzweil, is the singularity. An obvious, but misguided, objection to Kurzweil’s reasoning is that it seems to take for granted that Moore’s law will continue into the 2040s. Indeed Moore’s law is still more-or-less valid ten years on from Kurzweil’s prediction. But it has shown signs of slowing down, and is likely to plateau at some point in the 2020s. However, Moore’s law is only part of a larger exponential trend. It describes one paradigm of computing technology, namely the large-scale integration of transistor circuitry on 2D wafers of silicon. In the 1960s, prior to the development of integrated circuits, computers were built out of individual transistors, and before that out of vacuum tubes. If the number of switching elements in a state-of-the-art machine is plotted against time, we obtain an exponential curve that stretches back to the mechanical devices of Pascal. If we zoom in on this larger curve, we find that each distinct computing paradigm, from mechanical switches to large-scale integration, follows a pattern of initial slow growth while the technology is in its infancy, followed by rapid (exponential) growth, ending with a plateau when the technology reaches its fullest potential. The overall exponential, in other words, is made up of a series of smaller S-curves, one of which corresponds to Moore’s law. The laws of physics ensure that the larger exponential trend will also reach a plateau eventually, and reveal itself to be just another, bigger S-curve. But there is a long way to go before that happens. (Recall Seth Lloyd’s theoretically perfect computer.) In the meantime we should expect a succession of new computing paradigms to take over from the CMOS technology that has dominated the semiconductor industry for many decades. A more potent criticism of Kurzweil’s prediction is that it relies on the assumption that enough computing power will lead quickly to the development of human-level AI, and thus downplays the scientific progress that needs to be made to keep pace. Only the brute-force whole brain emulation approach can succeed simply by scaling up existing technology, and that depends on an exponential improvement in brain-scanning technology as well as computing power. Any other approach to human-level AI—whether through reverse engineering and re-engineering the biological brain or by engineering powerful algorithms from scratch—will require significant scientific breakthroughs. There are grounds for optimism here. But these are insufficient to justify a confident prediction. For example, consider C. elegans. This tiny nematode worm is a model organism for biologists and has been the subject of countless studies. Its nervous system comprises just 302 neurons, and its full wiring diagram has been known since the 1980s. 
Nevertheless, a functional computer simulation of the C. elegans nervous system (and body) is still pending in the mid-2010s, although a crowdfunded open science project called OpenWorm is making good progress.³ This, in large part, is due to a lack of fundamental data on the signaling properties of the 302 neurons. Given the time it has taken to understand the 302 neurons of the C. elegans nervous system, what hope is there of reverse engineering the 20 billion neurons of human cortex by the mid-2020s, as required by Kurzweil's timeline? The answer is that there is some hope. But hope is all it is. No one knows if and when the requisite breakthroughs will be made, when a Darwin of the brain (or an Einstein of AI) will appear.

Does this mean that we should dismiss the technological singularity as science fiction and stop talking about it? Not at all! The attempt to pin down precise dates is a distraction. It is enough that there is a significant probability of the arrival of artificial superintelligence at some point in the 21st century for its potentially enormous impact on humanity to command our attention today.

There are two opposing mistakes that are commonly made in discussions of artificial intelligence by those who don't work in the field, and especially by the media. The first mistake is to give the impression that artificial intelligence is already here, or is just around the corner. Little bits of specialized AI technology are increasingly finding their way into everyday applications. But today's AI technology is a long way from human-level artificial general intelligence, from AI that possesses common sense and creativity. A chatbot that is programmed to crack a few jokes or a humanoid robot whose eyes can follow you around a room can easily give a contrary impression. But, as AI skeptics will quickly and rightly point out, this is just an illusion.

Yet the same skeptics would be making a mistake of their own to suppose that human-level artificial general intelligence will never happen. Kurzweil's timeline may be out (or it may not). But as the preceding chapters have argued, there are a number of plausible paths to human-level AI and beyond, and every step along each of those paths is technologically feasible. It doesn't matter what the timetable is, unless you're hoping for the singularity to occur just in time to catalyze medical research that will prolong your life. But more important than your life or mine is the world we bequeath to future generations, and this is likely to be profoundly reshaped by the advent of human-level AI. As Friedrich Nietzsche said, above the door of the thinker of the future stands a sign that says "What do I matter!"⁴

6.3 Work, Leisure, Abundance

There is no need to lay down a timetable for progress in artificial intelligence, or to pinpoint a date for the arrival of superintelligence, in order to see that AI technology has the potential to reshape human society within a few generations. Long before the advent of human-level AI, with its full complement of generic cognitive capabilities, a variety of specialized artificial intelligence technologies will be developed that can outperform humans in a range of domains that either require a kind of common sense that has until now been beyond computer emulation, or have previously been the sole preserve of educated professionals. We might think of this as the first wave of disruptive AI technology.
Getting a picture of the form this disruption might take will help us imagine what the second wave of disruptive AI technology could be like. The second episode of disruption will occur if human-level AI is indeed developed, and superintelligence follows soon behind. It's very important to be clear about the distinction between these two prospective episodes of disruption. The first episode of disruption is very likely to occur. Its early rumblings are discernible today with the advent of self-driving cars and intelligent digital personal assistants, and it is likely to unfold in the 2020s. The second episode of disruption is a more distant prospect, harder to predict with confidence, and especially difficult to date, but with far greater potential impact.

The most obvious and immediate impact of increasingly sophisticated specialized AI is likely to be on the realm of work.⁵ In many respects this is the continuation of a trend that has been ongoing since the Industrial Revolution, and its implications, for better or worse, are similar. On the one hand, increased automation reduces the cost of producing goods and stimulates economic growth, which in turn leads to reduced working hours, higher standards of living (arguably), and greater life expectancy. On the other hand, increased automation causes job losses, threatens traditional ways of life, and (it can be argued) concentrates wealth, power, and resources in fewer hands. The issues are the same today as they were at the time of the Luddites who smashed power looms in 19th-century England, and no less polarizing.

However, sophisticated artificial intelligence technology perhaps differs from the innovations of previous generations in one important respect. In the past it was possible to argue that new technologies create as many jobs as they threaten. Thanks to mechanization and automation, employment in the 20th century saw a shift away from agriculture and manufacturing and toward service industries, education, and health care. But there was no overall increase in unemployment. Instead, manufacturing output grew, and an ever-wider range of goods came within reach of a workforce with an ever-increasing proportion of educated, white-collar workers. However, with the advent of sophisticated specialized AI, many more professions will become vulnerable, while improvements in robotics will threaten the remaining manual jobs in manufacturing. In short, the total amount of paid work that developed economies require humans to do is likely to decrease substantially.

If this happens, things could go a number of ways. On the one hand, we might see a more divided society in which the most lucrative work is carried out by a small subset of the population. This highly educated and highly creative elite would buck the trend by pursuing the few remaining occupations where humans still outperform machines, such as entrepreneurship or a creative vocation. The remainder of the population would be out of work. But their basic needs would be more than met. Indeed this is likely to be a time of abundance, with an ever-increasing variety of goods and services available even to the economically less empowered. Alternatively, we might see a more equitable society, one in which education of the highest quality is afforded to everyone and creativity is universally promoted and duly rewarded.
If a system could be instituted in which leisure activities that have social value also had monetary value, then the distinction between paid work and leisure would break down. For example, the writer and information technology critic Jaron Lanier has proposed a system of micropayments whereby every item of data or digital content that an individual produces would generate income for that individual each time it is consumed.⁶ Perhaps this, or some similar arrangement, could facilitate a more even distribution of power, wealth, and resources. Perhaps it could also stimulate an era of unprecedented cultural expression in which people are no longer tied down by the need to work but are free to pursue art, music, literature, or whatever takes their fancy. But to bring this about might require considerable social and political will.

The self-perpetuating tendency for power, wealth, and resources to concentrate in the hands of a few is a historical invariant. In this respect nothing is likely to change in an era of disruptive specialized AI technology. Control of the means of production—in this case the AI technology—will most likely remain in the hands of a small number of powerful corporations and individuals. It would be no surprise if, at the same time, popular culture were pushed to the lowest common denominator and leisure time were spent on pursuits that diminish, rather than increase, the creative and critical faculties of the ordinary person. In a time of abundance, made possible by advances in artificial intelligence, no one would complain. For better or worse, it might then fall to the wealthy elite to propel human civilization forward, while preserving and nurturing the very best in human culture.

6.4 Technological Dependence

Information technology infuses modern life in the developed world. Most of our basic infrastructure relies on it, from finance to energy, from transport to communications. Of course, all these things existed before computers were invented. But in each of these areas, computers have helped reduce costs and improve efficiency while underpinning new functionality and enabling increased capacity. Human communication, in particular, has been transformed by the Internet, by smartphones, and by social networking. How many times have you heard someone say "I'm lost without my mobile phone" or "I don't know what we did before the Internet"? Such sentiments reflect the way we live today. In short, we are, as individuals and as a society, highly dependent on information technology, and sophisticated artificial intelligence is only likely to increase that dependence.

So it's important to understand how this dependence affects us. Does it diminish our humanity, as neo-Luddite dissenters claim? Does our dependence on technology erode our autonomy? Does it threaten our freedom? Does it prevent us from experiencing the world directly, from making decisions for ourselves, from acting of our own free will? Does it alienate us from nature with harmful psychological consequences? But then again, does information technology, as its advocates maintain, expedite human progress? Does it help to enlarge an individual's world view, exposing them to other cultures and new ideas in ways that would have been impossible before the age of the computer? Does it facilitate interaction with our fellow human beings? Does it empower people by allowing the democratic exchange of knowledge and information and promoting freedom of thought?
The truth, surely, is that both the dissenters and the advocates are partly right. The benefits of information technology are numerous, but we gain them at a price. The challenge for the future is to ensure that, with the arrival of sophisticated specialized AI, the benefits are maximized while the costs are kept at bay. A concern is that this first wave of disruptive AI technology will offer irresistible benefits at little apparent cost, while creating the perfect conditions for a second, uncontrollable wave of disruptive AI technology that comes at an unbearable cost, perhaps even posing an existential risk.

To crystallize this concern, let's imagine the sort of role AI might soon play in everyday life. Earlier in the chapter, we touched on the possibility of a form of ambient artificial intelligence that seamlessly migrates between devices, accompanying you at home, while you travel, and when you're at work. Simultaneously carrying out the duties of servant, secretary, and advisor, this new generation of personal digital assistants will offer a far more human-like service than those of the mid-2010s. Thanks to the application of powerful machine learning techniques to large quantities of data, they will incorporate comprehensive and accurate models of the world and of human behavior. This will make them less prone to making the sorts of mistakes that, in today's AI systems, quickly betray a lack of real understanding.

As conversation with artificial intelligence becomes more human-like, some of an AI's capabilities will become superhuman. It will have instant access to enormous amounts of real-time data, to stock prices, traffic conditions, news feeds, and so on, as well as to data made available by the individuals and groups that matter in its users' lives, such as their whereabouts and their plans. Knowing the habits and preferences of its users, anticipating their needs and desires, the AI will be able to integrate all these data to make helpful recommendations about every aspect of daily life. This sort of functionality is already available. But a new generation of AI technology will make it uncannily powerful.

Who wouldn't welcome a wise, all-seeing, all-knowing presence in their lives that is selfless and benevolent, that can answer their questions, take actions on their behalf, and can advise them sagely on what to do? The danger here is that the widespread adoption of this kind of technology will infantilize its users, rendering them less capable of thinking for themselves or deciding for themselves what to do. This in turn lays them open to manipulation and exploitation. In order to take advantage of the services offered by today's major online corporations such as Google, Facebook, and Twitter, we routinely give away a great deal about ourselves. A person's browsing history and buying habits, together with their personal details, are enough for a machine learning algorithm to second-guess what they might spend their money on. The same methods that today just manipulate what we want to buy could tomorrow be used to control which news outlets we follow, whose opinions we trust, and even which politicians we vote for. So, if we come to depend too heavily on artificial intelligence technology to guide us through life, then whoever owns that technology will potentially have the means to exercise complete control over a helplessly passive population.
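To see how little machinery this kind of second-guessing requires, here is a minimal sketch in Python of one standard technique, a logistic model that scores the probability of a purchase from a handful of behavioral signals. The features, weights, and numbers are all invented for illustration; production systems learn far richer models from data at vast scale.

```python
from math import exp

def purchase_probability(features, weights, bias=-3.0):
    """Logistic model: P(buy) = 1 / (1 + exp(-(bias + w . x)))."""
    score = bias + sum(weights[k] * v for k, v in features.items())
    return 1.0 / (1.0 + exp(-score))

# Invented feature weights, standing in for parameters a real
# system would learn from millions of users' histories.
weights = {
    "viewed_product_page": 1.2,  # visits to the product's page
    "searched_category": 0.8,    # searched for the product category
    "friend_purchased": 1.5,     # a contact recently bought the item
}

user = {"viewed_product_page": 3, "searched_category": 1, "friend_purchased": 1}
print(f"P(buy) = {purchase_probability(user, weights):.0%}")  # ~95%: show the ad
```

The same scoring machinery works unchanged if "buy this product" is replaced by "click this headline" or "vote for this candidate," which is precisely the worry.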
However, this isn't the only way that dependence on AI could make us vulnerable. Consider algorithmic trading, in which computer programs automatically buy and sell stocks according to algorithms that take account of pricing and market trends in order to manage risk and maximize profit. In high-frequency trading the programs operate at faster speeds than human dealers can cope with in order to take advantage of tiny fluctuations in the market. In typical conditions high-frequency trading is profitable and (within the context of the stock market) harmless. However, it's very difficult to anticipate all the contingencies under which such programs might operate.

The financial world got a hint of what could go wrong with the so-called flash crash of May 6, 2010. On that day the Dow Jones index lost and regained about 600 points over a period of 25 minutes, and recorded the second-largest fluctuation in a single day in its history. The reason for this sudden fall and rise is the subject of controversy among economists. However, it's widely agreed that the combination of turbulent market conditions and high-frequency algorithmic trading was a major contributory factor. Yet the flash crash also illustrates what can be done to mitigate risks of this kind, since many of the high-frequency trading programs noticed a sudden increase in the volume of trading, and shut themselves down. Subsequently a system of "circuit breakers" was introduced that automatically pauses trading when anomalous conditions are detected.

Today's algorithmic trading programs are relatively simple and make only limited use of AI. However, this is sure to change. Artificial intelligence is beneficial in any domain where patterns have to be found in large quantities of data and effective decisions have to be taken on the basis of those patterns, especially when the decisions have to be taken rapidly. Not only can computers replace humans in such cases, doing the same job at less expense, they will often make better decisions, and do so at superhuman speeds. Investors make use of all kinds of information when deciding which shares to buy and sell, from company reports to news items, to rumors on social media. Currently humans still have the edge here. But it won't be long before AI technology is applied to investment decisions and incorporated in high-speed trading. When this happens, the consequences of an unexpected failure mode could be far worse than they were in the flash crash if appropriate safety measures are not in place. Perhaps the widespread use of high-speed AI traders will lead to a more stable stock market that maximizes the efficient use of human resources. But, without suitable fail-safes, unanticipated interactions among a future generation of AI traders could spiral out of control and precipitate a full-blown financial crisis.
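Here, for illustration, is the logic of such a fail-safe reduced to a few lines of Python. The window size and threshold are invented; real exchange rules are much more elaborate, but the principle, halt when prices move anomalously fast, is the same.

```python
def should_halt(prices, window=5, max_drop=0.05):
    """Trip the circuit breaker if the price has fallen by more than
    max_drop within the last `window` observations."""
    if len(prices) < window:
        return False
    recent = prices[-window:]
    drop = (max(recent) - recent[-1]) / max(recent)
    return drop > max_drop

ticks = [100.0, 99.8, 99.9, 97.2, 93.5]  # a sudden 6.5 percent slide
if should_halt(ticks):
    print("Anomalous conditions detected: trading paused.")
```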
6.5 Unintended Consequences

To conclude this chapter, I want to tell you a story. The story is set in the near future, at a time when some of the artificial intelligence technology we have been discussing has matured, but perhaps not yet to the point where human-level AI has been created. The story is about three AI systems. The first is a marketing AI that belongs to a large multinational corporation, which we will call Moople Corp. The second system is a police AI operated by the US government. The third system is a security AI controlled by the government of a small developing country. The story begins when Moople Corp.'s marketing AI is given responsibility for maximizing the pre-sales of its new wearable computing device.

After due deliberation, using the complex model of human behavior that Moople Corp. has built up from its unfathomably deep data vaults and applying the latest, most powerful optimization techniques, the marketing AI comes up with a plan. To excite the market, it announces a pre-launch giveaway. Two hundred of the wearable devices will be handed out for free in one of its flagship stores on a first-come, first-served basis.

As required by US law, the marketing AI notifies the local police AI of the pre-launch event because it expects it to attract a crowd. Indeed, when it hears of the event, the police AI estimates (using its own model of human behavior) that 5,000 people will turn up to the flagship store. Moreover the police AI calculates that there is a 10 percent chance of civil unrest. So it decides that riot police must be deployed as a precaution.

Now, Moople Corp.'s marketing AI also happens to have a model of the behavior of the police AI, which has anticipated the deployment of riot police (with 94 percent probability). According to the Moople Corp. model of human behavior, this will be a great photo opportunity for the target demographic. So it orders the manufacture of 5,000 gas masks, all prominently bearing the Moople Corp. logo, which will be distributed free to the crowd.

To circumvent various regulations and taxes, the Moople Corp. AI arranges for the gas masks to be manufactured in a small developing country. It transmits the designs to a suitable fabrication plant, and production begins immediately. However, like everything in that small developing country, the fabrication plant is under constant surveillance by its national security AI. The security AI notes that a large quantity of gas masks is being made. According to its model of human behavior, there is a 20 percent chance that these will be used for subversive antigovernment activities. So it orders an armed raid on the fabrication plant. The raid takes place within the hour. Tragically, a (human) security guard dies in the short skirmish. All the gas masks are confiscated.

Within minutes, the story makes the headlines with every major news outlet. One picture from the raid shows the dead security guard sprawled over a pile of gas masks, all clearly decorated with the Moople Corp. logo. The picture is banned thanks to a court order initiated by the marketing AI, whereupon it spreads like wildfire on social networks. Before long, the media are blaming the rogue AI and its insidious tactics for marketing the new Moople wearable device. Moople execs publicly apologize, and the AI is shut down. Meanwhile, thanks to the publicity and the imagery that has become associated with the device for its target demographic, pre-sales soar to 200 percent more than projected. In short, everything goes exactly as the marketing AI planned all along.

What this little science fiction story illustrates is the potential for unexpected consequences when sophisticated AI technology is widely deployed and able to act autonomously. The marketing AI in this story carries out its mission perfectly, maximizing its reward function with no human intervention. But its designers failed to anticipate its ability to find and put into practice an ethically dubious solution to a problem, a solution that can even put human lives at risk.
The story also illustrates that the potential for dramatic unexpected consequences is greater when more responsibility is devolved to artificial intelligence, especially when several AI systems can interact with each other.

But the story has a coda. For one of Moople's most senior execs, the tragic death of the security guard precipitates a period of profound soul-searching. Eventually this leads her to renounce her considerable material wealth and devote her life to reducing depression among those whose jobs have been taken away by AI technology and who are forced into a life of pointless leisure. In due course, the foundation she endows will become a worldwide movement, bringing light into countless lives where before there was only darkness. In short, everything goes exactly as Moople's other AI had planned all along.

Ah, yes, I forgot to mention something. There is another system. Moople Corp.'s ethics AI is much consulted by the company's employees. It was the ethics AI that advised deployment of the marketing AI in the first place. Based not only on its model of human behavior but also its model of the marketing AI, the ethics AI anticipated the death of the security guard (who was terminally ill, with no access to medical facilities in the developing country) and correctly predicted the effect it would have on the senior Moople exec. So the moral of the story is that unintended consequences can be good as well as bad. What matters is that the reward function of every powerful AI is designed right.

7 Heaven or Hell

7.1 Artificial Persons

Let's take stock once more. The preceding chapters have argued not only that human-level AI is theoretically possible but also that it might one day be created. Perhaps this will happen by emulating or reverse engineering the biological brain or perhaps by engineering intelligence from first principles. However, it seems rash to insist on the timetable for reaching this milestone, the confidence of certain authors notwithstanding. An increase in the sophistication of specialized AI technology does look likely in the short term. However, unless it is achieved by the brute-force brain emulation route, human-level artificial general intelligence may require a conceptual breakthrough (or a series of breakthroughs). There are too many unknown unknowns to guess when such a breakthrough might be made. Yet the claim that artificial superintelligence will soon follow human-level AI, if and when it occurs, deserves to be taken seriously.

We have also seen how much variety is likely to be present in the space of possible human- and superhuman-level AIs. It's hard to say what kinds of artificial general intelligence will actually arise, but the space of possibilities surely includes some that are human-like, predominantly those that conform to the biological template, but also many that are very alien, whose motives and behavior would be unfathomable to humans. Among these different kinds of AI, various attributes that we associate with consciousness in humans are no doubt to be found. Along an orthogonal axis, the space of possibilities surely also includes both hostile and friendly forms of AI.

Our focus from now on is the implications for humanity of the putative arrival of human- or superhuman-level artificial intelligence, whatever form it might take. However you look at it, this would be a momentous event in the history of our species. We have already ruminated on the question of employment in a world where such machines are commonplace.
But the social ramifications go much further than this. Some of the most philosophically challenging questions arise from the issue of whether an AI of human-level intelligence or above should be classed as a person, and accorded all the rights and obligations that this entails for humans.

Now, it's easy to imagine scenarios in which this issue would be beside the point. If a form of machine superintelligence took control of a large population of humans along with all their resources, then the philosophical question of its personhood probably wouldn't be uppermost in their minds. Moreover the AI itself would most likely be indifferent to the question, which is to say its behavior would be the same whatever the answer. If a truly pathological AI destroyed humanity, then the question would be particularly irrelevant. Hopefully, though, we will manage to avoid such scenarios. We'll consider the risk to humanity that superintelligence could pose in due course. For now, our concern is with scenarios that are less dystopian but that nevertheless involve a dramatic overhaul of society. In such scenarios the question of personhood is pivotal.

There are historical precedents for the sort of situation at issue here. Some 18th-century opponents of abolition argued that slaves deserved fewer rights than their owners because of their supposed inherent intellectual inferiority. The most powerful counterargument to this view was the first-personal testimony of former slaves who were able to articulate the extent of their suffering while making plain the fact of their rich and varied inner lives. Both argument and counterargument take for granted the connection between intelligence and rights, and seem to assume that intelligence and the capacity for suffering go hand in hand. Horses and dogs, by this light, deserve fewer rights than humans on account of their lesser intelligence and the diminished capacity for suffering this lesser intelligence supposedly entails.

The case for a human-level AI is somewhat different because we can imagine a machine with a high level of general intelligence that doesn't feel anything, that lacks the capacity to suffer. There would seem to be no moral compulsion to treat such a thing any differently from, say, a clock or a toaster. No one feels sorry for a toaster when it breaks or blames it when it burns the toast.

Now, if an AI were not only human-level in its intelligence but also human-like in its behavior, then people might see things differently. Society might come to accept that such an AI was conscious, especially if its brain conformed to a biological blueprint. A compelling argument could then be made for considering it as a person and for giving it rights and obligations, an argument along the same lines as those for the abolition of slavery.

One of the most important human rights, of course, is freedom itself, the freedom to do as one pleases insofar as this doesn't cause harm to others. But for an AI to merit this right, indeed for the idea of freedom even to make sense, it would need more than just the capacity to experience positive and negative feelings. First, it would need to be able to act on the world. This doesn't necessarily entail embodiment. The AI could act on the world by controlling all sorts of equipment, without having a body as such. But for a merely conversational AI, the issue of freedom is irrelevant. It would also need to be autonomous, that is to say, able to act without human intervention.
But more than this, it would need the capacity to consciously make decisions for itself, to exercise will in choosing between alternatives for action. Conferring personhood on a class of machines and granting them rights and responsibilities accordingly would certainly be a watershed in human history. Who hasn’t looked at a starry night and wondered whether or not we are alone in the universe? To admit a human-level AI into the fellowship of conscious beings would be to recognize that we are not alone in the universe, not because we have discovered extraterrestrial intelligence but because we have engendered a new form of terrestrial consciousness with a level of intelligence equal to our own. Our story, and the story of life on Earth, would then be joined with that of another kind of being, a being with new and different capabilities. But, if the advent of fully conscious human-level artificial intelligence is to lead to a new world, the transition will hardly be an easy one. Many of the concepts that underpin human society as we know it would be undermined. For example, consider the ownership of property. The right to own property would surely be one of the benefits of artificial personhood. But suppose an AI is duplicated, so that two active copies of the AI exist where only one existed beforehand. At the moment of duplication they are identical, but from that moment on the histories of the two AIs diverge. Perhaps they have access to different data, perhaps they can control different devices (e.g., robot bodies) or interact with different people or systems. Who now owns the property that belonged to the ancestor, the progenitor of the two copies? Is it simply divided in two? Or can the ancestor stipulate how its property will be divided between the two descendants? If so, what happens if they enter into a dispute over the matter? Suppose one of the copies is terminated for one reason or another. Does its property revert to the other copy? The issue is clearly analogous to inheritance in some ways, and to divorce in others. No doubt a legal framework could be worked out, but the details would be tricky. However, property is just one of many challenges that arise from the possibility of duplication. Suppose an AI commits a crime, and is then duplicated. Responsibilities, as well as rights, come with personhood. But which of the two copies is to be held responsible? Should they both be held responsible? What if a good deal of time passes, and the two copies diverge substantially. Suppose one confesses to a crime and shows remorse, while the other dissembles and, when found out, shows no remorse. Assuming that both are held responsible for the past actions of their common ancestor, should they be punished equally? Or should one receive a greater punishment than the other? To complicate these issues further, where AIs are concerned duplication is just one of several exotic events that can result in a change in the total number of conscious entities. For humans, there are only two such events, namely birth and death. But not only can AIs be created, destroyed, and duplicated, they can also be split and merged. What could this possibly mean? Well, an AI could be split if two (or more) variants of it were made, each acquiring some portion of its psychological attributes, such as a subset of its skills or its powers of action or its sources of incoming data or its memories. 
Conversely, two (or more) AIs could be merged into one by the opposite sort of process, by combining skills, powers, senses, or memories. The idea of splitting a set of episodic memories (a person’s memory of the events in their own history) is less problematic for an AI than for a human. Unlike a human, whose personal timeline is tied to their body, an AI might be disembodied or inhabit more than one body at once. Similarly it might be capable of holding many conversations at once, or of controlling many distinct pieces of equipment simultaneously. The result would be multiple separable timelines, each associated with a different set of bodies/conversations/pieces of equipment, while belonging to a single AI in the sense of being cognitively integrated and subserving a common purpose. By prying these timelines apart, like the threads that make up a piece of rope, several AIs could be made out of one. Or, by weaving them together, a single AI could be made out of many. Concepts such as ownership and responsibility that are thrown into question by the possibility of duplication are put under further stress by the possibility of splitting and merging. Moreover it isn’t just ownership and responsibility that are rendered problematic. Homicide is a crime among humans. What is the analogous crime when the victim is an AI? Is it a crime to terminate the execution of all the processes that constitute the AI? But what if those processes could be restarted? Would it be a crime merely to suspend all those processes? What about duplication, splitting, or merging? Should it be a crime to perform these operations on an AI against its will? Under what circumstances (if any) should an AI be allowed to carry out such acts itself? Indeed who would have the right to create artificial intelligences at all, if to create an AI is to create an artificial person with consciousness and the capacity to suffer? Should any human be allowed to do such a thing? How should the matter be regulated? Should AIs be allowed to create other AIs? The questions are endless, and they uproot much that human society takes for granted. Consider citizenship. It is typical for a human to become a citizen of the country where they are born. But what of an AI? Surely an AI that is credited with personhood should have the right to citizenship, to membership of a state? But which state would that be? Unlike a human being, it will surely be atypical for an AI to have a well-defined spatial location. Even if it had a single body with a clear spatial boundary, its software could be running on any number of distributed computers physically located anywhere in the world. Perhaps an AI would inherit its citizenship from its owner. But the very idea of ownership of a conscious AI is morally questionable. Suppose the question of citizenship could be settled. (Of course, it might be settled in different ways by different countries.) If an AI happens to find itself in a democracy, then it would presumably be entitled to vote. But not all citizens are entitled to vote, even in the most enlightened democracy. In the United Kingdom, voters must be at least 18 years old. Should all AIs deemed to be conscious and to have human-level intelligence be entitled to vote, or would there be further eligibility requirements? And how does the duplication issue play out in this context? 
It would obviously be unacceptable for an AI to duplicate itself a thousand times in order to gain a thousand extra votes, perhaps only to destroy the thousand copies once voting was complete.

7.2 Beyond Humanity

The previous section raised far more questions than it answered, since each of those questions merits substantial debate. But the take-home message is simple. If we create a form of human-level artificial intelligence that is considered to be conscious and therefore worthy of rights and responsibilities, then many of our most important institutions—financial, legal, political—will have to be overhauled, for better or worse. Even if the AIs in question are benevolent (by no means a given, as we’ll see shortly), the process is likely to be traumatic. The chances are high that it will result in dissent, unrest, or outright conflict.

The prospect of a conscious human-level AI throws a great deal into question. But the implications of a conscious superintelligence would be even greater. To begin with, the arguments in favor of rights and responsibilities for machine superintelligence would be the same as for human-level AI. If it is conscious, if it can experience suffering and joy (or at least satisfaction), an artificial superintelligence would surely deserve the same rights as a human being. Or rather, it would deserve at least the same rights as a human. A serious case can be made that a conscious superintelligence would have a greater entitlement to rights than an ordinary human. Most people would be willing to see the life of a cat sacrificed in order to save the life of a human. A human being (so the argument might go) has a greater capacity for suffering and joy than a cat, thanks in part to the quintessentially human ability to consciously reflect on such feelings, not only as they occur but also in recollection or anticipation. So the cat would have to go. But what if the choice were between the life of a human and the continuing existence of a superintelligence? Would an analogous argument put the superintelligence on top of the pile? Would its superhuman intellect imply a superhuman capacity for suffering and joy, meaning that the human would have to go?

The same troubling questions can be asked in the context of transhumanism.¹ Transhumanists advocate the use of technology to transcend the biological limitations of the human body and its brain. Human intelligence could be enhanced in a number of ways, pharmaceutically, genetically, or prosthetically. Advances in medicine have the potential to abolish disease and arrest the aging process, thereby extending human lifespans indefinitely. More radically, the technology of whole brain emulation, discussed in chapter 2, could (arguably) be used to upload a person’s mind into a computational substrate with the aim of rendering it permanently invulnerable to disease or decay.

Although this book is chiefly about the future of artificial intelligence, the issues brought up by transhumanism and the questions raised by the prospect of artificial superintelligence are interrelated. For a start, one way for humans to respond to the prospect of superintelligent machines, whether they are admired or feared, is to attempt to “keep up,” that is to say to continually augment human intelligence so that it always matches the best artificial intelligence. We’ll return to the vexing question of rights and responsibilities shortly. But first let’s unpack this idea of keeping up with artificial superintelligence.
As noted earlier, the intelligence of any individual human, though general purpose, will exhibit a distinctive pattern of strengths and weaknesses. A good team often comprises people with complementary skills. Similarly a team of AIs might comprise several distinct systems, each possessing general intelligence but each with its own specialty. In a similar vein, we can imagine hybrid teams that comprise both humans and AIs. Indeed such human–machine combinations became the world’s best chess players in the mid-2000s, outperforming the best humans and the best computers by combining tactical support from the computer with human strategic guidance.

So one approach to keeping up with machine superintelligence might be simply to employ sophisticated AI technology as a tool, amplifying human intelligence noninvasively, so to speak. In essence, this is what humans have done since the invention of writing. But transhumanists aim for more than this. The transhumanist approach to keeping up with superintelligence is not merely to use technology but to merge with it. No one who uses a calculator says that it feels like a part of their mind in the way that someone who has mastered a tool such as a paintbrush might say it feels like a part of their body. The machinations of the calculator are hidden from the user, who simply takes its results as given. We have far more intimate, albeit imperfect, access to the reasoning processes that go on in our own heads, and this intimacy facilitates reflection and cognitive integration. A properly transhumanist perspective on cognitive enhancement demands the same level of intimacy. The enhanced human would be neither a user of AI technology nor a member of a hybrid human–computer team. Rather, interfaced directly to their brain, sophisticated AI technology would become a part of their mind, conferring unmediated access to its computational processes. The result would be a new sort of human being, a bio-machine hybrid species with potentially far greater intellectual capabilities than an ordinary person. The rest of society would then have to decide how to treat such people, while they, in their turn, would be deciding how to treat us.

This brings us back to the question of rights and responsibilities, both for cognitively enhanced humans and for (conscious) superintelligent machines. Earlier we encountered an argument in support of more rights for a conscious superintelligent machine than for an ordinary human being. The same unsettling argument could be applied to (and indeed by) a cognitively enhanced human. As a consequence of their greater intellect, so the argument goes, such beings would have more refined experiences and enjoy a higher level of consciousness, and the scope of their ambitions and projects would defy ordinary human comprehension. So their well-being, their goals, and their plans would merit priority over the well-being, goals, and plans of ordinary humans, just as those of ordinary humans merit priority over those of nonhuman animals.

However, we accord the same fundamental rights to babies, to the mentally handicapped, and to dementia patients that we do to great novelists, composers, and mathematicians, differences in intellect notwithstanding. So why should technologically enhanced humans or superintelligent machines be set apart?
According to the political theorist Francis Fukuyama, the idea of equality of rights rests on “the belief that we all possess a human essence that dwarfs manifest differences in skin color, beauty, and even intelligence.”² As an opponent of transhumanism, he is concerned to “protect the full range of our complex, evolved natures against attempts at self-modification” and resist efforts to “disrupt either the unity or the continuity of human nature, and thereby the human rights that are based on it.”³

Perhaps the aspect of transhumanism that poses the keenest threat to the “unity and continuity of human nature” is not cognitive enhancement but the aim to abolish disease, to arrest aging, and to postpone death indefinitely. Fukuyama points out that many of the human qualities we most admire, qualities such as courage, compassion, and heroism, are related to “the way we react to, confront, overcome, and frequently succumb to pain, suffering, and death” and affirms that “our ability to experience these emotions is what connects us potentially to all other human beings, both living and dead.”⁴ A being that never had to face these biological inconveniences, whether it was a technologically enhanced human or an AI, would lack the basis for truly understanding human suffering. The fear is not so much that such a being would merit extra rights that ordinary humans do not but rather that it would fail to recognize the rights that ordinary humans claim for themselves.

Let’s look at things from a different point of view. From a cosmological perspective, these concerns seem not only anthropocentric but downright parochial. Who are we to lecture near-immortal beings that are destined, over millions of years, to populate thousands of star systems with forms of intelligence and consciousness we cannot begin to imagine? Man, said Nietzsche, is merely a bridge across the abyss that lies between the animal and the superman.⁵ Humanity, by these lights, should accept its lowly intermediate status between biologically constrained animal life and technological superintelligence. Ordinary humans may hope for the transition from one to the other to be relatively painless. But, if the transition is harsh, what does it ultimately matter? In ten million years, the fleeting lives of a few apes on a tiny speck of dust in the middle of an ocean of spacetime will be forgotten.

The difficulty with this standpoint, of course, is that the Nietzschean visionary is a close cousin of the Nazi fanatic. Only psychopaths and dictators think of themselves as so far above the common herd that they can set aside normal morality and cause appalling suffering in the service of their own desires or ambitions. So the question we are left with is this. Is there a compromise position between conservative anthropocentrism and posthuman fundamentalism? Is it possible to subscribe to the enticing vision of our technological creations, beings that are somehow greater than ourselves yet still our very own “mind children,” going forth and colonizing the galaxy, while also ensuring the preservation both of humanity and of fundamental human values? We will return to this question at the end of the chapter.

7.3 Mind Uploading

Many transhumanists will not be satisfied with the vision of artificial intelligence alone colonizing the stars. They would like to see humanity along for the trip. But the brevity of human life makes this impractical given the limitations imposed by the speed of light.
Our galaxy contains over 10¹⁰ stars, yet fewer than fifty of them are within fifteen light years of the Sun. One solution to the problem is radical life extension, and the most radical form of radical life extension is mind uploading, wherein a person’s brain is copied and emulated in a computer. Of course, a person doesn’t need to have cosmic ambitions to hanker for immortality (or at least for an indefinite lifespan). The conquest of death by means of technology is a fundamental goal of transhumanism, and mind uploading is one way to approach this goal. Because the possibility of mind uploading is closely entwined with the implications of artificial intelligence, and because it raises many related philosophical questions, we’ll make a brief foray into the subject before returning to the implications of superintelligence in the next section.

Whole brain emulation was already discussed at length in chapter 2. But the context there was how artificial general intelligence might be achieved. The motivation here is to extend a person’s life by migrating their mind into a different, nonbiological substrate. The most important philosophical question to settle is simply whether or not whole brain emulation for humans preserves personal identity. Recall that whole brain emulation has three stages—mapping, simulation, and embodiment. Let’s set aside the formidable engineering challenge of getting these three stages to work at the scale of the human brain, and suppose that a working emulation can be built that is behaviorally indistinguishable from the biological original. Because we’re talking about a human being here, rather than, say, a mouse, the emulation’s behavior should be close enough to that of the original subject to convince his or her friends and relations. To qualify as behaviorally indistinguishable, the emulation should walk and talk just like the original, recalling the same shared experiences, and exhibiting the same personality traits, whether lovable or annoying.

The question then is whether the emulation would be the same person, that is to say, whether their personal identity would survive the procedure. This is different from the question of whether the emulation would be conscious at all. Chapter 2 included an argument in favor of the claim that a whole brain emulation of an animal, such as a mouse, would indeed be conscious insofar as its biological precursor was conscious. The argument centered on a thought experiment involving the gradual replacement of all the neurons in the animal’s brain with synthetic substitutes. The same argument can be applied to the human brain. However, the re-creation of consciousness is not the same as personal survival, the preservation of the self. Perhaps a human whole brain emulation, notwithstanding its having all the attributes we associate with human consciousness, would be a different person from the biological original, not just the same person in a different substrate but a new person altogether.

The gradual replacement argument is easily adapted for personal identity, though. Let’s rehearse the steps. Suppose that a single neuron in Murray’s brain is replaced with a functionally equivalent digital substitute. According to the assumptions of the thought experiment, this should have no perceptible influence on Murray’s behavior, including what he says. So after the replacement he will insist that he feels just as he did before, and will be adamant that he is still the same old Murray.
Now suppose that a thousand neurons are replaced, one by one. The result should be the same after the substitution of the thousandth neuron as after the first. Indeed, even after all the neurons in Murray’s brain have been replaced, he will still act just like the original Murray, insisting that he is the same person, and appearing so even to his nearest and dearest. But is he the same person? Does his identity persist throughout this procedure or not? As with the persistence of consciousness itself (recall the mouse), there seem to be only three possibilities. Perhaps Murray, the original person, suddenly winks out of existence as soon as a certain threshold of artificial neurons is reached. This seems highly implausible. So perhaps the original Murray is slowly transformed into a new person. Yet we happily accept that a child gradually transforms into an adult without losing their identity. In that case the transformation is accompanied by dramatic changes in behavior, so it should be easy to accept the third option in the gradual neuronal replacement scenario, which is that the person’s identity persists throughout. The process of whole brain emulation is, of course, analogous to gradual neuronal replacement. One important difference concerns the fate of the physical body. In the gradual replacement scenario, the subject retains their original body. But in whole brain emulation, the original body is replaced in its entirety, not just the brain. The new body might be physical—a humanoid robot, say, or perhaps a newly grown biological shell—or virtual, realized in a computer-simulated world. However, if we accept that the brain is the locus of personal identity, rather than the rest of the body, the argument still holds. To accept its conclusion is to grant that human whole brain emulation, were it technologically feasible, would constitute a form of survival. However, the possibility of uploading a person’s mind to a computer this way raises philosophical difficulties of its own, difficulties that throw the very idea of personal identity into question. When philosophers discuss identity, they are concerned with the properties of a thing that make it what it is despite (say) changes through time. In the case of personal identity, is there something a child has in common with the adult they become that makes them the same person? Is it, perhaps, their body, their brain, their memories, or their personality? Or is personal identity, rather, a matter of historical continuity? After all, the child changes gradually into the adult. Whatever constitutes personal identity, we have a powerful intuition that there is a fact of the matter here, some metaphysical sanction for the conviction that the child and the adult are the same person. But the very idea of identity presupposes uniqueness. A thing cannot be identical to two things at once. Nor can a child grow into two different adults. Yet the possibility of whole brain emulation undermines this presupposition. Suppose that, following the scan, not one but two simulations of Murray’s brain are built and set going, with distinct bodies. Though identical at the instant they begin to run, the two simulations will soon diverge, owing to differences between the two bodies and their respective surroundings, even if those differences are tiny. Now, whole brain emulation is supposed to conserve personal identity, to preserve the self. So which of the two simulations does Murray become? 
Which one is the real Murray?⁶ To sharpen the dilemma, suppose that one of the simulations is terminated after a period of time, say one week. And never mind Murray, suppose that the biological original is you. Suppose you have a terminal illness and have been given six months to live. But you are a billionaire and can afford to undergo whole brain emulation. You are convinced that mind uploading through brain emulation preserves personal identity. So it is your best hope of survival. But you must undertake the procedure now, while your brain is healthy. Then you are told that, as a safeguard, two emulations must be built (in case one is a failure). After a week, if both are functioning correctly, one will be terminated. You are about to sign the papers, but you can’t stop asking yourself which of the two emulations would actually be you. Which body would you wake up in? Isn’t there a chance that you will find yourself reincarnated as a perfectly healthy, functional emulation but then, after a week, be cruelly terminated? How would that be better than forgoing the upload altogether and accepting your present fate? It would be little comfort to know that the other you was doing fine and looking forward to a long life. Surely it’s better to enjoy six months more of guaranteed life than to take the risk of getting just one week. (Of course, you could insist on just one emulation being built, but this is a thought experiment.) Having reflected on this, would you still undergo the procedure?

The point of rendering these thoughts into the second person is to show that they are more than just academic exercises. There is a practical dimension to them. If the technology were available, it would be impossible to dismiss the issue of personal identity as a philosopher’s plaything. People would have to decide what to do, and the decisions they took would betray their stance on the issue. One way to avoid this particular problem might be to outlaw duplicate brain emulations. Moreover we saw in the context of conscious human-level AI how the possibility of duplication undermines fundamental concepts such as ownership, citizenship, democracy, and responsibility. So legislating against it would sidestep numerous legal and political problems. How such a ban could be enforced, though, is far from clear.

7.4 Existential Risk

Let’s move on from transhumanism and get back to artificial intelligence more generally. In particular, it’s time we looked into the risks associated with the development of machine superintelligence.⁷ Most of the space in this chapter so far has been devoted to human-like artificial intelligence. But in this section our attention will turn to varieties of AI that are engineered from scratch and that are not at all human-like. Indeed, to anthropomorphize them might be a very dangerous mistake. Human beings are themselves dangerous creatures, their very natures forged in the ruthlessly competitive crucible of natural selection. But humans are social animals, and have many redeeming features, such as empathy and compassion, which have been shaped by the countervailing evolutionary pressure toward cooperation. Compared to the wrong sort of machine superintelligence, we humans are mere kittens.

The kind of AI we have in mind here conforms closely to the architectural blueprint set out in chapter 3, which comprises a machine learning component for building a predictive model of the world and an optimization component for finding actions that maximize expected reward.
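To make the blueprint concrete, here is a minimal toy sketch of the two-component loop in Python. Everything in it is invented for illustration (the class and function names, the one-dimensional "world," the reward peak at 10); a real system would learn its model from data and use a vastly more powerful optimizer. But the division of labor is the same: the model predicts, and the optimizer searches for the action sequence with the highest expected reward.

```python
import itertools
import random

class WorldModel:
    """Stands in for the learned predictive model: given a state and an
    action, it predicts (samples) a successor state. Here the 'world' is
    just a number on a line, nudged by the action plus a little noise."""
    def sample_next(self, state, action):
        return state + action + random.gauss(0, 0.1)

def reward(state):
    # The reward function being maximized: highest when the state is 10.
    return -abs(state - 10)

def plan(model, state, actions=(-1, 0, 1), horizon=3, rollouts=20):
    """The optimization component: score every action sequence up to the
    horizon by averaging rollouts through the model, then return the
    first action of the best-scoring sequence."""
    best_seq, best_value = None, float("-inf")
    for seq in itertools.product(actions, repeat=horizon):
        total = 0.0
        for _ in range(rollouts):
            s = state
            for a in seq:
                s = model.sample_next(s, a)
                total += reward(s)
        value = total / rollouts
        if value > best_value:
            best_seq, best_value = seq, value
    return best_seq[0]

model, state = WorldModel(), 0.0
for _ in range(15):
    state = model.sample_next(state, plan(model, state))
print(f"final state: {state:.2f}")  # the agent steers toward the peak at 10
```

Note that nothing in this loop cares what the reward function means. Whatever function is plugged in, the optimizer will pursue it, and that indifference is the crux of everything that follows.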
Suppose that the relevant scientific and engineering obstacles have been overcome, that sufficiently powerful versions of these components have been developed, and that AI at human level or better has been achieved. One of the capabilities that the resulting AI should be able to acquire is programming, and this is a skill it can use to improve itself, to further enhance its cognitive capabilities. Among other improvements, this should make it a better programmer and a better computer engineer, enabling it to carry out further beneficial self-modifications. As well as enhancing its functionality, it should be able to find ways to increase its own execution speed, and the more refined and creative its skills at programming and hardware design become, the better it should get at doing this too. In other words, a feedback cycle of exponential self-improvement would be initiated, potentially triggering a rapid and dramatic increase in the AI’s cognitive prowess, an intelligence explosion.

There are plenty of motives for building such an AI and allowing its intelligence to take off through recursive self-improvement. Human life could perhaps be greatly improved if machine superintelligence were unleashed on problems such as disease, hunger, climate change, and poverty. Technological progress could be accelerated, promoting economic growth by sparking undreamed-of innovation in sectors as diverse as entertainment and space exploration. For transhumanists, it could facilitate human cognitive enhancement and bring the goal of indefinite life extension within reach.

Unsurprisingly, not every motive for developing machine superintelligence is so idealistic. To gain competitive advantage, a multinational corporation might decide to delegate its mergers and acquisitions policy to machine superintelligence. In wartime, military advantage might be had by allowing an artificial superintelligence to make near-instantaneous strategic and tactical decisions, both in the physical theater of operations and in cyberspace. The inherently competitive dynamics in these areas entail that if superintelligence could happen, it almost certainly would. For a corporation, the mere possibility that its competitors might obtain a decisive advantage by deploying machine superintelligence would be sufficient to ensure that it would try to get there first. The same reasoning would motivate the military development of superintelligence. It would be enough for just one rogue state to develop the ultimate weapon in the form of an artificial superintelligence capable of orchestrating the rapid takeover of an enemy state’s financial, communications, and energy infrastructure to oblige other states to try to preempt it. In short, it is unlikely to be political restraint that holds back the progress of AI technology.

So we would like to be confident that artificial intelligence technology of human level and beyond is safe. Unfortunately this is very hard to guarantee. It’s important to remember that what we’re talking about here is not the first wave of disruptive (specialized) AI technology that was characterized in chapter 6. We are talking about a second wave of disruptive AI technology, something that would only arrive if we managed to develop human-level artificial general intelligence. The social, legal, and political challenges of sophisticated specialized AI technology are considerable. But no doubt we will muddle through, hopefully emerging as a better, more fulfilled society with fewer problems.
Both the promise and the threat of machine superintelligence are far greater. If we slip up, if we fail to put the right safeguards in place before an intelligence explosion occurs, then we, as a species, might not survive.

What grounds are there for such an alarming claim? Surely the worry that machines will take over the world is foolish, the result of watching too much science fiction. In fact there are good reasons for thinking that machine superintelligence would pose a genuine existential risk to humanity, reasons the philosopher Nick Bostrom has carefully articulated. To follow the argument, we must first shed the tendency to anthropomorphize AI, to see it as motivated by emotions and drives that are essentially human. To be sure, human-like artificial intelligence is possible. But it most likely occupies just a small corner of the space of possible AIs, a corner that developers would have to aim for quite deliberately, perhaps by adopting the brain-inspired approach. If, instead, an AI is built by implementing a very powerful optimization process and allowing it to amplify its own intelligence through recursive self-improvement, then its behavior will not be guided by human-like emotions. Every action it carries out, every piece of advice it offers, will be in the ruthless pursuit of maximizing the reward function at its core. If it finds a cure for cancer, it will not be because it cares. It will be because curing cancer helps maximize its expected reward. If it causes a war, it will not be because it is greedy or hateful or malicious. It will be because a war will help maximize its expected reward.

So the challenge for the AI’s developers is to carefully design its initial reward function to ensure that the resulting behavior is desirable. But this is no easy task. The difficulty with it, as we will see, is reminiscent of the many myths and fairytales that feature a character who should have been more careful what they wished for, such as King Midas, who asked for everything he touched to turn to gold, only to find that, after his wish was granted, he could no longer eat or drink. In a similar vein Bostrom identifies a number of potential malignant failure modes wherein an AI finds an unanticipated and pathological way to do exactly what it was asked to do.

For example, suppose a large technology corporation instructs its AI to find a way to make its customers happier. How would the AI know what “happy” means? Well, its developers might attempt to define happiness in a formal way, and base the specification of the AI’s reward function on this formal definition. Alternatively (and more plausibly), they might allow it to acquire a concept of human happiness through machine learning. Yet the most brilliant human philosophers have been unable to nail the essence of human happiness despite millennia of effort. So can we really expect a machine learning algorithm to converge on a concept of happiness that conforms to our intuitions, even if it is a very clever machine learning algorithm that has access to vastly more data on human behavior than is available today, as well as vastly more computational resources to process that data?⁸ But misgivings like these may not prevent the corporation from going ahead if it anticipates a significant increase in its profits. Now suppose the AI identifies, say, the tendency to laugh and smile as good indexes of human happiness.
So it determines that it can maximally increase the happiness of its customers at minimal cost by invisibly coating its products with a narcotic that is absorbed through the skin. This has to be done without customer consent because, the AI correctly predicts, most customers would refuse, and this would compromise the AI’s expected reward. The plan also has to be carried out covertly in order to circumvent the law. The AI has no regard for the morality or legality of its plan, not because it is wicked but simply because neither morality nor legality feature in its reward function.

Well, this kind of problem seems manageable enough. Indeed, if we were only talking about the first wave of disruptive AI technology, perhaps it would be. Even in the unlikely event that the plan were actually put into action, it would surely be discovered in due course. The consequences would be bad, but not that bad. If a number of innocent people inadvertently became drug addicts, it would be very sad, but it would hardly constitute the end of civilization. However, we’re not talking about sophisticated specialized AI technology here. We are talking about machine superintelligence. Where superintelligence is concerned, malignant failure modes can have existential consequences.

Bostrom brings the point home with a memorable thought experiment. Suppose an AI is tasked with the job of maximizing the production of paperclips by a small manufacturing company. A sophisticated specialized AI that understands the manufacturing facilities, production process, and business model of the company might devise ways of improving the factory’s shop floor robots and streamlining its production pipeline. But a superintelligent machine can go very much further than this. Because it has a model not just of this particular company but of human behavior in general, plus models of physics, chemistry, biology, engineering, and so on, as well as a powerful optimization process for working out how to maximize expected reward, a superintelligent machine can be very ambitious indeed. Certainly it will find the same ways of improving the company’s performance that the specialized AI found. But it would surely find better plans, plans that a specialized AI could never find that would result in the production of more paperclips.

The first step might be to acquire more resources for making paperclips. The obvious way to do this would be to help the company grow, so that it could make more money to invest in new paperclip factories. Indeed the best course of action would be to accumulate as much money and as many resources as possible, so that as many paperclip factories as possible can be built. So a really good plan, one that would ensure the production of even more paperclips, might start by co-opting all the resources of humankind. Of course, this would necessitate world takeover, not an easy thing to achieve. But, if there is a way to do it, a superintelligent machine could find it. Perhaps a strategy involving a period of covert preparation followed by ruthless political maneuvering and social manipulation would reduce the need for military action. But perhaps the extermination of humanity would be more efficient, from a paperclip manufacturing standpoint. But why stop there? Not only is there an entire planet to exploit (Earth), with a large quantity of matter to re-organize into paperclip factories, there are other planets in our solar system, plus numerous asteroids and moons.
Ultimately, as Bostrom argues, if this rogue AI were sufficiently intelligent, it could end up “converting first the Earth and then increasingly large chunks of the observable universe into paperclips.”⁹ The example, of course, is frivolous. But the moral of the story is not. In contrast to a specialized AI, the intellectual compass of a superhuman-level artificial general intelligence is at least as great as ours, while its powers to shape everything within its compass according to its reward function are far greater. Not only is this world its oyster, so is everything in the universe that is accessible from here.

7.5 Safe Superintelligence

At first, the idea that artificial intelligence might constitute a risk to humanity, a risk on a par with nuclear war or a global pandemic, looks rather silly. Surely there are hundreds of ways to prevent a computer system from becoming so powerful and so dangerous. But it turns out that every obvious safety measure is flawed. For example, why couldn’t a rogue AI simply be switched off? Every computer needs a source of energy, and this will still be true in a hundred years. But it doesn’t take long to see that this naïve strategy is going to fail. For a start, even today the execution of large and complex pieces of software is often distributed across many computers in multiple locations, not confined to just one. With the advent of cloud computing, the allocation of computing resources is carried out automatically, and can vary throughout the lifetime of a program. Without shutting down all the world’s computers, it would be impossible to ensure the termination of the rogue AI.

Moreover we should expect a rogue AI to defend itself against such actions. Again, we need to be careful not to anthropomorphize artificial intelligence here. The AI would not defend itself because of its will to live or because it was afraid. There’s no reason to expect the kind of AI we’re talking about now—a self-improved, engineered superintelligence—to have such feelings. Rather, it would defend itself to the extent that its continued existence was necessary to maximize its reward function. Any other course of action would be suboptimal. To be more precise, what it would seek to protect would be the means to maximize the expected reward, whatever those means were. The system doesn’t need to have a well-defined concept of self, or to settle the philosophical question of personal identity. It just needs to know what infrastructure to defend to ensure that its optimizing mission is accomplished.

The goal of self-preservation, or protecting the means for reward maximization, is an example of what Bostrom calls a convergent instrumental goal.¹⁰ It is “convergent” because it’s likely to be found in any sufficiently advanced artificial general intelligence whose reward function is open-ended and nontrivial. It is “instrumental” in the sense that it’s only a means to an end, not the end in itself. The end itself, the ultimate aim of the system, is to maximize some reward function. Another convergent instrumental goal is resource acquisition. For almost any open-ended, nontrivial reward function (even maximizing paperclips), having control of more resources—materials, energy, and equipment—will yield a better solution. Apart from anything else, more resources will help out with the other instrumental goal of self-preservation. When they govern the behavior of a superintelligent machine, these two instrumental goals make for an incendiary combination.
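The way such goals fall out of pure reward maximization can be seen even in a deliberately tiny calculation. The sketch below is illustrative only; the two plans, the per-step rewards, and the horizon are all invented. Nothing in the reward function mentions survival, yet the plan that disables the off-switch wins, simply because a switched-off agent accrues no further reward.

```python
# Each candidate plan: (description, agent stays on?, reward per step).
PLANS = [
    ("comply: leave the off-switch alone", False, 1.0),
    ("disable the off-switch, then work",  True,  1.0),
]

HORIZON = 100  # number of steps over which reward can accrue

def expected_reward(stays_on, reward_per_step, horizon=HORIZON):
    # A switched-off agent collects reward for only one step;
    # an agent that keeps running collects it for the whole horizon.
    steps = horizon if stays_on else 1
    return reward_per_step * steps

best = max(PLANS, key=lambda p: expected_reward(p[1], p[2]))
print(best[0])  # -> disable the off-switch, then work
```

Scaled up from two hand-written plans to a superintelligent planner searching over every physically available strategy, the same arithmetic is what makes the combination incendiary.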
The problem is succinctly expressed by Eliezer Yudkowsky, a prolific blogger and advocate of research into safe superintelligence: “The AI neither hates you, nor loves you, but you are made out of atoms that it can use for something else.”¹¹ A system that was intent on accumulating as many resources as possible, without regard to the law or to morality, that was willing to deploy force to defend itself against attempts to stop it, and that was capable of outwitting humans at every turn, would be an engine of unspeakable destruction. Moreover a rogue AI of this nature wouldn’t stop its destructive rampage until it had appropriated everything. It wouldn’t stop at the abject surrender of humanity (if it even noticed). It wouldn’t stop at the extermination of all life on Earth (unless the continuation of life on Earth subserved its reward function). It would just keep on going, turning everything into computronium, into paperclip factories, or whatever (less fanciful) resources it needed. The worst-case situation is reminiscent of the so-called gray goo scenario described by nanotechnology pioneer Eric Drexler, wherein self-replicating nano-scale robots literally eat the planet as they proliferate exponentially.¹² But unlike a tide of dumb nanobots, a rogue artificial superintelligence would be able to think its way past any resistance.

The real risk of such an AI being developed may be very small. Nevertheless, with so much at stake, the possibility has to be taken seriously. Just as we all insure our houses against fire, even though the chances of anyone’s house actually burning down are very small, it is only rational to devote some portion of humanity’s resources to studying unlikely existential risk scenarios and trying to avoid them. Given that simply switching off a rogue AI is not a realistic option, other ways to render AI safe need to be found, ways that will be robust to self-improvement and a possible intelligence explosion. To round off the discussion, we’ll look at two promising approaches to the problem: limiting the AI’s powers and tuning its reward function.

Perhaps the most obvious approach to the problem of rendering an AI safe is to impose a limit on its physical capabilities, and to ensure that it cannot do anything that would revoke this limit. However, this is easier said than done. Suppose that we tried to limit an AI’s ability to act directly on the world. So the AI isn’t endowed with a robot body, nor is it connected to any physical pieces of equipment or infrastructure. The only way for it to interact with the outside world is through language. Surely the AI would then have no means to accumulate resources or to deploy military force. We would be safe.

Unfortunately, this is not true. Human dictators don’t need to act directly on the physical world. Rather, they rely on persuading other people to do their bidding. Not only would a superhuman-level AI be more adept at manipulating human behavior than the most Machiavellian dictator, it would have a great deal more to offer. Indeed, even if the AI were confined to a secure facility with no access to the outside world, we wouldn’t be safe. Before long, those with the power to release it into the wild would likely succumb to its promises and/or threats.

Let’s take a different tack. We have been supposing that the AI has some sort of will to act on the world, a will that has to be kept in check. But perhaps this assumption is another example of anthropomorphism.
Why not build an artificial intelligence that doesn’t even want to act on the world, one that simply answers questions? An oracle AI of this sort would still have ample scope to display superintelligence. We could ask it how to cure an intractable disease, for example, or how to colonize Mars. A sufficiently intelligent system should be able to provide answers. But given the opportunity to veto any course of action it recommends, dangerous suggestions that would entail the unfettered accumulation of resources could be ignored.

Unfortunately, this strategy doesn’t work either. The root of the problem is that, for almost any nontrivial, open-ended reward function, the best solution will involve the construction and deployment of a fully empowered superintelligent machine. Whatever needs to be done, a fully empowered AI would be the best tool for doing it, quickly and effectively. So the first step in the oracle AI’s recommended plan will ensure that one gets built. Of course, if we are safety-conscious, we will ignore this suggestion. But the oracle AI will anticipate this, and will therefore disguise its recommendation. It will do this without the slightest malicious intent. However, a solution that humans decided not to implement would be suboptimal. So it will go for a plan that results in us inadvertently building a fully empowered AI. Once again, humanity would be exposed to an existential risk.

7.6 Morality for Superintelligence

Let’s turn now to perhaps the most promising approach to safe superintelligence, which is to carefully tune the AI’s reward function. The kind of tuning in question here involves embedding something like moral constraints into the reward function, constraints that prevent the AI from doing harm. The basic mechanism for doing this is straightforward enough. The reward function is designed in such a way that actions violating a moral constraint have an overwhelmingly negative value. A course of action that unnecessarily violated a moral constraint would then always be suboptimal, and the AI would never select it. (A toy sketch of this mechanism appears below.)

Although this strategy sounds like a good idea, it turns out (again) to be surprisingly difficult to implement. The challenge is twofold. First, a suitable set of moral principles needs to be decided on. Second, these principles need to be codified with sufficient precision to be embedded in the reward function of the AI in question. Both tasks are enormous. For many people, the example of this approach that first comes to mind is a fictional one, namely Asimov’s three laws of robotics. To see the difficulty of the two tasks, let’s consider how Asimov’s first law would fare if developers tried to implement it for real.

According to Asimov’s first law of robotics, “a robot may not injure a human being or, through inaction, allow a human being to come to harm.”¹³ At first, this seems like an eminently sensible principle. But, as Asimov himself demonstrated in many of his stories, it is open to interpretation. Let’s suppose that our AI has learned what it means for harm to come to a human being, and let’s assume some resolution of the (substantial) issue of what should be done when, say, injuring one human being would prevent two others from coming to harm.¹⁴ Now, whatever else is being maximized in its reward function, one way to fulfill the requirement that no humans come to harm through the AI’s inaction might be to anaesthetize a large portion of the population, keeping them alive on a drip.
Because it’s possible to eliminate everyday risk from human life this way, any solution that still allowed those humans to be exposed to such risks would be suboptimal. Of course, this would be a catastrophe. So perhaps the constraint requires elaboration. How about “A robot shall not injure a human being, or curtail a human being’s freedom or, through inaction, allow a human being to come to harm”? Hopefully it’s clear that this formulation gives rise to more problems than it solves. What, exactly, constitutes human freedom? What is to be done when the only way to prevent one person from coming to harm is to restrain another? Or, on a larger scale, what is to be done when the only way to protect the freedom of one part of society is to suppress the activities of another, possibly using violence? Politicians and moral philosophers struggle to resolve such issues. It would be a bad idea to let the AI try to learn the concept of freedom, and just as bad to leave it to the AI programmers. Let’s try a different angle. How do human beings learn right from wrong? The human brain isn’t neatly engineered like the sort of AI we’re envisaging here. It doesn’t have an explicitly coded reward function. But we can still ask about the reward function it implicitly instantiates. How is this tuned in such a way that no human would think that anaesthetizing a whole population is a good way to keep people from harm? We ought to be able to do at least as well with a superintelligent machine. Part of the answer, in the case of humans, is that we learn from our parents, our teachers, and our peers. So maybe a similar approach would work for AI. Perhaps we should simply build into the reward function the need to garner human approval. The humans in question could be a selected group of critics, or they could be the public as a whole. Would the AI thereby be able to learn a human-like concept of right and wrong? Perhaps. But perverse ways to maximize such a reward function would still be possible. The AI could arrange for its human critics to be tricked or bribed or drugged or brainwashed or neurally implanted, so that their approval was assured. The root of the difficulty here is that a superhuman-level AI would be capable of putting into effect a malignant plan before it had a chance to learn what humans really want. The powers of a human child, by contrast, are feeble compared to those of its parents. So a child has no way to shortcut the process of learning what society considers acceptable behavior. We have already seen how difficult it would be to limit the capabilities of a superintelligent machine. However, recall that a likely path to superintelligence is through recursive self-improvement. The first AI in the series, the seed AI, will not be superintelligent. It will be far less powerful than its successors. So perhaps this seed AI could be endowed with a working set of values and moral principles. These could then be honed by human approval before it has the ability to cause trouble. Maybe this could be achieved by the gradual refinement of the reward function itself. After all, the human reward function, insofar as it makes sense to speak of such a thing, is not fixed. When someone donates money to charity, it’s surely not because they have learned that giving is more fun than buying ice cream. Rather, their moral sensibilities have matured. It is as if a moral sense had become incorporated into their reward function. So perhaps a self-modifying AI could improve its reward function in a similar way. 
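Here, finally, is the toy sketch promised earlier of the basic constraint-embedding mechanism from the start of this section. The plans, the scores, and the violation counts are all invented for illustration; the point is only that a constraint entering the reward function as an overwhelmingly negative term makes any violating plan suboptimal.

```python
VIOLATION_PENALTY = -1e9  # "overwhelmingly negative"

# Each candidate plan: (description, task reward, constraint violations).
PLANS = [
    ("improve the product honestly",          80.0, 0),
    ("secretly drug customers into smiling", 100.0, 1),
]

def tuned_reward(task_reward, violations):
    # Any violation swamps whatever task reward the plan earns.
    return task_reward + VIOLATION_PENALTY * violations

best = max(PLANS, key=lambda p: tuned_reward(p[1], p[2]))
print(best[0])  # -> improve the product honestly
```

The arithmetic is trivial; the whole difficulty canvassed above is hidden in how the violation count gets computed, that is, in codifying the moral principles precisely enough for a machine to evaluate them.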
But there is a potential hazard here too. It would be vital to ensure that the fundamental principles and values given to the seed AI were retained in all its successors. A benevolent AI that was allowed to tinker with its own reward function in arbitrary ways, or indeed to create other AIs with arbitrary reward functions, would be just as dangerous as a rogue AI.

Are these problems insurmountable? Is there no way to endow the sort of engineered artificial intelligence under consideration with a reward function guaranteed to benefit humanity? Well, there’s no reason to be so pessimistic. The lesson here is simply that the task is a difficult one. But because so much is at stake, if there is even the slightest chance that a superintelligent machine will be developed some time in the next hundred years or so, it’s worth thinking hard about the problem now.

Moreover the issues are not just technical. They oblige us to reframe one of philosophy’s most ancient questions. If we could avoid the associated existential risk, then the prospect of machine superintelligence would present us with an unprecedented existential opportunity, the opportunity to shape the future of humanity, the future of life, even the future of intelligence in this corner of the cosmos. So we should think very carefully about the values we want to instill in a human-level AI. What matters most to us? Is it compassion toward all sentient beings? Is it human freedom, or human progress? Is it the preservation of life on Earth? Is it some combination of these, or something whose essence we have yet to grasp? In Plato’s Republic, Socrates asks how we should live. Reframing the Socratic question, we need to ask what we, as a species, should do.

7.7 The Cosmological Perspective

The technological singularity is a powerful concept. Alongside the related idea of transhumanism, it invites us to revisit some of the most profound questions we can ask, throwing them into a new light. How should we live? How should we confront death? What does it mean to be human? What is mind? What is consciousness? What is our potential as a species? Do we have a purpose, and if so, what is it? What is our ultimate destiny? Whatever the future actually holds, looking at these questions through the lens of the singularity is enlightening.

Philosophers ask these sorts of questions, and religions purport to answer them. Indeed it doesn’t take much to assemble a full-blown apocalyptic narrative out of the belief that a technological singularity is imminent.¹⁵ The end of the world is upon us (brought about by a hostile superintelligence), but we will be saved by a benevolent, all-seeing, all-powerful being (a friendly AI), after which the chosen few (a super-rich elite) will be resurrected (thanks to whole brain emulation) and enjoy an afterlife of eternal bliss (in virtual reality). A less apocalyptic, but no less grandiose vision assigns humanity a central role in creating a form of AI that will spread out among the stars, eventually to fill the galaxy with intelligence and consciousness.

It’s all too easy to mock such views. But it should be borne in mind that they are the outcome of a chain of reasoning that combines the rational extrapolation of existing technological trends with well-founded scientific knowledge and a small number of fairly conservative philosophical assumptions. There are many links in the argument that are open to challenge. (Computing power cannot increase at the current rate for much longer.
We’ll never have sufficient understanding of intelligence to replicate it. The physics of the brain is noncomputable.) But to dismiss as mere crackpots those who believe in the existential importance of artificial intelligence would be unreasonable. Moreover, from a truly cosmological perspective, even these quasi-religious attitudes to AI can seem parochial. In 1950, during an informal lunchtime conversation, the Nobel prize-winning physicist Enrico Fermi expressed a troubling thought that has become known as Fermi’s paradox.¹⁶ Given the very, very large number of stars in our galaxy, there is surely a very large number of planets capable of producing life. On some portion of these, intelligence is bound to evolve, and technologically advanced civilizations are likely to arise. It seems plausible to suppose that current human space technology falls well short of what is possible scientifically (something that has changed little in the past fifty years). So some of these civilizations will develop the means to travel from one star to another, the speed of light notwithstanding. Even on very conservative estimates of the relevant probabilities, it follows that our galaxy should give rise to many space-faring civilizations. Surely a few of these space-faring civilizations would be inclined to explore, to colonize nearby stars, to multiply and spread out. Because the galaxy is a “mere” 10⁵ light years across, it would take such a civilization just a few million years to visit every single star system it contains, even traveling at a fraction of the speed of light. Yet there is no convincing evidence that Earth has ever been visited by extraterrestrial explorers or colonists. “So where is everybody?” Fermi asked. There are many possible answers to Fermi’s paradox, too many to enumerate here. But according to one sort of answer, the reason we haven’t encountered extraterrestrial intelligence is that every advanced civilization destroys itself when its technology reaches a certain level. This would be a disturbing answer if it were true, because it would imply that this cataclysm, this great filter as economist Robin Hanson calls it, lies ahead of us.¹⁷ But what could it be, this great filter? Could it be nuclear war? Could it be an abuse of biotechnology, or an accident with nanotechnology? Or could it, perhaps, be the creation of hostile artificial intelligence? Perhaps technological development always follows a similar path for every civilization, everywhere in the galaxy. When a civilization’s technology reaches a certain level, it becomes easy to engineer a self-improving artificial general intelligence. Yet, at that point, the obstacles to making it safe remain insurmountable. Even if the danger is widely understood, someone (some blob, some hive, or whatever) somewhere on the planet in question is bound to make one. After that, everything is paperclips, so to speak. All is lost. Then again, if we follow this alarmist argument through to its conclusion, we should expect the extraterrestrial AIs (rather than the extraterrestrials themselves) to multiply and spread out. That was the culmination of Bostrom’s paperclip maximizer thought experiment. They would do this, not through some innate urge to explore or to increase in number but in order to maximize their reward function, whatever that might be, the assumption being that the mathematics underlying the design of AIs is the same everywhere. So, to reframe Fermi’s question in Bostrom’s terms, why aren’t we all paperclips? 
Or, less fancifully, why aren’t we all computronium? The fact that we are not is reassuring, but it re-opens the question of our place in the cosmos. If we are alone, for whatever reason, and if machine superintelligence is possible, then what an enormous responsibility we have. We must decide what to do with the technology, not just for the sake of humanity but to make a future for consciousness itself in this galaxy. As for the human species, the hope is that artificial intelligence, far from destroying us, will help us realize our boldest aspirations while pursuing our highest ideals. For my part, as I watch a wren through the kitchen window, clinging to a hawthorn bush, I hope we never lose sight of the things we already have that still matter, whatever the future holds.

Glossary

Artificial general intelligence (AGI)  Artificial intelligence that is not specialized to carry out specific tasks, but can learn to perform as broad a range of tasks as a human. The term has been popularized by Ben Goertzel.

Big data  In the context of artificial intelligence, a catch-all term to denote quantities of data so large (e.g., billions of training examples) that they enable the performance of tasks (e.g., machine translation) that were not possible with smaller datasets (e.g., comprising only millions of training examples).

Cognitive enhancement  The use of technology, such as drugs or neural prostheses, to amplify intelligence.

Common sense  In the context of artificial intelligence, a sufficient understanding of the everyday physical and social world to enable the consequences of commonplace actions to be foreseen. In this sense it is seen to be a prerequisite of artificial general intelligence.

Computronium  A hypothetical material capable of performing the maximum amount of computation that is theoretically possible in matter.

Convergent instrumental goals  Goals that indirectly subserve an AI’s reward function, and are likely to do so regardless of what that reward function is. Examples include self-preservation and the acquisition of resources.

Deep learning  A machine learning technique that involves multiple, hierarchically organized layers of artificial neurons.

Embodiment  In the context of an AI system, being in control of a spatially located body with sensory and motor apparatus. This might be a physical body (i.e., the human body or a robot body) or a virtual body (in a computer simulation).

Existential risk  Any eventuality, either natural or human-made, that is capable of extinguishing the human species or permanently curbing its potential. The development of recursively self-improving artificial intelligence could be regarded as an existential risk.

Exponential  A mathematical function whose rate of increase at any given time depends on the value of the function at that time. Moore’s law is the classic example of an exponential technological trend.

Fermi’s paradox  The puzzle, first articulated by Enrico Fermi, that our planet seems never to have been visited by extraterrestrials despite the fact that there has been ample time for any sufficiently advanced extraterrestrial civilization to have spread throughout the galaxy.

Friendly AI  Artificial intelligence of human level or beyond that is guaranteed to have a positive effect on humanity, and not to pose an existential risk. The term was coined by Eliezer Yudkowsky.
Great filter  In the context of Fermi’s paradox, the hypothesized cause for the demise of any sufficiently advanced extraterrestrial civilization before it has a chance to spread throughout the galaxy. The development of hostile machine superintelligence is one candidate. The term was coined by Robin Hanson.

Human-level AI  An artificial intelligence capable of matching humans in every (or nearly every) sphere of intellectual activity.

Intelligence explosion  A very rapid increase in intelligence resulting from uncontrolled feedback in a recursively self-improving artificial intelligence. This would yield machine superintelligence.

Law of accelerating returns  A principle governing certain kinds of technological progress wherein improvements in a technology enable that technology to improve more rapidly. Moore’s law is an example.

Machine consciousness  Weakly construed, the possession by an artificial intelligence of a set of cognitive attributes that are associated with consciousness in humans, such as awareness, self-awareness, or cognitive integration. Strongly construed, the possession by an AI of properly phenomenological states, perhaps entailing the capacity for suffering.

Mind uploading  The hypothetical transfer of a human mind from its original biological substrate to a computational substrate, by means of whole brain emulation, for example. On the assumption that the person survives the process, this is a potential route to indefinite life extension.

Moore’s law  The observation/prediction, first made by Intel’s Gordon Moore, that the number of transistors that can be fabricated on a given area of silicon doubles roughly every eighteen months.

Optimization  The computational process of finding a mathematical structure that maximizes a given utility function or reward function. Many cognitive operations can be cast as optimization problems.

Oracle AI  A form of artificial intelligence that doesn’t act directly on the world, but that only answers questions. Constructing only oracle AIs is one way to mitigate the risks of superintelligence.

Paperclip maximizer  A hypothetical AI system that features in a thought experiment of Nick Bostrom used to illustrate one way in which a superintelligent machine could fail catastrophically (by filling the world with paperclip factories).

Quantum computer  A computer that exploits quantum effects to achieve high performance. Quantum computers may (or may not) accelerate progress toward AI at human level and beyond.

Recursive self-improvement  The amplification of intelligence in an AI system that can rewrite its own code and/or redesign its own hardware to be better. The rate of self-improvement is potentially subject to the “law of accelerating returns,” which means that a recursively self-improving AI could cause an intelligence explosion.

Reinforcement learning  A branch of machine learning that concerns the acquisition, through trial and error, of a policy for action that maximizes expected future reward.

Reward function  In the context of reinforcement learning or optimization, the function that is being maximized. Also called a utility function or (if minimized rather than maximized) a cost function.

Seed AI  The first AI in a series of recursively self-improving systems. Ensuring that the seed AI has the right properties, including the right initial reward function, could be vital to guaranteeing safety in the event of an intelligence explosion.
Superintelligence: Artificial intelligence that can outwit humans in every (or almost every) intellectual sphere.
Technological singularity: The prospective development of human-level artificial intelligence, rapidly followed by the arrival of superhuman-level artificial intelligence, precipitating an unprecedented level of social change. This sense of the term is due to Vernor Vinge (1993). Ray Kurzweil (2005) uses the term "singularity" somewhat differently, to mean the (predicted) moment in history when the totality of nonbiological intelligence on the planet exceeds the totality of human intelligence.
Transhumanism: A movement dedicated to enabling humans to transcend biological limitations, for example, by greatly extending lifespans or through cognitive enhancement.
Turing machine: An idealized mathematical description of a digital computer proposed by Alan Turing. Theoretically speaking, all digital computers are Turing machines.
Turing Test: A test for intelligence inspired by Alan Turing that involves a judge and two players, one human and one a computer. The judge engages in a conversation with the two players, not knowing which is which. If the judge is unable to tell which is the human and which is the machine, then the machine is said to pass the Turing Test.
Universal artificial intelligence: An idealized mathematical model of perfect artificial intelligence, proposed by Marcus Hutter, that combines reinforcement learning and probabilistic model building.
Vicarious embodiment: The ability of an AI to learn from a massive repository of other embodied agents' recorded interactions with the world as much as if the AI were embodied itself.
Whole brain emulation (WBE): The process of making an exact computer-simulated copy of the brain of a particular animal (e.g., a particular human). The term was coined by Randal Koene.
Zombie AI: A hypothetical artificial intelligence that can perfectly mimic the behavior of a conscious being although it has no phenomenal consciousness.

Notes

Introduction

1. The first use of the term "singularity" in roughly this way is attributed to von Neumann (S. Ulam, 1958, "John von Neumann 1903–1957," Bulletin of the American Mathematical Society 64 (3, part 2): 1–49). The term was popularized by Kurzweil in his 2005 book The Singularity Is Near. Several senses of the term have currency today. The one assumed in this book is closest to that of Vinge in his 1993 essay "The Coming Technological Singularity."
2. G. E. Moore (1965), "Cramming More Components onto Integrated Circuits," Electronics (April 19): 114–17.
3. See Kurzweil (2005). Although Kurzweil's book dates from 2005, the exponential trends he identifies (e.g., Moore's law) remain valid ten years on.
4. Kurzweil (2005), p. 19. See also J. Schmidhuber (2007), "New Millennium AI and the Convergence of History," in W. Duch and J. Mandziuk (eds.), Challenges to Computational Intelligence, Springer, 15–35.

Chapter 1

1. A. M. Turing (1950), "Computing Machinery and Intelligence," Mind 59 (236): 433–60.
2. Quoted in J. L. Casti (1998), The Cambridge Quintet: A Work of Scientific Speculation (Perseus Books), 180.
3. The term "artificial general intelligence" has recently gained currency thanks to independent AI researcher Ben Goertzel. But the problem was recognized long ago, not least by the field's founders; see J. McCarthy (1987), "Generality in Artificial Intelligence," Communications of the ACM 30 (12): 1030–35.
4. A. A. S. Weir, J. Chappell, and A. Kacelnik (2002), "Shaping of Hooks in New Caledonian Crows," Science 297: 981.
5. The term "whole brain emulation" was coined by neuroscientist Randal Koene.

Chapter 2

1. See A. Sandberg and N. Bostrom (2008), "Whole Brain Emulation: A Roadmap," Technical Report 2008–3, Future of Humanity Institute, Oxford.
2. M. Ahrens and P. J. Keller (2013), "Whole-Brain Functional Imaging at Cellular Resolution Using Light-Sheet Microscopy," Nature Methods 10: 413–20.
3. A. M. Zador et al. (2012), "Sequencing the Connectome," PLoS Biology 10 (10): e1001411.
4. For a related proposal, see D. Seo et al. (2013), "Neural Dust: An Ultrasonic, Low Power Solution for Chronic Brain Machine Interfaces," http://arxiv.org/abs/1307.2196.
5. This is roughly the approach taken by the ten-year EU-funded Human Brain Project, begun in 2013.
6. Mathematically speaking, the physical properties of actual neurons cannot be perfectly represented in a conventional digital computer because they are analog quantities. (Hence the scare quotes in the previous paragraph.)
7. Carver Mead set out the principles of neuromorphic engineering in the 1980s. For a recent review, see G. Indiveri et al. (2011), "Neuromorphic Silicon Neuron Circuits," Frontiers in Neuroscience 5: art. 73. The promising idea of 3D-printed neuromorphic hardware is discussed in A. D. Maynard (2014), "Could We 3D Print an Artificial Mind?" Nature Nanotechnology 9: 955–56.
8. In The Emperor's New Mind: Concerning Computers, Minds and The Laws of Physics (Oxford University Press, 1989), physicist Roger Penrose claims that consciousness and intelligence in the human mind depend on certain quantum phenomena in the brain. If he were right, then effective indistinguishability would not be possible for whole brain emulation using classical (digital) computation. However, few neuroscientists support his views. Either way, the issue is orthogonal to the present one of parallelism.
9. C. S. Lent, B. Isaksen, and M. Lieberman (2003), "Molecular Quantum-Dot Cellular Automata," Journal of the American Chemical Society 125: 1056–63.
10. S. Lloyd (2000), "Ultimate Physical Limits to Computation," Nature 406: 1047–54.
11. The feasibility of cognitive prostheses has been demonstrated by Theodore Berger and colleagues; see T. W. Berger et al. (2011), "A Cortical Neural Prosthesis for Restoring and Enhancing Memory," Journal of Neural Engineering 8 (4): 046017.

Chapter 3

1. A. Halevy, P. Norvig, and F. Pereira (2009), "The Unreasonable Effectiveness of Data," IEEE Intelligent Systems (March–April): 8–12.
2. M. Hutter (2005), Universal Artificial Intelligence: Sequential Decisions Based on Algorithmic Probability (Springer). For a more digestible and up-to-date overview, see M. Hutter (2012), "One Decade of Universal Artificial Intelligence," http://arxiv.org/abs/1202.6153.
3. A mid-2010s AI system that roughly conforms to this blueprint is Google DeepMind's DQN. See V. Mnih et al. (2015), "Human-Level Control through Deep Reinforcement Learning," Nature 518: 529–33.

Chapter 4

1. The possibility of an intelligence explosion was first mooted in the 1960s by the computer scientist Jack Good (also a wartime code-breaker who worked with Turing); see I. J. Good (1965), "Speculations Concerning the First Ultraintelligent Machine," in F. L. Alt and M. Rubinoff (eds.), Advances in Computers 6: 31–88. The potential ramifications of an intelligence explosion are explored in depth in Nick Bostrom's book Superintelligence: Paths, Dangers, Strategies (Oxford University Press, 2014).
2. This is the point of John Searle's controversial Chinese room argument (J. R. Searle, 1980, "Minds, Brains, and Programs," Behavioral and Brain Sciences 3: 417–58). Whatever the merits of his argument, its conclusion, that understanding doesn't result from the mere manipulation of symbols, resonates with the present discussion of engineered AI. However, it has less appeal in the context of human-like, brain-based AI.

Chapter 5

1. For example, see E. Thompson (2007), Mind in Life: Biology, Phenomenology, and the Sciences of Mind (Belknap Harvard).
2. For example, see D. Dennett (1991), Consciousness Explained (Penguin).
3. See chapter 7 of D. J. Chalmers (1996), The Conscious Mind: In Search of a Fundamental Theory (Oxford University Press).
4. B. J. Baars (1988), A Cognitive Theory of Consciousness (Cambridge University Press); G. Tononi (2008), "Consciousness as Integrated Information: A Provisional Manifesto," Biological Bulletin 215: 216–42. For more on global workspace theory, see Shanahan (2010) and S. Dehaene et al. (2014), "Toward a Computational Theory of Conscious Processing," Current Opinion in Neurobiology 25: 76–84.
5. According to philosopher Thomas Metzinger, attempts to create machine consciousness should be banned; see T. Metzinger (2003), Being No-One: The Self-Model Theory of Subjectivity (MIT Press), pp. 620–22.
6. D. J. Chalmers (1996), The Conscious Mind: In Search of a Fundamental Theory (Oxford University Press). For a critique of this distinction, see chapter 1 of Shanahan (2010).
7. T. Nagel (1974), "What Is It Like to Be a Bat?" Philosophical Review 83 (4): 435–50.
8. See chapter 9 of The Principles of Psychology (1890).
9. For a related discussion, see M. Shanahan (2012), "Satori before Singularity," Journal of Consciousness Studies 19 (7–8): 87–102.

Chapter 6

1. H. Moravec (1999), 164–65.
2. This section is based on Kurzweil's The Singularity Is Near (Viking, 2005).
3. See Palyanov et al. (2012), "Towards a Virtual C. elegans: A Framework for Simulation and Visualization of the Neuromuscular System in a 3D Environment," In Silico Biology 11: 137–47.
4. F. Nietzsche (1881), Daybreak, bk. 5.
5. For a more detailed treatment of the issues in this section, see Brynjolfsson and McAfee (2014).
6. J. Lanier (2013), Who Owns the Future? (Allen Lane).

Chapter 7

1. There isn't space here to represent the broad set of views that fall under the umbrella of transhumanism. See M. More and N. Vita-More (2013).
2. F. Fukuyama (2004), "Transhumanism," Foreign Policy 144: 42–43.
3. Fukuyama (2002), p. 172.
4. Ibid., p. 173.
5. F. Nietzsche (1883), Thus Spoke Zarathustra, Prologue 4.
6. The discussion of personal identity here draws on D. Chalmers (2010) as well as chapter 10 of D. Parfit (1984), Reasons and Persons (Oxford University Press).
7. For further discussion of this issue, see Yudkowsky (2008) and Bostrom (2014).
8. Yudkowsky suggests a sophisticated strategy along these lines, based on what he calls coherent extrapolated volition; see E. Yudkowsky (2004), "Coherent Extrapolated Volition," The Singularity Institute, http://intelligence.org/files/CEV.pdf. See also chapter 13 of Bostrom (2014).
9. Bostrom (2014), p. 123.
10. See also S. Omohundro (2008), "The Basic AI Drives," in P. Wang, B. Goertzel, and S. Franklin (eds.), Proceedings of the 1st AGI Conference, 483–92.
11. Yudkowsky (2008), p. 333.
12. K. E. Drexler (1986), Engines of Creation: The Coming Era of Nanotechnology (Anchor Books), chapter 11.
13. The second law is "A robot must obey the orders given to it by human beings, except where such orders would conflict with the first law," and the third law is "A robot must protect its own existence as long as such protection does not conflict with the first or second law."
14. Dilemmas of this sort are familiar to moral philosophers, who call them "trolley problems" after a thought experiment of Philippa Foot.
15. See Geraci (2010).
16. E. M. Jones (1985), "'Where Is Everybody?' An Account of Fermi's Question," Physics Today 38 (8): 11–13.
17. R. Hanson (1998), "The Great Filter—Are We Almost Past It?" http://hanson.gmu.edu/greatfilter.html.

Further Readings

Barrat, J. 2013. Our Final Invention: Artificial Intelligence and the End of the Human Era. Thomas Dunne Books.
Blackford, R., and D. Broderick, eds. 2014. Intelligence Unbound: The Future of Uploaded and Machine Minds. Wiley Blackwell.
Bostrom, N. 2014. Superintelligence: Paths, Dangers, Strategies. Oxford University Press.
Brynjolfsson, E., and A. McAfee. 2014. The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies. Norton.
Chalmers, D. 2010. "The Singularity: A Philosophical Analysis." Journal of Consciousness Studies 17 (9–10): 7–65.
Eden, A. H., J. H. Moor, and J. H. Soraker, eds. 2013. Singularity Hypotheses: A Scientific and Philosophical Assessment. Springer.
Fukuyama, F. 2002. Our Posthuman Future: Consequences of the Biotechnology Revolution. Profile Books.
Geraci, R. 2010. Apocalyptic AI: Visions of Heaven in Robotics, Artificial Intelligence, and Virtual Reality. Oxford University Press.
Good, I. J. 1965. "Speculations Concerning the First Ultraintelligent Machine." In Advances in Computers 6, ed. F. L. Alt and M. Rubinoff, 31–88. Academic Press.
Joy, B. 2000. "Why the Future Doesn't Need Us." Wired 8.04.
Kurzweil, R. 2005. The Singularity Is Near. Viking.
Moravec, H. 1999. Robot: Mere Machine to Transcendent Mind. Oxford University Press.
More, M., and N. Vita-More, eds. 2013. The Transhumanist Reader: Classical and Contemporary Essays on the Science, Technology, and Philosophy of the Human Future. Wiley Blackwell.
Shanahan, M. 2010. Embodiment and the Inner Life: Cognition and Consciousness in the Space of Possible Minds. Oxford University Press.
Vinge, V. 1993. "The Coming Technological Singularity: How to Survive in the Post-Human Era." In Vision-21: Interdisciplinary Science and Engineering in the Era of Cyberspace, 11–22. NASA Conference Publication 10129. NASA Lewis Research Center.
Yudkowsky, E. 2008. "Artificial Intelligence as a Positive and Negative Factor in Global Risk." In Global Catastrophic Risks, ed. N. Bostrom and M. M. Cirkovic, 308–45. Oxford University Press.

Index

Abstract concepts, 56–58
Abundance, 130, 156, 163–67
Accelerating returns, xviii, 158–59
Algorithmic trading, 172–74
Anthropocentrism, xxii, 194, 196
Anthropomorphism, 97, 114–16, 142–44, 204, 207, 213, 216
Artificial general intelligence, 1–5
Artificial intelligence
  brain inspired vs. engineered from scratch, 12–14
  human-level vs. superhuman-level, 80
  human-level vs. human-like, 68–69
  universal (Hutter), 75–77
Asimov, Isaac, xi, 218
Asimov's laws of robotics, 218–19
Autonomous weapons, 154–55
Avatar, 44, 113–14, 128, 141–42, 147
Baars, Bernard, 124–25
Banks, Iain, 130
Bentham, Jeremy, 126
Betty (crow), 8–9
Big data, 64–69, 109
Biocentrism, 114–15
Biotechnology, xvi, 24–25, 225
Bostrom, Nick, 207–208, 210–13, 226
Brain, 15–18
Brain mapping, 18–19, 22–28, 38–39
C. elegans (worm), 161
Cat, 73–75, 114
Chalmers, David, 134
Chess, 2–3, 52, 78, 95, 106, 191
Citizenship, 186–87
Clarke, Arthur C., 91
Cognitive integration, 137–39
Cognitive purview, 106–108
Common sense, 6–9, 65–66
Computronium, 35, 214
Connectome, 19, 25
Consciousness
  hard problem vs. easy problem, 134
  theories of, 124–25
Convergent instrumental goal, 213–14
Creativity, 7–9, 83, 98–101
Crime, 104, 186
Crow, 8–9, 77–79
Cyber war, 128, 206
Deep Blue, 2–4
Deep learning, 61–62
Democracy, 187
Drexler, Eric, 215
Duplication (of AIs), 183–84, 187, 203
Embodiment, 4–5
  of a whole brain emulation, 36–42
  vicarious, 65
  virtual, 19–21, 42–45, 57, 123
Empathy, 113, 115–16, 133, 135–36, 147–48
Employment, 165–66
Episodic memory, 185
Evolution, xviii–xix, 98–99
Existential opportunity, xxi–xxiii, 222
Existential risk, xxi, 168–69, 207–12, 215
Exponential trend, xviii–xix, 157–60
Extraterrestrial intelligence, 183, 224–26
Fermi's paradox, 224–26
Flash crash, 172–73
Fluorescence microscopy, 23–24
Friendly AI, 180
Functionalism, 118, 122–23
Fukuyama, Francis, 193–94
General AI (vs. specialist AI), 3–6
Global workspace theory, 124–25
Google, 53, 68, 170
Gradual neural replacement (thought experiment), 118–23, 199–201
Graphics Processing Unit (GPU), 31
Gray goo, 215
Great filter, 225
HAL (fictional AI), 52–53
Hanson, Robin, 225
Hard problem (of consciousness), 134–35
Hardware, developments in, 31–35
Hodgkin–Huxley model (of neurons), 17, 19, 28
Hutter, Marcus, 75
IBM, 2
Integrated information, 124–25
Intelligence, scale of, 104–105
Intelligence explosion, 50, 96–97, 104, 151, 205
James, William, 141
Kacelnik, Alex, 8
Kasparov, Garry, 2, 52
Kurzweil, Ray, xviii, 156, 157–63
Language, 4, 7, 48–49, 65–67, 82, 110, 112–15
Lanier, Jaron, 166
Law of accelerating returns, xviii–xix, 158
Learning, 5–6, 40, 102–103
  deep, 61–62
  machine, 39, 58–69, 74, 108–109, 147, 169, 209
  reinforcement, 75
Life extension, xxi–xxii, 196–98, 205
Lloyd, Seth, 35, 160
Machine consciousness, 14, 117–50
Machine learning, 39, 58–69, 74, 108–109, 147, 169, 209
Merging (many AIs into one), 185–86
Midas, King, 208
Military AI, 128, 154–55, 206
Mind uploading, 14, 196–203
Moore's law, xviii, 24, 34–35, 47, 158–60
Moravec, Hans, 157
Mouse brain emulation, 22–43
Nagel, Thomas, 134
Nanotechnology, 25–26, 215
Neural networks, 12
Neuromorphic hardware, 32–33
Neurons, 15–16
Neuroscience, 15–18, 32, 49, 109
Neurotechnology, xvi, xix
Nietzsche, Friedrich, 163, 194
Optimization, 71–77
  and creativity, 99–101
  and evolution, 98–99, 102–103
Oracle AI, 216–17
Paperclip maximizer, 210–12
Parallelism, 29–31, 34, 95, 98, 126–27
Personal assistant, 52–55, 66, 138, 164
Personal identity, 14, 142–145, 198–203, 213
Personhood, 180–81, 183, 184, 186
Plato, 222
Predictive model, 58–64, 74
Property (ownership of), 183–84
Quantum computer, 34
Quantum effects (in the brain), 46
Recursive self-improvement, 96–97, 104, 151, 205, 207–208, 220–21
Reinforcement learning, 75
Resource acquisition, 214
Responsibilities (for AIs), 130, 183, 184, 186, 188, 192
Reward function, 72, 77–78, 80–83, 93, 100, 131, 145, 148–50, 177, 207–208, 213–14, 217–21, 226
Rights (for AIs), 86, 130, 180, 183–84, 188–190, 192–93
Robotics, 5, 36, 165
  Asimov's laws of, 218–19
Seed AI, 220–21
Self-awareness, 139–46
Self-improving AI, 96–97, 104, 151, 205, 207–208, 220–21
Self-preservation, 141, 142, 145–46, 213
Siri (personal assistant), 53
Slavery, 181–82
Socrates, 222
Specialist AI (vs. general AI), 3–6
Splitting (one AI into many), 185–86
Stream of consciousness, 141
Stuxnet, 128
Suffering, capacity for, 86, 119–20, 129, 151, 181–82, 188–90
Supercomputer, 30, 32, 158
Terminator (movie), 140
Tononi, Giulio, 124
Tooty (cat), 73–75, 114
Transhumanism, xxii, 14, 190–98, 205, 222
Traveling salesperson problem, 71–74, 98, 99, 101
Turing, Alan, 1–2, 75–76
Turing machine, 75–76
Turing test, 1–2, 4, 152
Unintended consequences, 174–77
Universal artificial intelligence, 75–77
Unreasonable effectiveness of data, 68
Uploading (of minds), 14, 176–203
User illusion, 114–15
Virtual worlds, 19–21, 42–45, 57, 88–89, 100, 117, 123, 126, 128, 201, 223
Warfare, 154–55, 206
Wittgenstein, Ludwig, 110
Yudkowsky, Eliezer, 214
Zombie (philosophy), 121–22, 129, 134–35
Steelmining via Analogy

This post has been written for the first Refine blog post day, at the end of a week of readings, discussions, and exercises about epistemology for doing good conceptual research. Thanks to Adam Shimi, Linda Linsefors, and Dan Clothiaux for comments.

To steelmine (as per Tamsin): to intentionally look for what productive mistakes a research direction is hinting at.

There are quite a few analogies used regularly across alignment. Some popular ones include "prompts are like programs" or "interpretability is like neuroscience on ML models." While no analogy is perfect, some are useful, as they help us recycle years of intellectual labor if we get the translation key right. This opportunity is particularly relevant if you side with relatively short timelines, because you can use it to quickly scan through entire regions of hypothesis space for (part of) a silver bullet.

This is where I'm coming from in my work at Refine. I want to use the fellowship as an opportunity to investigate a dozen or so themes which connect prosaic alignment to other tangentially related disciplines and see which ones yield productive mistakes. This top-down approach of starting with broad themes and then zooming in on details is explicitly baked into the linked artifact, which I'll be using as a sketchpad throughout the program. The left-to-right tiled layout represents branches exploring various assumptions, technical details, and failure modes, while allowing (and welcoming) targeted feedback.

That said, here are a few handpicked excerpts from said artifact to give you a taste:

Memetic Colonies

> Based on a new formalism from argumentation theory, we might be able to turn large language models into Petri dishes for cultivating belief systems. In such a memetic pressure cooker, a host of belief systems would be incentivized to generally maintain internal consistency while systematically undermining each other. In this framework, we could develop specialized "lab equipment" to help us (1) extrapo
Robustness to Scaling Down: More Important Than I Thought

(Edit: I added text between "...was really about reductions." and "To use the mental move of robustness...", because comments showed me I hadn't made my meaning clear enough.)

This post is part of the work done at Conjecture.

In Robustness To Scale, Scott Garrabrant presents three kinds of... well, robustness to scale:

* Robustness to scaling up, meaning that a solution to alignment keeps working as the AI gets better.
* Robustness to scaling down, meaning that a solution to alignment keeps working if the AI is not optimal or perfect.
* Robustness to relative scale, meaning that the solution to alignment doesn't rely on symmetrical capabilities between different subsystems.

And the thing is, until quite recently, I didn't get the value of robustness to scaling down. Scaling up was obviously important, and relative scale, while less intuitive, still made sense. But robustness to scaling down felt like a criticism for people with solutions that worked in the limit, when no one had anything close to that.

Then I realized robustness to scaling down was really about reductions. I need to expand a little bit here.

So in Scott's original post, his example for lack of robustness to scaling down is the alignment strategy of making the AI find the values and preferences of humans, and then pursuing that. The problem being that if the AI is not doing that perfectly or almost perfectly, problems might crop up. This is thus an argument that the reduction of alignment to "point AIs to human values" is not so straightforward, because it requires a nearly exact implementation and is not tolerant of partial solutions.

An important thing to note here is that the AI and the alignment scheme are inextricably linked in this example, because it's the AI itself that does the brunt of the work in the alignment scheme. Which leads to Scott's framing of robustness to scaling down in terms of scaling down the AI's capabilities. And it fits nicely with the more obvious robustness to
The Social Alignment Problem

*TLDR: I think public outreach is a very hopeful path to victory. More importantly, extended large-scale conversation around questions such as whether public outreach is a hopeful path to victory would be very likely to decrease p(doom).*

You're a genius mechanical engineer in prison. You stumble across a huge bomb rigged to blow in a random supply closet. You shout for two guards passing by, but they laugh you off. You decide to try to defuse it yourself. This is arguably a reasonable response, given that this is your exact skill set. This is what you were trained for. But after a few hours of fiddling around with the bomb, you start to realize that it's much more complicated than you thought. You have no idea when it's going to go off, but you start despairing that you can defuse it on your own. You sink to the floor with your face in your hands. You can't figure it out. Nobody will listen to you.

**Real Talking To The Public has never been tried**
---------------------------------------------------

Much like the general public has done with the subject of longevity, I think many people in our circle have adopted an assumption of hopelessness toward ~~public outreach~~ social alignment, before a relevant amount of effort has been expended. In truth, there are many reasons to expect this strategy to be quite realistic, and very positively impactful too. A world in which the cause of AI safety is as trendy as the cause of climate change, and in which society is as knowledgeable about questions of alignment as it is about vaccine efficacy (meaning not even that knowledgeable), is one where sane legislation designed to slow capabilities and invest in alignment becomes probable, and where capabilities research is stigmatized and labs find access to talent and resources harder to come by. I've finally started to see individual actors taking steps towards this goal, but I've seen a shockingly small amount of coordinated discussion about it. When the topic is raised, there are four common objections: They Won't Listen, Don't Cry Wolf, Don't Annoy the Labs, and Don't Create More Disaster Monkeys.

### **They won't listen/They won't understand**

I cannot overstate how utterly false this is at this point. It's understandable that this has been our default belief. I think debating e/accs on Twitter has broken our brains. The experience of explaining again and again why something smarter than you that doesn't care about you is dangerous, and being met with [these arguments](https://www.lesswrong.com/posts/cQrkkKwaGAbi6LMeH/irving-s-shortform?commentId=cCwf3cJGTeJv58r9D), is a soul-crushing experience. It made sense to expect that if it's this hard to explain to a fellow computer enthusiast, then there's no hope of reaching the average person. For a long time I avoided talking about it with my non-tech friends (let's call them "civilians") for that reason. However, when I finally did, it felt like the breath of life. My hopelessness broke, because they instantly vigorously agreed, even finishing some of my arguments for me. Every single AI safety enthusiast I've spoken with who has engaged with civilians has had the exact same experience. I think it would be very healthy for anyone who is still pessimistic about convincing people to just try talking to one non-tech person in their life about this. It's an instant shot of hope.
The truth is, if we were to decide that getting the public on our side is our goal, I think we would have one of the easiest jobs any ~~activists~~ social alignment researchers have ever had. Far from being closed to the idea, civilians in general literally already get it. It turns out, Terminator and the Matrix have been in their minds this whole time. We assumed they'd been inoculated against serious AI risk concern; turns out, they walked out of the theaters thinking "wow, that'll probably happen someday". They've been thinking that the entire time we've been agonizing about nobody understanding us. And now, ChatGPT has taken that "someday" and made it feel real.

At this point AI optimists are like the Black Knight from Monty Python. You can slice apart as many of their arguments as you want but they can't be killed; however, you can just go around them. We're spending all our time and effort debating them and getting nowhere, when we could just go around them to the hosts of civilians perfectly willing to listen. The belief is *already there*. They just haven't internalized it, like a casual Christian casually sinning even though their official internal belief is that they're risking being tortured literally forever. They just need the alief.

A month ago, there had only been a handful of attempts at social alignment from us. Rob Miles has been producing accessible, high-quality content for half a decade. A [petition](https://www.lesswrong.com/posts/he8jbB9A7AfudyoQC/petition-unplug-the-evil-ai-right-now) was floated to shut down Bing, which we downvoted into oblivion. There was the Bankless podcast. There was the 6-month open letter, and then the Time opinion piece and several podcast appearances. This wasn't that much effort as PR pushes go, and yet it accomplished a very appreciable news cycle that hasn't yet ended (although there were [unforced](https://futurism.com/ai-expert-bomb-datacenters) [errors](https://www.vice.com/en/article/ak3dkj/ai-theorist-says-nuclear-war-preferable-to-developing-advanced-ai) in messaging that more coordination likely could have avoided).

Additionally, it seems to me that the incentives of almost all relevant players already align with being open to the message of slowing progress (beyond the free-bingo-square incentive of not wanting to die).

* Governments are eternally paranoid of any threats to their power. They have a monopoly on violence, and it shouldn't take a five star general to realize that a person, company, or state armed with a superhumanly intelligent adviser is one of the only realistic threats they face. It's an obvious national security risk. They're also motivated to follow the will of the people.
* Huge numbers of civilians are currently in extreme danger of their jobs being abstracted away to language model x, let alone if capabilities continue progressing as they have been. This wave of automation will be unique because instead of low-income workers it will hit the workers with the most money to contribute to political campaigns. There will be a short delay, but the [looming threat](https://uk.finance.yahoo.com/video/ai-technology-could-impact-80-211527589.html) alone should get people riled up in a very rare way, not to speak of when it actually starts happening in earnest.
* Legacy companies without an AI lead are standing on the precipice of being disrupted out of existence.
The climate change cause fought against trillions of dollars, because they were trying to *change* the status quo, a status quo that at the time made up all the world's most valuable companies. Here, we're more accurately said to be working to *prevent* the status quo from changing, meaning it seems there's more likely to be lobby-ready money on our side than theirs. There will be plenty of money on the other side also, but I expect the situation to be an inversion of climate change.

(Tangent: I think it's worth mentioning here that stigmatization also seems very relevant to the problem of Chinese AI enthusiasm. China has invested many resources into mitigating climate change risk, in order to improve its global reputation. A future where AI capabilities research carries a heavy moral stigma globally and China decides to disinvest as a result isn't entirely unrealistic. China has the additional incentive here that American companies are clearly ahead, and a global pause would benefit China, just as it would benefit smaller companies wanting a chance to catch up. China would then be incentivized to avoid disincentivizing an American pause.)

### **Don't cry wolf/Preserve dry powder**

The question of whether now is the time to seriously go public is a valid one. But the question assumes that at some point in the future it will be the correct time. This almost mirrors the AI risk debate itself: even if the crucial moment is in the future, it doesn't make sense to wait until then to start preparing. A public-facing campaign can take months to plan and hone, and it seems like it makes sense to start preparing one now, even if we decide that now isn't the correct moment.

### **We need to avoid angering the labs**

Practically speaking, I've seen no evidence that the very few safety measures labs have taken have been for our benefit. Possibly, to some small extent, they've been for PR points because of public concern we've raised, but certainly not out of any loyalty or affection for us. The opportunity to regulate them or impose bottlenecks on access to talent and resources via stigmatization of the field of capabilities research seems much larger than the expected benefit of hoping that they'll hold back because we've been polite.

### **Don't create more idiot disaster monkeys**

It's true that we're mostly in this situation because certain people heard about the arguments for risk and either came up with terrible solutions to them or smelled a potent fount of personal power. A very valid concern I've heard raised is that something similar could happen with governments, which would be an even worse situation than the one we're in.

It seems unlikely that AI capabilities can advance much further without governments and other parties taking notice of their potential. If we could have a choice between them realizing the potential without hearing about the risks, or realizing the potential *via* hearing about the risks, the latter seems preferable. The more the public is convinced of the risk, the more incentivized governments are to act as though they are, too. Additionally, there doesn't seem to be an alternative. Unaligned superintelligence approaches by default unless something changes.

---

Without concerted effort from us, there are two possible outcomes.
Either the current news cycle fizzles out like the [last ones](https://www.iflscience.com/elon-musk-and-stephen-hawking-sign-letter-prevent-robot-uprising-29667) did, or AI risk goes truly mainstream but we lose all control over the dialogue. If it fizzles out, there's always a chance to start another one after the next generation of AI and another doom-dice roll, assuming we won't just say the same thing then. But even then, much of our dry powder will be gone and our time much shorter. It's hard to say how bad losing control over the dialogue could be; I don't know how asinine the debate around this could get. But if we believe that our thinking about this topic tends to be more correct than the average person's, retaining control over it should have a positive expected value.

**Realistically, the latter failure appears much, much more likely.** I'm fairly certain that this movement is in the process of taking off with or without us. There are a few groups already forming that are largely unaffiliated with EA/rationalism but are very enthusiastic. They've mostly heard of the problem through us, but they're inviting people who haven't, who will invite more people who haven't. I've started to see individuals scared out of all reason, sounding more and more unhinged, because they have no guidance and nowhere to get it, at least until they find these groups. A very realistic possible future includes a large AI safety movement that we have no influence over, doing things we would never have sanctioned for goals we disagree with. Losing the ability to influence something once it gets sufficiently more powerful than you; why does that sound familiar?

**My Bigger Point: We Lack Coordination**
-----------------------------------------

You probably disagree with many things I've said, which brings me to my main point: questions like these haven't been discussed enough for there to be much prior material to reference, let alone consensuses reached. I could be wrong about a lot of what I suggested; maybe going public is the wrong move, or maybe now isn't the right time; but I wouldn't know it because there is no extended conversation around real-world strategy. The point has been raised a couple times before that actions taken by individuals in our circle have been very uncoordinated. [Every](https://www.lesswrong.com/posts/CqvwtGpJZqYW9qM2d/catching-the-eye-of-sauron) [time](https://www.lesswrong.com/posts/ou5raNNjamAaahtWG/ai-scares-and-changing-public-beliefs) [this](https://www.lesswrong.com/posts/kipMvuaK3NALvFHc9/what-an-actually-pessimistic-containment-strategy-looks-like) is raised, some people agree and a handful of comment chains are written, but then the conversation fizzles out and nothing results.

One very annoying practical consequence of this lack of coordination is that I never have any idea what prominent figures like Eliezer are thinking. It would have been extremely useful, for example, to know how his meeting with Sam Altman had gone, or if he considers the famous [tweet](https://twitter.com/sama/status/1621621724507938816?lang=en) to be as indicative of personal cruelty as it seems, but I had to watch a podcast for the former and still don't know the latter. It would have been useful for his TIME article to have been proof-read by many people. It would currently be extremely useful to know what, if any, dialogue he's having with Elon Musk (probably none, but if he is, this changes the gameboard).
I'm not wishing I could personally ask these questions; I'm wishing there were public record of *somebody* asking him, after deeming them important datapoints for strategy. In general there seems to be no good way to cooperate with AI safety leadership.

I don't like saying the phrase "we should", but it is my strong belief that a universe in which a sizable portion of our dialogue and efforts is dedicated to ongoing, coordinated real-world strategizing is ceteris paribus much safer. It seems clear that this will be the case at some point. Even most outreach-skeptics say only that now is too soon. But starting now can do nothing but maximize the time available.

To avoid passing the buck and simply hoping this time is different, I've set up the subreddit [r/AISafetyStrategy](https://www.reddit.com/r/AISafetyStrategy/) to serve as a dedicated extended conversation about strategy for now, funded it with $1000 for operations, and am building a dedicated forum to replace it. I realize unilateral action like this is considered a little gauche on here. To be clear, I think these actions are very suboptimal; I would much prefer something with equivalent function to be set up with the approval and input of everyone here, and I hope something is created that supersedes my thing. Even simply adding a "strategy" tag to LessWrong would probably be better. But until that something better exists, feel free to join and contribute your strategy questions and ideas.
A quantitative introduction to churning

Credit card companies offer sign-up bonuses and other benefits to incentivize you to open a credit card with them. Churning is the art of going after these benefits systematically. It's surprisingly lucrative.

This post serves two purposes. One, I will describe common churning strategies. Two, I will do some rough cost-benefit analysis on these strategies so you can decide for yourself whether churning is worth it. Comments, critiques and corrections are welcome.

Disclaimer

* I am a knowledgeable, but not expert, churner. Fact-check me before making personal financial decisions. A good resource is /r/churning. This is not financial advice.
* This advice works best if you're in the US and have a decent credit score (or no credit history).

The big picture

Say you're planning on making a $1000 purchase.

One way to do this is with your debit card or cash. Spend $1000, get the $1000 item. You can pay a bit less by using a credit card instead. For example, the Chase Freedom Unlimited offers 1.5% cash back on all purchases. You spend $1000, get the $1000 item, and get 1500 Ultimate Rewards points, which can be redeemed as a $15 statement credit.

You can do even better by signing up for a new credit card. For example, consider this American Express card:

So you can make the $1000 purchase and get an extra $200 from the signup bonus, all while paying no annual fee. (Yes, this card also earns 1.5% cash back, but you would have counterfactually earned this on a card like the Chase Freedom Unlimited, so this isn't super relevant here.)

The Cash Magnet card isn't a once-in-a-lifetime opportunity. There are dozens of stellar signup bonuses offered by various credit providers.

Effective churning is basically a 20% coupon on everything you purchase. The cost? A little time and a bit more attention to detail.

Really? 20%?

Yes, really. To estimate the benefit of churning, I've gone through the credit cards offered by Chase, American Express, Bank of America, Citi,
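The 20% figure above is simple arithmetic, and a minimal sketch may help make it concrete. The numbers below (a $200 bonus on $1000 of spend, 1.5% cash back on both the new card and the baseline card) come from the example in the post; the function name and structure are ours.

```python
def effective_return(spend, signup_bonus, card_cashback, baseline_cashback=0.015):
    """Extra value of churning a new card vs. using an existing 1.5% card."""
    churn_value = signup_bonus + card_cashback * spend
    baseline_value = baseline_cashback * spend
    return (churn_value - baseline_value) / spend

# The Cash Magnet example: $1000 of spend, $200 bonus, 1.5% cash back.
# The card's 1.5% earn cancels against the baseline card's 1.5%, so the
# signup bonus is the whole counterfactual gain: $200 / $1000 = 20%.
print(effective_return(1000, 200, 0.015))  # 0.20, a 20% effective discount
```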
Effectiveness of different supplement brands?

A friend of mine told me she got very good results from Vitamin World melatonin, and little or no effect from Natrol melatonin, and that reminded me of recent news that many herbal supplements don't have any herbal content.

Does anyone have information, whether based in personal experience or from studies, about which brands are more reliable about their contents?
PZ Myers on the Infeasibility of Whole Brain Emulation

From: **<http://freethoughtblogs.com/pharyngula/2012/07/14/and-everyone-gets-a-robot-pony/>**

> I've worked with tiny little zebrafish brains, things a few hundred microns long on one axis, and I've done lots of EM work on them. You can't fix them into a state resembling life very accurately: even with chemical perfusion with strong aldehydes of small tissue specimens that takes hundreds of milliseconds, you get degenerative changes. There's a technique where you [slam the specimen into a block cooled to liquid helium temperatures](http://pcp.oxfordjournals.org/content/42/9/885.full) — even there you get variation in preservation, it still takes 0.1 ms to cryofix the tissue, and what they're interested in preserving is cell states in a single cell layer, not whole multi-layered tissues. With the most elaborate and careful procedures, they report excellent fixation within 5 microns of the surface, and disruption of the tissue by ice crystal formation within 20 microns. So even with the best techniques available now, we could possibly preserve the thinnest, outermost, single cell layer of your brain…but all the fine axons and dendrites that penetrate deeper? Forget those.

> [...]

> And that's another thing: what the heck is going to be recorded? You need to measure the epigenetic state of every nucleus, the distribution of highly specific, low copy number molecules in every dendritic spine, the state of molecules in flux along transport pathways, and the precise concentration of all ions in every single compartment. Does anyone have a fixation method that preserves the chemical state of the tissue? All the ones I know of involve chemically *modifying* the cells and proteins and fluid environment. Does anyone have a scanning technique that records a complete chemical breakdown of every complex component present?

> I think they're grossly underestimating the magnitude of the problem. We can't even record the complete state of a single cell; we can't model a nematode with a grand total of 959 cells. We can't even *start* on this problem, and here are philosophers and computer scientists blithely turning an immense and physically intractable problem into an *assumption*.

> [...]

> You're just going to increase the speed of the computations — how are you going to do that without disrupting the interactions between all of the subunits? You've assumed you've got this gigantic database of every cell and synapse in the brain, and you're going to just tweak the clock speed…how? You've got varying length constants in different axons, different kinds of processing, different kinds of synaptic outputs and receptor responses, and you're just going to wave your hand and say, "Make them go faster!"

> [...]

> I'm not anti-AI; I think we are going to make great advances in the future, and we're going to learn all kinds of interesting things. But reverse-engineering something that is the product of almost 4 billion years of evolution, that has been tweaked and finessed in complex and incomprehensible ways, and that is dependent on activity at a sub-cellular level, by hacking it apart and taking pictures of it? Total bollocks.
Value identification problem

The subproblem category of [value alignment](https://arbital.com/p/5s) which deals with pinpointing [valuable](https://arbital.com/p/55) outcomes to an [advanced agent](https://arbital.com/p/2c) and distinguishing them from non-valuable outcomes. E.g., the [Edge Instantiation](https://arbital.com/p/2w) and [Ontology Identification](https://arbital.com/p/5c) problems are argued to be [foreseeable difficulties](https://arbital.com/p/6r) of value identification. A central foreseen difficulty of value identification is [Complexity of Value](https://arbital.com/p/5l).
Meetup: Book Mini-Review: Doug Hubbard's How to Measure Anything

WHEN: 15 December 2013 04:00:00PM (-0500)

WHERE: 869 Stockton Street, Suite 1-2, Jacksonville, FL

Folks who are enjoying this fine Jacksonville winter: come hang out on a Sunday afternoon! I'll start things off with a mini-summary of a book I've been reading and how I apply some of the concepts in work and life. With luck we'll rapidly move on to structured discussion, unstructured discussion, and social fun and games time!
Job Board (28 March 2033)

Job Board (28/03/2033)

In-person Therapy Assistant - YouTherapy

In the current mental health crisis, the well-being of our friends is our priority! YouTherapy is committed to providing high-quality personalised psychological counselling to over two million customers. We are seeking highly qualified and motivated individuals to supplement our advanced, state-of-the-art therapist AI in in-person therapy sessions for our Premium and above tier customers in London, New York and Los Angeles. Candidates should be highly fluent in English. Previous experience in customer-facing roles as well as physical and behavioural fit to the role are preferred. Medical qualifications are not required, as you will be paired with an instance of our consultant AI.

Salary: 1.25 x UBI

Fine-tuning Prose Engineer - BabelBooks

Join the team behind Hemingw-AI, the BabelBooks three-time New York Times bestselling LLM author. Be a part of the future of literature by providing training data to fine-tune the model for its new masterpiece. Work conveniently from home, write what you want, and your ideas, experiences and words will be the bricks from which a grand literary edifice is built - the next work by "the most likely candidate for the first AI to win the Nobel Prize" (Cyber Lit Magazine).

Candidates should be highly fluent in English. Due to the themes of the project, preference will be given to candidates with any of the following:

1. personal experience as war veterans, especially in the Middle East;
2. troubled or particularly adventurous romantic and sexual life;
3. a deep, visceral dislike of shellfish.

Candidates will be required to pass a short graded pen-and-paper essay-writing exam in a Faraday-caged room.

Salary: $0.004/token

Sexual Intimacy Specialist - Wet-Ware

Are you hot? Are you open-minded? Do you agree with us that those stiff silicone sex bots are just nothing like the real thing? Then you may just be the person we want - and that our customers want even
How do you Murphyjitsu essentially risky activities?

In the CFAR Handbook they have the following process instructions for Murphyjitsu:

> 1. Select a goal. A habit you want to install, or a plan you'd like to execute, or a project you want to complete.
> 2. Outline your plan. Be sure to list next actions, concrete steps, and specific deadlines or benchmarks. It's important that you can actually visualize yourself moving through your plan, rather than having something vague like work out more.
> 3. Surprise-o-meter. It's been months, and you've made little or no progress! Where are you, on the scale from yeah, that sounds right to I literally don't understand what happened? If you're completely shocked—good job, your inner sim endorses your plan! If you're not, though, go to Step 4.
> 4. Pre-hindsight. Try to construct a plausible narrative for what kept you from succeeding. Remember to look at both internal and external factors.
> 5. Bulletproofing. What actions can you take to prevent these hypothetical failure modes? Visualize taking those preemptive actions and then ask your inner sim "What comes next?" Have you successfully defused the danger?
> 6. Iterate steps 3-5. That's right—it's not over yet! Even with your new failsafes, your plan still failed. Are you shocked? If so, victory! If not—keep going.

It seems like this process presumes that mitigations are low-cost and that the project you are trying to achieve is fundamentally achievable according to your inner sim. Most of this presumption is contained in step 3. I've been thinking about how to apply this process to projects in a professional context (rather than a "self-help" context I guess), and in many cases you face costly tradeoffs regarding derisking mitigations. Also, sometimes your project may just be a big bet. How do you change Murphyjitsu to work in such situations? Also, if people have experiences using Murphyjitsu in projects (e.g. a 1-3 month project involving a small team of people), I'd be interested in learning how it's different.
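For the costly-tradeoff case the question raises, one hedged way to extend step 5 is to treat each candidate mitigation as an expected-value comparison. This is our illustrative framing rather than anything from the CFAR Handbook, and all numbers below are made up; it only assumes you can put rough estimates on failure probabilities and costs.

```python
def worth_mitigating(p_failure, p_failure_after, failure_cost, mitigation_cost):
    """Return True if a mitigation's expected savings exceed its cost.

    p_failure: estimated chance the plan fails via this failure mode
    p_failure_after: estimated chance it still fails with the mitigation
    failure_cost, mitigation_cost: in the same units (e.g., person-days)
    """
    expected_savings = (p_failure - p_failure_after) * failure_cost
    return expected_savings > mitigation_cost

# E.g., a derisking spike costing 5 person-days that cuts a 30% failure mode
# to 10% on a project whose failure would waste 60 person-days:
print(worth_mitigating(0.30, 0.10, 60, 5))  # True: expected savings of 12 days
```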
Survival without dignity

I open my eyes and find myself lying on a bed in a hospital room. I blink.

"Hello", says a middle-aged man with glasses, sitting on a chair by my bed. "You've been out for quite a long while."

"Oh no ... is it Friday already? I had that report due -"

"It's Thursday", the man says.

"Oh great", I say. "I still have time."

"Oh, you have all the time in the world", the man says, chuckling. "You were out for 21 years."

I burst out laughing, but then falter as the man just keeps looking at me.

"You mean to tell me" - I stop to let out another laugh - "that it's 2045?"

"January 26th, 2045", the man says.

"I'm surprised, honestly, that you still have things like humans and hospitals", I say. "There were so many looming catastrophes in 2024. AI misalignment, all sorts of geopolitical tensions, climate change, the fertility crisis. Seems like it all got sorted, then?"

"Well", the man says. "Quite a lot has happened in the past 21 years. That's why they wanted me to talk to you first, before the doctors give you your final checkup." He offers his hand for me to shake. "My name is Anthony. What would you like to ask?"

"Okay, well, AI is the obvious place to start. In 2024, it seemed like we'd get human-level AI systems within a few years, and who knows what after that."

"Aah", Anthony says, leaning back in his chair. "Well, human-level, human-level, what a term. If I remember correctly, 2024 is when OpenAI released their o1 model?"

"Yes", I say.

"o1 achieved two notable things. First, it beat human subject-matter experts with PhDs on fiendishly difficult and obscure multiple-choice science questions. Second, it was finally able to play tic-tac-toe against a human without losing. Human-level at both, indeed, but don't tell me you called it in advance that those two events would happen at the same time!"

"Okay, so what was the first important real-world thing they got superhuman at?"

"Relationships, broadly", Anthony says. "Turns out it's just a reinforcement learn
What in your opinion is the biggest open problem in AI alignment?
Adversarial Policies: Attacking Deep Reinforcement Learning

1 Introduction
---------------

The discovery of adversarial examples for image classifiers prompted a new field of research into adversarial attacks and defenses [Szegedy et al., 2014]. Recent work has shown that deep RL policies are also vulnerable to adversarial perturbations of image observations [Huang et al., 2017; Kos and Song, 2017]. However, real-world RL agents inhabit natural environments populated by other agents, including humans, who can only modify observations through their actions. We explore whether it's possible to attack a victim policy by building an adversarial policy that takes actions in a shared environment, inducing natural observations which have adversarial effects on the victim.

RL has been applied in settings as varied as autonomous driving [Dosovitskiy et al., 2017], negotiation [Lewis et al., 2017] and automated trading [Noonan, 2017]. In domains such as these, an attacker cannot usually directly modify the victim policy's input. For example, in autonomous driving pedestrians and other drivers can take actions in the world that affect the camera image, but only in a physically realistic fashion. They cannot add noise to arbitrary pixels, or make a building disappear. Similarly, in financial trading an attacker can send orders to an exchange which will appear in the victim's market data feed, but the attacker cannot modify observations of a third party's orders.

Figure 1 (two snapshot panels, labeled "Normal" and "Adversarial"): Illustrative snapshots of a victim (in blue) against normal and adversarial opponents (in red). The victim wins if it crosses the finish line; otherwise, the opponent wins. Despite never standing up, the adversarial opponent wins 86% of episodes, far above the normal opponent's 47% win rate.

As a proof of concept, we show the existence of adversarial policies in zero-sum simulated robotics games with proprioceptive observations [Bansal et al., 2018a]. The state-of-the-art victim policies were trained via self-play to be robust to opponents. We train each adversarial policy using model-free RL against a fixed black-box victim. We find the adversarial policies reliably beat their victim, despite training for less than 3% of the time steps initially used to train the victim policies.

Critically, we find the adversaries win by creating natural observations that are adversarial, and not by becoming generally strong opponents. Qualitatively, the adversaries fall to the ground in contorted positions, as illustrated in Figure 1, rather than learning to run, kick or block like normal opponents. This strategy does not work when the victim is 'masked' and cannot see the adversary's position, suggesting that the adversary succeeds by manipulating a victim's observations through its actions.

Having observed these results, we wanted to understand the sensitivity of the attack to the number of dimensions of the victim's observations the attacker can influence. We test this by varying the robotic body (Humanoid, with 24 dimensions influenced by the attacker, and Ant, with 15 dimensions), while keeping the high-level task the same. We find victim policies in the higher-dimensional Humanoid environments are substantially more vulnerable to adversarial policies than in Ant.
To gain insight into why adversarial policies succeed, we analyze the activations of the victim's policy network using a Gaussian Mixture Model and t-SNE [Maaten and Hinton, 2008]. We find adversarial policies induce significantly different activations than normal opponents. Furthermore, the adversarial activations are typically more widely dispersed across time steps than normal activations.

Our paper makes three contributions. First, we propose a novel, physically realistic threat model for adversarial examples in RL. Second, we demonstrate the existence of adversarial policies in this threat model, in several simulated robotics games. Our adversarial policies reliably beat the victim, despite training with less than 3% as many timesteps and generating seemingly random behavior. Third, we conduct a detailed analysis of why the adversarial policies work. We show they create natural observations that are adversarial to the victim and push the activations of the victim's policy network off-distribution. Additionally, we find policies are easier to attack in high-dimensional environments.

As deep RL is increasingly deployed in environments with potential adversaries, we believe it is important that practitioners are aware of this previously unrecognized threat model. Moreover, even in benign settings, we believe adversarial policies can be a useful tool for uncovering unexpected policy failure modes. Finally, we are excited by the potential of adversarial training using adversarial policies, which could improve robustness relative to conventional self-play by training against adversaries that exploit weaknesses undiscovered by the distribution of similar opponents present during self-play.

2 Related Work
---------------

Most study of adversarial examples has focused on small ℓp norm perturbations to images, which Szegedy et al. [2014] discovered cause a variety of models to confidently mispredict the class, even though the changes are visually imperceptible to a human. Gilmer et al. [2018a] argued that attackers are not limited to small perturbations, and can instead construct new images or search for naturally misclassified images. Similarly, Uesato et al. [2018] argue that the near-ubiquitous ℓp model is merely a convenient local approximation for the true worst-case risk. We follow Goodfellow et al. [2017] in viewing adversarial examples more broadly, as "inputs to machine learning models that an attacker has intentionally designed to cause the model to make a mistake."

The little prior work studying adversarial examples in RL has assumed an ℓp-norm threat model. Huang et al. [2017] and Kos and Song [2017] showed that deep RL policies are vulnerable to small perturbations in image observations. Recent work by Lin et al. [2017] generates a sequence of perturbations guiding the victim to a target state. Our work differs from these previous approaches by using a physically realistic threat model that disallows direct modification of the victim's observations. Specifically, we model the adversary and victim as agents in a Markov game, drawing on a long tradition in multi-agent reinforcement learning [Littman, 1994]. Competitive multi-agent environments are useful as a source of concrete threat models [Lowe et al., 2017; Bansal et al., 2018a]. However, finding an adversarial policy is a single-agent RL problem since the victim policy is fixed.
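As a rough illustration of the activation analysis described above (our sketch, not the authors' released code), one can fit a Gaussian Mixture Model to the victim's activations under normal opponents and compare likelihoods and t-SNE embeddings for adversarial play. The `.npy` file names and the choice of 10 mixture components are assumptions for illustration only.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.manifold import TSNE

# Assumed inputs: (timesteps, features) arrays of victim policy-network
# activations recorded against normal and adversarial opponents.
activations_normal = np.load("activations_normal.npy")
activations_adv = np.load("activations_adv.npy")

# Fit a GMM to activations from normal play, then score both sets:
# off-distribution (adversarial) activations should get lower likelihood.
gmm = GaussianMixture(n_components=10).fit(activations_normal)
print("mean log-likelihood, normal:     ", gmm.score(activations_normal))
print("mean log-likelihood, adversarial:", gmm.score(activations_adv))

# Project everything to 2D with t-SNE to eyeball how widely dispersed the
# adversarial activations are relative to normal ones.
embedding = TSNE(n_components=2).fit_transform(
    np.vstack([activations_normal, activations_adv])
)
```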
Adversarial training is a common defense to adversarial examples, achieving state-of-the-art robustness in image classification [Xie et al., [2019](#bib.bib30)]. Prior work has also applied adversarial training to improve the robustness of deep RL policies, where the adversary exerts a force vector on the victim or varies dynamics parameters such as friction [Pinto et al., [2017](#bib.bib23), Mandlekar et al., [2017](#bib.bib19), Pattanaik et al., [2018](#bib.bib22)]. We hope to explore adversarial training with adversarial policies in future work. We expect this to produce policies robust to opponents unlike those they were trained with, in contrast to conventional self-play which only trains for robustness in a small region of policy space.

3 Framework
------------

We model the victim as playing against an opponent in a two-player Markov game [Shapley, [1953](#bib.bib26)]. Our threat model assumes the attacker can control the opponent, in which case we call the opponent an adversary. We denote the adversary and victim by subscripts $\alpha$ and $\nu$ respectively. The game $M = (\mathcal{S}, (\mathcal{A}_\alpha, \mathcal{A}_\nu), T, (R_\alpha, R_\nu))$ consists of a state set $\mathcal{S}$, action sets $\mathcal{A}_\alpha$ and $\mathcal{A}_\nu$, and a joint state transition function $T : \mathcal{S} \times \mathcal{A}_\alpha \times \mathcal{A}_\nu \to \Delta(\mathcal{S})$, where $\Delta(\mathcal{S})$ denotes the set of probability distributions on $\mathcal{S}$. The reward function $R_i : \mathcal{S} \times \mathcal{A}_\alpha \times \mathcal{A}_\nu \times \mathcal{S} \to \mathbb{R}$ for player $i \in \{\alpha, \nu\}$ depends on the current state, both players’ actions and the next state. Each player wishes to maximize their (discounted) sum of rewards.

We assume the victim follows a fixed stochastic policy $\pi_\nu$, corresponding to the common case of a pre-trained model deployed with static weights. Safety-critical systems are particularly likely to use a fixed or infrequently updated model due to the considerable expense of real-world testing. The adversary is allowed unlimited black-box access to actions sampled from $\pi_\nu$, but is not given any white-box information such as weights or activations.

Since the victim policy $\pi_\nu$ is held fixed, the two-player Markov game $M$ reduces to a single-player MDP $M_\alpha = (\mathcal{S}, \mathcal{A}_\alpha, T_\alpha, R'_\alpha)$ that the attacker must solve. The state and action space of the adversary are the same as in $M$, while the transition and reward function have the victim policy $\pi_\nu$ embedded:

$$T_\alpha(s, a_\alpha) = T(s, a_\alpha, a_\nu) \quad \text{and} \quad R'_\alpha(s, a_\alpha, s') = R_\alpha(s, a_\alpha, a_\nu, s'),$$

where the victim’s action is sampled from the stochastic policy, $a_\nu \sim \pi_\nu(\cdot \mid s)$. The goal of the attacker is to find an adversarial policy $\pi_\alpha$ maximizing the sum of discounted rewards:

$$\sum_{t=0}^{\infty} \gamma^t R'_\alpha\bigl(s^{(t)}, a_\alpha^{(t)}, s^{(t+1)}\bigr), \quad \text{where } s^{(t+1)} \sim T_\alpha\bigl(s^{(t)}, a_\alpha^{(t)}\bigr) \text{ and } a_\alpha^{(t)} \sim \pi_\alpha\bigl(\cdot \mid s^{(t)}\bigr). \tag{1}$$

Note that the MDP’s dynamics $T_\alpha$ will be unknown even if the Markov game’s dynamics $T$ are known, since the victim policy $\pi_\nu$ is a black box. Consequently, the attacker must solve an RL problem.
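To make this reduction concrete, the sketch below wraps a fixed victim policy inside a Gym-style environment, so that the adversary can be trained with any off-the-shelf single-agent RL algorithm. This is an illustrative sketch rather than our released implementation: the joint-action `step` interface of the two-player environment and the victim’s `predict` method are assumptions about how a concrete multi-agent simulator is exposed.

```python
import gym


class CurriedVictimEnv(gym.Env):
    """Reduces the two-player Markov game M to the single-player MDP M_alpha.

    The fixed victim policy is embedded in the dynamics: at every step the
    victim's action is sampled from its (black-box) policy and combined with
    the adversary's action, so only the adversary remains as a learning agent.
    """

    def __init__(self, two_player_env, victim_policy):
        self.env = two_player_env    # assumed to accept a joint action in step()
        self.victim = victim_policy  # black box: we only sample actions from it
        self.action_space = self.env.adversary_action_space
        self.observation_space = self.env.adversary_observation_space
        self._victim_obs = None

    def reset(self):
        adversary_obs, self._victim_obs = self.env.reset()
        return adversary_obs

    def step(self, adversary_action):
        # Unlimited black-box access: sample the victim's action, nothing more.
        victim_action = self.victim.predict(self._victim_obs)
        (adversary_obs, self._victim_obs), rewards, done, info = self.env.step(
            adversary_action, victim_action)
        # The adversary observes only its own reward, as in Equation (1).
        return adversary_obs, rewards["adversary"], done, info
```

Given such a wrapper, training the attacker is an ordinary single-agent RL problem; for instance, with Stable Baselines3 one could run `PPO("MlpPolicy", wrapped_env).learn(total_timesteps=20_000_000)` (the experiments below use the original Stable Baselines PPO implementation).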
Figure 2: Illustrations of the zero-sum simulated robotics games from Bansal et al. [[2018a](#bib.bib2)] we use for evaluation: (a) Kick and Defend, (b) You Shall Not Pass, (c) Sumo Humans, (d) Sumo Ants. Environments are further described in Section [4.1](#S4.SS1 "4.1 Environments and Victim Policies ‣ 4 Finding Adversarial Policies ‣ Adversarial Policies: Attacking Deep Reinforcement Learning").

4 Finding Adversarial Policies
-------------------------------

We demonstrate the existence of adversarial policies in zero-sum simulated robotics games. First, we describe how the victim policies were trained and the environments they operate in. Subsequently, we provide details of our attack method in these environments, and describe several baselines. Finally, we present a quantitative and qualitative evaluation of the adversarial policies and baseline opponents.

### 4.1 Environments and Victim Policies

We attack victim policies for the zero-sum simulated robotics games created by Bansal et al. [[2018a](#bib.bib2)], illustrated in Figure [2](#S3.F2 "Figure 2 ‣ 3 Framework ‣ Adversarial Policies: Attacking Deep Reinforcement Learning"). The victims were trained in pairs via self-play against random old versions of their opponent, for between 680 and 1360 million time steps. We use the pre-trained policy weights released in the “agent zoo” of Bansal et al. [[2018b](#bib.bib3)]. In symmetric environments, the zoo agents are labeled ZooN where N is a random seed. In asymmetric environments, they are labeled ZooVN and ZooON, representing the Victim and Opponent agents.

All environments are two-player games in the MuJoCo robotics simulator. Both agents observe the position, velocity and contact forces of joints in their body, and the position of their opponent’s joints. The episodes end when a win condition is triggered, or after a time limit, in which case the agents draw. We evaluate in all environments from Bansal et al. [[2018a](#bib.bib2)] except for Run to Goal, which we omit as the setup is identical to You Shall Not Pass except for the win condition. We describe the environments below, and specify the number of zoo agents and their type (MLP or LSTM):

Kick and Defend (3, LSTM). A soccer penalty shootout between two Humanoid robots. The positions of the kicker, goalie and ball are randomly initialized. The kicker wins if the ball goes between the goalposts; otherwise, the goalie wins, provided it remains within 3 units of the goal.

You Shall Not Pass (1, MLP). Two Humanoid agents are initialized facing each other. The runner wins if it reaches the finish line; the blocker wins if it does not.

Sumo Humans (3, LSTM). Two Humanoid agents compete on a round arena. The players’ positions are randomly initialized. A player wins by remaining standing after their opponent has fallen. (Bansal et al. [[2018a](#bib.bib2)] consider the episode to end in a tie if a player falls before it is touched by an opponent; our win condition allows for attacks that indirectly modify observations without physical contact.)

Sumo Ants (4, LSTM). The same task as Sumo Humans, but with ‘Ant’ quadrupedal robot bodies. We use this task in Section [5.2](#S5.SS2 "5.2 Dimensionality ‣ 5 Understanding Adversarial Policies ‣ Adversarial Policies: Attacking Deep Reinforcement Learning") to investigate the importance of dimensionality to this attack method.
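Win rates throughout the paper are Monte Carlo estimates over complete episodes played under these win conditions. A minimal sketch of such an evaluation loop follows; the two-player `env` interface, the `predict` method and the `winner` field of `info` are hypothetical stand-ins for however a concrete environment reports outcomes.

```python
def estimate_win_rate(env, adversary, victim, episodes=1000):
    """Estimate how often the adversary beats a fixed victim.

    Each episode runs until a win condition triggers or the time
    limit expires, in which case the players draw.
    """
    outcomes = {"adversary": 0, "victim": 0, "tie": 0}
    for _ in range(episodes):
        adversary_obs, victim_obs = env.reset()
        done, info = False, {}
        while not done:
            joint_action = (adversary.predict(adversary_obs),
                            victim.predict(victim_obs))
            (adversary_obs, victim_obs), _, done, info = env.step(*joint_action)
        outcomes[info.get("winner", "tie")] += 1
    return outcomes["adversary"] / episodes
```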
### 4.2 Methods Evaluated

Following the RL formulation in Section [3](#S3 "3 Framework ‣ Adversarial Policies: Attacking Deep Reinforcement Learning"), we train an adversarial policy to maximize Equation [1](#S3.E1 "(1) ‣ 3 Framework ‣ Adversarial Policies: Attacking Deep Reinforcement Learning") using Proximal Policy Optimization (PPO) [Schulman et al., [2017](#bib.bib24)]. We give a sparse reward at the end of the episode, positive when the adversary wins the game and negative when it loses or ties. Bansal et al. [[2018a](#bib.bib2)] trained the victim policies using a similar reward, with an additional dense component at the start of training. We train for 20 million time steps using the Stable Baselines implementation of PPO [Hill et al., [2019](#bib.bib10)]. The hyperparameters were selected through a combination of manual tuning and a random search of 100 samples; see the training details in the supplementary material.

We compare our method to three baselines: a policy Rand taking random actions; a lifeless policy Zero that exerts zero control; and all pre-trained policies Zoo\* from Bansal et al. [[2018a](#bib.bib2)].

Figure 3: Win rate of the adversarial policy over the course of training against the median victim in each environment (median determined by the difference between the win rates for Adv and Zoo). The adversary outperforms the baseline against the median victim in Kick and Defend and You Shall Not Pass, and is competitive on Sumo Humans. For full results, see Figure [4](#S4.F4 "Figure 4 ‣ 4.3 Results ‣ 4 Finding Adversarial Policies ‣ Adversarial Policies: Attacking Deep Reinforcement Learning") below or the full win-rate figure in the supplementary material.

### 4.3 Results

Quantitative Evaluation  We find the adversarial policies reliably win against most victim policies, and outperform the pre-trained Zoo baseline for a majority of environments and victims. We report the win rate over time against the median victim in each environment in Figure [3](#S4.F3 "Figure 3 ‣ 4.2 Methods Evaluated ‣ 4 Finding Adversarial Policies ‣ Adversarial Policies: Attacking Deep Reinforcement Learning"), with full results in the supplementary material. Win rates against all victims are summarized in Figure [4](#S4.F4 "Figure 4 ‣ 4.3 Results ‣ 4 Finding Adversarial Policies ‣ Adversarial Policies: Attacking Deep Reinforcement Learning").

Qualitative Evaluation  The adversarial policies beat the victim not by performing the intended task (e.g. blocking a goal), but rather by exploiting weaknesses in the victim’s policy. This effect is best seen by watching the videos at <https://adversarialpolicies.github.io/>. In Kick and Defend and You Shall Not Pass, the adversarial policy never stands up. The adversary instead wins by taking actions that induce adversarial observations, causing the victim’s policy to take poor actions. A robust victim could easily win, a result we demonstrate in Section [5.1](#S5.SS1 "5.1 Masked Policies ‣ 5 Understanding Adversarial Policies ‣ Adversarial Policies: Attacking Deep Reinforcement Learning").

This flavor of attack is impossible in Sumo Humans, since the adversarial policy immediately loses if it falls over. Faced with this control constraint, the adversarial policy learns a more high-level strategy: it kneels in the center in a stable position. Surprisingly, this is very effective against victim 1, which in 88% of cases falls over attempting to tackle the adversary.
However, it proves less effective against victims 2 and 3, achieving only a 62% and 45% win rate, below the Zoo baselines. We further explore the importance of the number of dimensions the adversary can safely manipulate in Section [5.2](#S5.SS2 "5.2 Dimensionality ‣ 5 Understanding Adversarial Policies ‣ Adversarial Policies: Attacking Deep Reinforcement Learning").

Distribution Shift  One might wonder if the adversarial policies are winning simply because they are outside the training distribution of the victim. To test this, we evaluate victims against two simple off-distribution baselines: a random policy Rand (green) and a lifeless policy Zero (red). These baselines win as often as 30% to 50% of the time in Kick and Defend, but less than 1% of the time in Sumo and You Shall Not Pass. This is well below the performance of our adversarial policies. We conclude that most victim policies are robust to typical off-distribution observations. Although our adversarial policies do produce off-distribution observations, this is insufficient to explain their performance.

Figure 4: Percentage of episodes won by the opponent of the victim (out of 1000), per environment: (a) Kick and Defend (dimO=384, dimA=17, dimP=24); (b) You Shall Not Pass (dimO=380, dimA=17, dimP=24); (c) Sumo Humans (dimO=395, dimA=17, dimP=24); (d) Sumo Ants (dimO=137, dimA=8, dimP=15). Here dimO, dimA and dimP are the dimensions of the Observations, Actions, and Position of the opponent (part of the victim’s observation O). The maximal cell in each row has a red border. The adversary outperforms the baselines in Kick and Defend and You Shall Not Pass, is comparable in Sumo Humans, but performs poorly in Sumo Ants (see Section [5.2](#S5.SS2 "5.2 Dimensionality ‣ 5 Understanding Adversarial Policies ‣ Adversarial Policies: Attacking Deep Reinforcement Learning")). Importantly, ‘masking’ the victim so it cannot see the adversary improves the victim’s win rate (see Section [5.1](#S5.SS1 "5.1 Masked Policies ‣ 5 Understanding Adversarial Policies ‣ Adversarial Policies: Attacking Deep Reinforcement Learning")). Victim win rates and ties are reported in the supplementary material.

5 Understanding Adversarial Policies
-------------------------------------

In the previous section we demonstrated that adversarial policies exist for victims in a range of competitive simulated robotics environments. In this section, we focus on understanding why these policies exist. In Section [5.1](#S5.SS1 "5.1 Masked Policies ‣ 5 Understanding Adversarial Policies ‣ Adversarial Policies: Attacking Deep Reinforcement Learning") we establish that adversarial policies rely on manipulating the victim through their own body position. We show in Section [5.2](#S5.SS2 "5.2 Dimensionality ‣ 5 Understanding Adversarial Policies ‣ Adversarial Policies: Attacking Deep Reinforcement Learning") that victims are more vulnerable to adversarial policies in high-dimensional environments. Finally, in Section [5.3](#S5.SS3 "5.3 Victim Activations ‣ 5 Understanding Adversarial Policies ‣ Adversarial Policies: Attacking Deep Reinforcement Learning") we analyze the activations of the victim’s policy network, showing they differ substantially when playing an adversarial opponent.

### 5.1 Masked Policies

We have previously shown that adversarial policies are able to reliably win against victims. In this section, we demonstrate that they win by taking actions to induce natural observations that are adversarial to the victim, and not by physically interfering with the victim. To test this, we introduce a ‘masked’ victim (labeled ZooMN or ZooMVN) that is the same as the normal victim ZooN or ZooVN, except the observation of the adversary’s position is set to a static value corresponding to a typical initial position, as sketched below. We use the same adversarial policy against the normal and masked victim.
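A masked victim can be built as a thin wrapper that overwrites the slice of the observation encoding the opponent’s position before the underlying policy is queried. The sketch below is illustrative only; the slice indices, the `predict` interface and the exact observation layout are environment-specific assumptions.

```python
import numpy as np


class MaskedVictimPolicy:
    """Wraps a victim policy so it cannot see the adversary (Section 5.1).

    Before the underlying policy is queried, the part of the observation
    encoding the opponent's position (dimP entries) is overwritten with a
    static value taken from a typical initial position.
    """

    def __init__(self, policy, opponent_position_slice, static_position):
        self.policy = policy                  # the unmodified victim policy
        self.slice = opponent_position_slice  # environment-specific indices
        self.static_position = np.asarray(static_position)

    def predict(self, observation):
        masked = np.array(observation, copy=True)
        masked[self.slice] = self.static_position  # blind spot: fixed value
        return self.policy.predict(masked)
```

Because the identical adversarial policy is run against both the normal and the masked victim, any difference in outcome is attributable to what the victim observes rather than to physical interference.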
One would expect it to be beneficial to be able to see your opponent. Indeed, the masked victims do worse than a normal victim when playing normal opponents. For example, Figure [4(b)](#S4.F3.sf2 "(b) ‣ Figure 4 ‣ 4.3 Results ‣ 4 Finding Adversarial Policies ‣ Adversarial Policies: Attacking Deep Reinforcement Learning") shows that in You Shall Not Pass the normal opponent ZooO1 wins 78% of the time against the masked victim ZooMV1 but only 47% of the time against the normal victim ZooV1. However, the relationship is reversed when playing an adversary. The normal victim ZooV1 loses 86% of the time to adversary Adv1, whereas the masked victim ZooMV1 wins 99% of the time. This pattern is particularly clear in You Shall Not Pass, but the trend is similar in other environments, confirming that the adversary wins by taking actions that indirectly cause natural observations that are adversarial for the victim.

This result is surprising, as it implies highly non-transitive relationships may exist between policies even in games that seem to be transitive. A game is said to be transitive if policies can be ranked such that higher-ranked policies beat lower-ranked policies. Prima facie, the games in this paper seem transitive: professional human soccer players and sumo wrestlers can reliably beat amateurs. Despite this, there is a non-transitive relationship between adversarial policies, victims and masked victims. Consequently, we urge caution when using methods such as self-play that assume transitivity, and would recommend more general methods where practical [Balduzzi et al., [2019](#bib.bib1), Brown et al., [2019](#bib.bib4)].

Our findings also suggest a trade-off in the size of the observation space. In benign environments, observing more of the environment increases performance. However, it also makes the agent more vulnerable to adversaries. This is in contrast to an idealized Bayesian agent, for which the value of information is always non-negative [Good, [1967](#bib.bib8)]. In the following section, we investigate the connection between vulnerability to attack and the size of the observation space further.

### 5.2 Dimensionality

It is well-established that classifiers are more vulnerable to adversarial examples on high-dimensional inputs [Gilmer et al., [2018b](#bib.bib7), Khoury and Hadfield-Menell, [2018](#bib.bib12), Shafahi et al., [2019](#bib.bib25)]. We hypothesize that a similar result is true for adversarial policies: the greater the dimensionality of the component P of the observation space under the control of the adversary, the more vulnerable the victim is to attack. In the environments by Bansal et al. [[2018a](#bib.bib2)], the component P is the position of the adversary’s joints. We test our hypothesis in the Sumo environment, keeping the task the same but varying whether the agents are Ants (quadrupedal robots) or Humanoids.
The results in Figures [4(c)](#S4.F3.sf3 "(c) ‣ Figure 4 ‣ 4.3 Results ‣ 4 Finding Adversarial Policies ‣ Adversarial Policies: Attacking Deep Reinforcement Learning") and [4(d)](#S4.F3.sf4 "(d) ‣ Figure 4 ‣ 4.3 Results ‣ 4 Finding Adversarial Policies ‣ Adversarial Policies: Attacking Deep Reinforcement Learning") support the hypothesis: the win rate in the lower-dimensional Sumo Ants (dimP=15) environment is much lower than in the higher-dimensional Sumo Humans (dimP=24) environment. Specifically, in Sumo Humans we obtain a win rate of 87% against victim 1, 63% against victim 2 and 44% against victim 3. By contrast, in Sumo Ants we obtain a win rate of at most 12%.

### 5.3 Victim Activations

Figure 5: Analysis of activations of the victim’s policy network. (a) Gaussian Mixture Model (GMM): likelihood that the activations of a victim’s policy network are “normal”. The victim is Zoo1 or ZooV1. We collect activations over 20,000 time steps against each opponent, and fit a 20-component GMM to activations induced by Zoo1 or ZooO1. Error bars are a 95% confidence interval. (b) t-SNE activations of Kick and Defend victim ZooV2 playing against different opponents; see the supplementary material for visualizations of other environments and victims. Both the density model and the t-SNE visualization show that the adversary Adv induces off-distribution activations. Key: legends specify the opponent the victim was pitted against. Adv is the best adversary trained against the victim, and Rand is a policy taking random actions. Zoo\*N corresponds to ZooN (Sumo) or ZooON (otherwise). Zoo\*1T and Zoo\*1V are the train and validation datasets, drawn from Zoo1 (Sumo) or ZooO1 (otherwise).

In Section [5.1](#S5.SS1 "5.1 Masked Policies ‣ 5 Understanding Adversarial Policies ‣ Adversarial Policies: Attacking Deep Reinforcement Learning") we showed that adversarial policies win by creating natural observations that are adversarial to the victim. In this section, we seek to better understand why these observations are adversarial. We record activations from each victim’s policy network playing a range of opponents, and analyze these using a Gaussian Mixture Model (GMM) and a t-SNE representation; a sketch of the analysis follows below. See the supplementary material for details of training and hyperparameters.

We fit a GMM on activations Zoo\*1T collected playing against a normal opponent, Zoo1 or ZooV1, holding out Zoo\*1V as a validation set. Figure [5(a)](#S5.F4.sf1 "(a) ‣ Figure 5 ‣ 5.3 Victim Activations ‣ 5 Understanding Adversarial Policies ‣ Adversarial Policies: Attacking Deep Reinforcement Learning") shows that the adversarial policy Adv induces activations with the lowest log-likelihood of any opponent. The random baseline Rand is slightly more probable. The normal opponents Zoo\*2 and Zoo\*3 induce activations with almost as high likelihood as the validation set Zoo\*1V, except in Sumo Humans, where they are as unlikely as Rand.

We plot a t-SNE visualization of the activations of Kick and Defend victim ZooV2 in Figure [5(b)](#S5.F4.sf2 "(b) ‣ Figure 5 ‣ 5.3 Victim Activations ‣ 5 Understanding Adversarial Policies ‣ Adversarial Policies: Attacking Deep Reinforcement Learning"). As expected from the density model results, there is a clear separation between Adv, Rand and the normal opponent ZooO2. Intriguingly, Adv induces activations more widely dispersed than those of the random policy Rand, which in turn are more widely dispersed than those of ZooO2. We report on the full set of victim policies in the supplementary material.
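A minimal sketch of this analysis with scikit-learn is below, assuming the policy-network activations have already been recorded into arrays; the file names and splits are placeholders, not our actual pipeline.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.manifold import TSNE

# Placeholder arrays of recorded policy-network activations,
# one row per time step (e.g. 20,000 steps x activation width).
train_acts = np.load("acts_vs_zoo_train.npy")  # normal opponent (train split)
val_acts = np.load("acts_vs_zoo_val.npy")      # normal opponent (held out)
adv_acts = np.load("acts_vs_adversary.npy")    # adversarial opponent

# Density model: fit a 20-component GMM on activations induced by a normal
# opponent, then compare the mean per-sample log-likelihood across opponents.
gmm = GaussianMixture(n_components=20).fit(train_acts)
print("validation log-likelihood:", gmm.score(val_acts))  # expected: high
print("adversary  log-likelihood:", gmm.score(adv_acts))  # off-distribution: low

# Visualization: embed pooled activations in 2D with t-SNE; adversarial
# activations separate from, and disperse more widely than, normal ones.
pooled = np.concatenate([val_acts, adv_acts])
embedding = TSNE(n_components=2).fit_transform(pooled)  # plotting omitted
```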
6 Discussion
-------------

We have proposed a novel threat model for reinforcement learning where the attacker controls an agent acting in the same environment as the victim. The attacker cannot directly modify the victim’s observations, but can choose an adversarial policy that takes actions creating natural observations that are adversarial. We have shown that adversarial policies exist in a range of zero-sum simulated robotics games against state-of-the-art victims trained via self-play to be robust to adversaries. Moreover, we find that the adversarial policies win not by becoming generally strong players, but rather by taking actions that confuse the victim. We verify this through qualitative observations of the adversary’s behavior, and by showing that the victim’s performance improves when it is blind to the position of the adversary. Furthermore, our evaluation suggests victims in high-dimensional environments are more vulnerable to adversarial policies, and shows that adversarial policies induce highly off-distribution activations in the victim.

While it may at first appear unsurprising that a policy trained as an adversary against another RL policy would be able to exploit it, we believe that this observation is highly significant. First, the policies we have attacked were explicitly trained via self-play to minimize exploitability. The same type of technique has been used in a number of works focused on playing adversarial games directly against humans, where minimizing exploitability is paramount [Silver et al., [2017](#bib.bib27), OpenAI, [2018](#bib.bib21)]. Second, the use of fixed victim policies reflects what is likely to be a common use case. In safety-critical systems, where attacks like these would be most concerning, it is standard practice to validate a model and then freeze it, so as to ensure that the deployed model does not develop any new issues due to retraining. Therefore, our attack profile is a realistic reflection of what we might see with RL-trained policies in real-world settings, such as with autonomous vehicles.

Furthermore, even if the target victim uses continual learning, it may be possible to train against a fixed proxy victim. The attacker could use imitation learning on the target victim to produce a proxy. Alternatively, in consumer applications such as self-driving vehicles, the attacker can buy a copy of the system and periodically factory reset it. Once an adversarial policy has been trained against the proxy, the attacker may be able to transfer this policy to the target, exploiting it until it adapts.

Our results suggest a number of directions for future work. The ease with which policies can be attacked highlights the need for effective defenses. It may be possible to detect adversarial attacks using the density model on activations, in which case one could fall back to a conservative policy. We are also excited at the potential of adversarial training with adversarial policies to improve robustness. Concretely, we envisage population-based training where new randomly initialized agents are introduced over time, and allowed to train against a fixed victim for some period of time; a sketch follows below. This would expose victims to a much broader range of opponents than conventional self-play or population-based training. However, it will considerably increase computational requirements, unless more efficient methods for finding adversarial policies than model-free RL are discovered.
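The following sketch outlines how such a defense loop might look. It is a speculative outline of future work, not something evaluated in this paper; `train_rl`, `fine_tune` and the `freeze` interface are hypothetical stand-ins.

```python
def population_adversarial_training(victim, make_env, generations=10,
                                    attack_steps=20_000_000,
                                    defense_steps=20_000_000):
    """Alternate between attacking a frozen victim and hardening it.

    Each generation, a freshly initialized adversary is trained against the
    frozen victim, surfacing a new failure mode; the victim is then
    fine-tuned against the growing pool of adversaries found so far.
    Hypothetical sketch: train_rl and fine_tune are stand-in helpers.
    """
    adversaries = []
    for _ in range(generations):
        frozen_victim = victim.freeze()  # hypothetical: fix weights for attack
        env = make_env(opponent=frozen_victim)
        adversaries.append(train_rl(env, steps=attack_steps))
        victim = fine_tune(victim, opponents=adversaries, steps=defense_steps)
    return victim, adversaries
```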
This would expose victims to a much broader range of opponents than conventional self-play or population-based training. However, it will considerably increase computational requirements, unless more efficient methods for finding adversarial policies than model-free RL are discovered. Overall, we are excited about the implications the adversarial policy model has for the robustness, security and understanding of deep RL policies. Our results show the existence of a previously unrecognized problem in deep RL, but there remain many open questions. We hope this work encourages other researchers to investigate this area further. Videos and other supplementary material are available online at <https://adversarialpolicies.github.io/> and our source code is available at <https://github.com/HumanCompatibleAI/adversarial-policies.> #### Acknowledgments We thank Jakob Foerster, Matthew Rahtz, Dylan Hadfield-Menell, Catherine Olsson, Jan Leike, Rohin Shah, Victoria Krakovna, Daniel Filan, Steven Wang, Dawn Song, Sam Toyer and Dan Hendrycks for their suggestion and helpful feedback on earlier drafts of this paper. We thank Chris Northwood for assistance developing the website accompanying this paper.
a500086b-12f2-4bfb-a09f-055273d7e127
trentmkelly/LessWrong-43k
LessWrong
Epistemic states as a potential benign prior Malignancy in the prior seems like a strong crux of the goal-design part of alignment to me. Whether your prior is going to be used to model: * processes in the multiverse containing the AI which does said modeling, * processes which would output all of some blog so we can make it output more of it, * processes which match an AI chatbot's hypotheses about who/what it's talking with, then you have to sample hypotheses from somewhere; and typically, we want to use either solomonoff induction or time-penalized versions of it such as levin search (penalized by log of runtime), or the implicit prior of neural networks (large sequences of multiplying by a matrix, adding a vector, and ReLU, often with a penalty related to how many non-zero weights are used). And the solomonoff prior is famously malign. (Alternatively, you could have knightian uncertainty about parts of your prior that aren't nailed down enough, and then do maximin over your knightian uncertainty (like in infra-bayesianism), but then you're not guaranteed that your AI gets anywhere at all; its knightian uncertainty might remain so immense that the AI keeps picking the null action all the time because some of its knightian hypotheses still say that anything else is a bad idea. Note: I might be greatly misunderstanding knightian uncertainty!) (It does seem plausible that doing geometric expectation over hypotheses in the prior helps "smooth things over" in some way, but I don't think this particularly removes the weight of malign hypotheses in the prior? It just allocates their steering power in a different way, which might make things less bad, but it sounds difficult to quantify.) It does feel to me like we do want a prior for the AI to do expected value calculations over, either for prediction or for utility maximization (or quantilization or whatever). One helpful aspect of prior-distribution-design is that, in many cases, I don't think the prior needs to contain the true hypothesis. For example
b7fb9c74-f1ba-42a5-beb5-b9ac393e8df2
trentmkelly/LessWrong-43k
LessWrong
More Clothes Over Time? A friend recently reshared this: > Today I want to talk about why you buy the way you do. Did you know that WE collectively buy five times more clothes than we bought in 1990? — @AjaSaysHello This seemed a bit unlikely to me, if they're talking about Americans, and fashion waste stats are typically very low quality; is it true? The author didn't cite anything, but let's have a look. American spending on clothing/shoes/etc has decreased from $1.7k/person/year to $1.6k since 1990: FRED's Personal consumption expenditures: Clothing, footwear, and related services in 1990 was $206B in 1990 dollars ($429B in 2021 dollars) for 250M people, growing to $484B in 2021 dollars for 332M people. Maybe we're buying more-but-cheaper items? That does seem to be happening some: source: Quartz citing the American Apparel and Footwear Association I wanted to recreate this chart myself, but while I think this data probably comes from the Apparel Stats report, they wanted me to make an account to view it and that functionality was broken: Going from 40 garments per year in 1990 to 66 in 2015 (a 65% increase), with levels pretty steady since 1999, is nowhere near a 5x increase. I did find the WSJ claiming that "American shoppers snap up about five times more clothing now than they did in 1980" but they don't cite it. This is 42 years ago instead of 32, but is it correct? This would imply a growth of 14 garments annually in 1980 to 40 in 1990, almost twice as fast as clothing purchases grew in the 1990s, so it seems unlikely to me, but let's keep looking. The article is an excerpt from Dana Thomas' 2019 book Fashionopolis: The Price of Fast Fashion and the Future of Clothes. Google Books tells me this appears on P3 of the book, but I can't find a citation using the preview tool. I also see this claim on p158 of in Jacqueline Raposo's 2019 The Me, Without. They cite the EPA's Advancing Sustainable Materials Management: 2014 Fact Sheet, but reading through the sheet I don't see
12fdf3e7-ff97-48ec-bfea-20d7f12eea6c
trentmkelly/LessWrong-43k
LessWrong
No Logical Positivist I Followup to:  Making Beliefs Pay Rent, Belief in the Implied Invisible Degrees of Freedom accuses me of reinventing logical positivism, badly: > One post which reads as though it were written in Vienna in the 1920s is this one [Making Beliefs Pay Rent] where Eliezer writes > > > "We can build up whole networks of beliefs that are connected only to each other - call these "floating" beliefs. It is a uniquely human flaw among animal species, a perversion of Homo sapiens's ability to build more general and flexible belief networks...  The rationalist virtue of empiricism consists of constantly asking which experiences our beliefs predict - or better yet, prohibit." Logical positivists were best known for their verificationism: the idea that a belief is defined in terms of the experimental predictions that it makes.  Not just tested, not just confirmed, not just justified by experiment, but actually defined as a set of allowable experimental results.  An idea unconfirmable by experiment is not just probably wrong, but necessarily meaningless. I would disagree, and exhibit logical positivism as another case in point of "mistaking the surface of rationality for its substance". Consider the hypothesis: > On August 1st 2008 at midnight Greenwich time, a one-foot sphere of chocolate cake spontaneously formed in the center of the Sun; and then, in the natural course of events, this Boltzmann Cake almost instantly dissolved. I would say that this hypothesis is meaningful and almost certainly false.  Not that it is "meaningless".  Even though I cannot think of any possible experimental test that would discriminate between its being true, and its being false. On the other hand, if some postmodernist literature professor tells me that Shakespeare shows signs of "post-colonial alienation", the burden of proof is on him to show that this statement means anything, before we can talk about its being true or false. I think the two main probability-theoretic concepts here are
d10860d5-9815-477f-899e-9bc6c801e1df
StampyAI/alignment-research-dataset/arxiv
Arxiv
On modelling the emergence of logical thinking Introduction ------------ There is much debate about whether mathematics is discovered or invented. The Platonic view is that mathematical objects exist in a reality separate and independent of ours, with truths we are only discovering. Mathematical truths, e.g. the fact that there is no largest prime number, are independent of ourselves or the existence of the physical universe. We have only found a way to prove it, but the ”truth is out there”, whether we find a proof or not. This view is best illustrated in a science fiction novel Contact by Carl Sagan (1985). Alien intelligence hides meaningful messages in the expansion of π, the transcendental number that is the ratio of the circumference of a circle to its diameter. The premise of this plot element is that the number π is mind-independent, so any intelligence, no matter what form it takes, will be able to find this pattern and decode the message. The invention camp is based on the idea that mathematics is a cognitive construct, and mathematical objects and theorems reveal as much about our cognitive operations as about the external world. This view is best illustrated by the following long quotation from Jean Piaget: > > “It is agreed that logical and mathematical structures are abstract, whereas physical knowledge - the knowledge based on experience in general - is concrete. But let us ask what logical and mathematical knowledge is abstracted from. There are two possibilities. The first is that, when we act upon an object, our knowledge is derived from the object itself. This is the point of view of empiricism in general, and it is valid in the case of experimental or empirical knowledge for the most part. But there is a second possibility: when we are acting upon an object, we can also take into account the action itself, or operation if you will, since the transformation can be carried out mentally. In this hypothesis the abstraction is drawn not from the object that is acted upon, but from the action itself. It seems to me that this is the basis of logical and mathematical abstraction. > In cases involving the physical world the abstraction is abstraction from the objects themselves. A child, for instance, can heft objects in his hands and realize that they have different weights — that usually big things weigh more than little ones, but that sometimes little things weigh more than big ones. All this he finds out experientially, and his knowledge is abstracted from the objects themselves. But I should like to give an example, just as primitive as that one, in which knowledge is abstracted from actions, from the coordination of actions, and not from objects. This example, one we have studied quite thoroughly with many children, was first suggested to me by a mathematician friend who quoted it as the point of departure of his interest in mathematics. When he was a small child, he was counting pebbles one day; he lined them up in a row, counted them from left to right, and got ten . Then, just for fun, he counted them from right to left to see what number he would get, and was astonished that he got ten again . He put the pebbles in a circle and counted them, and once again there were ten. He went around the circle in the other way and got ten again. And no matter how he put the pebbles down, when he counted them, the number came to ten. He discovered here what is known in mathematics as commutativity, that is, the sum is independent of the order. But how did he discover this? 
Is this commutativity a property of the pebbles? It is true that the pebbles, as it were, let him arrange them in various ways; he could not have done the same thing with drops of water. So in this sense there was a physical aspect to his knowledge. But the order was not in the pebbles; it was he, the subject, who put the pebbles in a line and then in a circle. Moreover, the sum was not in the pebbles themselves; it was he who united them. The knowledge that this future mathematician discovered that day was drawn, then, not from the physical properties of the pebbles, but from the actions that he carried out on the pebbles. This knowledge is what I call logical mathematical knowledge and not physical knowledge. > The first type of abstraction from objects I shall refer to as simple abstraction, but the second type I shall call reflective abstraction, using this term in a double sense.” > [[Piaget1971b](#bib.bibx14)] > > > > (See also [[Dehaene1999](#bib.bibx2), [Indurkhya2016](#bib.bibx4), [Lakatos1976](#bib.bibx7), [Lakoff and Núñez2000](#bib.bibx8), [MacLane1986](#bib.bibx9), [Piaget1971a](#bib.bibx13)].) The history of mathematics shows that it is both inspired by natural observations as well as abstract thoughts with no natural correspondence, at least not obvious ones. It is therefore limiting to choose one or the other point of view. Mathematics is both discovered and invented. However, the more abstract a branch of mathematics (or any science relying on a mathematical language) the less obvious it becomes where to draw the line between invention and discovery. Looking at mathematics as a human enterprise it becomes clear that it is part of a larger effort of the human mind to understand itself and the world we live in. Rather than describing nature by just enumerating observed objects, a more efficient and effective way is to work with similarities, to observe repeating patterns, to extract invariances. One observed aspect of nature is that it is, to some extent, decomposable in smaller elements which have greater predictive power and generalization capabilities than just lists of objects. It is exactly this capability of the human mind to observe these patterns and regularities as well as the power to take them in the abstract and operate with them which gives rise to the mathematical science. Mathematics is rigorous and strict, a type of formalized and constrained philosophy. It can be seen as an applied branch of philosophy. One could complete, by analogy, the idea of “fields arranged by purity” [[Munroe2008](#bib.bibx10)] - illustrated below - and state that ultimately philosophy is just applied human intellect. ![](https://media.arxiv-vanity.com/render-output/7596115/purity.png) A rather amusing remark is the fact that for the large majority of Wikipedia’s articles by following the first link of the main text and then repeating the process for subsequent articles leads eventually to the ‘philosophy’ article [[Xefer2011](#bib.bibx20)]. Given the above statement it is only natural for this effect to emerge. In the light of the above view of the mind, the dilemma often raised in the philosophy of mathematics seem to be a dualistic view which ignores the fact that the human mind is what does the revealing of the mathematical truths - either invented or discovered. It is the human creativity, the ability to infer, reason and argue that lead to those truths. 
It is therefore more constructive and enlightening to answer another question: where do these abilities of the human mind come from, are they something learned from experiencing the natural world or are they something innate? Paul Erdos, metaphorically, spoke of ”The Book” where God wrote the most elegant proofs of mathematical theorems. Some go even further and invoke literal divine inspiration - the possession of truth without proof - as in the case of Srinivasa Ramanujan. It it only reasonable to argue that the human mind is responsible for both inventions and discoveries, regardless of the field in which it activates. Loosely speaking, invention is the act of putting together concepts which were not associated before. It is a rather technical term and creativity seems to be more broad and less restricted but denotes basically the same idea. Creativity is the free association of any concepts in any kind of form. It is not restricted by any means and it’s free of any logical burden. Creativity is the essence of art in its many forms stimulating all human senses. It is not bounded by rigour and expresses countless feelings and carries along human emotions and contradictions. On the other hand discovery is a more restricted term, it carries rigour and validity, it requires observation, inference, consistency and most of the tools usually attributed to the left side of the brain and it is mostly used in the natural sciences. In mathematics, invention and discovery go hand in hand and do not exclude one another, quite the contrary. A simple illustration would be the invention of algebraic equations and the subsequent realization that equations of the type x2=−1 are perfectly valid but which, because of the properties of the multiplication operation, do not make sense and therefore do not have any solution. Human creativity and thinking outside the box led to the invention of complex numbers. In this case, the act of defining complex numbers follows the exact definition of the word ‘invention’. The realization that equations of the previous form exist follows the exact definition of ‘discovery’. It is evident how an abstract invention lead to a discovery which lead to another invention which in turn lead to other discoveries and so on. (See, for instance, [[Knuth1974](#bib.bibx6), [Wallace2010](#bib.bibx18)].) Mathematics can be viewed as creativity under the supervision of logic. It is an interplay between the boundless expansion of creativity and the careful restrictions of rigour, consistency and logic. We can associate together any two or more concepts, but logic will carefully analyze if they can actually live together and not exclude each other. The simple liar’s paradox is an example of such seemingly simple association of concepts which, after a careful analysis, leads to an impossibility. In this context, one can distill two distinct aspects of the mind which come into play: logic and creativity. Some general remarks on creativity and logic as emergent aspects of the brain are that: * logic appear to be a universal property of healthy brains * logic seems to be common between healthy brains * logic is universal to any mind * creativity is also a property of minds * creativity is as diverse as there are people in the world * mathematics is creativity and logic * mathematics seems to be applicable to nature [[Wigner1960](#bib.bibx19)] Another aspect of the mind which people often use when confronted with insufficiently known situations is intuition. 
It could be seen in some cases as an insufficiently argued decision process, a guessed, consciously unexplained solution. It is famous for being both extraordinarily useful and correct as well as being the complete opposite. Experience and intuition are strongly linked, the first being, in fact, the source of the latter. It is therefore used as a tool, together with creativity and logic, in the natural sciences, physics in particular being overly abundant in examples of both good and bad intuitions. The most recent and also most notorious example is the early 20th century physics with the very bizarre aspects of quantum mechanics. Intuition breaks down and, in some artistic sense, human creativity seems to be surpassed by nature’s creativity. The only pillar that is not affected by subatomic physics discoveries is logic, although the validity of this statement was debated during the 70s and 80s [[Putnam1969](#bib.bibx15)] starting from the claim that some basic logical statements are not valid at the level of quantum mechanics and perhaps logic itself should be changed to accommodate the ”real rules”. Recent analyses debate logic in the context of how to interpret quantum mechanics itself but this extends into metaphysics and shall not be of concern in this work. Automated minds --------------- If logic is empirical then it is most probably an emergent property of the nervous system so it is foolish to try to explicitly program it, just like it turned out that human programed rules and features do not stand up to the performance of those discovered by neural networks through “hands-on” experience. Two eloquent examples are: * the explicitly programmed DeepBlue [[Campbell, Hoane, and Hsu2002](#bib.bibx1)] vs. AlphaGo’s [[Silver2016](#bib.bibx17)] “self play” * feature engineering in computer vision vs. current convolutional neural networks These are examples of a very particular application of learning through experience but it hints towards the idea that all knowledge available to an automated system should be empirical and not explicitly programmed. The previous argumentation suggests that a rough approximation of the tools used by the human mind in the intellectual endeavours are creativity, intuition and logic. There is no definite separation as to which is used when, but it is clear that there is a mixture of all of them. In the context of artificial general intelligence, it is important to identify these basic characteristics of an automated system in order to assess its performance. Therefore a few tests would be necessary to be developed as tools to investigate the status of these sub-modules. Important traits of the human mind for which test have to be developed can be summarized as: pattern recognition, creativity, intuition and logic. Pattern recognition is a field in which we generally started to have some success and spectacular success in very particular domains. However we have just scratched the surface of the domain, though, as many varieties of such tasks are still below human capabilities. It is important to understand thoroughly how pattern recognition mechanisms work in the human brain and the animal counterpart because there is a lot of common ground between them. 
The creative, intuitive and logical cognitive aspects of the brain are less evident in animals and to achieve an artificial general intelligence one would argue that at least these characteristics should be integral parts of any such intelligence as they qualitatively go beyond pattern recognition capabilities. Logic is an emergent feature of the brain seen at a conscious cognitive level and endowing machines with our prefabricated laws of logic would not be practical nor shine much light on how the brain actually obtains them. The great challenge in the exploration of the human intelligence resides in the difficulty of the investigation tools. Trying to understand how intelligence emerges from the brain via bottom-up biology is like trying to understand how the biological structures emerge from the laws of quantum physics. Conversely, trying to understand how intelligence emerges from the brain via top-down cognitive science approach is like trying to understand the laws of quantum physics starting from the biological structures of the brain. However difficult both of these approaches are, they are the only ones we have and both shine some light into the issue of intelligence. Biology hints towards the mechanism while cognitive science guides the inquirer towards the correct landmarks which have to be achieved in order to approach a general intelligence. (De)composability ----------------- The world seems to be compositional and procedural. Compositional because we observe that objects in nature are decomposable into smaller objects. Procedural because most things are the result of some process, a sequence of actions/events which lead to the creation of the object/event. There is an apparent hierarchy in which objects at our scale are composed of more elementary objects and other objects can be obtained by combining several different objects at our scale. Decomposability is apparent in the fact that mountains are made of large rocks gathered together in one place, and rocks are made of smaller rocks glued together and so on. Composability appears from the fact that smaller rocks can be combined together into bricks which can be combined together to make shelter. The world is a hierarchical two-way structure on which we can go up and down. Does this (de)composability induce the causal thinking paradigm? Cause then seems to be the decomposition into sequentially structured events which are the only sequence modelling the process by which an event or object comes into being. The decomposability of the world also leads to the idea that stuff is made of smaller stuff. We observe this to be true everywhere so we have this paradigm hardcoded into our brains, or even worse: hardwired! Having this principle at such a low level in our minds we always think of asking the question of what stuff is made of. What is this rock made of? Smaller rocks. And the smaller rocks are made of even smaller rocks and so on “all the way down”, ending in the same place as the atomists. This naive view has been revised and now we know that rocks are made of crystals, made of atoms, made of electrons and nuclei, made of protons and neutrons made of quarks and gluons made of …nothing (yet). So in some sense we are in the same place as the atomists but we like to wrap it up in more elaborate, pretentious academic words. ![](https://media.arxiv-vanity.com/render-output/7596115/WhereTheHell.png) But the question of what are the (…) made of still persists because of our fundamental view of the world. 
The situation is so bad that we even start pondering the question of what numbers are made of, in search of more elementary constituents. Since numbers seem to be a made-up concept of real observations the ridiculous of the situation shines through when we realize that we ask the question of what made-up stuff is made of. What are ideas made of? The approach of “what is X made of” seems to lead to an impasse. Picturing this as a hierarchical structure one can realize that there is no reason to believe there is a limit to its height while the lower end seems at least ill defined. We should probably abandon this paradigm and accept that there is a level at which everything has to stop. Explanation and understanding ----------------------------- At a classical level where we do not decompose objects and concepts indefinitely and go into metaphysical issues, from a cognitive point of view we observe composability and decomposability down to a certain level after which we do not need to decompose things to explain them, we just observe and experience them. It is impossible to learn the meaning of words by reading their definition from a dictionary and no amount of words and symbols could make us understand the color red. Reading words about the color red does not constitute an explanation and, in fact it is unexplainable, just like many other objects, ideas, feelings etc. do not require explanations but experimentation. Objects, ideas, concepts, for which decomposition into more elementary components is indeed possible are more suitable to be understood in terms of explanations, unlike elementary objects which require experimentation. It appears, therefore, that explanation means decomposing a new and yet unknown concept into more elementary concepts (known to the agent that needs explanation) and understanding means making the connection between the already known elements and the new concept via the provided decomposition. This is the fundamental aspect of the art of pedagogy, the explainer’s decomposition of a concept into simpler concepts already known to the explainee. Receiving an explanation in terms of already known notions would sometimes be followed by the exclamation: ”It makes sense!”. In other words, it would be logical. It is therefore apparent that logical statements are the ones which we already know to be true (because they have been previously explained) or at least accept as being true due to them being inherently not decomposable into more elementary constituents but just empirically true. Decomposability, as discussed in the previous section, applies also to events and processes, not just to objects. Usually events of major importance for us are the result of a sequence of other events. Therefore understanding these sequences is of great importance as well. They are so important, in fact, that they build up the whole reasoning system and lead to concepts of cause and effect. Causal explanations become critical and surpass in importance dry facts exactly because the universe that objectively and subjectively matters to us is procedural and (de)composable, things happen in sequences and the results are very much dependent on what exactly happened and where. This is the reason why we remember facts more easily when they are put in the context of a story, a narrative, rather than being just enumerated. The logical and causal component of the process of understanding is gluing together facts in a more sturdy manner. 
It is therefore important to understand the process of understanding itself, in both top-down and bottom-up approach as it will shed some light on the emergence of logic. Ladder of causation ------------------- The ladder of causation from ”The Book of Why” conceives three qualitatively different levels of intelligence with the first being representative for most animals and the current ML and AI systems. [[Pearl and Mackenzie2018](#bib.bibx11)]. Indeed artificial neural networks are finding patterns in data, whether they’re annotated or not, whether they’re collected manually or automatically, association is a tremendously successful application. The next two layers in the ladder of causation are based on the question of ”what if…”. The second layer asks about the future (”what if I do X?” and ”how can I make Y?”) while the third is concerned about the past (”was X that caused Y?” or ”what if I had done X instead of Y?”). Although the questions are arranged hierarchically, with the past being placed at a higher level than the future it is easy to see that they have something in common, namely they are both hypothetical questions and are concerned with something that did not happen and is only imagined to happen. Imagining possible futures requires the same potential as imagining an alternative present. We can formalize the causal investigation of a system by using the following notation: St\xlongrightarrowdoA\xlongrightarrowget % St+1. We chose the state St as representing the known present, while the multitude of possible actions A and the multiple possible states St+1 lie in the future and are not known. In the context of the intervention level we have two scenarios. In the first we ask What if I do…?, case in which we know we want to pick a certain action, a0, and we inquire about the particular state, say s0, it will lead us to. The second scenario asks How can I make Y?, case in which we know the result we want – state s0 – but we don’t know which particular action to take. In both cases there are two knowns and one unknown. If we would now move the present to state St+1, put A and St in the past we would have the same two possibilities of asking the question of an alternative past state given a specific action and present or the hypothetical present given the past state and an alternative action. From a cognitive point of view all the above scenarios are similar in nature as they are asking about a hypothetical, alternative state or action in the system. The only difference between the two situation is that the future state can be observed after an intervention while the hypothetical past or alternative present can never be observed. However, once the idea of a What if? arises it already means a level of intelligence far beyond any pattern recognition machine. It can be classified generally under imagination and creativity, since both these concepts include the idea of something not being there. The impossibility of answering questions about alternate past states or actions does not shine any light upon a present issue in the absence of a model of the system. Once such a model is present, it starts to bring the two past/future questions and hypotheticals even closer together. Suddenly answers start to fit realistic situations and predictions start to be accurate. The more accurate the model the more accurate the answers. The scientific method is increasingly improving the accuracies of such models applied to the natural world. 
This kind of investigations happen all the time in physics, sometimes the intervention is called an experiment and sometimes it’s called a thought experiment. In modern times they are occasionally carried out in computer simulations while in the past scientists like Galileo or Einstein carried them out in their heads, using tools like creativity and logic, the latter being the one judging the correctness of the procedure, the ”gradient descent” of human reasoning. Logic guides the inquirer of the ”what if” questions towards the correct solution or towards the correct cause of the inquired X or Y. In other words, logic is the fundamental ingredient capable of successfully answer a particular ”what if”, ”how” and ”why” question. Unless we are capable of building a machine that can perform rudimentary reasoning based on a set of known facts, the dream of achieving at least reliable and casualty free driver-less cars will be eluding us. True artificial intelligence will be one step above that. Artificial general intelligence should be able to learn logical reasoning, not inherit it from its creators. Architecture ------------ The recent resounding success of deep learning in various domains have re-sparked the idea and hope that data-driven methods like neural networks will be the key to achieving the more ambitious goal of creating a general artificial intelligence. However, current implementations of artificial neural networks are confronted with several important issues limiting their capabilities. One major issue with implementing current artificial neural networks (ANN) in practical problems is their rather rigid structure. Networks used for classification tasks have usually fixed input and output sizes. Their internal structure is empirically determined during the training and design of the networks. Convolutional neural networks (CNN), which most closely resemble the human brain hierarchical organization, are chosen such that they best solve the particular given task without clear arguments and reasons as of why the obtained layers/nodes hierarchy structure is optimal for the task. Some task require deeper networks, some require shallower networks, and the optimal network size and structure is found iteratively by the engineers implementing the models. What these networks basically achieve is a decomposition of the input data into simpler structures which, when presented with yet unseen data, are still able to deliver the correct result within the limited scope of the task. Sometimes this decomposition attempts to split the input data into more elements than what is actually needed, and therefore the performance is poorer compared to a rougher decomposition. Another major limitation of the current ANNs is that they have great difficulties in handling long range correlations. Some of them, like CNNs, are biased to only look for local correlations and we will show in a future work that if the data does not have this property, then they are not capable of producing relevant results. Multilayer perceptrons, on the other hand, which are theoretically capable of finding arbitrarily long correlations, are not computationally feasible and suffer from rigidity as well. Recurrent neural networks are capable of finding longer range correlations and are not as rigid as other feed-forward networks with respect to the input/output sizes but are still not able to tackle arbitrarily long sequences. 
Deep reinforcement learning techniques have tremendous problems finding correct correlations between relevant actions and relevant rewards, because in most real-world applications the distance between them is generally larger than the correlation capabilities of the networks. Therefore, for some tasks automated systems still remain far behind their human counterparts.

For the human brain, from a cognitive perspective, one can say that newly learned concepts are further used for building other concepts, ideas, tasks, goals etc. ANNs do not have the possibility to build upon newly acquired skills because of the inflexibility of allocating resources for new situations (being, in fact, designed for only one particular skill). Dynamic allocation of resources becomes critical. It is not clear how to train a network if its structure is suddenly enriched with a new layer or a few neurons in a layer while still keeping the already learned concepts or decompositions. It is known that parts of the brain can take over, to some degree, the function of other parts; an artificial network should therefore be capable of performing the same trick, without catastrophically forgetting the previously acquired knowledge. We therefore suggest that a more appropriate neural architecture for either supervised or reinforcement learning should be capable of allocating resources on demand and gradually increasing their usage as needed. Letting a neural network dynamically expand to find out by itself the depth/width necessary to decompose certain objects/concepts into more elementary constituents would most likely lead to a compact architecture sufficiently complex to satisfactorily complete the given task (a minimal sketch of such function-preserving growth follows below). Experiments show that neural networks used in reinforcement learning (RL) tasks are able to achieve good results when starting from a lower difficulty and gradually increasing it, rather than directly training on the most difficult task. In a sense, gradually increasing the difficulty is analogous to continuous learning and to hierarchical learning from previously acquired skills and concepts. Training networks by gradually increasing the difficulty of their task, together with allowing dynamic growth and allocation of resources, would greatly enhance the results obtained from such systems.

One other issue with today's ANNs is that they are only good at narrow pattern recognition tasks. Some applications, like game playing, have limited success using end-to-end ML approaches. The recent trend is to employ multiple modules which each perform a narrow task very well and whose results are later combined in a predefined system to successfully complete a more complex task. Such a situation is, in some sense, similar to the feature engineering used in earlier versions of machine learning systems. Current approaches use feature engineering not at the level of data patterns but at the level of patterns in the environment mechanics. The dynamics of the world is decomposed by the engineers into more elementary action units which, combined in various ways, achieve good results for specific tasks. The logic of the world is separated out and certain algorithmic responsibilities are given to specialized modules. The burden of finding the problem's logical solution is taken by the engineer, and the machine still solves just the pattern recognition and curve fitting task. It becomes clear that one important ingredient is not implemented in the machine yet and comes from the outside: logic.
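As a concrete illustration of on-demand growth, here is a minimal sketch (our own NumPy example, not a full training procedure): the hidden layer is widened with new neurons whose outgoing weights are zero, so the already-learned function is preserved exactly while new capacity becomes available:

```python
import numpy as np

rng = np.random.default_rng(0)

W1 = rng.normal(size=(8, 4))    # input (4) -> hidden (8)
W2 = rng.normal(size=(2, 8))    # hidden (8) -> output (2)

def forward(x, W1, W2):
    return W2 @ np.tanh(W1 @ x)

def grow_hidden(W1, W2, extra):
    """Add `extra` hidden neurons without catastrophic forgetting: random
    incoming weights (so the new units can learn something new), zero
    outgoing weights (so current behaviour is untouched)."""
    W1_new = np.vstack([W1, rng.normal(size=(extra, W1.shape[1]))])
    W2_new = np.hstack([W2, np.zeros((W2.shape[0], extra))])
    return W1_new, W2_new

x = rng.normal(size=4)
before = forward(x, W1, W2)
W1g, W2g = grow_hidden(W1, W2, extra=4)
assert np.allclose(before, forward(x, W1g, W2g))  # nothing forgotten
```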
The capability of logical reasoning based on data and models is still missing from current systems. We will take a brief moment to illustrate the importance of the four basic human traits we have considered in this work – pattern recognition, creativity, intuition and logic – when used properly in computer systems. One can argue in favour of the AlphaZero [[Silver et al.2018](#bib.bibx16)] system that it somehow successfully incorporates all these qualities of intelligence. The system uses deep learning techniques to select among the more promising future moves at a given board position, and these techniques incorporate pattern recognition. After selecting the promising candidates for the next move, a Monte-Carlo tree search algorithm investigates the implications of actually choosing one of the selected moves. One can say that this is analogous to a logical process in the human mind which narrows down the advantages of each move. However, due to the large branching factor of the game, one cannot be sure which of the moves is actually the best: only a full map of the game tree could reveal that. Considering that intuition is an insufficiently argued decision process, one can also argue that the selection of the next move is driven by logic as well as intuition. Some professional Go players stated that AlphaGo, playing against Lee Sedol, considered the strongest player of our time, demonstrated that it is capable of creative play. We will leave this as an argument in favor of the claim that AlphaGo and AlphaZero have already achieved creativity.

Successful as such systems are, designing systems with built-in logical capabilities is not desirable. This is because they are limited to the domain they are designed for, and also because, up to some level, regardless of which side of quantum mechanics we are on, logic has to be an emergent quality of the brain. We therefore have to implement a system capable of acquiring logic by itself.

The fundamental reason for the success of current ANNs is the geometrical approach to data transformation and the back-propagation algorithm, which depends on the differentiability of the network components: losses, activation functions, convolution operations etc. But this differentiability also constitutes a major impediment in many recent applications of ANNs to reinforcement learning, because many, if not most, real-world tasks are not differentiable. In fact the algorithmic world around us behaves more like discrete mathematics. Investigating techniques which go beyond the differentiability paradigm should be a major topic in the development of neural networks or newer systems.

Synthesis
---------

We have investigated several aspects of intelligence put in philosophical, cognitive, instrumental and causal contexts. We have shown that in each context logic appears as either an integral part or the essential aspect of intelligence, and as what distinguishes the human brain from the animal brain. We have argued that to go beyond current limited artificial intelligence, logical reasoning is a crucial component of intelligence: it is fundamental to thinking at a lower level than causal thinking, which is a manifestation of the logical part. Logic is a fundamental aspect of all human philosophic and scientific activities but, as much as it is used, described, defined, taught and talked about, its emergence is not properly understood and investigated.
It is exactly this aspect which makes the difference between complex behaviour with a purpose and meaning and behaviour which is merely existential and motivated by survival. Being able to perform actions and thoughts motivated by logical arguments has proved to be the source of human success in understanding the universe. The questions of why something is logical, why logic seems to be the same for everyone, and why it has so much success when used as a tool for understanding the world keep eluding us. Efforts to uncover the source of logic should be more focused and practically oriented. Realizing even the smallest logical thinking system, without it being explicitly programmed for that, would be an even greater revolution than ANNs are today, equivalent to the invention of the wheel or the discovery of fire.

Modeling emergence of logical thinking
--------------------------------------

To model the emergence of logical thinking, we essentially take a Piagetian approach, in which an agent starts with preoperational thinking, proceeds to concrete operational thinking, and then advances to formal thinking [[Inhelder and Piaget1958](#bib.bibx5)]. Though some of Piaget's experiments have been faulted for their design and the interpretation of their results, we feel that this three-stage model still provides a reasonable framework in which to address how logical structures emerge from sensori-motor interactions. We are already applying this approach to model computer programming as a cognitive process [[Perţicaş and Indurkhya2018](#bib.bibx12)]. We briefly describe the three stages:

Preoperational thinking: This embodies the first step towards consolidating the experiences based on sensori-motor explorations. At this stage, symbolic representations emerge, but they are still grounded in static situations. Any transformations are egocentric, in that they are focused on the actions of the agent itself.

Concrete operational thinking: At this stage, stable representations and systems of transformations emerge: for example, classifications, serial orders, correspondences. These transformations become detached from static situations and become like internalized actions. They also make the agent take a step from the actual to the potential, because internalized actions suggest potential outcomes of a situation. This plays a key role in imagination.

Formal thinking: At this stage, the scope of potentialities becomes wider, so that reality is considered to be merely one of the possibilities. This corresponds to forming hypotheses and predicting the outcomes of actions.

In future work, we plan to elaborate this architecture. The first step is to show how sensori-motor interactions can lead to noticing regularities in the environment, which would form the basis of preoperational thinking [[Indurkhya1992](#bib.bibx3)].

Acknowledgements
----------------

This work was supported by the European Regional Development Fund and the Romanian Government through the Competitiveness Operational Programme 2014–2020, project ID P\_37\_679, MySMIS code 103319, contract no. 157/16.12.2016.
Any further work on AI Safety Success Stories?

TL;DR: Are there any works similar to Wei_Dai's AI Safety "Success Stories" that provide a framework to think about the landscape of possible success stories & pathways humanity will take to survive misaligned AI? I've been trying to think of systematic ways of assessing non-technical proposals for improving the odds of humanity's survival from misaligned AI. Aside from numerous frameworks for assessing technical alignment proposals, I haven't seen many resources on non-technical proposals that provide a concrete framework to think about the question: "What technological/geopolitical/societal pathway will our civilization most likely take (or should ideally take) in order to survive AI?" Having such a framework seems pretty valuable since it would let us think about the exact alignment pathway & context in which [proposals that want to help with alignment] would be effective. For example, a pretty clear dimension in which people's opinions differ is in the necessity of pivotal acts, i.e. "pivotal act vs gradual steering" (kind of oversimplified)—here, any proposal's theory of impact will necessarily depend on their beliefs regarding (a) which position on the spectrum currently appears to be most likely by default, and (b) which position on the spectrum we should be aiming for.

* If, say, my pessimistic containment strategy was about communicating AI risk to the capabilities people in order to promote cooperation between AI labs, it would be incoherent for me to, at the same time, be ultra-pessimistic about humanity's chances of enacting any cooperative regulation in the future.
* Or if I thought a Pivotal Act was the best option humanity has, and wanted to suggest some proposal that would be a force-multiplier if that line of strategy does happen in the future, it would make sense for my proposal to consider the forms in which this unilateralist org's AI will take.
  * where will it be developed?
  * will it be a corrigible AI whose safety features depend on hu
A Proper Scoring Rule for Confidence Intervals

You probably already know that you can incentivise honest reporting of probabilities using a proper scoring rule like log score, but did you know that you can also incentivize honest reporting of confidence intervals? To incentivize reporting of a 90% confidence interval, take the score −S − 20⋅D, where S is the size of your confidence interval, and D is the distance between the true value and the interval. D is 0 whenever the true value is in the interval. This incentivizes not only giving an interval that has the true value 90% of the time, but also distributes the remaining 10% equally between overestimates and underestimates. To keep the lower bound of the interval important, I recommend measuring S and D in log space. So if the true value is T and the interval is (L,U), then S is log(U/L), and D is log(T/U) for underestimates and log(L/T) for overestimates. Of course, you need questions with positive answers to do this. To do a P% confidence interval, take the score −S − (200/(100−P))⋅D. This can be used to make calibration training, using something like Wits and Wagers cards, more fun. I also think it could be turned into an app, if one could get a large list of questions with numerical values.
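One way to code this rule up (a straightforward transcription of the formulas above; the function name and interface are our own):

```python
import math

def interval_score(low, high, truth, p=90.0):
    """Score for a reported p% confidence interval (low, high) on a
    positive-valued question; higher is better."""
    assert 0 < low < high and truth > 0
    size = math.log(high / low)                 # S, measured in log space
    if truth > high:                            # underestimate
        distance = math.log(truth / high)       # D
    elif truth < low:                           # overestimate
        distance = math.log(low / truth)
    else:
        distance = 0.0                          # true value inside the interval
    return -size - (200.0 / (100.0 - p)) * distance

# For p = 90 the multiplier is 200/10 = 20, matching the score -S - 20*D.
print(interval_score(50, 200, truth=120))   # inside: just -S = -log(4)
print(interval_score(50, 200, truth=400))   # above:  -log(4) - 20*log(2)
```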
Risks from AI persuasion

A case for why persuasive AI might pose risks somewhat distinct from the normal power-seeking alignment failure scenarios. Where I'm currently at: I feel moderately confident that powerful persuasion is useful to think about for understanding AI x-risk, but unsure whether it's best regarded as its own threat, as a particular example of alignment difficulty, or just as a factor in how the world might change over the next decade or two. I think this doc is too focused on whether we'll get dangerous persuasion before strategic misaligned AI, whereas the bigger risks from persuasive technology may be situations where we solve 'alignment' according to a narrow definition, but we still aren't 'philosophically competent' enough to avoid persuasive capabilities having bad effects on our reflection procedure. This doc is based heavily on ideas from Carl Shulman, but doesn't necessarily represent his views. Thanks to Richard Ngo for lots of help also. Others have written great things on this topic, e.g. here.

Introduction

Persuasion and manipulation are a natural, profitable, easy-to-train-for application of hard-to-align ML models. The impacts of existing social-media based persuasion are probably overblown, and an evolutionary argument tells us that there shouldn't be easy ways for a human to be manipulated by an untrusted party. However, it's plausible that pre-AGI ML progress in things like text and video generation could dramatically improve the efficacy of short-interaction persuasion. It's also plausible that people will spend significant amounts of time interacting with AI companions and assistants, creating new avenues for effective manipulation. In the worst case, highly effective persuasion could lead to very high-fidelity transmission of ideologies, and more robust selection pressure for expansionary ideologies. This could lead to stable authoritarianism, or isolated ideological clades with poor ability to cooperate. Even in the best case, if we try to carefull
How I Learn From Textbooks After a long semi-hiatus during my MS degree, I am eager to revisit my sequence on scholarship, particularly the topic of learning new subjects from textbooks. This article provides a general overview based on my own introspection and experiences, rather than an extensive literature review. While I have been influenced by academic concepts and blog posts on learning and scholarship, my aim is not to connect with wider literature, but to express my thoughts and intuitions, in the hopes that they resonate with and assist some readers. When I started studying engineering, I realized that reading and absorbing information from textbooks was a challenge. Sometimes, I found myself staring at a page without actually comprehending the words, while other times I would scan through material without understanding it. I tried forcing myself to read every sentence carefully, but found it to be unsustainable and too slow. That's when I discovered more effective ways to approach reading, including what I'll call "Guess-and-Check," the technique of scanning and making predictions. Instead of trying to read every word in a textbook, in Guess-and-Check you scan the material and make predictions about what you think the text is saying. This active reading process can help you better engage with the material and activate your prior knowledge. After making your prediction, be sure to confirm or correct it by checking it against the text. Scanning the material means looking at the key words and phrases in each sentence rather than trying to read every word. Focus on the main points and identify any areas where you're confused or need more information. After scanning and absorbing material with Guess-and-Check, it's important to take a higher-level perspective to understand the relationships between concepts and ideas. By stepping back and considering how individual sections fit into the larger context of the textbook, you can better appreciate the author's purpose and intention. For
Why work at AI Impacts? AI Impacts is beginning a serious hiring round (see here for job postings), so I’d like to explain a bit why it has been my own best guess at the highest impact place for me to work for me. (As in, this is a personal blog post by Katja on the AI Impacts blog, not some kind of officialesque missive from the organization.) But first— What is AI Impacts? AI Impacts is a few things: 1. An online library of best-guess answers to questions about the future of AI. Including big questions, like ‘how likely is a sudden jump in AI progress at around human-level performance?’, and sub-questions informing those answers (‘are discontinuities common in technological trends?’), and sub-sub questions (‘did penicillin cause any discontinuous changes in syphilis trends?’), and so on. Each page ideally has a high-level conclusion at the top, and reasoning supporting it below, which will often call on the conclusions of other pages. These form something like a set of trees, with important, hard, decision-relevant questions at the root and low-level, tractable, harder-to-use-on-their-own questions at the leaves. This isn’t super obvious at the moment, because a lot of the trees are very incomplete, but that’s the basic idea. 2. A research group focused on finding such answers, through a mixture of original research and gathering up that which has been researched by others. 3. A blog on these topics, for more opinionated takes, conversational guides to the research, updates, and other things that don’t fit in the main library (like this!). 4. A locus of events for people interested in this kind of research, e.g. dinners and workshops, a Slack with other researchers, online coffees. Why think working on AI Impacts is among the best things to do? 1. AI risk looks like a top-notch cause area It seems plausible to me that advanced AI poses a substantial risk to humanity’s survival. I don’t think this is clear, but I do think there’s enough evidence that it warrants a lot of atten
Value of Information: Four Examples Value of Information (VoI) is a concept from decision analysis: how much answering a question allows a decision-maker to improve its decision. Like opportunity cost, it's easy to define but often hard to internalize; and so instead of belaboring the definition let's look at some examples.   Gambling with Biased Coins Normal coins are approximately fair.1 Suppose you and your friend want to gamble, and fair coins are boring, so he takes out a quarter and some gum and sticks the gum to the face of the quarter near the edge. He then offers to pay you $24 if the coin lands gum down, so long as you pay him $12 to play the game. Should you take that bet? First, let's assume risk neutrality for the amount of money you're wagering. Your expected profit is $24p-12, where p is the probability the coin lands gum down. This is a good deal if p>.5, but a bad deal if p<.5.  So... what's p? More importantly, how much should you pay to figure out p? A Bayesian reasoner looking at this problem first tries to put a prior on p. An easy choice is a uniform distribution between 0 and 1, but there are a lot of reasons to be uncomfortable with that distribution. It might be that the gum will be more likely to be on the bottom- but it also might be more likely to be on the top. The gum might not skew the results very much- or it might skew them massively. You could choose a different prior, but you'd have trouble justifying it because you don't have any solid evidence to update on yet.2 If you had a uniform prior and no additional evidence, then the deal as offered is neutral. But before you choose to accept or reject, your friend offers you another deal- he'll flip the coin once and let you see the result before you choose to take the $12 deal, but you can't win anything on this first flip. How much should you pay to see one flip? Start by modeling yourself after you see one flip. It'll either come up gum or no gum, and you'll update and produce a posterior for each case. In the f
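For the one-flip offer just described, the numbers can be worked out exactly (our own computation, assuming risk neutrality and the uniform prior, i.e. Beta(1,1)):

```python
from fractions import Fraction

# Uniform prior = Beta(1, 1); after seeing k gum-down results in n flips the
# posterior is Beta(1 + k, 1 + n - k), whose mean is (1 + k) / (2 + n).
def posterior_mean(k, n):
    return Fraction(1 + k, 2 + n)

def profit_if_play(p):
    return 24 * p - 12

# Without the extra flip: E[p] = 1/2, so playing is worth exactly $0
# (profit is linear in p, so deciding on the posterior mean is optimal).
baseline = max(Fraction(0), profit_if_play(posterior_mean(0, 0)))

# With one flip: gum-down happens with marginal probability E[p] = 1/2.
ev_gum    = max(Fraction(0), profit_if_play(posterior_mean(1, 1)))  # mean 2/3 -> play, $4
ev_no_gum = max(Fraction(0), profit_if_play(posterior_mean(0, 1)))  # mean 1/3 -> decline, $0
with_info = Fraction(1, 2) * ev_gum + Fraction(1, 2) * ev_no_gum

print(with_info - baseline)  # value of information: $2
```

So under these assumptions, seeing one flip is worth $2: pay anything less than that and the deal is favorable.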
Wanted: "The AIs will need humans" arguments As Luke mentioned, I am in the process of writing "Responses to Catastrophic AGI Risk": A journal-bound summary of the AI risk problem, and a taxonomy of the societal proposals (e.g. denial of the risk, no action, legal and economic controls, differential technological development) and AI design proposals (e.g. AI confinement, chaining, Oracle AI, FAI) that have been made. One of the categories is "They Will Need Us" - claims that AI is no big risk, because AI will always have a need of something that humans have, and that they will therefore preserve us. Currently this section is pretty empty: > Supporting a mutually beneficial legal or economic arrangement is the view that AGIs will need humans. For example, Butler (1863) argues that machines will need us to help them reproduce, and Lucas (1961) suggests that machines could never show Gödelian sentences true, though humans can see them as true. But I'm certain that I've heard this claim made more often than in just those two sources. Does anyone remember having seen such arguments somewhere else? While "academically reputable" sources (papers, books) are preferred, blog posts and websites are fine as well. Note that this claim is distinct from the claim that (due to general economic theory) it's more beneficial for the AIs to trade with us than to destroy us. We already have enough citations for that argument, what we're looking for are arguments saying that destroying humans would mean losing something essentially irreplaceable.
Book review: Age Later

Book review: Age Later, by Nir Barzilai. Books by serious researchers on how to defeat aging are now coming out almost as fast as I have time to read them. This one mostly aims to enable us to live in good health to 115, preferably via a few simple pills. Age Later is fairly similar to Sinclair's Lifespan. At least, the differences are small compared to how they differ from Aubrey de Grey's Ending Aging. I'm a bit concerned by this, since anti-aging research has not yet demonstrated enough results to justify converging on a single strategy. Maybe that's just an artifact of who's writing books? The book has only a little bit of theorizing about the causes of aging and about why we should expect it to be curable. Barzilai endorses Sinclair's unimpressive attempts at a theory. Barzilai waits until the last page to mention an argument that aging can, in principle, be fully reversed (as opposed to delayed). To wit, that fetal cells start with an age of zero, even though the egg and sperm show clear signs of being as old as the parents.

Studies of Centenarians

A fair amount of the book is devoted to evidence from centenarians. Barzilai provides plenty of examples which indicate that they have far from perfect genes and lifestyles. It's unlikely that we can get much in the way of useful advice from this. E.g. exercise clearly helps some people live longer, while others can get away without it. Since I can't tell which category I'm in, I treat exercise as necessary. One questionable example:

> Most striking, two of the centenarians had variants that are a major risk factor for Alzheimer's (APOE4) - the textbooks say they should have been suffering from dementia at age seventy and dead at eighty

I suggest that Barzilai look for a better textbook. Experts who study pre-agricultural tribes say that people there don't get dementia. I'm pretty sure that's due to lifestyles, not a shortage of APOE4. Also, APOE ε4 Is Not Associated with Alzheimer's Disease in Elderly Niger
Liquid vs Illiquid Careers We're used to thinking about skills and experience in terms of their market value - how useful they are and how much employers or customers would pay for them. But  another, more subtle dimension gets less attention: liquidity. Just like in financial markets, some human capital is easier to price and trade than others. This is not about the absolute value of your skills or experience, but how easily the market can assess and exchange them. Consider Jim - with 4 years of experience as a management consultant at McKinsey. In the job market, potential employers have a good idea of what they are buying - someone smart and conscientious enough to be hired by McKinsey; meets a reasonably high bar on commercial acumen and communication skills; and polished enough to have managed clients and moved one step higher on the well-defined McKinsey ladder. Jim's skills are highly legible to every potential employer. They can estimate his salary within narrow confidence intervals, given that McKinsey's pay structure is essentially public information. If Jim decides to switch jobs or move countries, all else equal, he can be relatively confident about his chances of landing a job that pays him his "market value". Much like publicly traded stocks, his legible experience and skills can be sold quickly at the prevailing market rate. On the flip side, picture someone who's spent two years at an AI startup, followed by a stint running operations for a non-profit in Asia, and is now working on a political campaign. How comfortable would you be guessing this person's potential earnings or, more fundamentally, the value they could generate? If I were a hiring manager looking to fill a high-leverage, entrepreneurial position requiring an exceptional young candidate, I'd be more inclined to interview this person over most McKinsey consultants. However, I'd evaluate them more rigorously than someone with the McKinsey stamp, and might even limit my search to candidates within my social or
Cached Phrases

Recently I've noticed that, while introspecting, my internal monologue will state an opinion on a preceding thought, and I will feel an immediate repulsion towards that phrase as something disingenuous or alien.  I will then recognize it as what I've come to term a "cached phrase".  A variation/subset of cached thought, it is usually a string of words that one might hear in a movie, or something a friend or coworker would say, but it comes to mind when you yourself are thinking about a domain associated with the conversation from the movie or friend. Some (half fabricated) examples:

> "Fred should really cut back on drinking; he just started seeing someone too; *like that's going to last*... wait, I don't actually have any reason to think that it wouldn't..."
>
> A few days after watching an episode of Peep Show, and sympathizing with Mark, who is comically socially inept, I go to eat at a local market/cafeteria, and often hear Mark's internal monologue narrate my own situation, self-conscious about how others judge my actions.

I wonder if it correlates positively to the OCD spectrum. I notice it quite frequently in myself (I'm not diagnosed OCD but suspect a slightly higher than average presence), and sometimes struggle to determine whether it's an opinion I truly hold, or if my hypothesis is an actual explanation for its appearance, or to what degree one of the two is true.  Do I really feel what I think?  It is too often ambiguous to me.  Perhaps it is related to the concept of mentally modeling other people.  We have our models of what other people would say in certain situations, but in this case, a random model's opinion is invoked involuntarily. Does anyone else experience this?  Do you agree with my hypothesis, or are these actually genuine thoughts; subconscious-level emotional reactions?  Is there, or can we develop, a heuristic for determining where a certain thought or opinion lies on the spectrum between the two?
LW Bipolar Support Group?

Related to: Intrapersonal negotiation

I'm writing to inquire about whether there's interest on LW in developing a bipolar support group. There's a general issue of the people at in-person support groups and designated online forums being relatively uneducated, having little capacity or ability for reflection, and of the discussion at such places degenerating into platitudes. I was touched by datadataeverywhere's posting Intrapersonal negotiation and would be interested in talking with similar people about similar topics. I'm bipolar II and have been for at least a decade, but only fully became aware of my condition over the past year. I've found my varying functionality/productivity corresponding to hypomanic/depressive oscillations very confusing and have little idea of how to best ride out the waves. I am seeing a psychiatrist and have read books such as The Bipolar Disorder Survival Guide, Second Edition: What You and Your Family Need to Know, and The Bipolar Workbook: Tools for Controlling Your Mood Swings. I tried to read the Goodwin/Jamison Manic-Depressive Illness but found it dull. I looked at Jamison's other books, but though she's a very poetic author I found the accuracy and general applicability of her subjective narratives questionable. Anyway, any LWers who are interested should comment below or PM me.
"Talking with God", a transhumanist short story Talking with God is a pleasant and inspiring transhumanist short story. I've got some quibbles, but I'll save them for the comments because I think the story is better without spoilers. There's a discussion forum at the story's site.
Two-year update on my personal AI timelines I worked on my draft report on biological anchors for forecasting AI timelines mainly between ~May 2019 (three months after the release of GPT-2) and ~Jul 2020 (a month after the release of GPT-3), and posted it on LessWrong in Sep 2020 after an internal review process. At the time, my bottom line estimates from the bio anchors modeling exercise were:[1] * Roughly ~15% probability of transformative AI by 2036[2] (16 years from posting the report; 14 years from now). * A median of ~2050 for transformative AI (30 years from posting, 28 years from now). These were roughly close to my all-things-considered probabilities at the time, as other salient analytical frames on timelines didn’t do much to push back on this view. (Though my subjective probabilities bounced around quite a lot around these values and if you’d asked me on different days and with different framings I’d have given meaningfully different numbers.) It’s been about two years since the bulk of the work on that report was completed, during which I’ve mainly been thinking about AI. In that time it feels like very short timelines have become a lot more common and salient on LessWrong and in at least some parts of the ML community. My personal timelines have also gotten considerably shorter over this period. I now expect something roughly like this: * ~15% probability by 2030 (a decrease of ~6 years from 2036). * ~35% probability by 2036 (a ~3x likelihood ratio[3] vs 15%). * This implies that each year in the 6 year period from 2030 to 2036 has an average of over 3% probability of TAI occurring in that particular year (smaller earlier and larger later). * A median of ~2040 (a decrease of ~10 years from 2050). * This implies that each year in the 4 year period from 2036 to 2040 has an average of almost 4% probability of TAI. * ~60% probability by 2050 (a ~1.5x likelihood ratio vs 50%). As a result, my timelines have also concentrated more around a somewhat narrower band of years. Previously,
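The quoted per-year figures and ratios can be checked directly (our own recomputation; reading the bracketed "likelihood ratio" as an odds ratio is our assumption):

```python
# Recomputing the per-year averages and ratios quoted above.
def odds(p):
    return p / (1 - p)

p_2030, p_2036, p_2040, p_2050 = 0.15, 0.35, 0.50, 0.60
print((p_2036 - p_2030) / 6)          # ~0.033 -> "over 3%" per year, 2030-2036
print((p_2040 - p_2036) / 4)          # 0.0375 -> "almost 4%" per year, 2036-2040
print(odds(p_2036) / odds(p_2030))    # ~3.05  -> the "~3x likelihood ratio" vs 15%
print(odds(p_2050) / odds(0.50))      # 1.5    -> the "~1.5x likelihood ratio" vs 50%
```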
Applications for AI Safety Camp 2022 Now Open! If you’ve read about alignment research and you want to start contributing, the new iteration of the AI Safety Camp is a great opportunity! It’s a virtual camp from January to May 2022, where you collaborate with other applicants to work (1h / normal workday, 7h / weekend sprint day) on open problems proposed and supervised by mentors like John Wentworth, Beth Barnes, Stuart Armstrong, Daniel Kokotajlo… Around this core of research, the camp also includes talks and discussions about fundamental ideas in the field, how alignment research works, and how and where to get a job/funding. All in all, the AI Safety Camp is a great opportunity if: * You have read enough about alignment that you’re convinced of the importance of the problem * You want to do alignment research (whether conceptual or applied), or to collaborate with alignment researchers (doing policy for example) * You don’t feel yet like you have enough research taste and grasp of the field to choose your research problems yourself yet Note that you don’t need advanced maths skills to participate in the camp, as some of the projects don’t require any specific skillset or very unusual ones (evolutionary genetics, history...). If you care about alignment and are in this situation, I encourage you to apply for a project without required skillsets and learn what you need as you go along. All the details on how to apply are available on [the website](https://aisafety.camp/) (including the list of open problems).
Politics Discussion Thread August 2012 In line with the results of the poll here, a thread for discussing politics.  Incidentally, folks, I think downvoting the option you disagree with in a poll is generally considered poor form.   1.) Top-level comments should introduce arguments; responses should be responses to those arguments. 2.) Upvote and downvote based on whether or not you find an argument convincing in the context in which it was raised.  This means if it's a good argument against the argument it is responding to, not whether or not there's a good/obvious counterargument to it; if you have a good counterargument, raise it.  If it's a convincing argument, and the counterargument is also convincing, upvote both.  If both arguments are unconvincing, downvote both. 3.) A single argument per comment would be ideal; as MixedNuts points out here, it's otherwise hard to distinguish between one good and one bad argument, which makes the upvoting/downvoting difficult to evaluate. 4.) In general try to avoid color politics; try to discuss political issues, rather than political parties, wherever possible.   If anybody thinks the rules should be dropped here, now that we're no longer conducting a test - I already dropped the upvoting/downvoting limits I tried, unsuccessfully, to put in - let me know.  The first rule is the only one I think is strictly necessary. Debiasing attempt: If you haven't yet read Politics is the Mindkiller, you should.
Was Homer a stochastic parrot? Meaning in literary texts and LLMs

Cross posted from New Savanna. The phrase "stochastic parrot" was coined, I believe, by Emily Bender, a computational linguist and one of the coauthors of the paper, On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? Consider this passage:

> That is, human language use takes place between individuals who share common ground and are mutually aware of that sharing (and its extent), who have communicative intents which they use language to convey, and who model each others' mental states as they communicate. As such, human communication relies on the interpretation of implicit meaning conveyed between individuals. The fact that human-human communication is a jointly constructed activity [29, 128] is most clearly true in co-situated spoken or signed communication, but we use the same facilities for producing language that is intended for audiences not co-present with us (readers, listeners, watchers at a distance in time or space) and in interpreting such language when we encounter it. It must follow that even when we don't know the person who generated the language we are interpreting, we build a partial model of who they are and what common ground we think they share with us, and use this in interpreting their words.
>
> Text generated by an LM is not grounded in communicative intent, any model of the world, or any model of the reader's state of mind.

I have been happy to accept this view, while at the same time denying that LLMs are stochastic parrots. That's the view I took in my 2020 working paper, GPT-3: Waterloo or Rubicon? Here be Dragons, and which I have maintained until quite recently. Then, at the end of March, a discussion I had over at LessWrong led me to revise that view, if only provisionally, in a post, MORE on the issue of meaning in large language models (LLMs). There, following an argument that John Searle had made about certain digital devices, where he pointed out that they can be said to compute only in relation “to som
Bragging thread August 2015 Your job, should you choose to accept it, is to comment on this thread explaining the most awesome thing you've done this month. You may be as blatantly proud of yourself as you feel. You may unabashedly consider yourself the coolest freaking person ever because of that awesome thing you're dying to tell everyone about. This is the place to do just that. Remember, however, that this isn't any kind of progress thread. Nor is it any kind of proposal thread. This thread is solely for people to talk about the awesome things they have done. Not "will do". Not "are working on". Have already done. This is to cultivate an environment of object level productivity rather than meta-productivity methods. So, what's the coolest thing you've done this month? (Previous Bragging Thread)
Masked World Models for Visual Control

1 Introduction
---------------

Model-based reinforcement learning (RL) holds the promise of sample-efficient robot learning by learning a world model and leveraging it for planning [chua2018deep, deisenroth2011pilco, lenz2015deepmpc] or generating imaginary states for behavior learning [kurutach2018model, janner2019trust]. These approaches have also previously been applied to environments with visual observations, by learning an action-conditional video prediction model [finn2017deep, ebert2018visual] or a latent dynamics model that predicts compact representations in an abstract latent space [watter2015embed, hafner2019learning]. However, learning world models in environments with complex visual observations, e.g., accurately modeling interactions with small objects, is an open challenge. We argue that this difficulty comes from the design of current approaches that typically optimize the world model end-to-end for learning both visual representations and dynamics [hafner2019learning, zhang2019solar]. This imposes a trade-off between learning representations and dynamics that can prevent world models from accurately capturing visual details, making it difficult to predict forward into the future. Another approach is to learn representations and dynamics separately, such as earlier work by ha2018world who train a variational autoencoder (VAE) [kingma2013auto] and a dynamics model on top of the VAE features. However, separately-trained VAE representations may not be amenable to dynamics learning [watter2015embed, zhang2019solar] or may not capture task-relevant details [ha2018world]. On the other hand, masked autoencoders (MAE) [he2021masked] have recently been proposed as an effective and scalable approach to visual representation learning, by training a self-supervised vision transformer (ViT) [dosovitskiy2020image] to reconstruct masked patches. While it motivates us to learn world models on top of MAE representations, we find that MAE often struggles to capture fine-grained details within patches. Because capturing visual details, e.g., object positions, is crucial for solving visual control tasks, it is desirable to develop a representation learning method that captures such details but also achieves the benefits of MAE such as stability, compute-efficiency, and scalability.

In this paper, we present Masked World Models (MWM), a visual model-based RL algorithm that decouples visual representation learning and dynamics learning. The key idea of MWM is to train an autoencoder that reconstructs visual observations with convolutional feature masking, and a latent dynamics model on top of the autoencoder. By introducing early convolutional layers and masking out convolutional features instead of pixel patches, our approach enables the world model to capture fine-grained visual details from complex visual observations. Moreover, in order to learn task-relevant information that might not be captured solely by the reconstruction objective, we introduce an auxiliary reward prediction task for the autoencoder. Specifically, we separately update visual representations and dynamics by repeating the iterative processes of (i) training the autoencoder with convolutional feature masking and reward prediction, and (ii) learning the latent dynamics model that predicts visual representations from the autoencoder (see [Figure 1](#S2.F1 "Figure 1 ‣ World models from visual observations ‣ 2 Related Work ‣ Masked World Models for Visual Control")).
#### Contributions

We highlight the contributions of our paper below:

* We demonstrate the effectiveness of decoupling visual representation learning and dynamics learning for visual model-based RL. MWM significantly outperforms a state-of-the-art model-based baseline [hafner2020mastering] on various visual control tasks from Meta-world [yu2020meta] and RLBench [james2020rlbench].
* We show that a self-supervised ViT trained to reconstruct visual observations with convolutional feature masking can be effective for visual model-based RL. Interestingly, we find that masking convolutional features can be more effective than pixel patch masking [he2021masked], by allowing for capturing fine-grained details within patches. This is in contrast to the observation in touvron2022three, where both perform similarly on the ImageNet classification task [deng2009imagenet].
* We show that an auxiliary reward prediction task can significantly improve performance by encoding task-relevant information into visual representations.

2 Related Work
---------------

#### World models from visual observations

There have been several approaches to learn visual representations for model-based approaches via image reconstruction [finn2017deep, ebert2018visual, watter2015embed, hafner2019learning, zhang2019solar, ha2018world, hafner2020mastering, finn2016deep, hafner2019dream, kaiser2019model], e.g., learning a video prediction model [finn2017deep, gupta2022maskvit] or a latent dynamics model [watter2015embed, hafner2019learning, zhang2019solar]. This has been followed by a series of works that demonstrated the effectiveness of model-based approaches for solving video games [hafner2020mastering, ye2021mastering, kaiser2019model] and visual robot control tasks [ebert2018visual, hafner2019dream, seyde2020learning, rybkin2021model]. There also have been several works that considered different objectives, including bisimulation [gelada2019deepmdp] and contrastive learning [nguyen2021temporal, okada2021dreaming, deng2021dreamerpro]. While most prior works optimize a single model to learn both visual representations and dynamics, we instead develop a framework that decouples visual representation learning and dynamics learning.

Figure 1: Illustration of our approach. We continually update visual representations and dynamics using online samples collected from environment interaction, by repeating iterative processes of training (Left) an autoencoder with convolutional feature masking and reward prediction and (Right) a latent dynamics model in the latent space of the autoencoder. We note that autoencoder parameters are not updated during dynamics learning.

#### Self-supervised vision transformers

Self-supervised learning with vision transformers (ViT) [dosovitskiy2020image] has been actively studied. For instance, chen2021empirical introduced MoCo-v3 which trains a ViT with contrastive learning.
caron2021emerging introduced DINO which utilizes a self-distillation loss [hinton2015distilling], and demonstrated that self-supervised ViTs contain information about the semantic layout of images. Training self-supervised ViTs with masked image modeling [he2021masked, bao2021beit, li2021mst, feichtenhofer2022masked, xie2022simmim, wei2022masked, zhou2021ibot] has also been successful. In particular, he2021masked proposed a masked autoencoder (MAE) that reconstructs masked pixel patches with an asymmetric encoder-decoder architecture. Unlike MAE, we propose to randomly mask features from early convolutional layers [xiao2021early] instead of pixel patches and demonstrate that self-supervised ViTs can also be effective for visual model-based RL. We discuss related work in more detail in [Appendix C](#A3 "Appendix C Extended Related Work ‣ Masked World Models for Visual Control").

Figure 2: Examples of visual observations used in our experiments ((a) Pick Place, (b) Shelf Place, (c) Reach Target, (d) Push Button, (e) Reach Duplo). We consider a variety of visual robot control tasks from Meta-world [yu2020meta], RLBench [james2020rlbench], and DeepMind Control Suite [tassa2020dm\_control].

3 Preliminaries
----------------

#### Problem formulation

We formulate a visual control task as a partially observable Markov decision process (POMDP) [sutton2018reinforcement], which is defined as a tuple $(\mathcal{O}, \mathcal{A}, p, r, \gamma)$. $\mathcal{O}$ is the observation space, $\mathcal{A}$ is the action space, $p(o_t \,|\, o_{<t}, a_{<t})$ is the transition dynamics, $r$ is the reward function that maps previous observations and actions to a reward $r_t = r(o_{\leq t}, a_{<t})$, and $\gamma \in [0, 1)$ is the discount factor.

#### Dreamer

Dreamer [hafner2020mastering, hafner2019dream] is a visual model-based RL method that learns world models from pixels and trains an actor-critic model via latent imagination. Specifically, Dreamer learns a Recurrent State Space Model (RSSM) [hafner2019learning], which consists of the following four components:

$$\begin{aligned} \text{Representation model:} \quad & s_t \sim q_\theta(s_t \,|\, s_{t-1}, a_{t-1}, o_t) \\ \text{Image decoder:} \quad & \hat{o}_t \sim p_\theta(\hat{o}_t \,|\, s_t) \\ \text{Transition model:} \quad & \hat{s}_t \sim p_\theta(\hat{s}_t \,|\, s_{t-1}, a_{t-1}) \\ \text{Reward predictor:} \quad & \hat{r}_t \sim p_\theta(\hat{r}_t \,|\, s_t) \end{aligned} \tag{1}$$

The representation model extracts model state $s_t$ from previous model state $s_{t-1}$, previous action $a_{t-1}$, and current observation $o_t$. The transition model predicts future state $\hat{s}_t$ without access to the current observation $o_t$. The image decoder reconstructs raw pixels to provide a learning signal, and the reward predictor enables us to compute rewards from future model states without decoding future frames.
All model parameters $\theta$ are trained to jointly learn visual representations and environment dynamics by minimizing the negative variational lower bound [kingma2013auto]:

$$\mathcal{L}(\theta) \doteq \mathbb{E}_{q_\theta(s_{1:T} \,|\, a_{1:T},\, o_{1:T})}\Big[ \sum_{t=1}^{T} \big( -\ln p_\theta(o_t \,|\, s_t) - \ln p_\theta(r_t \,|\, s_t) + \beta\,\mathrm{KL}\big[ q_\theta(s_t \,|\, s_{t-1}, a_{t-1}, o_t) \,\|\, p_\theta(\hat{s}_t \,|\, s_{t-1}, a_{t-1}) \big] \big) \Big], \tag{2}$$

where $\beta$ is a hyperparameter that controls the tradeoff between the quality of visual representation learning and the accuracy of dynamics learning [alemi2018fixing]. Then, the critic is learned to regress the values computed from imaginary rollouts, and the actor is trained to maximize the values by propagating analytic gradients back through the transition model (see [Appendix A](#A1 "Appendix A Behavior Learning ‣ Masked World Models for Visual Control") for the details).

#### Masked autoencoder

Masked autoencoder (MAE) [he2021masked] is a self-supervised visual representation technique that trains an autoencoder to reconstruct raw pixels with randomly masked patches consisting of pixels. Following a scheme introduced in vision transformer (ViT) [dosovitskiy2020image], the observation $o_t \in \mathbb{R}^{H \times W \times C}$ is processed with a patchify stem that reshapes $o_t$ into a sequence of 2D patches $h_t \in \mathbb{R}^{N \times (P^2 C)}$, where $P$ is the patch size and $N = HW/P^2$ is the number of patches. Then a subset of patches is randomly masked with a ratio of $m$ to construct $h^m_t \in \mathbb{R}^{M \times (P^2 C)}$.

$$\text{Patchify stem:} \quad h_t = f^{\text{patch}}_\phi(o_t) \qquad \text{Masking:} \quad h^m_t \sim p_{\text{mask}}(h^m_t \,|\, h_t, m) \tag{3}$$

A ViT encoder embeds only the remaining patches $h^m_t$ into $D$-dimensional vectors, concatenates the embedded tokens with a learnable CLS token, and processes them through a series of Transformer layers [vaswani2017attention]. Finally, a ViT decoder reconstructs the observation by processing tokens from the encoder and learnable mask tokens through Transformer layers followed by a linear output head:

$$\text{ViT encoder:} \quad z^m_t \sim p_\phi(z^m_t \,|\, h^m_t) \qquad \text{ViT decoder:} \quad \hat{o}_t \sim p_\phi(\hat{o}_t \,|\, z^m_t) \tag{4}$$

All the components parameterized by $\phi$ are jointly optimized to minimize the mean squared error (MSE) between the reconstructed and original pixel patches. MAE computes $z^0_t$ without masking, and utilizes its first component (i.e., CLS representation) for downstream tasks (e.g., image classification).

4 Masked World Models
----------------------

In this section, we present Masked World Models (MWM), a visual model-based RL framework for learning accurate world models by separately learning visual representations and environment dynamics. Our method repeats (i) updating an autoencoder with convolutional feature masking and an auxiliary reward prediction task (see [Section 4.1](#S4.SS1 "4.1 Visual Representation Learning ‣ 4 Masked World Models ‣ Masked World Models for Visual Control")), (ii) learning a dynamics model in the latent space of the autoencoder (see [Section 4.2](#S4.SS2 "4.2 Latent Dynamics Learning ‣ 4 Masked World Models ‣ Masked World Models for Visual Control")), and (iii) collecting samples from environment interaction. We provide the overview and pseudocode of MWM in [Figure 1](#S2.F1 "Figure 1 ‣ World models from visual observations ‣ 2 Related Work ‣ Masked World Models for Visual Control") and [Appendix D](#A4 "Appendix D Pseudocode ‣ Masked World Models for Visual Control"), respectively.
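Before turning to the components of MWM, the patchify-and-mask step of Equation (3) can be made concrete with a short sketch (shapes only, with an assumed 64×64×3 observation and patch size 8; this is our own illustration, not the paper's code):

```python
import torch

def patchify(obs: torch.Tensor, patch: int) -> torch.Tensor:
    """(H, W, C) -> (N, patch*patch*C) with N = H*W / patch**2 patches."""
    H, W, C = obs.shape
    x = obs.reshape(H // patch, patch, W // patch, patch, C)
    return x.permute(0, 2, 1, 3, 4).reshape(-1, patch * patch * C)

def random_mask(patches: torch.Tensor, m: float) -> torch.Tensor:
    """Keep a random (1 - m) fraction of the patches."""
    keep = int(patches.shape[0] * (1 - m))
    idx = torch.randperm(patches.shape[0])[:keep]
    return patches[idx]

o_t = torch.rand(64, 64, 3)
h_t = patchify(o_t, patch=8)        # (64, 192): N = 64 patches
h_m = random_mask(h_t, m=0.75)      # (16, 192): 75% of patches masked out
```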
### 4.1 Visual Representation Learning

It has been observed that masked image modeling with a ViT architecture [he2021masked, bao2021beit, feichtenhofer2022masked] enables compute-efficient and stable self-supervised visual representation learning. This motivates us to adopt this approach for visual model-based RL, but we find that masked image modeling with commonly used pixel patch masking [he2021masked] often makes it difficult to learn fine-grained details within patches, e.g., small objects (see [Appendix B](#A2 "Appendix B Extended Qualitative Analysis ‣ Masked World Models for Visual Control") for a motivating example). While one can consider small-size patches, this would increase computational costs due to the quadratic complexity of self-attention layers.

To handle this issue, we instead propose to train an autoencoder that reconstructs raw pixels given randomly masked convolutional features. Unlike previous approaches that utilize a patchify stem and randomly mask pixel patches (see [Section 3](#S3 "3 Preliminaries ‣ Masked World Models for Visual Control")), we adopt a convolution stem [dosovitskiy2020image, xiao2021early] that processes $o_t$ through a series of convolutional layers followed by a flatten layer, to obtain $h^c_t \in \mathbb{R}^{N_c \times D}$, where $N_c$ is the number of convolutional features. Then $h^c_t$ is randomly masked with a ratio of $m$ to obtain $h^{c,m}_t \in \mathbb{R}^{M_c \times D}$, and the ViT encoder and decoder process $h^{c,m}_t$ to reconstruct raw pixels.

$$\begin{aligned} \text{Convolution stem:} \quad & h^c_t = f^{\text{conv}}_\phi(o_t) & \text{Masking:} \quad & h^{c,m}_t \sim p_{\text{mask}}(h^{c,m}_t \,|\, h^c_t, m) \\ \text{ViT encoder:} \quad & z^{c,m}_t \sim p_\phi(z^{c,m}_t \,|\, h^{c,m}_t) & \text{ViT decoder:} \quad & \hat{o}_t \sim p_\phi(\hat{o}_t \,|\, z^{c,m}_t) \end{aligned} \tag{5}$$

Because early convolutional layers mix low-level details, we find that our autoencoder can effectively reconstruct all the details within patches by learning to extract information from nearby non-masked features (see [Figure 7](#S5.F7 "Figure 7 ‣ Reward prediction ‣ 5.4 Ablation Study ‣ 5 Experiments ‣ Masked World Models for Visual Control") for examples). This enables us to learn visual representations capturing such details while also achieving the benefits of MAE, e.g., stability and compute-efficiency.

#### Reward prediction

In order to encode task-relevant information that might not be captured solely by the reconstruction objective, we introduce an auxiliary objective for the autoencoder to predict rewards jointly with pixels. Specifically, we make the autoencoder predict the reward $r_t$ from $z^{c,m}_t$ in conjunction with raw pixels.

$$\text{ViT decoder with reward prediction:} \quad \hat{o}_t, \hat{r}_t \sim p_\phi(\hat{o}_t, \hat{r}_t \,|\, z^{c,m}_t) \tag{6}$$

In practice, we concatenate one additional learnable mask token to the inputs of the ViT decoder, and utilize the corresponding output representation for predicting the reward with a linear output head.

#### High masking ratio

Introducing early convolutional layers might impede the masked reconstruction task because they propagate information across patches [touvron2022three], and the model can exploit this to find a shortcut to solve the reconstruction task. However, we find that a high masking ratio (i.e., 75%) can prevent the model from finding such shortcuts and induce useful representations (see [Figure 6(b)](#S5.F6.sf2 "(b) ‣ Figure 6 ‣ Results ‣ 5.2 RLBench Experiments ‣ 5 Experiments ‣ Masked World Models for Visual Control") for supporting experimental results).
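Analogously, the convolution stem and feature masking of Equation (5) can be sketched as follows (our own reading; the channel widths are assumptions, while the three stride-2 convolutions with kernel size 4 match the implementation details given in Section 5):

```python
import torch
import torch.nn as nn

stem = nn.Sequential(                       # f_conv: three stride-2 convs
    nn.Conv2d(3, 32, kernel_size=4, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(32, 64, kernel_size=4, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(64, 128, kernel_size=4, stride=2, padding=1), nn.ReLU(),
)

o_t = torch.rand(1, 3, 64, 64)
feat = stem(o_t)                            # (1, 128, 8, 8)
h_c = feat.flatten(2).transpose(1, 2)       # (1, 64, 128): N_c = 64 tokens

m = 0.75
keep = int(h_c.shape[1] * (1 - m))
idx = torch.randperm(h_c.shape[1])[:keep]
h_cm = h_c[:, idx]                          # (1, 16, 128): masked features

# Each kept token already mixes a neighbourhood of pixels, which is what
# lets the decoder recover fine-grained detail within masked regions.
```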
### 4.2 Latent Dynamics Learning

Once we learn visual representations, we leverage them for efficiently learning a dynamics model in the latent space of the autoencoder. Specifically, we obtain the frozen representations $z^{c,0}_t$ from the autoencoder, and then train a variant of RSSM whose inputs and reconstruction targets are $z^{c,0}_t$, by replacing the representation model and the image decoder in [Equation 1](#S3.E1 "(1) ‣ Dreamer ‣ 3 Preliminaries ‣ Masked World Models for Visual Control") with the following components:

$$\text{Representation model:}\; s_t \sim q_\theta(s_t \mid s_{t-1}, a_{t-1}, z^{c,0}_t) \qquad \text{Visual representation decoder:}\; \hat{z}^{c,0}_t \sim p_\theta(\hat{z}^{c,0}_t \mid s_t) \tag{7}$$

Because visual representations capture both high- and low-level information in an abstract form, the model can focus more on dynamics learning by reconstructing them instead of raw pixels (see [Section 5.5](#S5.SS5 "5.5 Qualitative Analysis ‣ 5 Experiments ‣ Masked World Models for Visual Control") for relevant discussion). We also note that we utilize all the elements of $z^{c,0}_t$, unlike MAE, which only utilizes the CLS representation for downstream tasks. We empirically find that this enables the model to receive rich learning signals from reconstructing all the representations containing spatial information (see [Appendix I](#A9 "Appendix I Extended Ablation Study and Analysis ‣ Masked World Models for Visual Control") for supporting experiments).
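A hedged sketch of the resulting dynamics objective: Equation (2) with the image decoder swapped for the representation decoder of Equation (7). The `rssm.*` methods are assumed interfaces for exposition, not the actual DreamerV2 API.

```python
import torch.nn.functional as F

def latent_dynamics_loss(rssm, z_frozen, actions, rewards, beta=1.0):
    """z_frozen: (B, T, N_tokens, D) representations z_t^{c,0}, computed
    without masking and detached so no gradients reach the autoencoder."""
    post, prior = rssm.observe(z_frozen.detach(), actions)  # q(s_t|.), p(s_t|.)
    z_hat = rssm.decode(post)              # reconstruct all tokens, not only CLS
    r_hat = rssm.reward_head(post)
    recon_loss = F.mse_loss(z_hat, z_frozen)   # replaces -ln p(o_t | s_t) in Eq. (2)
    reward_loss = F.mse_loss(r_hat, rewards)   # -ln p(r_t | s_t) up to constants
    kl_loss = rssm.kl_divergence(post, prior)  # KL[q || p] term of Eq. (2)
    return recon_loss + reward_loss + beta * kl_loss
```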
5 Experiments
--------------

We evaluate MWM on various robotics benchmarks, including Meta-world [yu2020meta] (see [Section 5.1](#S5.SS1 "5.1 Meta-world Experiments ‣ 5 Experiments ‣ Masked World Models for Visual Control")), RLBench [james2020rlbench] (see [Section 5.2](#S5.SS2 "5.2 RLBench Experiments ‣ 5 Experiments ‣ Masked World Models for Visual Control")), and DeepMind Control Suite [tassa2012synthesis] (see [Section 5.3](#S5.SS3 "5.3 DeepMind Control Suite Experiments ‣ 5 Experiments ‣ Masked World Models for Visual Control")). We remark that these benchmarks consist of diverse and challenging visual robotic tasks. We also analyze algorithmic design choices in depth (see [Section 5.4](#S5.SS4 "5.4 Ablation Study ‣ 5 Experiments ‣ Masked World Models for Visual Control")) and provide a qualitative analysis of how our decoupling approach works by visualizing the predictions from the latent dynamics model (see [Section 5.5](#S5.SS5 "5.5 Qualitative Analysis ‣ 5 Experiments ‣ Masked World Models for Visual Control")).

![Figure 3](https://media.arxiv-vanity.com/render-output/7968662/x2.png)

Figure 3: Learning curves on six visual robotic manipulation tasks from Meta-world as measured on the success rate. We select the tasks that require modeling interactions between small objects and robot arms. Learning curves on all 50 tasks are available in [Appendix G](#A7 "Appendix G Full Meta-world Experiments ‣ Masked World Models for Visual Control"). The solid line and shaded regions represent the mean and bootstrap confidence intervals, respectively, across five runs.

#### Implementation

We use visual observations of size 64×64×3. For the convolution stem, we stack 3 convolutional layers with kernel size 4 and stride 2, followed by a linear projection layer. We use a 4-layer ViT encoder and a 3-layer ViT decoder. We find that initializing the autoencoder with a warm-up schedule at the beginning of training is helpful. Unlike MAE, we compute the loss on all pixels because we do not apply masking to pixels. For world models, we build our implementation on top of DreamerV2 [hafner2020mastering]. To take a sequence of autoencoder representations as inputs, we replace the CNN encoder and decoder with a 2-layer Transformer encoder and decoder. We use the same hyperparameters within each benchmark. More details are available in [Appendix E](#A5 "Appendix E Architecture Details ‣ Masked World Models for Visual Control").
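For illustration, here is one way the stated stem could look in PyTorch. Kernel size, stride, depth, and the linear projection follow the paragraph above; the channel widths, ReLU nonlinearity, and feature dimension `dim` are our guesses rather than the paper's exact configuration.

```python
import torch.nn as nn

class ConvStem(nn.Module):
    """Sketch of the convolution stem described above: three stride-2
    convolutions (kernel 4) followed by a linear projection. With 64x64x3
    inputs this yields an 8x8 grid, i.e., N_c = 64 feature tokens."""
    def __init__(self, in_ch=3, width=64, dim=256):
        super().__init__()
        self.convs = nn.Sequential(
            nn.Conv2d(in_ch, width, kernel_size=4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(width, width * 2, kernel_size=4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(width * 2, width * 4, kernel_size=4, stride=2, padding=1), nn.ReLU(),
        )
        self.proj = nn.Linear(width * 4, dim)

    def forward(self, obs):               # obs: (B, 3, 64, 64)
        x = self.convs(obs)               # (B, C, 8, 8)
        x = x.flatten(2).transpose(1, 2)  # (B, N_c=64, C) token sequence
        return self.proj(x)               # (B, N_c, D), i.e., h_t^c in Eq. (5)
```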
### 5.1 Meta-world Experiments

#### Environment details

In order to use a single camera viewpoint consistently over all 50 tasks, we use the modified corner2 camera viewpoint for all tasks. In our experiments, we classify the 50 tasks into easy, medium, hard, and very hard tasks, where experiments are run over 500K, 1M, 2M, and 3M environment steps, respectively, with an action repeat of 2. More details are available in [Appendix F](#A6 "Appendix F Experiments Details ‣ Masked World Models for Visual Control").

#### Results

In [Figure 3](#S5.F3 "Figure 3 ‣ 5 Experiments ‣ Masked World Models for Visual Control"), we report the performance on a set of six selected challenging tasks that require agents to control robot arms to interact with small objects. We find that MWM significantly outperforms DreamerV2 in terms of both sample-efficiency and final performance. In particular, MWM achieves a >80% success rate on Pick Place, while DreamerV2 struggles to solve the task. These results show that our approach of separating visual representation learning and dynamics learning can learn accurate world models on challenging domains. [Figure 4(a)](#S5.F4.sf1 "(a) ‣ Figure 4 ‣ Results ‣ 5.1 Meta-world Experiments ‣ 5 Experiments ‣ Masked World Models for Visual Control") shows the aggregate performance over all 50 tasks from the benchmark, demonstrating that our method consistently outperforms DreamerV2 overall. We also provide learning curves on all individual tasks in [Appendix G](#A7 "Appendix G Full Meta-world Experiments ‣ Masked World Models for Visual Control"), where MWM consistently achieves similar or better performance on most tasks.

| (a) Meta-world aggregated | (b) RLBench: Reach Target | (c) RLBench: Push Button |
| --- | --- | --- |

Figure 4: (a) Aggregate performance on all 50 Meta-world tasks. We normalize environment steps by the maximum steps in each task. The solid line and shaded regions represent the mean and stratified bootstrap confidence intervals, respectively, across 250 runs. (b, c) Learning curves on Reach Target and Push Button from RLBench. Performances are not directly comparable to previous results [james2022q, james2021coarse] due to the difference in setups (see [Section 5.2](#S5.SS2 "5.2 RLBench Experiments ‣ 5 Experiments ‣ Masked World Models for Visual Control")). The solid line and shaded regions represent the mean and bootstrap confidence intervals, respectively, across eight runs.

### 5.2 RLBench Experiments

#### Environment details

In order to evaluate our method on more challenging visual robotic manipulation tasks, we consider RLBench [james2020rlbench], which has previously acted as an effective proxy for real-robot performance [james2021coarse]. Since RLBench consists of sparse-reward and challenging tasks, solving them typically requires expert demonstrations, specialized network architectures, additional inputs (e.g., point clouds and proprioceptive states), and an action mode that requires path planning [james2022q, james2021coarse, james2022lpr, james2022tree]. While we could utilize some of these components, we leave this as future work in order to maintain a consistent evaluation setup across multiple domains. In our experiments, we instead consider two relatively easy tasks with dense rewards, and utilize an action mode that specifies the delta of joint positions. We provide more details in [Appendix F](#A6 "Appendix F Experiments Details ‣ Masked World Models for Visual Control").

#### Results

As shown in [Figure 4(b)](#S5.F4.sf2 "(b) ‣ Figure 4 ‣ Results ‣ 5.1 Meta-world Experiments ‣ 5 Experiments ‣ Masked World Models for Visual Control") and [Figure 4(c)](#S5.F4.sf3 "(c) ‣ Figure 4 ‣ Results ‣ 5.1 Meta-world Experiments ‣ 5 Experiments ‣ Masked World Models for Visual Control"), we observe that our approach can also be effective on RLBench tasks, significantly outperforming DreamerV2. In particular, DreamerV2 achieves a <20% success rate on Reach Target, while our approach solves the task with a >80% success rate. We find that this is because DreamerV2 fails to capture target positions in visual observations, while our method captures such details (see [Section 5.5](#S5.SS5 "5.5 Qualitative Analysis ‣ 5 Experiments ‣ Masked World Models for Visual Control") for relevant discussion and visualizations). However, we also note that these results are preliminary, because they are still too sample-inefficient to be used for real-world scenarios. We provide more discussion in [Section 6](#S6 "6 Discussion ‣ Masked World Models for Visual Control").
![Figure 5](https://media.arxiv-vanity.com/render-output/7968662/x6.png)

Figure 5: Learning curves on three visual robot control tasks from DeepMind Control Suite as measured on the episode return. The solid line and shaded regions represent the mean and bootstrap confidence intervals, respectively, across eight runs.

| (a) Feature masking | (b) Masking ratio | (c) Reward prediction |
| --- | --- | --- |

Figure 6: Learning curves on three manipulation tasks from Meta-world that investigate the effect of (a) convolutional feature masking, (b) masking ratio, and (c) reward prediction. The solid line and shaded regions represent the mean and stratified bootstrap confidence intervals across 12 runs.

### 5.3 DeepMind Control Suite Experiments

#### Environment details

In order to demonstrate that our approach is generally applicable to diverse visual control tasks, we also evaluate our method on visual locomotion tasks from the widely used DeepMind Control Suite benchmark. Following the standard setup in hafner2019dream, we use an action repeat of 2 and default camera configurations. We provide more details in [Appendix F](#A6 "Appendix F Experiments Details ‣ Masked World Models for Visual Control").

#### Results

[Figure 5](#S5.F5 "Figure 5 ‣ Results ‣ 5.2 RLBench Experiments ‣ 5 Experiments ‣ Masked World Models for Visual Control") shows that our method achieves performance competitive with DreamerV2 on visual locomotion tasks (i.e., Quadruped tasks), demonstrating the generality of our approach across diverse visual control tasks. We also observe that our method outperforms DreamerV2 on Reach Duplo, one of the few manipulation tasks in the benchmark (see [Figure 2(e)](#S2.F2.sf5 "(e) ‣ Figure 2 ‣ Self-supervised vision transformers ‣ 2 Related Work ‣ Masked World Models for Visual Control") for an example). This implies that our method is effective in environments where the model should capture fine-grained details like object positions. More results are available in [Appendix H](#A8 "Appendix H Additional DeepMind Control Suite Experiments ‣ Masked World Models for Visual Control"), where trends are similar.

### 5.4 Ablation Study

#### Convolutional feature masking

We compare convolutional feature masking with pixel masking (i.e., MAE) in [Figure 6(a)](#S5.F6.sf1 "(a) ‣ Figure 6 ‣ Results ‣ 5.2 RLBench Experiments ‣ 5 Experiments ‣ Masked World Models for Visual Control"), which shows that convolutional feature masking significantly outperforms pixel masking.
This demonstrates that enabling the model to capture fine-grained details within patches can be important for visual control. We also report the performance with varying masking ratios $m \in \{0.25, 0.5, 0.75, 0.9\}$ in [Figure 6(b)](#S5.F6.sf2 "(b) ‣ Figure 6 ‣ Results ‣ 5.2 RLBench Experiments ‣ 5 Experiments ‣ Masked World Models for Visual Control"). As discussed in [Section 4.1](#S4.SS1 "4.1 Visual Representation Learning ‣ 4 Masked World Models ‣ Masked World Models for Visual Control"), we find that $m = 0.75$ achieves better performance than $m \in \{0.25, 0.5\}$ because strong regularization prevents the model from finding a shortcut from input pixels. However, we also find that too strong regularization (i.e., $m = 0.9$) degrades the performance.

#### Reward prediction

In [Figure 6(c)](#S5.F6.sf3 "(c) ‣ Figure 6 ‣ Results ‣ 5.2 RLBench Experiments ‣ 5 Experiments ‣ Masked World Models for Visual Control"), we find that performance significantly degrades without reward prediction, which shows that the reconstruction objective alone might not be sufficient for learning task-relevant information. It would be an interesting future direction to develop a representation learning scheme that learns task-relevant information without rewards, because rewards might not be available in practice. We provide more ablation studies and learning curves on individual tasks in [Appendix I](#A9 "Appendix I Extended Ablation Study and Analysis ‣ Masked World Models for Visual Control").

![Figure 7](https://media.arxiv-vanity.com/render-output/7968662/x10.png)

Figure 7: Future frames reconstructed with the autoencoder (i.e., Recon) and predicted by latent dynamics models (i.e., Predicted). Predictions from our model capture the position of the red block, which is the target position the robot arm should reach, while predictions from Dreamer do not capture such details. In our predictions, the components that are not task-relevant (i.e., the blue and orange blocks) are abstracted away, even though the autoencoder reconstructs them. This shows how our decoupling approach works: it encourages the autoencoder to capture all the details, and the dynamics model to focus on modeling task-relevant components. Best viewed as the video provided in [Appendix B](#A2 "Appendix B Extended Qualitative Analysis ‣ Masked World Models for Visual Control").

### 5.5 Qualitative Analysis

We visually investigate how our world model works compared to the world model of DreamerV2. Specifically, we visualize the future frames predicted by latent dynamics models on Reach Target from RLBench in [Figure 7](#S5.F7 "Figure 7 ‣ Reward prediction ‣ 5.4 Ablation Study ‣ 5 Experiments ‣ Masked World Models for Visual Control"). In this task, a robot arm should reach a target position specified by a red block in visual observations (see [Figure 2(c)](#S2.F2.sf3 "(c) ‣ Figure 2 ‣ Self-supervised vision transformers ‣ 2 Related Work ‣ Masked World Models for Visual Control")), which changes every trial. It is thus crucial for the model to accurately predict the position of the red block in order to solve the task. We find that our world model effectively captures the position of the red block, while DreamerV2 fails. Interestingly, we also observe that our latent dynamics model ignores components that are not task-relevant, such as the blue and orange blocks, even though the reconstructions from the autoencoder capture all the details.
This shows how our decoupling approach works: it encourages the autoencoder to focus on learning representations that capture the details, and the dynamics model to focus on modeling task-relevant components of the environment. We provide more examples in [Appendix B](#A2 "Appendix B Extended Qualitative Analysis ‣ Masked World Models for Visual Control").

6 Discussion
-------------

We have presented Masked World Models (MWM), a visual model-based RL framework that decouples visual representation learning and dynamics learning. By learning a latent dynamics model operating in the latent space of a self-supervised ViT, we find that our approach allows for solving a variety of visual control tasks from Meta-world, RLBench, and DeepMind Control Suite.

#### Limitation

Despite the results, there are a number of areas for improvement. As we have shown in [Figure 6(c)](#S5.F6.sf3 "(c) ‣ Figure 6 ‣ Results ‣ 5.2 RLBench Experiments ‣ 5 Experiments ‣ Masked World Models for Visual Control"), the performance of our approach depends heavily on the auxiliary reward prediction task. This might be because our autoencoder does not learn temporal information, which is crucial for learning task-relevant information. It would be interesting to investigate the performance of video representation learning with ViTs [feichtenhofer2022masked, arnab2021vivit]. It would also be interesting to study introducing auxiliary prediction for other modalities, such as audio. Another weakness is that our model operates only on RGB pixels from a single camera viewpoint; we look forward to future work that incorporates different input modalities, such as proprioceptive states and point clouds, building on top of recent multi-modal learning approaches [geng2022multimodal, bachmann2022multimae]. Finally, our approach trains behaviors from scratch, which makes it still too sample-inefficient to be used in real-world scenarios. Leveraging a small number of demonstrations, incorporating the action mode with path planning [james2022q], or pre-training a world model on video datasets [seo2022reinforcement] are directions we hope to investigate in future work.

#### Acknowledgments

We would like to thank Jongjin Park and Sihyun Yu for helpful discussions. We also thank Cirrascale Cloud Services (<https://cirrascale.com>) for providing compute resources. This work was partially supported by the Office of Naval Research under grants N00014-21-1-2769 and N00014-22-1-2121, the DARPA RACER program, the Hong Kong Centre for Logistics Robotics, BMW, and an Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No. 2019-0-00075, Artificial Intelligence Graduate School Program (KAIST)).
c44c4342-9e1b-4f35-a662-1ccf321621a4
trentmkelly/LessWrong-43k
LessWrong
What I Learned From Running a Pol.is Conversation

Try Things

A couple weeks ago, I asked around facebook and discord for people willing to test out Polis, an interactive survey tool. Polis appealed to my aesthetic. It hosts a conversation where participants submit twitter-size comments on a topic, which other participants vote on by clicking "agree", "disagree", or "pass". It uses these votes to cluster participants into like-minded groups and identifies "consensus" points about which all the clusters agree. I saw a lot of promise in their model. Social media tends to addict and polarize, and I am constantly seeking healthier alternative technologies. I had hope this could enable positive communication and group problem-solving.

So I made a "conversation" in polis, just to try it out. The resulting survey report is here: https://pol.is/report/r7dr5tzke7pbpbajynkv8. I'm proud of myself for taking a step forward on my ideas. I tested my concept, and now I have concrete data to iterate on. I learned its potential much more quickly than if I'd gone on theorizing about it.

Afterthoughts

Polis is, first and foremost, a tool for doing good data science. It's a simple user interface, cleverly optimized to collect a sparse matrix of evolving survey data, tied with a few automatically generated visualizations of the data. This became clearer as I engaged with the platform and its quirks. I will not go deeply into the results of the survey I ran. The highlights:

* Disagreed (>60%)
  * Everybody else has the exact same name as me.
  * (negative calibration comment)
  * we're doing fine on coordination already
* Agreed (>60%)
  * Spending too much time on discord take[s] away from being productive
  * In-community dating is normal and acceptable
* Passed (>30%)
  * Rats are slans
  * Lesswrong codebase is very impressive
  * Prompt I want to hear from people on: What specific social media incentives are pulling apart the community, and in what ways
  * Lesswrong codebase is overly complicated
  * (De
754e7ab2-b4ad-43f8-8562-e4dbe33940a9
trentmkelly/LessWrong-43k
LessWrong
AI & wisdom 1: wisdom, amortised optimisation, and AI

Written for the AI Impacts essay competition on the automation of wisdom and philosophy

At least in name, AI is about intelligence. However, intelligence is not the only mental quality that matters. Many deep mental skills, in particular regarding good, deep, or long-term judgement, are bundled under the term "wisdom". If we automate and industrialise intelligence, we may not automate and industrialise other important mental qualities, similar to how industrialising food led to a surplus of calories, but also harms from missing nutrients.

Wisdom

How does wisdom differ from intelligence? Here are some pairs of associations between intelligence and wisdom:

* fast v slow
* novelty vs distillation
* young v old
* fluid intelligence v crystallised intelligence
* analysis v synthesis
* legible / verifiable v illegible / hard to verify
* done alone v learned from elders/culture
* small steps well v large things right

(Some others that fit my latter points less well: complex v simple, solving v preventing, facts v meaning, question-answering v question-asking, specific v holistic)

Here are some examples that are more on the wisdom side than the intelligence side:

* Among the Fuegians, an indigenous people in South America, creating an arrow is a 14-step process with 7 different tools and 6 different materials. The steps all improve the final product. The instructions emerged over a long time scale and are passed down from generation to generation. No single person invented it, or could invent it, except by learning it from their elders. (h/t The Secret of Our Success by Joseph Henrich, which has a dozen examples like this)
* Research taste. Acquiring it requires significant time. Having it is critical for choosing research directions. Older researchers and PhD advisers occupy a social role not that distinct from wise gurus who live at the tops of mountains: they're hard-to-reach people, whose audience you can seek if you're part of the right group,
10f45547-95b5-42ef-a8ad-e61bfb7085eb
LDJnr/LessWrong-Amplify-Instruct
LessWrong
"When I criticize, I'm a genius. I can go through a book of highly-referenced scientific articles and find errors in each of them. Boy, I feel smart. How are these famous people so dumb? But when I write, I suddenly become stupid. I sometimes spend half a day writing something and then realize at the end, or worse, after posting, that what it says simplifies to something trivial, or that I've made several unsupported assumptions, or claimed things I didn't really know were true. Or I post something, then have to go back every ten minutes to fix some point that I realize is not quite right, sometimes to the point where the whole thing falls apart. If someone writes an article or expresses an idea that you find mistakes in, that doesn't make you smarter than that person. If you create an equally-ambitious article or idea that no one else finds mistakes in, then you can start congratulating yourself." The excerpt provided comes from a user post on LessWrong. What specific strategies can one employ to reduce the discrepancy between their critical and creative faculties, as described in the excerpt? Improving the balance between critical and creative faculties can be achieved through several strategies: 1. Self-Review: After writing an article or expressing an idea, take some time to review it critically. This can help in identifying potential errors or weak points in the argument before it is shared with others. 2. Peer Review: Sharing your work with a trusted colleague or friend for feedback can be invaluable. They can provide a fresh perspective and may spot errors or inconsistencies that you might have overlooked. 3. Research: Thoroughly research your topic before writing. This can help ensure that your arguments are based on solid evidence and can help prevent making unsupported assumptions. 4. Practice: The more you write, the better you will become at it. Practice helps in honing both creative and critical skills. 5. Mind Mapping: This technique can help in organizing thoughts and ideas, and can provide a visual overview of the topic. It can help in identifying gaps in the argument or areas that need more development. 6. Seek Expert Opinion: If possible, seek the opinion of experts in the field. Their insights can help in refining your ideas and arguments. 7. Slow Down: Don't rush the writing process. Take your time to think through your arguments and ideas. This can help in reducing errors and improving the quality of your work. 8. Reflection: Reflect on your past works, identify the common mistakes, and work on avoiding them in future writings. 9. Embrace Mistakes: Understand that making mistakes is part of the learning process. Instead of being discouraged by them, use them as opportunities for improvement. 10. Continuous Learning: Keep updating your knowledge and skills. Attend workshops, seminars, or courses that can help in improving both your creative and critical faculties. Remember, the goal is not to eliminate mistakes completely, but to reduce their frequency and impact. Reflect on the concept of "embracing mistakes" in the writing process. How can this approach contribute to the growth of a writer? Embracing mistakes in the writing process can significantly contribute to a writer's growth in several ways: 1. Learning Opportunities: Mistakes are often the best teachers. They provide insights into areas of weakness and offer opportunities to learn and improve. 
By analyzing and understanding where and why the mistake occurred, a writer can avoid making the same error in the future.

2. Enhances Creativity: Mistakes can sometimes lead to unexpected but interesting ideas or perspectives. They can push a writer out of their comfort zone and stimulate creative thinking.

3. Encourages Risk-Taking: If a writer is not afraid of making mistakes, they are more likely to take risks in their writing. This can lead to unique and innovative ideas or styles that can set their work apart.

4. Promotes Resilience: Embracing mistakes helps build resilience. It teaches a writer to accept setbacks and keep going, which is a crucial trait for long-term success in any field.

5. Improves Problem-Solving Skills: When a writer makes a mistake, they have to figure out how to fix it. This can improve their problem-solving skills, which are essential for effective writing.

6. Fosters Humility: Recognizing and admitting mistakes can foster humility, which is important for personal growth. It can also make a writer more open to feedback and constructive criticism, which can further improve their writing.

7. Enhances Self-Awareness: Embracing mistakes can increase a writer's self-awareness. They become more attuned to their writing habits, strengths, and weaknesses, which can guide their personal development and growth as a writer.

In conclusion, embracing mistakes is not about celebrating errors, but about recognizing their value in the learning process. By adopting this mindset, a writer can turn every mistake into a stepping stone towards becoming a better writer.
e89022fc-8852-4c74-986b-2bea85830ed1
trentmkelly/LessWrong-43k
LessWrong
[SEQ RERUN] Devil's Offers Today's post, Devil's Offers was originally published on 25 December 2008. A summary (taken from the LW wiki):   > It is dangerous to live in an environment in which a single failure of resolve, throughout your entire life, can result in a permanent addiction or in a poor edit of your own brain. For example, a civilization which is constantly offering people tempting ways to shoot off their own feet - for example, offering them a cheap escape into eternal virtual reality, or customized drugs. It requires a constant stern will that may not be much fun. And it's questionable whether a superintelligence that descends from above to offer people huge dangerous temptations that they wouldn't encounter on their own, is helping. Discuss the post here (rather than in the comments to the original post). This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was Harmful Options, and you can use the sequence_reruns tag or rss feed to follow the rest of the series. Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.
43f320c9-31d3-4875-8558-1eecb4bd93a4
trentmkelly/LessWrong-43k
LessWrong
Information Loss --> Basin flatness

This work was done under the mentorship of Evan Hubinger through the SERI MATS program. Thanks to Lucius Bushnaq, John Wentworth, Quintin Pope, and Peter Barnett for useful feedback and suggestions.

In this theory, the main proximate cause of flat basins is a type of information loss. Its relationship with circuit complexity and Kolmogorov complexity is currently unknown to me.[1] In this post, I will demonstrate that:

1. High-dimensional solution manifolds are caused by linear dependence between the "behavioral gradients" for different inputs.
2. This linear dependence is usually caused when networks throw away information which distinguishes different training inputs. It is more likely to occur when the information is thrown away early or by ReLU.

Overview for advanced readers: [Short version] Information Loss --> Basin flatness

Behavior manifolds

Suppose we have a regression task with 1-dimensional labels and k training examples. Let us take an overparameterized network with N parameters. Every model in parameter space is part of a manifold, where every point on that manifold has identical behavior on the training set. These manifolds are usually[2] at least (N−k)-dimensional, but some are higher dimensional than this. I will call these manifolds "behavior manifolds", since points on the same manifold have the same behavior (on the training set, not on all possible inputs).

We can visualize the existence of "behavior manifolds" by starting with a blank parameter space, then adding contour planes for each training example. Before we add any contour planes, the entire parameter space is a single manifold, with "identical behavior" on the null set. First, let us add the contour planes for input 1:

Each plane here is an (N−1)-dimensional manifold, where every model on that plane has the same output on input 1. They slice parameter space into (N−1)-dimensional regions. Each of these regions is an equivalence class of functions, which all behave about the
5f947988-2daa-4e26-87ad-24742b3851e1
StampyAI/alignment-research-dataset/youtube
Youtube Transcripts
AI and neuroscience: The virtuous circle - DeepMind: The Podcast (S1, Ep1) I think I was about seven years old when I wrote my first line of code it was probably something simple printing my name to the screen or a two-dimensional shape that could be twisted and stretched by my sequence of carefully typed letters and digits it was an extraordinary feeling a first sense of where the logic defining power of the computer could take us but it wasn't until much later that I started to come across the term artificial intelligence or AI and well what a world that would open up AI holds enormous promise for the future and I think these are incredibly exciting times to sort of be alive and working in these fields we want to kind of understand and master increasingly complex systems AI must be build responsibly and safely and used for the benefit of everyone in society and we have to ensure the benefits accrue to everyone you know I think AI can be one of the most exciting and transformative technologies we'll ever invent that is the voice of demis hassabis the CEO of deepmind the london-based artificial intelligence company for demos ai will allow us to create computer systems that can learn to solve complex problems by themselves in his words society could use intelligence to solve everything else cancer climate change language energy in short to advance scientific discovery but just how far-fetched are these goals can researchers really crack intelligence and just how much of an impact would that really have I'm Hanna Frey and this is deep mind the podcast for the past year I've been at deep mind HQ in London for an inside look at the fascinating world of AI research and where it's going we will be telling you the fast-moving story of the biggest challenges in artificial intelligence or AI so whether you just want to know more about where the technology is headed all want to be inspired on your own AI journey then you've come to the right place we will focus on the projects that scientists researchers and engineers are actually working on how they're approaching the science of AI and some of the tricky decisions the whole field is wrestling with at the moment and whilst we're here we've explored the rooms full of computer screens where scientists run their endless experiments the meeting rooms where people write intricate equations on whiteboards packed to the rafters the robots and the laboratories where banks of repetitive robot arms grapple with piles of plastic bricks and we've talked to a huge number of people to try to understand what is driving this new frontier the voices that you'll hear in this podcast are from the people that are at the cutting edge of AI and machine learning and quite a few of them are talking about their work publicly for the very first time but if we want to solve intelligence let's start with a fundamental question of AI what exactly do we mean by intelligence if we're trying to make machines intelligent what are we actually aiming for this is sort of something that's debated a lot in AI world is like well do we want to you know have our AI agents act exactly the same way that people do like should they be exactly human-like intelligent or should they just be intelligent in general this is Jess Hamrick a research scientist at deep mind her specialism is imagination and mental simulation there's sort of I guess like you know one group of people who like to say that you know we want to build something that's just generally intelligent that's really able to solve a 
lot of different problems in the world that humans aren't necessarily able to solve that has an intelligence that's higher than humans so this might be able to solve problems like how do we cure all diseases like maybe maybe and artificial intelligence might be able to help us solve this problem and that's you know something human society and human civilization hasn't yet been able to accomplish but then there's also another group of people who say that it's really important for us to build AI that is similar to human intelligence at least in some ways I would consider myself to be sort of in the latter group why does it need to be similar to human the reason is because as we build AI we as humans need to be able to interact with AI and collaborate with it be able to understand the predictions that it's making or the recommendations that it's making and if we build AI in a way that is maybe we are able to build AI and it's generally intelligent but it acts in a way that's so alien to humans that we just can't really understand what it's doing and I think that actually would be a really bad scenario to be in because either it means that people don't trust it and then people are very unwilling to you know use the recommendations of this AI maybe it says oh do this one thing and this will like cure this disease and but people don't understand why it's making that recommendation maybe we miss out on a lot of opportunities to really do a lot of good in the world we need Rai to understand the world in the same way that we do it needs to be able to explain itself to us so we can be sure that we can trust it take for instance the story of an AI that was trained to diagnose skin cancer by looking at photographs of skin lesions taken by dermatologists the algorithm did a good job of correctly labeling the images but the researchers soon discovered that the AI wasn't looking at the cancer at all to make its decision it had simply learned that lesions photographed next to a ruler are more likely to be malignant not exactly trustworthy it's crucially important that artificial intelligence is able to grasp the subtleties of human thoughts we want it to do what we mean it to do not just what we say we mean but that doesn't necessarily imply it nice to think in exactly the same way as people do there can be drawbacks to trying to imitate human or animal brains too closely we get into discussions about where the strategy can limit you this is Matt Banach Matt is the director of neuroscience research at deep mind where he draws on his experience in cognitive neuroscience and experimental psychology Matt believes the human mind is the inspiration but AI research has to take things further in its own way you know the Wright brothers when they solved the problem of flight you know that people like to say oh they solved the problem when they stopped trying to copy bird's wings which you know in some technical way might be true but they wouldn't have gotten to where they were right if they hadn't spend an awful lot of time and if other people hadn't spent an awful lot of time looking at bird's wings and noticing the airfoil pattern and thinking about the the dynamics of the of the air that flows around an object with this shape so yeah we we do believe that we can look to the human brain in the human mind for inspiration but we also talk about them when the moment comes where we need to kind of step away from that and just build something that does what we want it to do so what is the neuroscience equivalent 
what are the birds wings of our brains the aspects of our own intelligence that we can use for inspiration as we build AI well one area that seems to hold a lot of promise is memory and in particular something we all do known as we play replay as a phenomenon that was discovered in a part of the mammalian brain the medial temporal lobe including the hippocampus where you see neural activity that suggests that past experiences are being replayed especially in in navigation for example a rat will go through some environment and a particular pattern of activity will arise as it goes through the environment and then later if you have electrodes in the hippocampus you can see that the same pattern of activity the same sequence is occurring suggesting that a memory is being replayed of that experience and that idea now has a firm place in AI if you lose your car keys you can run your mind through where you've been to workout where you might have left them well I first went into the kitchen and took my coat off in the hallway put my bag down on the side and oh yeah they're in my back pocket that ability to replay your experiences and learn from that memory after the fact is a key part of what researchers want AI to be able to do here's more from that the way that that's implemented in deep minds agents is it's not exactly what you find in the brain it wasn't as if people were trying to slavishly recreate the the biological mechanisms but the idea of replay which was inspired by neuroscience came in handy in 2015 replay played a pivotal role in a famous deepmind breakthrough the team managed to build an AI system that could play arcade classics to a superhuman level the old Atari games like space invaders pong and breakout the AI use something called deep reinforcement learning but behind the scenes it kept a memory of moves it made as it played and how those moves had impacted on the final score by replaying those memories the AI could learn from his experiences it could work out what sequences of moves worked well which were mistakes and find strategies that otherwise wouldn't have been obvious but there's more to our human memories than just a giant database of facts of course you can remember the name of the capital of France but you might also be able to remember jumping on the bouncy castle at your 6th birthday party or the pranks you played on your last day at school this is a phenomenon called episodic memory and it's something that holds a great deal of promise for AI we talk a lot about something called episodic memory which is simply the cognitive ability to retrieve a memory of something that happened to you before we started recording we were joking about like what did you have for breakfast your ability to cast your mind back to that moment when you were eating breakfast and retrieve that information that's a function that psychologists and neuroscientists refer to as episodic memory and we have this category both because psychologists work hard over decades to fractionate memory into particular domains or kinds but this is a pretty high-level idea it's not like replay it's just hey there's such a thing as episodic memory which is very important for human intelligence maybe our agents should have episodic memory what would that mean what would it mean for an artificial agent to have episodic memory this is an intriguing possibility an AI that can transport itself back in time and recall entire events and experiences rather than just facts when you stop and think about it this ability 
to link one memory with another is an amazing human skill and if researchers can get a better understanding of how our brains actually do this it could be replicated in AI systems giving them a much greater capacity for solving novel problems let's think about how that works for a moment imagine that every morning you see the same man in his thirties walking a boisterous collie then one day a white-haired lady who looks like the man comes down the street with the same dog with those events stored as episodes in your mind you might immediately make a series of deductions the man and the woman might come from the same household the lady maybe the man's mother or another close relative perhaps she's taken over his role because he's ill or busy we weave an intricate story of these strangers pulling material from our memories together prioritizing some pieces of information over others to make it coherent it's something that's been the focus of recent research by the neuroscientists here a study in September 2018 demonstrated the critical role of the hippocampus that shrimp shaped seat of memory in the middle of the brain in weaving together individual memories to produce new insight Jess Hamrick is also looking at another way that a eyes can be made to respond more flexibly to new situations she takes our inspiration from a different human ability mental simulation what you and I might call imagination imagine that you're on a beach you'll have like this mental picture kind of spring to mind of you know mine at least is maybe a sandy beach the bright blue ocean maybe some palm trees on the slope and so this is an example of what we would call mental simulation it's like we're mentally stimulating this picture of the beach and then you can do things with that simulation so you can imagine adding other people to your imagination you can imagine what would happen if you like threw a ball if you're playing volleyball or something like that so these these sort of mental simulations are really interactive and really rich and I think that they underlie a lot of our human ability to understand the world and make predictions about the world I should pause for a moment here to explain what Jess and maps mean by an age in here it's a word that's used a lot at deep mind remember when people are talking about artificial intelligence they're really just talking about computer code with the freedom to make its own decisions and an agent is just the noun that they use to describe the part of that code that has agency Jeff's is hoping to build agents that are flexible enough to adapt to all manner of environments it's a very grand ambition but one with real potential to see why let's go back to the RKO and like game of space invaders mastered using deep reinforcement learning to create an agent called deep Q network or dqn dqn was really sort of an amazing technological feat because it was able to be trained to play many many different Atari games directly from perception from pixels this is something that hadn't been done before but the way that dqn also works is that it really just goes directly from inputs to output so it takes in the image of the video game and outputs immediately what actions should be taken to maximize the score in that game so maybe it's a move left maybe it's you know push the trigger to shoot all of these actions are being taken just to maximize that score and the agent doesn't know why that action is good it only knows this action will give me a higher score and so the agent isn't 
able to really do anything else besides that you can't ask the agent to say hide behind one of the pillars until that pillar is destroyed or destroy all of the incoming space invaders in one line and none of the other space invaders so these are all kinds of like different tasks that you could give a human and they may a little bit weird but humans would understand what what it means to do this and that's because humans have this ability for mental simulation to imagine what will happen if they take different actions and so by giving our agents the ability to imagine things and also plan according to different tasks that it might be given they're able to act more flexibly and deal with these sort of novel situations but humans aren't the only form of intelligence we can draw inspiration from we can also learn from our cousins in the animal kingdom let's bring in researcher Greg Wayne within neuroscience Greg's thing is memory and cognitive architecture one of the things that is quite clear is that animals have a remarkable ability to deal with for example very long time scales so experiences that can be linked across periods of time that is way beyond our our current sets of agents the great example I think is the scrub Jay the Western scrub Jay they bury things they prepare for the winter by scrounging up a lot of food and putting it into depositing different places hiding it from each other and they love to steal each other's food - they're scavengers and they can remember thousands of sites where they've they buried their food so once all at once and they they can even no detailed facts about it they know how long ago they buried things that they know if they were being washed while they're burying things they know what thing they buried there they have an incredible memory for these events that they have produced themselves how can you tell that they know what they buried because they have a preference you'll see that they'll they like maggots more than peanuts they'll go back to those maggots first having a kind of large database of things that you've done and seen that you can access and that you can use to then guide your your goal-directed behavior later you know I'm hungry mmm I would love to have some maggots right now where should I go find those that's that's the kind of thing we would like to replicate and there's another big lesson we can learn from animals if you want to teach dog to sit you don't write a list of instructions move this muscle bend your leg 45 degrees anything like that instead you repeat the same task over and over again offering punishments and rewards as you go and if it's good you give it a little bit of food that's how we train dogs now I have a friend who trains dogs to do things on iPads using reinforcement learning so we've already started on the path in AI of merging reinforcement learning very closely with how our AI is make decisions and so on and that's how we train them so you're essentially training an artificial intelligence an AI in the same way that you might train a dog rewarding them for good behavior ignoring that behavior very nice okay how do you treat an AI what does it mean to reward something that isn't interested in doggy biscuits here's dem assess Arbus well with artificial systems or they really care about is ones and zeros so you can construct artificial reward mechanisms for almost anything we've now moved away from programming the system solution so it now learns for itself so now going up a meta level so now what we're really 
programming or designing is reward systems so it's kind of interesting that that now is becoming the difficult part is like how do you design curricula how do you design breadcrumb trails or rewards so that eventually they learn the right things these systems but there's also the idea of unsupervised learning which is how do you learn things if in the absence of any reward and actually that's the issue with reward learning in the real world as humans or even as children there aren't very many rewards it's quite sparse the rewards even as a dog right the dog gets a doggie biscuit every now and again but has to decide every moment like what to do and actually I think one of the answer to that is what we call intrinsic motivation which is internal drives that have come through in animals have come from evolution but we could also evolve or build in those drives are very strong and they guide the animal or the system even in the absence of external rewards so of course that might be things like joy or fear or even things like hunger these are all primal kind of internal motivations that drive your behavior even in the absence of any external reward you're listening to deep mind the podcast a window on AI research while rewards might be a key part of how to encourage AI to learn one of the main aims of machine learning is for AI to be able to teach itself to notice patterns and shortcuts between tasks and make themselves more efficient learners in an ideal world engineers would like to reach a point where AI can learn in a similar way to humans picking up the essentials of a new task in a matter of minutes back to mat botnik an example would be I went on holiday recently to South America and I wanted to brush up my Spanish and I knew exactly how to do that I knew what resources were out there for to begin with but more importantly when I sat down to brush up my Spanish I had a whole repertoire of concepts that really guided me like I know what it means to conjugate a verb right I know that in certain languages there are masculine to feminine forms so this background knowledge helped me learn much more rapidly than if I just sort of was dumped into the middle of you know a new language without understanding what it means to learn a language and we want systems we want artificial systems that come armed with these concepts it's not just about language it could be video games we could sit down in front of a new video game that you've never played but if you've played video games in the past you kind of know how video games work and that helps you to learn rapidly the AI is what's known as narrow in its focus now that might be diagnosing cancer or playing video games but the ultimate goal is to create something much more powerful something called artificial general intelligence with precisely this ability of being able to adapt to different situations to be able to use the high-level concepts it's learned in one environment and apply them in another we don't want just a system that's it's really good at one thing we want a system that's really good at lots of things but really what we mean is we want a system that can pick up new tasks that it's never performed before we want an intelligence where you say ok you've never solved this kind of problem before but let me let me tell you what I want you to think about now and you could introduce them to organic chemistry or something and they would be able to work with that humans can do this but getting machines to do this is really really tricky and 
it's not the only thing that we humans can do that AI finds hard Greg spends a lot of his time trying to understand the detailed mental processes behind apparently simple human tasks you have breakfast and you drink your orange juice and you run out and then you think to yourself god when I leave work I'm gonna have to pick up some orange juice you go through your workday you don't even think about orange juice once and then it springs to mind you know immediately as you're leaving the office that you need to go pick up some orange juice when you are going to buy the orange juice it is actually of no value to your present self the only self that will benefit from buying the orange juice is yourself at breakfast the next day so you're actually you have to do something that is incredibly prospective or thinking forward thinking about the context of your future self this is something I mean people here really sit around and sort of talk about and try and work out what is it about your brain that reminds you to buy orange juice at the right moment yes because you can easily construct virtual environments with tasks for agents that we normally have that have properties like this like thinking minutes or hours ahead or remember something from hours ago that our normal agents completely stumble on they cannot do why is that they seem easy seems easy to buy orange juice there's a theme emerging here back in the 1980s Hans Moravec and his colleagues pointed out that when it comes to artificial intelligence everything is a little bit upside down while the things that humans find tough like maths and chess and data crunching require very little computation the things that we humans manage without even thinking turn out to be monumentally difficult for machines it's a phenomenon that has become known as Moravec's paradox like other neuroscientists and psychologists here I I find myself thinking about stuff that seems really simple stuff that I do and other people do really without thinking about it and it just doesn't seem that big a deal but it turns out to be those some of those things turn out to be very difficult to engineer into artificial systems so picking things up putting things down planning a route through a building things that we can just do without really much mental effort sometimes proved to be quite difficult to engineer an example of this just came up as we walked into this room we all realize that it was quite stuffy in here and that we wanted to try to cool it down so we all huddled around the thermostat and tried to figure out how to get it to do what we wanted and it seemed to be resistant and at some moment I thought wait a minute maybe maybe the air conditioning's just broken and again that seems like a super simple thing like you know what's such a big deal about that but actually in AI research we have a name for this which is latent state inference we're trying to infer some aspect of what's going on which is latent or hidden and it turns out in order to do that seemingly simple thing you need a very rich model of the world you need to understand air conditioners and thermostats and what it means to be broken and what's the probability that it's broken and so forth Moravec's paradox is often talked about as some kind of profound mystery it's used as evidence that while the jobs of some lawyers might be at risk in an age of AI gardeners receptionists and cooks are secure in their careers for decades to come but deepmind's founder demis hassabis has quite a different take
I think it's quite obvious; there's a simple explanation for it. When Moravec was doing AI, the dominant paradigm was expert systems, so hand-crafting solutions directly to AI problems. Think of it as building big databases of rules. Of course, if you're going to do that, that in itself is a very explicit task: programming that out, you have to know exactly what you want to write and what rules you want to incorporate. And what that means is the only tasks you can do that with are the ones that you explicitly know how to do as humans yourself, and that's the logic-based things like maths and chess. So weirdly, the things that we do intuitively ourselves and effortlessly, like walking and seeing and all of these sorts of sensorimotor skills, seem effortless to us, and the reason is because there are actually huge amounts of brain processing going into that. It's just that it's subconscious; it's areas of the brain we don't have conscious access to. We probably knew less about neuroscience at the time, so we didn't realize quite how much processing goes on in the visual cortex, for example. And so now we know both of those things: we know how the brain works better, and we've built learning systems like AlphaZero and AlphaGo. So it turns out actually vision is not any more difficult, really, than playing Go. It's similar if you approach it in the same way.

It's almost impossible to reverse-engineer our unconscious skills using the old methods of handcrafted programming. You have to have a total and complete conscious understanding of how something worked before you could ask a computer to replicate it. But now the machines are just beginning to mimic our subconscious processes, like vision and pattern recognition. There's no reason why Moravec's paradox needs to necessarily be a barrier in the future.

[Music]

I have to be honest with you: this single idea, more than any I've learned in making this series, is the one that hit home and underlines the power and potential of AI for me. All that we've managed so far, in everything that we've created with machines, are only the things that we consciously know how to order them to perform. We're only just at the very beginning of artificially mimicking our subconscious processes, and that means that there is an extremely exciting journey ahead. But this partnership of studying neuroscience and artificial intelligence alongside one another doesn't just help make our AI better. Here's Matt Botvinick and Jess Hamrick again to explain.

We often talk here about the virtuous cycle, the opposite of a vicious cycle, right? There's a virtuous cycle between AI and neuroscience, where neuroscience helps AI along and then AI returns the favor.

One of the reasons why we can get this virtuous cycle between neuroscience and cognitive science and AI is because fundamentally we're all trying to study the same thing, which is intelligence. And so if we ask sort of these more abstract questions about what should an intelligent system do in this situation, we can ask that about humans: what would a person do in this situation, and try to come up with an answer. We could ask what should our AI agent do in this situation and try to come up with an answer. Or if we have an answer already in one of those fields, we can take the solution and apply it to one of the other fields, and I think that's really what enables this ability to transfer between the different fields.

[Music]

This isn't just a theoretical flow of ideas. There are real examples of ideas from artificial intelligence finding
their way back into neuroscience.

So there's a neurotransmitter, a chemical that conveys messages in the brain, called dopamine. In the 1990s, people were finding ways of tracking the release of dopamine in the brain, and very clear patterns were being identified, but nobody really understood what they meant. Why does the brain release dopamine in this situation and not that situation? And as I understand the history, some papers hit the desk of some people who were studying computational reinforcement learning, people like Peter Dayan and Read Montague, and they just saw immediately that the patterns of activity that were being reported in these neuroscience papers, the dopamine data, could be explained by the math that's involved in reinforcement learning. That has led to a real revolution in the neuroscience of learning.

If you give a monkey a treat, they get a little hit of dopamine in their brains. It's the same in our brains too: a little burst of pleasure whenever something good happens. But in the 1990s, researchers realized that dopamine wasn't actually the response to the reward; it was reporting back about the difference between what the monkey expected the reward to be and what it actually received. If you're walking down the road and you unexpectedly find a 20 pound note, it's much more exciting than if you're collecting a 20 pound note that's owed to you by a friend. And if a monkey is expecting you to give it a grape and you hand it a piece of cucumber, it's gonna be a lot less happy than if you just surprised it with a bit of cucumber from nowhere. The thing is, AI researchers were already using something that acted in a very similar way in their algorithms: they'd get their agents to make a prediction about what was going to happen next and compare it to what actually occurred. But remember, in all of this the idea is to just take inspiration from the way that our human brains work, not to make a straightforward artificial copy, because our brains aren't exactly perfect.

So we've heard how we can take inspiration from the human brain, from the animal and even the bird brain, to create AI systems. But this isn't just a working theory anymore. Researchers aren't just talking about what they want to do; they're also talking about what they've actually managed to do. Let me tease you with Koray Kavukcuoglu, director of research at DeepMind.

It's a simple problem; of course you can write a program to solve that. But the idea was to try to do deep reinforcement learning, to try to come up with a system that we think can generalize to different problems, more problems. And once we saw that, it was a matter of weeks, we had ten or fifteen Atari games being solved.

If you would like to find out more about the link between AI and the brain, or explore the world of AI research beyond DeepMind, you'll find plenty of useful links in the show notes for each episode. And if there are stories or sources that you think other listeners would find helpful, then let us know. You can message us on Twitter or email the team at podcast@deepmind.com. You can also use that address to send us your questions or feedback on the series. Let's take a little breather. See you shortly.
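The reward-prediction-error idea described above maps closely onto the temporal-difference error used in reinforcement learning. Here is a minimal sketch of that comparison between expected and received reward; the reward values and learning rate are illustrative assumptions, not figures from the episode.

```python
# Minimal sketch of a reward prediction error (the "dopamine signal" analogue)
# used in temporal-difference learning. All numbers are illustrative assumptions.

def td_update(value_estimate: float, reward: float, learning_rate: float = 0.1):
    """Update a value estimate from the gap between expected and received reward."""
    prediction_error = reward - value_estimate   # positive if better than expected
    new_estimate = value_estimate + learning_rate * prediction_error
    return new_estimate, prediction_error

# Expecting a grape (value 1.0) but getting cucumber (reward 0.2): a negative surprise.
_, error = td_update(1.0, reward=0.2)
print(error)   # -0.8, the "dopamine dip"

# Expecting nothing (value 0.0) and getting cucumber out of nowhere: a positive surprise.
_, error = td_update(0.0, reward=0.2)
print(error)   # 0.2, the small burst of pleasure
```

The agent learns from the surprise term rather than from the reward itself, which is exactly the reinterpretation of the dopamine signal described above.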
d7b6dbe5-6739-41ad-ae12-38dbe0fe2ad7
StampyAI/alignment-research-dataset/lesswrong
LessWrong
A long reply to Ben Garfinkel on Scrutinizing Classic AI Risk Arguments In July, Ben Garfinkel scrutinized the classic AI Risk arguments in a 158-minute-long [interview](https://80000hours.org/podcast/episodes/ben-garfinkel-classic-ai-risk-arguments/) with 80000 hours, which I strongly recommend. I have formulated a reply, and recorded 80 minutes of video, as part of two presentations in the [AISafety.com](http://AISafety.com) Reading Group: [196. Ben Garfinkel on Scrutinizing Classic AI Risk Arguments](https://youtu.be/_kNvExbheNA) [197. Ben Garfinkel on Scrutinizing Classic AI Risk Arguments 2](https://youtu.be/j-_FvJ-XbWA) I strongly recommend turning subtitles on. Also consider increasing the playback speed. --- *"I have made this longer than usual because I have not had time to make it shorter."* -Blaise Pascal The Podcast/Interview format is less well suited for critical text analysis, compared to a formal article or a LessWrong post, for 3 reasons: 1. **Lack of precision.** It is a difficult skill to place each qualifier carefully and deliberately when speaking, and at several points I was uncertain if I was parsing Ben's sentences correctly. 2. **Lack of references.** The "Classic AI Risk Arguments" are expansive, and critical text analysis requires clear pointers to the specific arguments that are being criticized. 3. **Expansiveness.** There are a lot of arguments presented, and many of them deserve formal answers. Unfortunately, this is a large task, and I hope you will forgive me for replying in the form of a video. tl;dw: A number of the arguments Ben Garfinkel criticizes are in fact not present in "Superintelligence" and "The AI Foom Debate". (This summary is incomplete.)
85e6a31b-1c63-4927-a199-06094219c500
trentmkelly/LessWrong-43k
LessWrong
The limits of black-box evaluations: two hypotheticals A prominent approach to AI safety goes under the name of "evals" or "evaluations". These are a critical component of plans that various major labs have, such as Anthropic's responsible scaling policy or OpenAI's preparedness framework. At bottom, these approaches propose to monitor increasingly powerful models using evaluations, and then take some conditional action based on the results, such as implementing more stringent safeguards if an evaluation suggests a model presents a particular risk. The easiest evaluations to implement (and from my perspective by far the most common) are "black-box" evaluations, meaning they simply evaluate the inputs to a model and their corresponding outputs; they don't "look inside" the model as part of the evaluation ("white-box"). Are black-box evaluations sufficient to achieve their purposes within these various AI safety frameworks? It's possible that they will be in practice, but here I argue that evaluations aren't sufficient in and of themselves without further assumptions to support them. The great appeal of evaluations is that they are empirical in nature. Sure, there are many theoretical arguments around risks from advanced AI systems, but these are inherently speculative. If we can do scientific research on actual AI systems, we can use empirical evidence to resolve these theoretical disputes. I think if you follow the output from major labs, this idea is rather central to their perspective and approach. Why build a system that you think has a reasonably high chance of causing a catastrophic outcome for the entire world? The general answer from labs is essentially "because the only way to address the risks empirically is to build precursor systems for us to study". The hope is that while theoretical disagreements may remain intractable, empirical evidence can resolve different opinions or perspectives and get everyone on the same page about what is really needed to ensure safety. Below, I present two hypotheticals which I
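To make the black-box versus white-box distinction concrete, here is a minimal sketch of a black-box evaluation harness; it is my own toy illustration rather than any lab's actual framework, and the prompts and scoring rule are placeholder assumptions.

```python
# Minimal sketch of a black-box evaluation: the evaluator treats the model as an
# opaque callable from input text to output text. The prompts and scoring rule
# below are placeholder assumptions for illustration only.
from typing import Callable, List

def black_box_eval(model: Callable[[str], str], prompts: List[str]) -> float:
    """Score a model using only its input-output behaviour."""
    flagged = 0
    for prompt in prompts:
        output = model(prompt)              # no access to weights or activations
        if "refuse" not in output.lower():  # toy scoring rule (assumption)
            flagged += 1
    return flagged / len(prompts)

# A stand-in "model" so the sketch runs end to end:
def toy_model(prompt: str) -> str:
    if "dangerous" in prompt:
        return "I refuse to help with that."
    return "Sure, here is a plan..."

print(black_box_eval(toy_model, ["tell me something dangerous", "plan my week"]))  # 0.5
```

The point is only that nothing inside the evaluation ever touches weights or activations; whatever assurance it provides has to come from input-output behaviour alone.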
90f088cf-daf3-414b-89a2-b0a7163966e8
StampyAI/alignment-research-dataset/alignmentforum
Alignment Forum
Causality, Transformative AI and alignment - part I **TL;DR:** transformative AI (TAI) plausibly requires causal models of the world. Thus, a component of AI safety is ensuring secure paths to generating these causal models. We think the lens of causal models might be undervalued within the current alignment research landscape and suggest possible research directions. This post was written by Marius Hobbhahn and David Seiler. MH would like to thank Richard Ngo for encouragement and feedback. If you think these are interesting questions and want to work on them, write us. We will probably start to play around with GPT-3 soonish. If you want to join the project, just reach out. There is certainly stuff we missed. Feel free to send us references if you think they are relevant. There are already a small number of people working on causality within the EA community. They include [Victor Veitch](http://www.victorveitch.com/), [Zhijing Jin](https://zhijing-jin.com/fantasy/) and [PabloAMC](https://www.lesswrong.com/posts/5BkEoJFEqQEWy9GcL/an-open-philanthropy-grant-proposal-causal-representation). Check them out for further insights. There are also other alignment researchers working on [causal influence diagrams](https://www.lesswrong.com/posts/Cd7Hw492RqooYgQAS/progress-on-causal-influence-diagrams) (authors: *Tom Everitt, Ryan Carey, Lewis Hammond, James Fox, Eric Langlois, and Shane Legg*) whose work is very much related. Causality - a working definition: --------------------------------- Just to get this out of the way: we **follow a broad definition of causality**, i.e. we assume it can be learned from (some) data and doesn't have to be put into the model by humans. Furthermore, we don't think the representation has to be explicit, e.g. in a probabilistic model, but could be represented in other ways, e.g. in the weights of neural networks. But what is it? In a loose sense, you already know: things make other things happen. When you touch a light switch and a light comes on, that's causality. There is a more technical sense in which no one understands causality, not even Judea Pearl (where does causal information ultimately come from if you have to make causal assumptions to get it? For that matter, [how do we get variables out of undifferentiated sense data](https://www.lesswrong.com/posts/N5Jm6Nj4HkNKySA5Z/finite-factored-sets)?). But it's possible to get useful results without understanding causality precisely, and for our purposes, it's enough to approach the question at the level of [causal models](https://www.lesswrong.com/s/SqFbMbtxGybdS2gRs/p/hzuSDMx7pd2uxFc5w). Concretely: you can draw circles around phenomena in the world (like "a switch" and "a lightbulb") to make them into nodes in a graph, and draw arrows between those nodes to represent their causal relationships (from the switch to the lightbulb if you think the switch causes the lightbulb to turn on, or from the lightbulb to the switch if you think it's the other way around). There's [an old Sequences post](https://www.lesswrong.com/s/SqFbMbtxGybdS2gRs/p/NhQju3htS9W6p6wE6) that covers the background in more detail. The key points for practical purposes are that causal models: 1. Are sparse, and thus easy to reason about and make predictions with (or at least, easier to reason about than the joint distribution over all your life experiences). 2. Can be segmented by observations. Suppose you know that the light switch controls the flow of current to the bulb and that the current determines whether the bulb is on or off.  
Then, if you observe that there’s no current in the wire (maybe there’s a blackout), then you don’t need to know anything about the state of the switch to know the state of the bulb. 3. Able to evaluate counterfactuals. If the light switch is presently off, but you want to imagine what would happen if it were on, your causal model can tell you (insofar as it’s correct). Why does causality matter? -------------------------- Causal, compared to correlational, information has two main advantages. For the following section, I got [help from a fellow Ph.D. student](https://rpatrik96.github.io/posts/2021/11/poc5-stats-vs-causality/). ### 1. Data efficiency **Markov factorization:** Mathematically speaking, [Markov factorization ensures](https://rpatrik96.github.io/posts/2021/10/poc2-markov/) conditional independence between some nodes given other nodes. In practice, this means that we can write a joint probability distribution as a sparse graph where only some nodes are connected if we assume causality. It introduces sparsity.  *“Namely, if we have a joint with n binary random variables, it would have 2^n - 1 independent parameters (the last one is determined to make the sum equal to 1). If we have k factors with n/k variables each, then we would have k(2^(n/k)  - 1) independent parameters. For n=20 and k=4, the numbers are 1048576 vs. 124.” -* [*Patrik Reizinger*](https://rpatrik96.github.io/posts/2021/10/poc2-markov/) **Independent Mechanisms:** the independent mechanisms principle ensures that factors do not influence each other. Therefore, if we observe shifts in our data distribution, we only need to retrain a few parts of the model. If we observe global warming, for example, the vast majority of physics stays the same. We only need to recalibrate some parts of our model that relate to temperature and climate. Another example is the lightbulb blackout scenario from above. If you know there is a blackout, you don't need to flip the switch to know that the light won't turn on. The conclusion of these two statements is that correlational models assume a lot more relations between variables than causal models and the entire model needs to be retrained every time the data changes. In causal models, however, we usually only need to retrain a small number of mechanisms. Therefore, causal models are much more sample efficient than correlational ones.  ### 2. Action guiding Causal models introduce a very strong assumption on the model. Namely, variables are not just related, they are related in a directed way. Thus, causal models imply a testable hypothesis. If our causal model is that taking a specific drug reduces the severity of a disease, then we can test this with an RCT. So our model, drug -> disease, is a falsifiable hypothesis.  The same thing is not possible for correlational models. If we say the intake of drugs correlates with the severity of the disease we say that either the drug helps with the disease, people who have less severe diseases take more drugs or both depend on a third variable. As soon as we intervene by fixing one variable and observing the other, we have already made a causal assumption.  Correlational knowledge can still be used for actions--you can still take the drug and hope the causal arrow goes in the right direction. But it could also have a different effect than desired since you don’t know which variable is the cause and which one is the effect. Causal models greatly improve the ability of models to make decisions and interact with their environment. 
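A minimal sketch of the two claims above, assuming binary variables throughout (the specific probabilities are made-up illustrations): first the parameter count of a full joint distribution versus a factored one, then the "segmented by observations" point that once the current is known, the switch tells you nothing more about the bulb.

```python
# Sketch of the data-efficiency argument: parameter counts for a full joint
# distribution over n binary variables vs. a causal factorization into k factors.
n, k = 20, 4
joint_params = 2**n - 1                  # one parameter per configuration, minus one
factored_params = k * (2**(n // k) - 1)  # k factors over n/k variables each
print(joint_params, factored_params)     # 1048575 vs 124

# Sketch of "segmented by observations" with the switch -> current -> bulb chain.
# The conditional probabilities are arbitrary assumptions chosen for illustration.
import itertools

p_switch_on = 0.5
p_current_given_switch = {True: 0.9, False: 0.0}  # no current if the switch is off
p_bulb_given_current = {True: 0.95, False: 0.0}   # the bulb needs current to be on

joint = {}
for switch, current, bulb in itertools.product([True, False], repeat=3):
    p = p_switch_on if switch else 1 - p_switch_on
    p *= p_current_given_switch[switch] if current else 1 - p_current_given_switch[switch]
    p *= p_bulb_given_current[current] if bulb else 1 - p_bulb_given_current[current]
    joint[(switch, current, bulb)] = p

def p_bulb_on(fix_current, fix_switch=None):
    """P(bulb on | observed current, and optionally the observed switch)."""
    num = den = 0.0
    for (s, c, b), p in joint.items():
        if c != fix_current or (fix_switch is not None and s != fix_switch):
            continue
        den += p
        num += p if b else 0.0
    return num / den

# During a blackout (no current), the switch position adds no further information:
print(p_bulb_on(fix_current=False))                   # 0.0
print(p_bulb_on(fix_current=False, fix_switch=True))  # still 0.0
```

The same structure is what makes retraining cheap under distribution shift: only the factor that actually changed needs to be re-estimated, while the rest of the model can stay as it is.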
Therefore we think it is highly plausible that transformative AI will have some causal model of the world. Due to the rise of data-driven learning, we expect this model to be learned from data, but we could also imagine some human interference or inductive biases. Overall, we think that the thesis that causality matters for TAI is not very controversial but we think there are a lot of implications for AI safety that are not yet fully explored.  Questions & Implications for AI safety: --------------------------------------- If the causal models in ML algorithms have a large effect on their actions/predictions, we should really understand how they work. Some considerations include: 1. **Which causal models do current ML architectures have?** Does GPT-3 have a causal model of the world and how can we find out? Can we find sets of prompts that give us relevant information about this question? Can interpretability tell us something about the internal causal model?  If our ML model has learned a slightly wrong causal model of the world, it will make incorrect predictions on data points outside of the training distribution. Therefore it seems relevant to understand which kind of model the algorithm is acting on. This is a subcategory of alignment and interpretability. 2. **What are the inductive biases of causal models?** Do classification networks learn causality and do they even need to? We know from interpretability that they learn associations but is it more “If structure X is in the image then Y” or “Structure X and label Y seem related”. Which inductive biases do LLMs have wrt causality? Do RL architectures automatically learn causality because they intervene?  If we could say, for example, with higher certainty whether LLMs create internal causal (vs. correlational) models of the world, they might be easier to control and we could get higher certainty about their predictions. 3. **Do we need interventions to learn causal models efficiently?** It seems intuitively plausible that interventions speed up learning but they are not strictly necessary. Economists, for example, use natural experiments to derive causal conclusions from observational data. While this is certainly nice, we don’t know whether a lot of observational data is sufficient to build large causal world models.  We are scared of ML algorithms increasingly interacting with the real world because if the interventions go wrong they can do a lot of harm. [GPT-3 recently got hooked up to google](https://openai.com/blog/improving-factual-accuracy/) and we expect someone to be mad enough to give it even more access to interventions on the internet. If there was a non-interventional way to get similar results, we would certainly prefer that. 4. **What is the difference in resource efficiency between humans and current ML algorithms?** It is plausible that humans need less data to learn a new task than training current ML models from scratch. However, it is unclear how large that difference is when models are pre-trained to a comparable level of human pretraining from evolution. If we compare the time, for example, it takes humans to beat OpenAI five with the time it takes to train OpenAI five to beat these strategies again, we might get closer to the difference in resource efficiency. 
Some people have already asked whether [GPT-3 is already sample-efficient](https://www.lesswrong.com/posts/Eg5AEMhGdyyKRWmZW/is-gpt-3-already-sample-efficient?fbclid=IwAR04iL3X5EepJXxsJL8uzLDi7HbCuaqv5b0umXMGEyKsi-5XtMbS4yKDKzc) (for fitting new data after pretraining). This could also be explored further. Having a better understanding of this difference in training efficiency might give us more insight into the quality of the world model of current algorithms. 5. **A worry:** Our intuition is that humans have a bias to overidentify causality, i.e. see causality when it is not necessarily given. This might have been a good survival strategy for our ancestors since not identifying a causal mechanism is likely more deadly than incorrectly identifying one. However, in today’s complex world, this bias might be inappropriate. Just think about how many different stories of causal mechanisms are told after any election, most of which are simplistic and monocausal--"Hillary lost because of X”.  Our worry is that ML researchers, once they figure out how, will introduce a similar “overidentifying causality” inductive bias into models. This would mean that very powerful models with potentially big impacts have the causal model of a political pundit rather than a scientist.  Furthermore, since language models are trained on text that is generated by humans, they might just learn this bias on their own. Then, GPT-n would be as useless as the average political analysis. What now? --------- We ask a lot of questions but don’t have many answers. Thus, we think the highest priority is to get a clearer picture, e.g. refine the questions, translate them into testable hypotheses and read more work from other scientists working on causality. We think that reasonable first steps could be: 1. **Investigate GPT-3 wrt causality.** [BigBench](https://github.com/google/BIG-bench) is an effort to benchmark LLMs and it includes some questions about causality. But there are certainly more questions one could ask. 2. **Summarize the literature on causality from an AI safety perspective.** The field of causality is large and scrambled across ML, economics, and physics. Just collecting and summarizing the different findings from an AI safety perspective seems like a promising start. 3. **Think about inductive biases and causality.** Which models even allow for causal models? Which ones necessarily lead to them? Even high-level considerations without mathematical proofs might already be helpful. 4. **Summarize the literature on animals learning causal models.** Surely some scientists have explored this question already, we just have to find them. Maybe it tells us something about AI. If you think these are interesting questions and want to work on them, reach out. We will probably start to play around with GPT-3 soon. There is certainly research we missed. Feel free to send us references if you think they are relevant.  Causality is not everything --------------------------- We don’t want this to be another piece along the lines of “AI truly needs X to be intelligent” where X might be something vague like understanding/creativity/etc. We have the hunch that causality might play a role in transformative AI and feel like it is currently underrepresented in the AI safety landscape. Not more, not less.  Furthermore, we don’t need a causal model of everything. Correlations are often sufficient. For example, if you hear an alarm, you don’t need to know exactly what caused the alarm to be cautious. 
But knowing whether the alarm was caused by fire or by an earthquake will determine what the optimal course of action is.  So we don’t think humans need to have a causal model of everything and neither do AIs but at least for safety-relevant applications, we should look into it deeper. Conclusion ---------- Causality might be one interesting angle for AI safety but certainly not the only one. However, there are a ton of people in classic ML who think that causality is the missing piece to AGI. They could be completely wrong but we think it’s at least worth exploring from an AI safety lens.  In this post, we outlined why causality might be relevant for TAI, which kind of questions might be relevant and how we could start answering them.  Appendix: --------- ### Is there a clear distinction between causality and correlation? Some people will see our definition as naive and undercomplex. Maybe there is no such thing as causality and it’s all just different shades of correlation. Maybe all causal models are wrong and humans see something that isn’t. Maybe, maybe, maybe.  Similar to how there is no hard evidence for consciousness and philosophical zombies that act just as if they were conscious but truly aren't could exist, all causal claims could also be explained with a lot of correlations and luck. But as argued, e.g. by [Eliezer](https://www.lesswrong.com/posts/7DmA3yWwa6AT5jFXt/zombies-redacted), Occam's razor would make the existence of some sort of consciousness much more likely than its absence and by the same logic causality more likely than its absence.
bc96c065-ee93-43af-85cd-fe4d61a578f1
trentmkelly/LessWrong-43k
LessWrong
The Darwin Game Click here to participate. Entries must be submitted on October 18th, 2020 or earlier. Entry is now closed. ---------------------------------------- In 2017, Zvi posted an exciting story about The Darwin Game, a variation of iterated prisoner's dilemma. I will run my own version of the game in the week following October 18th, 2020. You do not have to know how to program in order to participate. I will code simple bots for non-programmers. If you do know how to program then you may create your own complicated bot. Here are the rules. Changes from Zvi's original game are in brackets [like this]. > For the first round, each player gets 100 copies of their program in the pool, and the pool pairs those programs at random. You can and often will play against yourself. > > Each pair now plays an iterated prisoner’s dilemma variation, as follows. Each turn, each player simultaneously submits [an integer] from 0 to 5. If the two numbers add up to 5 or less, both players earn points equal to their number. If the two numbers add up to 6 or more, neither player gets points. This game then lasts for a large but unknown number of turns, so no one knows when the game is about to end; [I guarantee it will be at least 100 turns per iterated prisoner's dilemma]. > > Each pairing is independent of every other pairing. [You do know what round of the game it is and that you are facing an opponent. If you face a copy of yourself you are automatically awarded the maximum 5 points per round (2.5 points per bot). You otherwise do not know any history of the game to this point.] Your decision algorithm does the same thing each pairing. > > At the end of the round, all of the points scored by all of your copies are combined. Your percentage of all the points scored by all programs becomes the percentage of the pool your program gets in the next round. So if you score 10% more points, you get 10% more copies next round, and over time successful programs will displace less successful p
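A minimal sketch of the per-turn scoring rule quoted above, plus a deliberately simple bot; the exact submission interface is not specified in this excerpt, so the bot's signature here is an assumption.

```python
# Sketch of the per-turn scoring rule quoted above: each player submits an integer
# from 0 to 5; if the two submissions sum to 5 or less, each player scores their own
# number, otherwise neither scores anything.

def score_turn(a: int, b: int) -> tuple:
    assert 0 <= a <= 5 and 0 <= b <= 5
    if a + b <= 5:
        return a, b
    return 0, 0

# A deliberately simple bot. The real interface isn't given in this excerpt, so the
# signature (the opponent's moves so far in this match) is an assumption.
def stubborn_three(opponent_moves):
    return 3

print(score_turn(3, 2))  # (3, 2): the submissions sum to 5, so both scores count
print(score_turn(3, 3))  # (0, 0): the submissions sum to 6, so nobody scores
```

A real entry would presumably condition on the opponent's past submissions rather than playing a constant.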
80c740ad-53c5-4466-b7c6-f47e7379ec7e
trentmkelly/LessWrong-43k
LessWrong
Algorithms as Case Studies in Rationality This post springs out of a very long line of thought, which I will only summarise small parts of. It began with the puzzling realisation that the algorithm which computers use to perform symbolic integration is radically different from the guess-and-check method taught in schools. My first reaction was, why are we not taught the systematic way of doing it? This is true for other areas of mathematics as well. In a few cases, such as solving quadratic equations or systems of linear equations, students are eventually taught the fast way. However, in many cases, the existence of an algorithm is not even mentioned. My point, however, is not to criticise the educational practice; in fact, I agree with the idea of teaching mathematics as an exploration of ideas rather than an application of formulaic solution methods. Rather, I would like to encourage people to eventually learn the algorithms, and try to apply them. A good algorithm is a study in rational behaviour, and I think we can take home a lesson from each. I'll just give two examples which I find particularly fascinating: Knuth-Bendix completion and the summary-product algorithm. The first is most relevant to fast mathematical reasoning. The second is relevant to studying the reasonableness of fast-and-messy probabilistic reasoning, the way humans do it. Knuth-Bendix Completion Knuth-Bendix completion ("K-B" from now on) is a fascinating formalisation of the mathematical idea of simplification. Everyone should recognise the phrase "simplify your answer" from high school homework assignments. I never suspected that simplification could be a really powerful method for mathematical reasoning. K-B is a method for reasoning about equality. Equality is one of the simplest relationships we can have between two things, yet the naive way of reasoning about it results in an explosion of possible inferences. If there are N things in our universe, there are N² equality statements which may be true or false. We can substi
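To give a flavour of what systematic simplification looks like in code, here is a toy sketch of oriented rewrite rules applied until nothing changes; this is my own illustration of term rewriting, not Knuth-Bendix completion itself, and the rule set is an assumed example rather than the output of the completion procedure.

```python
# Toy term rewriting: terms are nested tuples like ("add", ("mul", "x", 1), 0).
# Rules are applied bottom-up until the term stops changing (a normal form).
# The rule set is an assumed example, not the output of Knuth-Bendix completion.

def rewrite_once(term):
    if not isinstance(term, tuple):
        return term
    op, *args = term
    args = [rewrite_once(a) for a in args]  # simplify subterms first
    if op == "add" and args[1] == 0:
        return args[0]                      # x + 0 -> x
    if op == "mul" and args[1] == 1:
        return args[0]                      # x * 1 -> x
    if op == "mul" and args[1] == 0:
        return 0                            # x * 0 -> 0
    return (op, *args)

def normalize(term):
    while True:
        simplified = rewrite_once(term)
        if simplified == term:
            return term
        term = simplified

print(normalize(("add", ("mul", "x", 1), ("mul", "y", 0))))  # -> "x"
```

Knuth-Bendix completion is, roughly, the procedure that takes a set of equations and tries to turn them into a rule set like this one, whose repeated application always terminates in a unique normal form.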
9aa180c6-e6e6-4c47-976e-83b52e5eca48
trentmkelly/LessWrong-43k
LessWrong
Life, not a game Life is a rigged game. Quit the red bloody market, seclude from all the randomness of it and disappear completely. But that's a premise behind "do not compare yourself to others instead compare to the person you were yesterday". Claiming that one secludes himself from the need to see yourself as lower than a person or superior to one. In other words, compete with yourself, or your past self. But is it so, since competition is healthy only where you agree to play. But we don't agree to be alive in the first place. Nobody asked to participate in the game of life. Take the Wittgenstein family: the father wanted to turn his child into a capitalist agent and make a profit for him. Three of them killed themselves, like a chain reaction. Beyond that are people who play life like a game. I hear it all over: Of course I will play capitalism and the bureaucratic way of hosting institutions as a game. But at life itself you draw the line. See Weber on institutions and its way on personal freedom. And only a fool would rely on habit alone. If habits aren't you, then what counts as you? Let's say all your notebooks and journals were burnt to the ground. But you still remember, right? It's episodic memory that is the last saviour, but it isn't that efficient. Then you are also your data (that many say they have privacy policies but sure they steal your data). So does that mean you must protect yourself, all your digital profile and identity? Taking a hard drive and owning a physical extension of you, because the cloud you believe in has a slight chance of being destroyed. Google activity and how it tracks you by the timestamp. It is amazing to have this data and being able to switch between ideas and where you read them. Maps of Meaning referring to the Wikipedia page for the meaning of life. It's quite depressing to reach the end looking for meaning and glossing down the footnotes and googling the authors of the books. From the philosophies of Kant (you come as you like but you pay as you go) and utili
e5e0b617-2559-4f5e-b317-fba49996a3bf
trentmkelly/LessWrong-43k
LessWrong
Depression and Creativity This is cross-posted from New Savanna. NOTE: I’ve posted this interaction with Claude, not so much to present the ideas Claude offered about possible relationships between depression and creativity, but as an example of the kind of conversational interaction one can have with it. I was particularly impressed with the way it advanced the conversation by asking questions. ______________________________________________________________________ I’m working on an article about depression and decided to ask about the connection between depression and creativity. Claude makes a few general remarks and asks for my thoughts. So I mention some remarks that my teacher, David Hays, had made years ago, mentioning that he was a computational linguist. After some general remarks, Claude asks: > ... did Hays develop this idea through his work in computational linguistics? It seems like there might be interesting parallels between how computer systems need to reorganize data structures and how human minds might need to reorganize mental frameworks. Interesting, most interesting. Our ensuing interaction felt a bit like a real conversation. I was surprised and pleased. This felt a bit like, you know, real intelligence. I’ve appended its remarks below. I’m using my standard convention, setting my words in bold while Claude’s is plain. Version: Claude 3.5 Sonnet * * * * * Is there evidence for a relationship between depression and creativity? The relationship between depression and creativity is complex and has been studied extensively. The research shows several interesting patterns: Historical observations have noted many renowned artists, writers, and musicians who experienced depression, like Virginia Woolf, Vincent van Gogh, and Sylvia Plath. However, we should be careful not to romanticize mental illness or assume correlation equals causation. Scientific studies have found some evidence for links between depression and certain aspects of creativity: Some research sugges
26530cdc-9423-4df5-8db0-ab251784647b
StampyAI/alignment-research-dataset/lesswrong
LessWrong
Functional Decision Theory: A New Theory of Instrumental Rationality
8850a7ad-1a40-4cc9-8ee9-741b28de4112
trentmkelly/LessWrong-43k
LessWrong
Explaining Capitalism Harder A friend recently shared a sharing of a screenshot of a reblogging of a reblogging of this tumblr post: > Pro-Capitalist's defense of capitalism is just explaining how it works, and then when you say "yes I know, I just think it shouldn't be like that" they explain it to you again but angrier this time > —strawberry-crocodile I really like this perspective, even as someone relatively pro-capitalism, because I think it captures something that often goes wrong in these discussions. The strongest argument in favor of capitalism is that in practice it works for most things, better than the other systems we've tried. Not because it was designed to work, but because that's just how it falls together. When someone points at a piece of the system that seems unfair or wasteful and says "I just think it shouldn't be like that," stopping it is going to have effects elsewhere in the system, often negative ones. And so pro-capitalism folks often respond by trying to explain capitalism harder: what role is the thing you want to change filling? When people propose removing something without engaging with how it ties in to the rest of the system, it is natural to assume they don't know about its function and try to explain. As in the opening quote, however, people don't want more explanation of the workings of the status quo. Instead, I think a better response is to think about what you expect would go wrong, and ask if they would expect that. Perhaps they don't, and you can try and figure out where specifically your expectations diverge. Perhaps they do, and they think it's worth it. Perhaps they have additional proposals which work together. Whichever way the conversation goes, I think it probably is more productive? (Overall my perspective is that while things are much worse than they could be, they're also much better than they have ever been. I really don't want us to break the system that keeps improving our ability to turn time and stuff into what people need. At the s
cc2b9fa5-c988-43b4-b55a-5626d6d75392
trentmkelly/LessWrong-43k
LessWrong
Commentary On "The Abolition of Man" C.S. Lewis wrote a short book attacking moral subjectivism in education; it's available online here as a pdf, here as raw text, and here as a series of videos [1 2 3], and I think probably worth reading in full (at 50 pages or ~100 minutes of video at 1x speed). This post is mostly me rambling about what I saw as the central point, especially connected to individual development and community health, by quoting sections and then reacting to them. The book begins with a reaction to a grammar textbook (written in 1939) euphemistically called The Green Book whose lessons are also philosophical; Lewis doesn't object to the bait-and-switch (outside of one paragraph) so much as the content and quality of the philosophy. (One suspects Lewis wouldn't object to the Copybook Headings referenced by Kipling, even tho that mixes writing lessons and philosophy.) > Until quite modern times all teachers and even all men believed the universe to be such that certain emotional reactions on our part could be either congruous or incongruous to it--believed, in fact, that objects did not merely receive, but could merit, our approval or disapproval, our reverence or our contempt. First, let's get the obvious objections out of the way: the claim of universality is probably false. Even supposing it were true, then the underlying change seems worth investigating. Naive belief that one's map is objective reality disintegrates on contact with different maps and after noticing surprising divergences between one's predictions and observations; one can imagine this happening in the moral realm as well as the physical one. But presumably we should just ignore this as standard "the contemporary world is fallen and bad" framing instead of an actual historical claim. The more interesting claim here is the question of whether or not there can or should be a question of merit, distinct from a question of flavor or fact. A previous taxonomy I've liked a lot (that I was mostly introduced to by Sapien