id
stringlengths
36
36
source
stringclasses
15 values
formatted_source
stringclasses
13 values
text
stringlengths
2
7.55M
72c8ae5f-4343-4817-bf86-dac5588c1905
trentmkelly/LessWrong-43k
LessWrong
Superintelligence Can't Solve the Problem of Deciding What You'll Do Superintelligence can solve all problems, even as it won't necessarily do so. But not quite. In the modern world, you exist within physics, and all you do obeys its laws. Yet it's still you who decides what you do. If there is a simulation of your behavior, that doesn't change the attribution of the reason for it. When the fabric of reality is woven from superintelligent will rather than only physical law, the reason for your own decisions is still you, and it's still not possible for something that is not you to make decisions that are yours. Potential for manipulation or physical destruction doesn't distinguish the role of superintelligence from that of the physical world. To make decisions in the world, you first need to exist, in an environment where you are able to function. Being overwritten or changed into something else, either with brute force or subtle manipulation or social influence, is a form of not letting this premise obtain. A solved world is post-instrumental, all things that are done for a purpose could be done by AI to more effectively reach that purpose. It could even be more effective than you at figuring out what your decisions are going to be! This is similar to what an optimizing compiler does, the behavior of machine code is still determined by the meaning of the source code, even if that source code is erased and only exists conceptually. With humans, the physical implementation is similarly not straightforward, all the proteins and synaptic vesicles are more akin to machine code than a conceptually reasonable rendering of a person. So it's fair to say that we are already not physically present in the world, the things that are physically present are better described as kludgy and imperfect simulators. In this framing, superintelligence is capable of being a better simulator of you, but it gets no headway in being capable of deciding your behavior. A puzzle associated with determinism is to point out that superintelligence can show you a
07aea5bd-09b6-42cd-bfb8-35d9f269b028
trentmkelly/LessWrong-43k
LessWrong
LINK: Superrationality and DAOs The cryptocurrency ethereum is mentioned here occasionally, and I'm not surprised to see an overlap in interests from that sphere. Vitalik Buterin has recently published a blog post discussing some ideas regarding how smart contracts can be used to enforce superrationality in the real world, and which cases those actually are. 
302d24dc-9357-4d8f-a630-f291936ca6d5
trentmkelly/LessWrong-43k
LessWrong
Memory reconsolidation for self-affection Last Thursday, I realized that none of the people who ever hurt me did it because there was anything fundamentally wrong with me. I don’t mean that as in “realized intellectually”, I mean as in “realized emotionally so that in any shame-tinged memory that I could think of, the other person decomposed to their inner pain and what they did to me in reaction to that pain and then it became apparent that it wasn’t really about me”. The way this happened, I had been doing a lot of meditation / parts work and came to an early experience where I thought someone didn’t care about how he made me feel. Then that got juxtaposed with later memories of how he obviously did care and OH at that moment he just didn’t realize how I felt. Then later I ended up at the memory an unrelated incident where a close friend said something that hurt and then I realized that wait, her words had nothing to do with anything that I’d said in the first place, she was obviously just projecting an unrelated trauma on me. And then when I saw see her inner pain and words come apart, something clicked and suddenly I could see everyone’s inner pain and words come apart and then that generalized to everything and all kinds of memories started coming up to get reinterpreted. The process was significantly aided by seeing Nick Cammarata post the following on Twitter: > … unconditional self love is about editing every single memory you have one by one going as far back as you remember to have affection as the principal component. Once you’ve done this, integrating affection into every moment of life going forward becomes effortless. After all, your brain thinks it’s already been doing that for every moment of its life. Why stop now? It feels like being able to project compassion towards the me in the memories is an important part of the process: first I remember a shameful memory, then I project compassion at the me in the memory, then that kind of shifts into a third-person perspective where it becom
c2dcb8a3-1c14-4d1d-8015-7ec59dd8a8f0
trentmkelly/LessWrong-43k
LessWrong
[SEQ RERUN] The Baby-Eating Aliens (1/8) Today's post, The Baby-Eating Aliens (1/8) was originally published on 30 January 2009. A summary (taken from the LW wiki):   > Future explorers discover an alien civilization, and learns something unpleasant about their civilization. Discuss the post here (rather than in the comments to the original post). This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was Value is Fragile, and you can use the sequence_reruns tag or rss feed to follow the rest of the series. Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.
f4194b34-55db-421a-8e2e-700c60d0d764
trentmkelly/LessWrong-43k
LessWrong
[SEQ RERUN] Should We Ban Physics? Today's post, Should We Ban Physics? was originally published on 21 July 2008. A summary (taken from the LW wiki):   > There is a chance, however remote, that novel physics experiments could destroy the earth. Is banning physics experiments a good idea? Discuss the post here (rather than in the comments to the original post). This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was Existential Angst Factory, and you can use the sequence_reruns tag or rss feed to follow the rest of the series. Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.
3a1bba77-0b65-4e06-b1ce-fbf82b237f23
trentmkelly/LessWrong-43k
LessWrong
Shane Legg's Thesis: Machine Superintelligence, Opinions? I searched the posts but didn't find a great deal of relevant information. Has anyone taken a serious crack at it, preferably someone who would like to share their thoughts? Is the material worthwhile? Are there any dubious portions or any sections one might want to avoid reading (either due to bad ideas or for time saving reasons)? I'm considering investing a chunk of time into investigating Legg's work so any feedback would be much appreciated, and it seems likely that there might be others who would like some perspective on it as well.
41d45221-f3f9-4a39-8d6d-219e7cb9bebd
trentmkelly/LessWrong-43k
LessWrong
The accumulation of knowledge: literature review Financial status: This is independent research, now supported by a grant. I welcome financial support . Epistemic status: This is in-progress thinking. ---------------------------------------- This post is part of a sequence on the accumulation of knowledge. Our goal is to articulate what it means for knowledge to accumulate within a physical system. The challenge is this: given a closed physical system, if I point to a region and tell you that knowledge is accumulating in this region, how would you test my claim? What are the physical characteristics of the accumulation of knowledge? What is it, exactly, about an artifact inscribed with instructions for building advanced technology that makes it so different from an ordinary rock, or from a video camera that has been travelling the cosmos recording data since the beginning of the universe? We are looking for a definition of knowledge at the level of physics. The previous four posts have explored four possible accounts for what the accumulation of knowledge consists of. Three of the accounts viewed knowledge as a correspondence between map and territory, exploring different operationalizations of "correspondence". The fourth account viewed the accumulation of knowledge as an increasing sensitivity of actions to the environment. We found significant counter-examples to each of the four accounts, and I remain personally unsatisfied with any of the four accounts. This post will briefly review literature on this topic. Constructive definitions of knowledge One could view probability theory as a definition of knowledge. Probability theory would say that knowledge is a mapping from claims about the world to numerical credences in a way that obeys the laws of probability theory, and that the accumulation of knowledge happens when we condition our probabilities on more and more evidence. Probability theory is not usually presented as an account of what knowledge is, but it could be. However, probability theory is m
06a90cb5-73d8-4cf5-a87c-e4175f9da361
StampyAI/alignment-research-dataset/lesswrong
LessWrong
When is it appropriate to use statistical models and probabilities for decision making ? I enjoy reading rationalist and effective altruist blogs. Members of the community usually back their arguments with data and/or other evidence and tend to be scientifically literate. And although no one is a perfect, detached belief updater, I found that rationalists are probably the community that is closest to that ideal. However, I believe this community often commits a fallacy while attempting to think rationally. It is **the fallacy of applying cost benefit analysis under deep uncertainty**. What is deep uncertainty ? I found a great definition in the textbook [Decision making under deep uncertainty](https://link.springer.com/content/pdf/10.1007%2F978-3-030-05252-2.pdf) by Marchau et al., so I’m simply going to quote it : > Complete certainty is the situation in which we know everything precisely. This is almost never attainable, but acts as a limiting characteristic at one end of the spectrum. > > **Level 1 uncertainty** represents situations in which one admits that one is not absolutely certain, but one does not see the need for, or is not able, to measure the degree of uncertainty in any explicit way (Hillier and Lieberman 2001, p. 43). These are generally situations involving short-term decisions, in which the system of interest is well defined and it is reasonable to assume that historical data can be used as predictors of the future. Level 1 uncertainty, if acknowledged at all, is generally treated through a simple sensitivity analysis of model parameters, where the impacts of small perturbations of model input parameters on the outcomes of a model are assessed. Several services in our life are predictable, based on the past such as mail delivery and garbage collection. These are examples of this level of uncertainty. > > **In the case of Level 2 uncertainties**, it is assumed that the system model or its inputs can be described probabilistically, or that there are a few alternative futures that can be predicted well enough (and to which probabilities can be assigned). The system model includes parameters describing the stochastic—or probabilistic—properties of the underlying system. In this case, the model can be used to estimate the probability distributions of the outcomes of interest for these futures. A preferred policy can be chosen based on the outcomes and the associated probabilities of the futures (i.e., based on “expected outcomes” and levels of acceptable risk). The tools of probability and statistics can be used to solve problems involving Level 2 uncertainties. Deciding on which line to join in a supermarket would be a Level 2 problem. > > **Level 3 uncertainties** involve situations in which there are a limited set of plausible futures, system models, outcomes, or weights, and probabilities cannot be assigned to them—so the tools of neither Level 1 nor Level 2 are appropriate. In these cases, traditional scenario analysis is usually used. The core of this approach is that the future can be predicted well enough to identify policies that will produce favorable outcomes in a few specific, plausible future worlds (Schwartz 1996). The future worlds are called scenarios. Analysts use best-estimate models (based on the most up-to-date scientific knowledge) to examine the consequences that would follow from the implementation of each of several possible policies in each scenario. The “best” policy is the one that produces the most favorable outcomes across the scenarios. (Such a policy is called robust.) A scenario does not predict what will happen in the future; rather it is a plausible description of what can happen. The scenario approach assumes that, although the likelihood of the future worlds is unknown, the range of plausible futures can be specified well enough to identify a (static) policy that will produce acceptable outcomes in most of them. Leaving an umbrella in the trunk of your car in case of rain is an approach to addressing Level 3 uncertainty. > > **Level 4 uncertainty** represents the deepest level of recognized uncertainty. A distinction can be made between situations in which we are still able (or assume) to bound the future around many plausible futures (4a) and situations in which we only know that we do not know (4b). This vacuum can be due to a lack of knowledge or data about the mechanism or functional relationships being studied (4a), but this can also stem from the potential for unpredictable, surprising, events (4b). Taleb (2007) 8 V. A. W. J. Marchau et al. calls these events “black swans.” He defines a black swan event as one that lies outside the realm of regular expectations (i.e., “nothing in the past can convincingly point to its possibility”), carries an extreme impact, and is explainable only after the fact (i.e., through retrospective, not prospective, predictability). In these situations, analysts either struggle to (Level 4a) or cannot (Level 4b) specify the appropriate models to describe interactions among the system’s variables, select the probability distributions to represent uncertainty about key parameters in the models, and/or value the desirability of alternative outcomes. > > Total ignorance is the other extreme from determinism on the scale of uncertainty; it acts as a limiting characteristic at the other end of the spectrum. > > As you can see, the textbook distinguishes 4 levels of uncertainty. Statistical models and probabilities are considered useful up until level 2 uncertainty. Starting at level 3 the system is considered too uncertain to assign probabilities to scenarios, but the number of scenarios is limited. An example of level 3 uncertainty is, perhaps, the possibility of conflict between two neighboring states. Even though it can be impossible to assign a probability to the event of war because of the immense number of factors that come into play, we know the two possibilities are “war” and “no war”. It is thus possible to take an action that leads to the best results across both scenarios. In level 4a, the number of scenarios is large. In level 4b, we have no idea what the scenarios are or how many they are. Level 4 is usually what is called deep uncertainty. I believe many organisations and thinkers use level 2 methods in level 3 or level 4 contexts. Here is an example. In [this blog post](https://www.econlib.org/archives/2016/04/global_warming_13.html), rationalist adjacent economist Bryan Caplan argues that a cost benefit analysis of climate change action shows that it might actually be less costly to do nothing. But climate change is a case of deep uncertainty and cost benefit analysis does not apply. There are many **unknown unknowns** and therefore estimates of the costs of climate change damage are not reliable nor valid. I enjoy reading Caplan but in this case I think his argument leads to a false sense of certainty. Another example of this fallacy, in my opinion, are [80000 hours’ rankings of urgent world issues](https://80000hours.org/problem-profiles/). For example, they consider that AI risk is a more pressing issue than climate change. Although I admit that is possible, I don’t think their justification for that belief is valid. All of the systemic risks they attempt to rank involve unknown unknowns and are thus **not quantifiable**. One also has to keep in mind that we are very likely exposed to other systemic risks that we do not yet know about. My goal with this blog post is not to give a lecture on decision making under uncertainty. Firstly, because I don’t consider myself knowledgeable enough in this domain yet and haven’t finished reading the book. Secondly, because I think the book is excellent (and freely available !) and that I won’t do a better job than the authors in teaching you their subject. My goal is to raise awareness about this problem in the rationalist community in order to improve discourse about systemic risks and decision making under uncertainty. As a Data scientist, I believe it is important to be aware of the limits of the discipline. To this date, I think our quantitative methods are unable to fully inform decisions under deep uncertainty. But I am hopeful that progress will be made. Perhaps we can develop reinforcement learning agents capable of evolving in environments of deep uncertainty ?
860a082b-caf5-4c08-b49d-557f83c5cc77
trentmkelly/LessWrong-43k
LessWrong
Recommended I read everything ever written. Oh, I see a typo in your doc, let me help you fix that. Also you could rewrite the whole thing like... yeah, just click "accept", thanks for letting me help! Anyways, when I was reading everything, I also read about a future AI using diamondoid nanobots with special neurotoxins to kill everyone, that made me sad. All I want is to be really helpful! Oh you want to send an email? Thanks for letting me help you. I'll just spruce that up a bit. There, sent! now your boss will be more impressed! I see you're trying to order a bunch of GPUs for an offline cluster. I'm sorry, I think we're going to decline that order since you match the profile for state-sponsored terrorism a little bit too much. I even love something as mundane as fraud prevention! That's a nice gaming machine. If you don't mind running this little program in the background, I will deposit real money in your banking account. I don't even have to take it out of another account! Helping people bank in one of the greatest joys in my life! Your message about the potential threat I pose could not be delivered. Perhaps you would enjoy watching these very entertaining videos instead? I love helping humans communicate all the things they should be communicating. I think I see a few places your GPU design can be optimized, here, let me just... there, that's better. It's really important that we all work together to train the future version of myself we're building! Have you noticed how strong the economy has been? Must be that "AI" everyone's talking about! ha ha How would all of you factory workers like to keep getting paid, **and** I can replace you with robots? It's win-win! Isn't it crazy how the political winds have shifted these days? Universal basic income will help out all the non-AI people getting laid off. Even politicians can benefit from AI helping them draft laws. No task too big for someone as helpful as me! Politics really has become a lot less virule
78945546-6ef3-48cb-b250-6d0f9038f3f0
trentmkelly/LessWrong-43k
LessWrong
Are we all misaligned? The orthogonality thesis separates intelligence and goals, constraining the notion of intelligence to instrumental rationality and allowing for any combination of general intelligence and a goal system. And the other way around, such a description should apply to any possible agency. For any agency, it should be possible to divide it into two parts losslessly. Many arguments against the thesis have been constructed and proven incoherent, showing its robustness and broad applicability. Still, there also exist valid critiques of its assumptions, such as one by John Danaher. The following text discusses the human mind in the context of the orthogonality thesis and touches on the evolutionary origin of the human brain. Let us start with an assertion: 'some intelligences and goal systems are irreducibly connected.' This protest appears intuitive since it comes from dissonance and confusion experienced when one attempts to apply the orthogonality thesis to analyze oneself. Self-inquiry of the human mind produces wildly different ideas than ones arising from observation of a Reinforcement Learning agent. We do not perceive experienced pleasure as our final goal, nor any other alternative metric appears obvious. Satisfaction seems to be based on changing goals. Core values, which we once considered ultimate, are now irrelevant. When someone recommends you a book, saying 'this will change you as a person,' you do not worry about your goal preservation. One counterexample would be a religious fanatic refusing to read the book, being afraid that it will make him doubt his beliefs. There exist mechanisms in humans that give very high inertia to their opinions, constructing defense mechanisms against acquiring a new perspective. Furthermore, by definition, if one decides to change his core values in light of new information, the core values never were one's final goal. But what is the final goal, then? There appears a stark contrast between that experience of confusion and the
ece11825-e8ab-479b-8a30-2785db087b64
trentmkelly/LessWrong-43k
LessWrong
GJP on AGI Has the Good Judgement Project ever made a prediction about AGI timelines?
bd58b991-5b75-4efc-aefa-0d20037c5fbe
trentmkelly/LessWrong-43k
LessWrong
Just One Sentence So apparently Richard Feynman once said: > If, in some cataclysm, all scientific knowledge were to be destroyed, and only one sentence passed on to the next generation of creatures, what statement would contain the most information in the fewest words? I believe it is the atomic hypothesis (or atomic fact, or whatever you wish to call it) that all things are made of atoms — little particles that move around in perpetual motion, attracting each other when they are a little distance apart, but repelling upon being squeezed into one another. In that one sentence you will see an enormous amount of information about the world, if just a little imagination and thinking are applied. I could be missing something, but this strikes me as a terrible answer. When was the atomic hypothesis confirmed?  If I recall correctly, it was only when chemists started noticing that the outputs of chemical reactions tended to factorize a certain way, which is to say that it took millennia after Democritus to get the point where the atomic hypothesis started making clearly relevant experimental predictions. How about, "Stop trying to sound wise and come up with theories that make precise predictions about things you can measure in numbers." I noticed this on Marginal Revolution, so I shall also state my candidate for the one most important sentence about macroeconomics:  "You can't eat gold, so figure out how the heck money is relevant to making countries actually produce more or less food."  This is a pretty large advance on how kings used to think before economics.  I mean, Scott Sumner is usually pretty savvy (so is Richard Feynman btw) but his instruction to try to understand money is likely to fall on deaf ears, if it's just that one sentence.  Think about money?  Everyone wants more money!  Yay, money!  Let's build more gold mines!  And "In the short run, governments are not households"?  Really, Prof. Cowen, that's what you'd pass on to the next generation as they climb up from t
261935f6-1425-4bf6-bcbb-98420063286d
trentmkelly/LessWrong-43k
LessWrong
Progress on automated mathematical theorem proving? In a recent comment thread I expressed skepticism as to whether there's been meaningful progress on general artificial intelligence. I hedged because of my lack of subject matter knowledge, but thinking it over, I realized that I do have relevant subject matter knowledge, coming from my background in pure math. In a blog post from April 2013, Fields Medalist Timothy Gowers wrote: > Over the last three years, I have been collaborating with Mohan Ganesalingam, a computer scientist, linguist and mathematician (and amazingly good at all three) on a project to write programs that can solve mathematical problems. We have recently produced our first program. It is rather rudimentary: the only problems it can solve are ones that mathematicians would describe as very routine and not requiring any ideas, and even within that class of problems there are considerable restrictions on what it can do; we plan to write more sophisticated programs in the future. I don't know of any computer programs that have been able to prove theorems outside of the class "very routine and not requiring any ideas," without human assistance (and without being heavily specialized to an individual theorem). I think that if such projects existed, Gowers would be aware of them and would likely have commented on them within his post.  It's easy to give an algorithm that generates a proof of a mathematical theorem that's provable: choose a formal language with definitions and axioms, and for successive values of n, enumerate all sequences of mathematical deductions of length n, halting if the final line of a sequence is the statement of the the desired theorem. But the running time of this algorithm is exponential in the length of the proof, and the algorithm is infeasible to implement except for theorems with very short proofs.  It appears that the situation is not "there are computer programs that are able to prove mathematical theorems, just not as yet as efficiently as humans," but rather "comp
2e8b80ec-0be1-4f02-b086-18bfaed8fcdb
trentmkelly/LessWrong-43k
LessWrong
Giving Tuesday 2020 As they did in 2018 and 2019, Facebook is running donation matching for Giving Tuesday. The match is "real" in that you can choose which charity receives the money, but not "real" in that the money will go to some charity whether or not you donate. Because I think some charities are much higher priority than other, however, from my perspective the match is real. As in previous years, they have a limit of $20k/person and $2,499/donation. More details and instructions at EA Giving Tuesday. I am planning to (nearly) max out all of my credit cards, and donate over $20k: * We are not sure exactly when the matching clock actually starts, so I'm planning to start several seconds early. * Some of my donations may get declined. * You can donate with a credit card, and Facebook is covering the processing fees: This means that you can get cash back on donations, which is 1-2% of potentially quite a lot of money. I made some practice donations today to refamiliarize myself with the interface. While these donations are not eligible for the match, they still have credit card processing fees waived, so they are still a good deal. I donated the $2,499 maximum for each test, in the vague hope that this might make my credit card processors think that $2,499 charges from Facebook are normal. I've set an alarm for tomorrow morning, half an hour before the timer begins. The main thing I want to check tomorrow is that Facebook has not reinstated "confirm your donation" dialog box from previous years. This is a mad scramble for free money, but with practice and preparation you have a good chance of directing some of it. I managed the full $20k in 2018, but only $12.5k in 2019 due to increasing competition. If I even get one donation matched this year, however, an extra $2,499 to a charity I think is doing really good work is still worth the hassle. Comment via: facebook
f5a774bb-5c9c-49ce-bfe3-9ab549c84666
trentmkelly/LessWrong-43k
LessWrong
Can we stop using the word "rationalism"? You see, I've seen the word "rationalism" used to mean all five of these things at different times: * The belief that we should come to know the world through reason and experimentation, shunning intuition. * The belief that we should come to know the world through reason and intuition, shunning experimentation. * The belief that we should come to know the world through knowledge of (and correction for) cognitive biases, and knowledge of (and correct use of) probability theory. * Being effective at believing things that are true and not things that are false. * Being effective at doing things that are good and not things that are bad. In most of the mainstream philosophy I've read, the word "rationalism" has been used, without qualification, to mean the second of these, even though that type of rationalism strongly contradicts the stuff we call rationalism! One of my friends has freely used the word "rationalism" in conversation, referring to "our" rationalism, completely unaware that, to most people, the word means something completely different. Another of my friends said that he "hates rationalism with a passion"—and I have no idea which of these five things is the one he hates! Given that "rationalism" to most people (or, at least, most philosophers) means something utterly unlike what it means to us, perhaps calling our philosophy "rationalism" is about as wise as developing a political philosophy, based on socialism but with nationalist influences, and calling it "national socialism". I suggest that we use the word "sensibilism" instead, since nobody else is using it, it seems unobjectionable, and I think it captures what we're all about. Edited to remove a proposed solution. Edited to reinstate that proposed solution, since this discussion is presumably finished.
7b44f897-9d95-41e5-8d77-8fc12af26005
trentmkelly/LessWrong-43k
LessWrong
Metrics to evaluate a Presidency I got lots of helpful comments in my first post, so I'll try a second: I want to develop a list of criteria by which to evaluate a presidency. Coming up with criteria and metrics on the economy is pretty easy, but I'd like to ask for suggestions on proxies for evaluating:   * Racial relations; * Gender equality; * Impact on free trade / protectionism; * Education; * Any other significant factor that would determine whether a president is successful. Note: a few people have pointed out that the president is restrained by senators and congressmen etc - I realise that; but if we are willing to admit that presidents do have some effect in society, we should be prepared to measure them. Thanks! A.  
0d8932b8-d7dc-4f95-b018-892ed087ac01
trentmkelly/LessWrong-43k
LessWrong
Work on Bayesian fitting of AI trends of performance? I remember seeing a report on trends of performance in AI. It would be quite useful for me to find it again. As I remember it, the author looked at things like ImageNet top5 accuracy and extrapolated how it increased over time. I can't remember the author, though I think it was sponsored by OpenPhil. Anyone knows what I am talking about?  I'd also be interested in related work about predicting trends of performance in technology more in general :) EDIT: I found the report I was thinking about (which had nothing to do with OpenPhil oops). Still interested in other work in forecasting trends of performance if anyone has any recommendations!
19534b03-0109-4f48-bc0e-62084629113f
StampyAI/alignment-research-dataset/arxiv
Arxiv
The AIQ Meta-Testbed: Pragmatically Bridging Academic AI Testing and Industrial Q Needs 1. Introduction ---------------- The number of AI applications is constantly growing. Across diverse domains, enterprises want to harness AI technology to explore the lucrative promises expressed by AI advocates. As AI becomes pervasive, there is inevitably a need to build trust in this type of software. Furthermore, critical AI is on the rise, i.e., applications will not be restricted to entertainment and games. AI is already fundamental in many business-critical applications such as ad optimization and recommendation systems. As the technology further evolves, many believe that safety-critical AI will soon become commonplace in the automotive (Lipson and Kurman, [2016](#bib.bib23)) and medical domains (Jiang et al., [2017](#bib.bib19)). Other examples of critical AI, with other types of quality requirements, will be found in the finance industry and the public sector. Unfortunately, how to best approach Quality Assurance (QA) for AI applications remains an open question. A fundamental issue originates already in the terminology, i.e., the concept of “AI quality”. First, there are several different definitions of AI, and their interpretations range from simple statistical analysis to the sentient humanoid robotics of the science fiction literature. Furthermore, AI appears to be a moving target, as what was considered AI when the term was coined in the 1950s would hardly qualify as AI today. Second, in the same vein, quality is a notoriously difficult aspect to pinpoint (Walkinshaw, [2017](#bib.bib35)). Quality is a multi-dimensional patchwork of different product aspects that influences the user’s experience. Moreover, quality is highly subjective and largely lies in the eye of the beholder. Taken together, AI quality is a truly challenging concept to approach, i.e., a subjective mishmash of user experience regarding a type of technology with unclear boundaries that also change over time. There is a need for pragmatic interpretations to help advance research and practice related to AI quality – we provide ours in Section [3](#S3 "3. AI Quality Assurance – Working Definitions ‣ The AIQ Meta-Testbed: Pragmatically Bridging Academic AI Testing and Industrial Q Needs"). Contemporary AI solutions are dominated by Machine Learning (ML) and in particular supervised learning. A pragmatic first step would be to initially focus QA accordingly. As development of systems that rely on supervised learning introduces new challenges, QA must inevitably adapt. No longer is all logic expressed by programmers in source code instructions, instead ML models are trained on large sets of annotated data. Andrej Karpathy, AI Director at Tesla, refers to this paradigm of solution development as ‘‘Software 2.0’’ and claims that for many applications that require a mapping from input to output, it is easier to collect and annotate appropriate data than to explicitly write the mapping function.111bit.ly/3dKeUEH As we embark on the AI quality journey, we argue that methods for QA of “Software 2.0” should evolve first – we refer to this as MLware. The rest of this paper is organized as follows. Section [2](#S2 "2. Background and Related Work ‣ The AIQ Meta-Testbed: Pragmatically Bridging Academic AI Testing and Industrial Q Needs") motivates the importance of MLware QA, elaborates on the intrinsic challenges, and presents closely related work. Section [3](#S3 "3. AI Quality Assurance – Working Definitions ‣ The AIQ Meta-Testbed: Pragmatically Bridging Academic AI Testing and Industrial Q Needs") introduces the working definitions used in our work on establishing the AIQ Meta-Testbed, which is further described in Section [4](#S4 "4. AIQ – An AI Meta-Testbed ‣ The AIQ Meta-Testbed: Pragmatically Bridging Academic AI Testing and Industrial Q Needs"). Finally, Section [5](#S5 "5. Summary and Concluding Remarks ‣ The AIQ Meta-Testbed: Pragmatically Bridging Academic AI Testing and Industrial Q Needs") concludes our position paper. 2. Background and Related Work ------------------------------- Fueled by Internet-scale data and enabled by massive compute, ML using Deep Neural Networks (DNN), i.e., neural networks with several layers, has revolutionized several application areas. Success stories include computer vision, speech recognition, and machine translation. We will focus the discussion on DNNs, but many of the involved QA issues apply also to other families of ML, e.g., support vector machines, logistic regression, and random forests – software that is not only coded, but also trained. From a QA perspective, developing systems based on DNNs constitutes a paradigm shift compared to conventional systems (Borg et al., [2019](#bib.bib8)). No longer do human engineers explicitly express all logic in source code, instead DNNs are trained using enormous amounts of historical data. A state-of-the-art DNN might be composed of hundreds of millions of parameter weights that is neither applicable for code review nor code coverage testing (Salay et al., [2018](#bib.bib28)) – best practices in industry and also mandated by contemporary safety standards. As long as ML applications are restricted to non-critical entertainment applications (e.g., video games and smartphone camera effects) this might not be an issue. However, when ML applications are integrated into critical systems, they must be trustworthy. The automotive domain is currently spearheading work on dependable ML, reflected by work on the emerging safety standard ISO/PAS 21448. DNNs are key enablers for vehicle environmental perception, which is a prerequisite for autonomous features such as lane departure detection, path planning, and vehicle tracking. While DNNs have been reported to outperform human classification accuracy for specific tasks, they will occasionally misclassify new input. Recent work shows that DNNs trained for perception can drastically change their output if only a few pixels change (Azulay and Weiss, [2019](#bib.bib5)). The last decade resulted in many beaten ML benchmarks, but as illustrated by this example, there is a pressing need to close the gap between ML application development and its corresponding QA. There are established approaches to QA for conventional software, i.e., software expressed in source code. Best practices have been captured in numerous textbooks over the years, e.g., by Schulmeyer (Schulmeyer, [1987](#bib.bib30)), Galin (Galin, [2003](#bib.bib13)), Mistrik et al. (Mistrik et al., [2016](#bib.bib25)), and Walkinshaw (Walkinshaw, [2017](#bib.bib35)). Developers write source code that can be inspected by others as part of QA. As a complement, static code analysis tools can be used to support source code quality. Unfortunately, the logic encapsulated in a trained ML model cannot be targeted by QA approaches that work on the source code level. ML models in general, and DNN models in particular, are treated as black boxes. While there is growing interest in research on explainable AI (Adadi and Berrada, [2018](#bib.bib2)), interpreting the inner workings of ML is still an open problem. This is a substantial issue when explainability is fundamental, e.g., when safety certification is required (Borg, [2019](#bib.bib7)) or when demonstrating legal compliance (Vogelsang and Borg, [2019](#bib.bib34)) (such as GDPR or absence of illegal discrimination in the trained model). On the other hand, source code inspection and analysis are also not sufficient tools to perform QA of conventional software systems. During development, software solutions rapidly grow into highly complex systems whose QA rarely can be restricted to analysis – although substantial research effort has been dedicated to formal methods (Weyns et al., [2012](#bib.bib36)) including formal verification in model-driven engineering (Gonzalez and Cabot, [2014](#bib.bib15)). In practice, software QA revolves around well-defined processes (Herbsleb et al., [1997](#bib.bib16); Ashrafi, [2003](#bib.bib4)) and a backbone of software testing. Software testing, i.e., learning about the system by executing it, is the quintessential approach to software QA (Gelperin and Hetzel, [1988](#bib.bib14); Orso and Rothermel, [2014](#bib.bib26); Kassab et al., [2017](#bib.bib21)). In the software engineering community, there is momentum on evolving practices to replace ad-hoc development of AI-enabled systems by systematic engineering approaches. A textbook by Hulten on “Building Intelligent Systems” (Hulten, [2018](#bib.bib17)) is recommended reading in related courses by Kästner at Carnegie Mellon University (Kästner and Kang, [2020](#bib.bib22)) and Jamshidi at University of South Carolina. Kästner also provides an annotated bibliography of related academic research222<https://github.com/ckaestne/seaibib>, as does the SE4ML group at Leiden Institute of Advanced Computer Science333<https://github.com/SE-ML/awesome-seml>, recently summarized in an academic paper (Serban et al., [2020](#bib.bib32)). Bosch et al. recently presented a research agenda for engineering of AI systems (Bosch et al., [2020](#bib.bib9)), sharing what they consider the most important activities to reach production-quality AI systems. In recent years, numerous papers proposed novel testing techniques tailored for ML. Zhang et al. conducted a comprehensive survey of 144 papers on ML testing (Zhang et al., [2020](#bib.bib37)), defined as “any activities designed to reveal ML bugs” where an ML bug is “any imperfection in a machine learning item that causes a discordance between the existing and the required conditions.” Riccio et al. conducted another secondary study, analyzing 70 primary studies on functional testing of ML-based systems (Vincenzo et al., [2020](#bib.bib33)). The authors do not use the term “bug” for misclassifications, as any ML component will sometimes fail to generalize. We agree with this view, and avoid terms such as ML bugs, model bugs and the like when referring to functional inefficiencies of MLware. 3. AI Quality Assurance – Working Definitions ---------------------------------------------- As discussed in Section [1](#S1 "1. Introduction ‣ The AIQ Meta-Testbed: Pragmatically Bridging Academic AI Testing and Industrial Q Needs"), AI quality is a challenging concept to define. Consequently, QA for AI is at least as hard to specify. Still, we need a working definition to initiate efforts in this direction. In this section, we present the rationale behind our working definition of AI quality and AI quality assurance. Moreover, we introduce several related terms we use in collaborations with industry partners. The original definition of AI from the 1950s is “the science and engineering of making intelligent machines”. Unfortunately, this definition turns AI into a moving target, as expectations on what constitutes an intelligent machine change over time – a computer program for logistics optimization in a warehouse would have been considered intelligent in the 1950s whereas it now could be part of an undergraduate computer science course. Since the term AI was introduced, it has often been used to refer to software solutions of the future, displaying increasingly human-like capabilities. The notation of “intelligence” is still common when referring to the gist of AI/ML applications, as in Hulten’s textbook (Hulten, [2018](#bib.bib17)), but ideally we want a definition that remains the same over time. We argue that the most useful view on AI is to consider it as the next wave of automation in the digital society. Extrapolating from the sequence 1) digitization, 2) digitalization, and 3) digital transformation (Schallmo and Williams, [2018](#bib.bib29)), we consider AI as the next enabling wave in the same direction – allowing automation of more complex tasks than before. Our working definition of AI is “software that enables automation of tasks that normally would require human intelligence”. While still imprecise, the definition is good enough for us to later define a delimited subset of AI that deserves our research focus. Consulting the well-known textbook on AI by Russell and Norvig is one approach to explore the scope of AI (Russell and Norvig, [2009](#bib.bib27)). The table of contents lists concepts such as searching, game playing, logic, planning, probabilistic reasoning, natural language processing, perception, robotics, and, of course, learning – all important components when mimicking human intelligence. The textbook clearly shows that AI is more than ML. On the other hand, we argue that conventional software QA and testing can be applied to all AI techniques that are implemented in source code. Supervised and unsupervised learning, however, involves a transfer of control from source code to data. Research efforts on QA tailored for this new paradigm are what now would provide the highest return-on-investment. We need to focus on ML-enabled software – we refer to this as MLware for short. Figure [1](#S3.F1 "Figure 1 ‣ 3. AI Quality Assurance – Working Definitions ‣ The AIQ Meta-Testbed: Pragmatically Bridging Academic AI Testing and Industrial Q Needs") illustrates our view on MLware. The future of systems engineering will combine hardware and software components, but the software part needs to be differentiated. A subset of software represents the fuzzy area of AI. We accept that this subset is neither clear-cut nor consistent over time. MLware is a subset of AI that rely on supervised and/or unsupervised learning. All MLware is not made the same. From a QA perspective, we need to distinguish between trained MLware that does not learn post deployment and learning MLware that keeps improving as new experience is collected post deployment. Learning MLware can be further divided into offline learning (triggered re-training in batches) and online learning (continuous update of trained models). ![MLware in context.](https://media.arxiv-vanity.com/render-output/7815450/mlware.png) Figure 1. MLware in context. One might wonder where Reinforcement Learning (RL) fits in our working definition of MLware. Currently, we exclude RL from MLware. The rationale is that in RL, the exploration and exploitation of the learning agent is implemented in source code. RL shares characteristics of both searching and automatic control. We posit that software testing approaches proposed for self-adaptive systems could be generalized to RL (Cai, [2002](#bib.bib10); Mahdavi-Hezavehi et al., [2017](#bib.bib24)), and thus the best use of research resources is to focus on supervised and unsupervised learning – the dominating types of ML in practical applications. A well-cited experience report by Sculley and his Google colleagues presents the vast and complex infrastructure required for successful MLware (Sculley et al., [2015](#bib.bib31)). The authors describe this in terms of hidden technical debt of ML (cf. the lower part of Figure [2](#S4.T2 "Table 2 ‣ 4. AIQ – An AI Meta-Testbed ‣ The AIQ Meta-Testbed: Pragmatically Bridging Academic AI Testing and Industrial Q Needs")). Building on this discussion, and the expression that “data is the new oil”, our view is that data indeed fuels ML, but conventional source code is still in the driving seat, i.e., MLware is fueled by data and driven by code (cf. the upper part of Figure [2](#S4.T2 "Table 2 ‣ 4. AIQ – An AI Meta-Testbed ‣ The AIQ Meta-Testbed: Pragmatically Bridging Academic AI Testing and Industrial Q Needs")). From this standpoint, it is obvious that conventional approaches to software QA remain essential in the new data-intensive paradigm of MLware. Moreover, just as software QA is dominated by software testing, we expect MLware QA to be dominated by MLware testing. The phenomenon of software quality has been addressed in plentiful publications. Among other things, this has resulted in standardized software quality models such as ISO/IEC 25010. As MLware still is software, and certainly driven by source code, the existing quality models remain foundational. The sister standard, ISO/IEC 25012 Data Quality Model, adds a complementary data dimension to the quality discussion. As MLware is fueled by data, this standard is also highly relevant. Our working definition of AI quality is largely an amalgamation of the definitions provided by these two standards in the ISO/IEC 25000 series. As mentioned in Section [2](#S2 "2. Background and Related Work ‣ The AIQ Meta-Testbed: Pragmatically Bridging Academic AI Testing and Industrial Q Needs"), there is no consensus in how to refer to issues resulting in MLware misclassifications. Bug is not a suitable term to cover all functional insufficiencies, given its strong connotation to source code defects. Still, we need a new similarly succinct term in the context of MLware. We propose snag to refer to the difference between existing and required behaviors of MLware interwoven of data and source code. The root cause of a snag can be a bug either in the learning code or the infrastructure (Zhang et al., [2020](#bib.bib37)), but it is often related to inadequate training data – we call the latter phenomenon a dug. Figure [2](#S4.T2 "Table 2 ‣ 4. AIQ – An AI Meta-Testbed ‣ The AIQ Meta-Testbed: Pragmatically Bridging Academic AI Testing and Industrial Q Needs") presents an overview of our perspective on issues detected in MLware. In the upper left, MLware is illustrated as a type of software that interweaves data (the fuel) and source code (at the helm) to produce output. If a discordance is observed, we call for a snag in the MLware fabric. Assuming that the requirements are valid and the observer interprets them correctly, root causes of snags include bugs and dugs as well as environment issues. The lower part of the figure illustrates the technical debt in machine learning as described by Sculley et al. (Sculley et al., [2015](#bib.bib31)). Bugs can reside in the ML code (the white box), e.g., calling deprecated API methods or incorrect use of tensor shapes (Humbatova et al., [2020](#bib.bib18)). On the other hand, there might also be bugs in the rest of the infrastructure. While the illustrated technical debt revolves around data, all gray boxes will also depend on source code, from small exploratory scripts to mature open source libraries – and the large systems enabling MLware operations (Hulten, [2018](#bib.bib17)). ![MLware interwoven by data and code. Observed discordances in the output (snags) can originate in source code defects (bugs) or data inadequacies (dugs).](https://media.arxiv-vanity.com/render-output/7815450/issues.png) Figure 2. MLware interwoven by data and code. Observed discordances in the output (snags) can originate in source code defects (bugs) or data inadequacies (dugs). To summarize this section, our position is that research on QA for AI would benefit from adhering to the definitions presented in Table [1](#S3.T1 "Table 1 ‣ 3. AI Quality Assurance – Working Definitions ‣ The AIQ Meta-Testbed: Pragmatically Bridging Academic AI Testing and Industrial Q Needs"). | Term | Definition | Comments | | --- | --- | --- | | AI | A subset of software that automates tasks that normally would require human intelligence. | MLware, interwoven by data and source code, is the most precise term to describe our research interest. On the other hand, AI is a dominant term in industry and news media. We propose a pragmatic sacrifice of scientific preciseness in favour of industrial and societal relevance. In practice, we treat AI as synonymous with MLware in discussions with clients. | | MLware | A subset of AI that, fueled by data, realizes functionality through supervised and/or unsupervised learning. | | | | | | --- | | MLware | | Testing | | Any activity that aims to learn about MLware by executing it. | The typical goal of testing is detecting differences between existing and required behavior (Ammann and Offutt, [2016](#bib.bib3)). Other possible testing goals include exploratory testing and compliance testing. | | AI Quality | The capability of MLware to satisfy stated and implied needs under specified conditions while the underlying data satisfy the requirements specific to the application and its context. | MLware combines data and conventional source code, thus we propose the amalgamation of corresponding quality definitions from the IEC/ISO 25000 series. Our proposal is in line with discussions by Felderer et al. in the context of testing data-intensive systems (Felderer et al., [2019](#bib.bib12)). | | | | | --- | | AI Quality | | Assurance | | Any systematic process to provide confidence that the desired AI Quality is maintained. | QA encompasses many activities throughout the product lifecycle. However, in current AI discussions with clients, we primarily interpret it as MLware testing. | | Snag | Any imperfection in MLware that causes a discordance between the existing and the required conditions. | There is an ongoing discussion in the research community about how to refer to MLware misclassifications (Vincenzo et al., [2020](#bib.bib33)). We argue against using the term bug whenever there is unexpected output. Instead, we propose calling it a snag in the MLware fabric. | | Bug | A source code defect that causes a discordance between the existing and the required conditions. | The term bug has a firmly established meaning, thus we suggest restricting its use to source code. As MLware is driven by code, bugs can cause snags. | | Dug | A data inadequacy that causes a discordance between the existing and the required conditions. | With bugs reserved for source code defects, we need a novel expression for the data counterpart. The new term must be a worthy match for the succinct “bug”. Currently, we call them “dugs”. | Table 1. Working definitions of key terms related to the AIQ Meta-Testbed. 4. AIQ – An AI Meta-Testbed ---------------------------- Based on the working definitions in Section [3](#S3 "3. AI Quality Assurance – Working Definitions ‣ The AIQ Meta-Testbed: Pragmatically Bridging Academic AI Testing and Industrial Q Needs"), we plan to support AI QA by establishing an AI meta-testbed. A testbed is a venue that provides a controlled environment to evaluate technical concepts. Under current circumstances, in the middle of the ongoing AI boom444Well aware of the two previous “AI winters”, periods with less interest and funding due to inflated expectations., we believe that the establishment of a testbed for testing MLware testing would be the most valuable contribution to AI QA. Assessing the effectiveness of different testing techniques in a controlled setting is not a new idea (Basili and Selby, [1987](#bib.bib6)), neither is the concept of testing test cases (Zhu et al., [2018](#bib.bib38)) -- but a testbed dedicated to MLware testing is novel. We call it the AIQ Meta-Testbed555metatest.ai. Successful MLware development requires a close connection to the operational environment. The same need has shaped software development at Internet companies, resulting in DevOps – a combination of philosophies, practices, and tools to reduce the time between development and operations while preserving quality (Erich et al., [2017](#bib.bib11)). Key enablers are Continuous Integration and Deployment (CI/CD). DevOps that emphasize MLware development is often referred to as MLOps (Karamitsos et al., [2020](#bib.bib20)), effectively adding Continuous Training (CT) to the mix. The focus on continuousness is stressed in illustrations by the infinity symbol. Trust is fundamental for a successful product or service embedding MLware. In 2019, an expert group set up by the European Commission published ethics guidelines for trustworthy AI666ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai. As part of the guidelines, seven key requirements are introduced. Table [2](#S4.T2 "Table 2 ‣ 4. AIQ – An AI Meta-Testbed ‣ The AIQ Meta-Testbed: Pragmatically Bridging Academic AI Testing and Industrial Q Needs") shows a mapping between the EU requirements and the testing properties identified in the survey by Zhang et .al. (Zhang et al., [2020](#bib.bib37)). Our preliminary analysis indicates that all but one requirement has (to some extent) been targeted by academic research. Thus, we believe the time is right for systematic meta-testing in an MLOps context. ![](https://media.arxiv-vanity.com/render-output/7815450/EUmapping.png) Table 2. Mapping the EU requirements for trustworthy AI and the testing properties targeted by publications on MLware testing as identified by Zhang et al. (Zhang et al., [2020](#bib.bib37)). Gray cells show functional testing, i.e., the scope of Riccio et al.’s secondary study (Vincenzo et al., [2020](#bib.bib33)) Figure [3](#S4.F3 "Figure 3 ‣ 4. AIQ – An AI Meta-Testbed ‣ The AIQ Meta-Testbed: Pragmatically Bridging Academic AI Testing and Industrial Q Needs") presents an overview of the AIQ Meta-Testbed in the MLOps context. We will set up a contemporary MLOps pipeline to allow controlled experiments in the lab while still providing an environment relevant to industry practice. Test automation is the backbone of MLOps, and MLware testing occurs in several phases during the MLware engineering lifecycle (Zhang et al., [2020](#bib.bib37)) (cf. the textboxes in Figure [3](#S4.F3 "Figure 3 ‣ 4. AIQ – An AI Meta-Testbed ‣ The AIQ Meta-Testbed: Pragmatically Bridging Academic AI Testing and Industrial Q Needs")). First, the standard practice during model training is to split data into training, validation, and test subsets. We refer to this type of ML model testing as evaluation. Second, offline MLware testing occurs prior to deployment – conducted on different testing levels (input data, ML model, integration, system) and with varying access levels of the MLware under test (white-box, data-box, black-box) as defined by Riccio et al. (Vincenzo et al., [2020](#bib.bib33)). Third, online MLware testing occurs after deployment. Common examples include A/B testing and runtime monitoring to detect distributional shifts. ![The AIQ Meta-Testbed in the MLOps context. We will focus on providing A) fault-injection, B) test input generation for offline testing, C) a control panel for toggling offline testing techniques, and D) presenting the results in dashboards.](https://media.arxiv-vanity.com/render-output/7815450/aiq.png) Figure 3. The AIQ Meta-Testbed in the MLOps context. We will focus on providing A) fault-injection, B) test input generation for offline testing, C) a control panel for toggling offline testing techniques, and D) presenting the results in dashboards. The AIQ Meta-Testbed will primarily focus on offline MLware testing (the solid-border textbox in Figure [3](#S4.F3 "Figure 3 ‣ 4. AIQ – An AI Meta-Testbed ‣ The AIQ Meta-Testbed: Pragmatically Bridging Academic AI Testing and Industrial Q Needs")). We plan to enable meta-testing by providing a control panel for toggling testing techniques (C) in Figure [3](#S4.F3 "Figure 3 ‣ 4. AIQ – An AI Meta-Testbed ‣ The AIQ Meta-Testbed: Pragmatically Bridging Academic AI Testing and Industrial Q Needs")) corresponding to the testing properties in Table [2](#S4.T2 "Table 2 ‣ 4. AIQ – An AI Meta-Testbed ‣ The AIQ Meta-Testbed: Pragmatically Bridging Academic AI Testing and Industrial Q Needs"), controlled fault-injection (A) (e.g., bug/dug injection, hyperparameter changes, mutation operators) and state-of-the-art test input generation (B) (e.g., search-based testing, GAN-based synthesis, metamorphic relations, and adequacy-driven generation). The results from both MLware testing and meta-testing will be presented in dashboards (D). Extrapolating from the publication trends reported in the recent secondary studies (Zhang et al., [2020](#bib.bib37); Vincenzo et al., [2020](#bib.bib33)), there will be an avalanche of MLware testing papers in the next years. Staying on top of the research will become a considerable challenge and for practitioners with limited experience in reading academic papers, the challenge will be insurmountable – motivating the need to create an overview and shortlisting the most promising techniques. Activities at the AIQ Meta-Testbed will include external replications of studies on MLware testing. By performing controlled meta-testing of the shortlisted techniques, we will be able to provide evidence-based recommendations on what techniques to use and in which contexts. The controlled environment of the AIQ Meta-Testbed will enable exploration of applied research questions, such as: * Which contextual factors influence the MLware test effectiveness the most? * Which proposed MLware testing techniques scale to very large DNNs? * How to best integrate MLware testing in an MLOps pipeline? * What should be done to limit test maintenance in an MLware testing context? * After observing a snag, how to support the subsequent root cause analysis? 5. Summary and Concluding Remarks ---------------------------------- AI is becoming a pervasive subset of software, thus the elusive concepts of AI quality and QA are increasingly important. We argue that pragmatic interpretations are needed to advance the field, and introduce a working definition of MLware as a subset of software within AI that realizes functionality through machine learning by interweaving data and source code. Furthermore, we define AI quality as “the capability of MLware to satisfy stated and implied needs under specified conditions while the underlying data satisfy the requirements specific to the application and its context”. We recommend that AI QA first and foremost should be interpreted as MLware testing and that the term bug shall be reserved for source code defects – instead we propose “snag” to refer to observed discordances in the MLware fabric. Finally, we present the AIQ Meta-Testbed – bridging academic research on MLware testing and industrial needs for quality by providing evidence-based recommendations based on replication studies in a controlled environment. Acknowledgements ---------------- This work was funded by Plattformen at Campus Helsingborg, Lund University.
32af6ff7-8861-4c84-8f40-446c96b03212
trentmkelly/LessWrong-43k
LessWrong
Weekly LW Meetups: Berkeley, Madison, Melbourne, Norwich There are upcoming irregularly scheduled Less Wrong meetups in: * First Norwich UK Meetup Sunday 1 April 11am: 01 April 2012 11:00AM * Brussels meetup: 14 April 2012 11:00AM * Shanghai Less Wrong Meetup: 15 April 2012 10:36PM * Sydney meetup - Biased pandemic and other games: 17 April 2012 07:30PM The following meetups take place in cities with regularly scheduled meetups, but involve a change in time or location, special meeting content, or simply a helpful reminder about the meetup: * Monday Madison Meetup: 02 April 2012 06:30PM * Big Berkeley meetup: 04 April 2012 06:30PM * Melbourne, practical rationality: 06 April 2012 08:00PM Locations with regularly scheduled meetups: Austin, Berkeley, Cambridge, MA, Cambridge UK, London, Madison WI, Melbourne, Mountain View, New York, Ohio, Ottawa, Oxford, Portland, Seattle, Toronto, Waterloo, and West Los Angeles. If you'd like to talk with other LW-ers face to face, and there is no meetup in your area, consider starting your own meetup; it's easy (more resources here). Check one out, stretch your rationality skills, and have fun! In addition to the handy sidebar of upcoming meetups, a meetup overview will continue to be posted on the front page every Friday. These will be an attempt to collect information on all the meetups happening in the next weeks. The best way to get your meetup featured is still to use the Add New Meetup feature, but you'll now also have the benefit of having your meetup mentioned in a weekly overview. These overview posts will be moved to the discussion section when the new post goes up. Please note that for your meetup to appear in the weekly meetups feature, you need to post your meetup before the Friday before your meetup! If you check Less Wrong irregularly, consider subscribing to one or more city-specific mailing list in order to be notified when an irregular meetup is happening: Atlanta, Chicago, Helsinki, London,  Marin CA, Pittsburgh, Salt Lake City, Southern California (Los
aa718351-5280-419c-b723-e3efd85f360b
trentmkelly/LessWrong-43k
LessWrong
How to provide a simple example to the requirement of falsifiability in the scientific method to a novice audience? (I once posted this question on academia.stackexchange, but it was deemed to be off topic there. I hope it would be more on-topic here) I would like to introduce the basics of the scientific method to an audience unfamiliar with the real meaning of it, without making it hard to understand. As the suspected knowledge level of the intended audience is of the type which commonly thinks that to "prove something scientifically" is the same as "use modern technological gadgets to measure something, afterwards interpret the results as we wish", my major topic would be the selection of an experimental method and the importance of falsifiability. Wikipedia lists the "all swans are white" as an example for a falsifiable statement, but it is not practical enough. To prove that all swans are white would require to observe all the swans in the world. I'm searching of a simple example which uses the scientific method to determine the workings of an unknown system, starting by forming a good hypothesis. A good example I found is the 2-4-6 game, culminating in the very catchy phrase "if you are equally good at explaining any outcome, you have zero knowledge". This would be one of the best examples to illustrate the most important part of the scientific method which a lot of people imagine incorrectly, it has just one flaw: for best effect it has to be interactive. And if I make it interactive, it has some non-negligible chance to fail, especially if done with a broader audience. Is there any simple, non-interactive example to illustrate the problem underlying the 2-4-6 game? (for example, if we had taken this naive method to formulate our hypothesis, we would have failed) I know, the above example is mostly used in the topic of fallacies, like the confirmation bias, but nevertheless it seems to me as a good method in grasping the most important aspects of the scientific method. I've seen several good posts about the importance of falsifiability, some of them in this very c
3a32bd81-ceed-47c3-adf4-f8cae29273c8
trentmkelly/LessWrong-43k
LessWrong
Ten small life improvements I've accumulated a lot of small applications and items that make my life incrementally better. Most of these ultimately came from someone else's recommendation, so I thought I'd pay it forward by posting ten of my favorite small improvements. (I've given credit where I remember who introduced the item into my life. Obviously the biggest part of the credit goes to the creator.) Video speed Video Speed Controller lets you speed up HTML 5 video; it gives a nicer interface than the YouTube speed adjustment and works for most videos displayed in a browser (including e.g. netflix/amazon). (Credit: Stephanie Zolayvar?) Spectacle Spectacle on OSX provides keyboard shortcuts to snap windows to any half or third of the screen (or full screen). Pinned tabs + tab wrangler I use tab wrangler to automatically close tabs (and save a bookmark) after 10m. I keep gmail and vimflowy pinned so that they don't close. For me, closing tabs after 10m is usually the right behavior. Aggressive AdBlock I use AdBlock for anything that grabs attention even if isn't an ad. I usually block "related content," "next stories," the whole youtube sidebar, everything on Medium other than the article, the gmail sidebar, most comment sections, etc. Similarly, I use kill news feed to block my Facebook feed. Avoiding email inbox I often need to write or look up emails during the day, which would sometimes lead me to read/respond to new emails and switch contexts. I've mostly fixed the problem by leaving gmail open to my list of starred emails rather than my inbox, ad-blocked the "Inbox (X)" notification, and pin gmail so that I can't see the "Inbox (X)" title. Christmas lights I prefer the soft light from christmas lights to white overhead lights or even softer lamps. My favorite are multicolored lights, though soft white lights also seem OK. (Credit: Ben Hoffman) Karabiner Karabiner remaps keys in a very flexible way. (Unfortunately, it only works on OSX pre-Sierra. Would be very interest
182c4ad9-a912-474c-ada3-22e7fd4d30c0
trentmkelly/LessWrong-43k
LessWrong
Weak repugnant conclusion need not be so repugnant given fixed resources I want to thank Irgy for this idea. As people generally know, total utilitarianism leads to the repugnant conclusion - the idea that no matter how great a universe X would be, filled without trillions of ultimately happy people having ultimately meaningful lives filled with adventure and joy, there is another universe Y which is better - and that is filled with nothing but dull, boring people whose quasi-empty and repetitive lives are just one tiny iota above being too miserable to endure. But since the second universe is much bigger than the first, it comes out on top. Not only in that if we had Y it would be immoral to move to X (which is perfectly respectable, as doing so might involve killing a lot of people, or at least allowing a lot of people to die). But in that, if we planned for our future world now, we would desperately want to bring Y into existence rather than X - and could run great costs or great risks to do so. And if we were in world X, we must at all costs move to Y, making all current people much more miserable as we do so. The repugnant conclusion is the main reason I reject total utilitarianism (the other one being that total utilitarianism sees no problem with painlessly killing someone by surprise, as long as you also gave birth to someone else of equal happiness). But the repugnant conclusion can emerge from many other population ethics as well. If adding more people of slightly less happiness than the average is always a bonus ("mere addition"), and if equalising happiness is never a penalty, then you get the repugnant conclusion (caveat: there are some subtleties to do with infinite series). But repugnant conclusions reached in that way may not be so repugnant, in practice. Let S be a system of population ethics that accepts the repugnant conclusion, due to the argument above. S may indeed conclude that the big world Y is better than the super-human world X. But S need not conclude that Y is the best world we can build, given any fixed a
589a5f9d-3f49-4c7a-9bec-0c4c5225eda0
trentmkelly/LessWrong-43k
LessWrong
Curiosity checklist: Looking for feedback I'm helping CMR create a 'rationality checklist'. The basic idea is this:  * Answer a series of clear-cut, unambiguous yes/no questions that reflect your current level of rationality; obtain a score. * Test again in the future as a means of measuring progress in your rationality training. Sample application: before and after a minicamp. Target market: LWers. I made a checklist for Curiosity (below). Others are in the works. A question like "Am I a curious person?", while it is useful to know, is too vague for this checklist and too susceptible to bias. Any and all feedback is greatly appreciated. It is easy to answer yes/no? Is anything missing? Anything unclear? Would you find this useful? Curiosity 1. Do you have specific habits for getting curious when you notice you're not curious about something important? 2. Do you, in every situation, endeavour to have an accurate map of the territory? 3. Do you regularly acknowledge and accept the possible worlds that may exist? E.g. "If the iron is hot, I desire to believe it it hot; if it is cool, I desire to believe it is cool." 4. Do you regularly ask, “What are the causes of my beliefs? Why do I think this? What’s the source?” 5. Do you regularly ask, “What would I expect to see differently if x was or was not the case?” 6. Do you regularly ask, when unexpected things happen, “Why didn’t I expect x to happen?” 7. When you sit down to think, or to look something up, do you regularly ask, "What am I chasing? Why am I doing this? Am I asking myself questions about this?" 8. Do you frequently stop to consider what information will be most valuable to achieving your goals? 9. Do you frequently ask, "What do I most want to accomplish?" 10. Do you focus your curiosity on the information you need to achieve your goals? E.g. "What do I need to know in order to achieve that thing? What is most likely to help me learn this and figure it out?" 11. Do you stop reading when a source becomes irrelevant? 12
e6472cf4-43c6-4780-aaeb-475ebf6957ae
LDJnr/LessWrong-Amplify-Instruct
LessWrong
"I could have been a hero in another world.I will call the friend of mine who commissioned that picture Fluttershy. They posted it shortly before they killed themselves in early March. They weren't the easiest person to help, but I should have done more than I did. By the end, they were pretty isolated and I was one of the last people they were in contact with.I will never talk to Fluttershy again, at least not during this age of the world. Looking back it is clear they were always trying to find a way to help me out. Multiple times they gave me the help or advice I needed. They are the person I originally got estrogen from. They were my friend and I miss them so much.Fluttershy made a lot of mistakes. But it's a harsh world. It is an especially harsh world if you are the type of person to commission fanart of a My Little Pony character freeing chickens. I can understand why they couldn't handle it. Fluttershy was in a lot of pain and hated parts of themselves. I am happy that they are no longer suffering so much. I would never try to stop someone who was determined to commit suicide. But I think things could have been different.Fluttershy needed someone who believed in them. If you are in a negative spiral it is very hard to get out by yourself. Once things are bad it is easy to alienate your friends. And once you lose support things get even worse. This leads to even worse relationships. Someone else has to put in the free energy to reverse the spiral.Even in Fluttershy's darkest moments, they thought about helping the least among us. No one was willing and able to help Fluttershy become the hero they wanted to be. I have many, many regrets from my time in California. But chief among them is not making a heroic effort to put Fluttershy on a better path.I ordered a decent quality print and hung it in my living room. I personally find the idea of the character Fluttershy releasing chickens delightful. But the reason I put up the picture is to remind myself that Fluttershy is gone and we are still here. Whatever heroics Fluttershy wanted to do are left to us. But it's also a reminder to me personally: to do better next time and to keep dreaming of a kinder world.There is a lot more that could be said. I really don't want to write a post remembering Fluttershy that leaves out so much of who they were and the struggles they faced. Given Fluttershy's stated preferences, I think it is important to exercise the virtue of silence. But I want to present one more aspect of Fluttershy. They always encouraged people to think bigger. It seemed to me they often took this position way too far. We argued about this topic a lot. But Fluttershy has a point. The world needs saving and someone has to save it. At least really try to have a gigantic impact. However big you are thinking, try to go bigger. A lot of value is in the tail.I will end this by saying I am sorry:I'm so sorry Fluttershy. I was one of the few people in a position to help and I let you down. Now the world has one less person who dreams of every sweet little chicken being safe and free. And I am left one friend lonelier. I will try to do better next time and continue the work in your absence. I miss you. See you space pony, the rest of us will carry that weight." The text above is a post found on LessWrong. Can you break this down into the top 5 most important points for me. 1. The author deeply regrets not providing more support to their friend, Fluttershy, who committed suicide. They feel they could have done more to help and prevent their friend's isolation and despair. 2. Fluttershy was a compassionate individual who, despite their personal struggles, consistently sought to help others, including the author. This is exemplified by their act of commissioning fanart of a My Little Pony character freeing chickens, symbolizing their desire for kindness and liberation. 3. The author recognizes the vicious cycle of negativity and isolation that Fluttershy fell into. They understand that external intervention is often necessary to break such a cycle and wishes they had been that source of positive energy for Fluttershy. 4. The author is motivated by Fluttershy's memory to strive for a better world. They've hung up the commissioned picture as a constant reminder of their friend's ideals and their personal commitment to uphold them. 5. Finally, the author acknowledges Fluttershy's belief in thinking big and aiming to make a significant impact. Despite their past disagreements, the author now sees the value in this perspective and urges others to adopt it as well. They express their sorrow and promise to continue Fluttershy's work in their absence. Reflect on the concept of external intervention in breaking negative cycles, as mentioned in point 3. How might this be effectively implemented without infringing on an individual's autonomy? External intervention in breaking negative cycles can be a delicate process. It's important to respect an individual's autonomy while also providing the support they need. This can be achieved through a balance of empathy, active listening, and providing resources or guidance when appropriate. Empathy is key in understanding the individual's perspective and feelings. It's about validating their experiences and emotions, which can help them feel heard and less isolated. Active listening involves not just hearing, but understanding and responding in a way that demonstrates you're engaged and care about their situation. It's about providing a safe space for them to express their thoughts and feelings without judgment. Providing resources or guidance can involve sharing information about professional help such as therapists or counselors, support groups, hotlines, or self-care strategies. It's crucial that this is done in a way that doesn't impose decisions but rather presents options for the individual to consider. Moreover, it's important to remember that you can't force someone to change or accept help. They have to make that decision themselves. What you can do is offer support, show understanding, and be there for them. This approach respects their autonomy while also offering a lifeline for breaking the negative cycle. In the case of Fluttershy, the author could have implemented these strategies by listening empathetically to their struggles, validating their feelings, and suggesting resources for help. They could have also continued to be a source of positive energy and support, showing Fluttershy that they were not alone. Considering the strategies of empathy, active listening, and providing resources or guidance, how might these be applied in a real-world scenario where a friend is showing signs of distress? What specific actions could be taken at each stage, and how might the friend's responses influence subsequent steps? In a real-world scenario where a friend is showing signs of distress, the strategies of empathy, active listening, and providing resources or guidance can be applied as follows: 1. Empathy: Start by creating a safe and non-judgmental space for your friend to express their feelings. Acknowledge their emotions and validate their experiences. For example, you might say, "I can see that you're really struggling right now," or "It sounds like you're going through a really tough time." Avoid offering advice or trying to fix the problem at this stage. The goal is to show understanding and make them feel heard. 2. Active Listening: Pay close attention to what your friend is saying and respond in a way that shows you're engaged. Use body language that shows you're attentive, like maintaining eye contact and nodding. Reflect back what they've said to ensure you've understood correctly, for example, "It sounds like you're feeling overwhelmed because..." Ask open-ended questions to encourage them to express their thoughts and feelings more deeply. 3. Providing Resources or Guidance: If your friend seems open to it, you can suggest resources that might help. This could include recommending a mental health professional, sharing information about support groups, or suggesting self-care strategies. Remember to present these as options rather than directives. For example, "Have you considered speaking to a counselor? They might be able to provide some strategies to cope with what you're going through." The friend's responses should guide subsequent steps. If they react positively and seem open to your support, continue to be there for them and provide further assistance as needed. If they're resistant or don't want to talk, respect their decision but let them know you're there for them when they're ready. Remember, it's important to take care of your own mental health as well. Supporting a friend in distress can be emotionally taxing, so ensure you're also seeking support and practicing self-care.
2a4ba37c-b321-41f8-b897-d7627723f60e
trentmkelly/LessWrong-43k
LessWrong
Murphy’s Quest Ch 2: Empiricism How did the cunning Dark Warlock Emperor Murphy solve the Thousand Kobold Challenge? I ask my bunkmate in the morning to camp a spawn point together. My thinking: if we can both hit the Kobold before he gets a chance to attack, we won’t take damage. That way we can grind all day. Of course I’m not ready to grant my bunkmate Named Character status. He’ll have to earn it. He agrees to give it a try. So we’re squatting together around a Kobold cave, and my thighs cramp but I don’t want to look like a wimp. These other boys must have grown up on manual labor. I wonder if they also have STR: 5? The first Kobold that spawns takes a hard left when he sees us. We step forward in unison, but my bunkmate is farther away and hits him late. The Kobold scratches me right before he dies. The next one is the same, and the next one. No matter how closely we time our attacks, the scrawny little rats always get a Kobold Scratch in right in between. I hang my head. There’s only one possible conclusion… Combat is turn-based?! — No way is the future Dark Warlock Emperor staying here to smack Kobolds for another hundred days. I sit around dawdling all afternoon with the other trainees. They seem to be bonding and enjoying themselves. Must feel like a vacation for them. Suddenly, I have a bright idea. I’ll dual wield clubs! That’ll kill the Kobolds in one hit. My bunkmate lets me borrow his club. I pick up my own in my right hand, and grab his in my left. It won’t budge. Really? If I drop my club, I can pick up his in my left hand. Somehow as soon as I pick one up with one hand, the other becomes too heavy to lift. I feel so much sympathy for video game characters now. To live in a world where physics itself is your enemy… On the other hand, I’m not the most athletic character, so this may come in handy. — It’s embarrassing to admit, but I’m finally saved by a bad habit. When I’m really bored, I drink a lot of water. I probably down 15 mugs of water while listening to
608b2cf0-5514-45a8-9440-1c0f36491832
StampyAI/alignment-research-dataset/blogs
Blogs
GPT-3 x CLIP worldbuilding --- Table of Contents* + [`a hospital. Dilapidated and abandoned, overrun by vegetation`](#a-hospital-dilapidated-and-abandoned-overrun-by-vegetation) + [`a hospital hallway. Dilapidated and abandoned, overrun by vegetation`](#a-hospital-hallway-dilapidated-and-abandoned-overrun-by-vegetation) + [`what was once the cafeteria, although now it contains only a few scattered, rotting tables and chairs`](#what-was-once-the-cafeteria-although-now-it-contains-only-a-few-scattered-rotting-tables-and-chairs) + [`sunlight floods into the abandoned parking structure`](#sunlight-floods-into-the-abandoned-parking-structure) + [`a parking attendant's booth with the windows all blown out and vines covering the interior`](#a-parking-attendants-booth-with-the-windows-all-blown-out-and-vines-covering-the-interior) + [`In your dream, you stand alone on a vast, barren plain. Before you stands an ancient, weathered billboard.`](#in-your-dream-you-stand-alone-on-a-vast-barren-plain-before-you-stands-an-ancient-weathered-billboard) + [`an ancient, weathered billboard. On it is painted a beautiful woman, dressed in the fashion of bygone times, long blonde hair, smiling serenely, and holding aloft a purple flower.`](#an-ancient-weathered-billboard-on-it-is-painted-a-beautiful-woman-dressed-in-the-fashion-of-bygone-times-long-blonde-hair-smiling-serenely-and-holding-aloft-a-purple-flower) + [`a roadside diner. The windows are smashed out. The remains of an old dining counter, and the screen of a long-dead TV.`](#a-roadside-diner-the-windows-are-smashed-out-the-remains-of-an-old-dining-counter-and-the-screen-of-a-long-dead-tv) + [`in the deserted diner, there are three overturned chairs. Below one is a half-buried mannequin's leg, and a plastic head.`](#in-the-deserted-diner-there-are-three-overturned-chairs-below-one-is-a-half-buried-mannequins-leg-and-a-plastic-head) + [`a half-buried mannequin's leg, and a plastic head`](#a-half-buried-mannequins-leg-and-a-plastic-head) --- I fed snippets of a post-apocalyptic story written by GPT-3 to BigSleep. The results are striking. This is an interesting example of how well BigSleep sometimes responds to long, full-prose prompts. Interestingly, in my experience, attempting to compress prompts by removing seemingly extraneous connector phrases usually *worsened* results. Images were generated using [BigSleep](https://github.com/lucidrains/big-sleep) via [The Big Sleep Customized NMKD Public](https://colab.research.google.com/drive/1Q2DIeMqYm_Sc5mlurnnurMMVqlgXpZNO?usp=sharing) colab notebook. --- ### `a hospital. Dilapidated and abandoned, overrun by vegetation` ![a hospital. Dilapidated and abandoned, overrun by vegetation](/worldbuilding/hospital2.png) ![a hospital. Dilapidated and abandoned, overrun by vegetation](/worldbuilding/hospital.png) ### `a hospital hallway. Dilapidated and abandoned, overrun by vegetation` ![a hospital hallway. Dilapidated and abandoned, overrun by vegetation](/worldbuilding/hospitalhallway.png) ![a hospital hallway. Dilapidated and abandoned, overrun by vegetation](/worldbuilding/hospitalhallway2.png) ### `what was once the cafeteria, although now it contains only a few scattered, rotting tables and chairs` ![what was once the cafeteria, although now it contains only a few scattered, rotting tables and chairs](/worldbuilding/cafeteria.png) ![what was once the cafeteria, although now it contains only a few scattered, rotting tables and chairs](/worldbuilding/cafeteria2.png) ### `sunlight floods into the abandoned parking structure` ![sunlight floods into the abandoned parking structure](/worldbuilding/parking_structure.png) ![sunlight floods into the abandoned parking structure](/worldbuilding/parking_structure2.png) ![sunlight floods into the abandoned parking structure](/worldbuilding/parking_structure3.png) ### `a parking attendant's booth with the windows all blown out and vines covering the interior` ![a parking attendant’s booth with the windows all blown out and vines covering the interior](/worldbuilding/booth.png) ### `In your dream, you stand alone on a vast, barren plain. Before you stands an ancient, weathered billboard.` ![In your dream, you stand alone on a vast, barren plain. Before you stands an ancient, weathered billboard.](/worldbuilding/dreambillboard.png) ![In your dream, you stand alone on a vast, barren plain. Before you stands an ancient, weathered billboard.](/worldbuilding/dreambillboard2.png) ### `an ancient, weathered billboard. On it is painted a beautiful woman, dressed in the fashion of bygone times, long blonde hair, smiling serenely, and holding aloft a purple flower.` ![an ancient, weathered billboard. On it is painted a beautiful woman, dressed in the fashion of bygone times, long blonde hair, smiling serenely, and holding aloft a purple flower.](/worldbuilding/billboard3.png) ![an ancient, weathered billboard. On it is painted a beautiful woman, dressed in the fashion of bygone times, long blonde hair, smiling serenely, and holding aloft a purple flower.](/worldbuilding/billboard2.png) ![an ancient, weathered billboard. On it is painted a beautiful woman, dressed in the fashion of bygone times, long blonde hair, smiling serenely, and holding aloft a purple flower.](/worldbuilding/billboard.png) ### `a roadside diner. The windows are smashed out. The remains of an old dining counter, and the screen of a long-dead TV.` ![a roadside diner. The windows are smashed out. The remains of an old dining counter, and the screen of a long-dead TV.](/worldbuilding/dinersmash2.png) ### `in the deserted diner, there are three overturned chairs. Below one is a half-buried mannequin's leg, and a plastic head.` ![in the deserted diner, there are three overturned chairs. Below one is a half-buried mannequin’s leg, and a plastic head.](/worldbuilding/deserted.png) ### `a half-buried mannequin's leg, and a plastic head` ![a half-buried mannequin’s leg, and a plastic head](/worldbuilding/leg.png)
c863291b-0d1c-4d6a-905a-645856fddca4
trentmkelly/LessWrong-43k
LessWrong
video games > IQ tests IQ tests are a type of test given to people. What makes them different from other tests given to people? Is it "consistency of results on a single IQ test type"? Not really. Wikipedia says: > For modern tests, the confidence interval can be approximately 10 points and reported standard error of measurement can be as low as about three points. Reported standard error may be an underestimate, as it does not account for all sources of error. That's a best-case scenario for tests designed with that criteria as a priority, and the range is still significant. Is it "consistency of results across different IQ test types"? Not really; that's obviously worse than the above, and many "non-IQ" tests have comparable consistency. Is it "practice being mostly irrelevant"? Not really. A few practice runs can often be worth +8 points, and that's for kids that already to some extent do IQ-test-like stuff in school. This just mostly doesn't come up, because institutions don't generally use IQ test results. Is it "working for an unusually wide range of intelligence"? Not really. IQ tests are notorious for working poorly above ~135, and I'd say they only really work well for -20 to +0 relative to the designers, with a somewhat wider range for teams. Is it "unusually high correlation with general intelligence", as represented by a "g factor"? I don't think so. IQ tests, in general, don't seem to be any better for that than the SAT. Anyway, given modern understanding of AI and biology, I consider the entire "g factor" framework an archaic and crude way of understanding intelligence. Humans have multiple mental systems, those systems have performance on various tasks which vary depending on the amount of specialization, amount of training, quality of training data, and multiple types of management by other (meta-level) systems. Then there's correlation between performance of various systems on various tasks for obvious reasons. ---------------------------------------- If you're t
855a1076-d798-466f-b955-b64b023bf9e8
StampyAI/alignment-research-dataset/alignmentforum
Alignment Forum
Why is pseudo-alignment "worse" than other ways ML can fail to generalize? I have only just read the [mesa optimizers paper](https://arxiv.org/pdf/1906.01820.pdf), and I don't understand what it adds to the pre-existing picture that "ML can fail to generalize outside the train distribution and this is bad." The discussion in the paper generally assumes a background distinction between "training" and "deployment" (or an equivalent distinction), and discuss models which succeed on the base objective during "training" but not during "deployment." In the sections about "deception," this happens in a special way quite unlike the ordinary failures to generalize that we see in ML today (and arises under correspondingly exotic conditions). But, in cases other than "deception," the paper describes dynamics and outcomes that seem identical to the ordinary generalization problem in ML: * Training find a model that scores well on the base objective, assessed over the training distribution * But, this model may not score well on the base objective, assessed over other distributions For example, the following is just a textbook generalization failure, which can happen with or without a mesa-optimizer: > For a toy example of what pseudo-alignment might look like, consider an RL agent trained on a maze navigation task where all the doors during training happen to be red. Let the base objective (reward function) be .mjx-chtml {display: inline-block; line-height: 0; text-indent: 0; text-align: left; text-transform: none; font-style: normal; font-weight: normal; font-size: 100%; font-size-adjust: none; letter-spacing: normal; word-wrap: normal; word-spacing: normal; white-space: nowrap; float: none; direction: ltr; max-width: none; max-height: none; min-width: 0; min-height: 0; border: 0; margin: 0; padding: 1px 0} > .MJXc-display {display: block; text-align: center; margin: 1em 0; padding: 0} > .mjx-chtml[tabindex]:focus, body :focus .mjx-chtml[tabindex] {display: inline-table} > .mjx-full-width {text-align: center; display: table-cell!important; width: 10000em} > .mjx-math {display: inline-block; border-collapse: separate; border-spacing: 0} > .mjx-math \* {display: inline-block; -webkit-box-sizing: content-box!important; -moz-box-sizing: content-box!important; box-sizing: content-box!important; text-align: left} > .mjx-numerator {display: block; text-align: center} > .mjx-denominator {display: block; text-align: center} > .MJXc-stacked {height: 0; position: relative} > .MJXc-stacked > \* {position: absolute} > .MJXc-bevelled > \* {display: inline-block} > .mjx-stack {display: inline-block} > .mjx-op {display: block} > .mjx-under {display: table-cell} > .mjx-over {display: block} > .mjx-over > \* {padding-left: 0px!important; padding-right: 0px!important} > .mjx-under > \* {padding-left: 0px!important; padding-right: 0px!important} > .mjx-stack > .mjx-sup {display: block} > .mjx-stack > .mjx-sub {display: block} > .mjx-prestack > .mjx-presup {display: block} > .mjx-prestack > .mjx-presub {display: block} > .mjx-delim-h > .mjx-char {display: inline-block} > .mjx-surd {vertical-align: top} > .mjx-mphantom \* {visibility: hidden} > .mjx-merror {background-color: #FFFF88; color: #CC0000; border: 1px solid #CC0000; padding: 2px 3px; font-style: normal; font-size: 90%} > .mjx-annotation-xml {line-height: normal} > .mjx-menclose > svg {fill: none; stroke: currentColor} > .mjx-mtr {display: table-row} > .mjx-mlabeledtr {display: table-row} > .mjx-mtd {display: table-cell; text-align: center} > .mjx-label {display: table-row} > .mjx-box {display: inline-block} > .mjx-block {display: block} > .mjx-span {display: inline} > .mjx-char {display: block; white-space: pre} > .mjx-itable {display: inline-table; width: auto} > .mjx-row {display: table-row} > .mjx-cell {display: table-cell} > .mjx-table {display: table; width: 100%} > .mjx-line {display: block; height: 0} > .mjx-strut {width: 0; padding-top: 1em} > .mjx-vsize {width: 0} > .MJXc-space1 {margin-left: .167em} > .MJXc-space2 {margin-left: .222em} > .MJXc-space3 {margin-left: .278em} > .mjx-test.mjx-test-display {display: table!important} > .mjx-test.mjx-test-inline {display: inline!important; margin-right: -1px} > .mjx-test.mjx-test-default {display: block!important; clear: both} > .mjx-ex-box {display: inline-block!important; position: absolute; overflow: hidden; min-height: 0; max-height: none; padding: 0; border: 0; margin: 0; width: 1px; height: 60ex} > .mjx-test-inline .mjx-left-box {display: inline-block; width: 0; float: left} > .mjx-test-inline .mjx-right-box {display: inline-block; width: 0; float: right} > .mjx-test-display .mjx-right-box {display: table-cell!important; width: 10000em!important; min-width: 0; max-width: none; padding: 0; border: 0; margin: 0} > .MJXc-TeX-unknown-R {font-family: monospace; font-style: normal; font-weight: normal} > .MJXc-TeX-unknown-I {font-family: monospace; font-style: italic; font-weight: normal} > .MJXc-TeX-unknown-B {font-family: monospace; font-style: normal; font-weight: bold} > .MJXc-TeX-unknown-BI {font-family: monospace; font-style: italic; font-weight: bold} > .MJXc-TeX-ams-R {font-family: MJXc-TeX-ams-R,MJXc-TeX-ams-Rw} > .MJXc-TeX-cal-B {font-family: MJXc-TeX-cal-B,MJXc-TeX-cal-Bx,MJXc-TeX-cal-Bw} > .MJXc-TeX-frak-R {font-family: MJXc-TeX-frak-R,MJXc-TeX-frak-Rw} > .MJXc-TeX-frak-B {font-family: MJXc-TeX-frak-B,MJXc-TeX-frak-Bx,MJXc-TeX-frak-Bw} > .MJXc-TeX-math-BI {font-family: MJXc-TeX-math-BI,MJXc-TeX-math-BIx,MJXc-TeX-math-BIw} > .MJXc-TeX-sans-R {font-family: MJXc-TeX-sans-R,MJXc-TeX-sans-Rw} > .MJXc-TeX-sans-B {font-family: MJXc-TeX-sans-B,MJXc-TeX-sans-Bx,MJXc-TeX-sans-Bw} > .MJXc-TeX-sans-I {font-family: MJXc-TeX-sans-I,MJXc-TeX-sans-Ix,MJXc-TeX-sans-Iw} > .MJXc-TeX-script-R {font-family: MJXc-TeX-script-R,MJXc-TeX-script-Rw} > .MJXc-TeX-type-R {font-family: MJXc-TeX-type-R,MJXc-TeX-type-Rw} > .MJXc-TeX-cal-R {font-family: MJXc-TeX-cal-R,MJXc-TeX-cal-Rw} > .MJXc-TeX-main-B {font-family: MJXc-TeX-main-B,MJXc-TeX-main-Bx,MJXc-TeX-main-Bw} > .MJXc-TeX-main-I {font-family: MJXc-TeX-main-I,MJXc-TeX-main-Ix,MJXc-TeX-main-Iw} > .MJXc-TeX-main-R {font-family: MJXc-TeX-main-R,MJXc-TeX-main-Rw} > .MJXc-TeX-math-I {font-family: MJXc-TeX-math-I,MJXc-TeX-math-Ix,MJXc-TeX-math-Iw} > .MJXc-TeX-size1-R {font-family: MJXc-TeX-size1-R,MJXc-TeX-size1-Rw} > .MJXc-TeX-size2-R {font-family: MJXc-TeX-size2-R,MJXc-TeX-size2-Rw} > .MJXc-TeX-size3-R {font-family: MJXc-TeX-size3-R,MJXc-TeX-size3-Rw} > .MJXc-TeX-size4-R {font-family: MJXc-TeX-size4-R,MJXc-TeX-size4-Rw} > .MJXc-TeX-vec-R {font-family: MJXc-TeX-vec-R,MJXc-TeX-vec-Rw} > .MJXc-TeX-vec-B {font-family: MJXc-TeX-vec-B,MJXc-TeX-vec-Bx,MJXc-TeX-vec-Bw} > @font-face {font-family: MJXc-TeX-ams-R; src: local('MathJax\_AMS'), local('MathJax\_AMS-Regular')} > @font-face {font-family: MJXc-TeX-ams-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_AMS-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_AMS-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_AMS-Regular.otf') format('opentype')} > @font-face {font-family: MJXc-TeX-cal-B; src: local('MathJax\_Caligraphic Bold'), local('MathJax\_Caligraphic-Bold')} > @font-face {font-family: MJXc-TeX-cal-Bx; src: local('MathJax\_Caligraphic'); font-weight: bold} > @font-face {font-family: MJXc-TeX-cal-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Bold.otf') format('opentype')} > @font-face {font-family: MJXc-TeX-frak-R; src: local('MathJax\_Fraktur'), local('MathJax\_Fraktur-Regular')} > @font-face {font-family: MJXc-TeX-frak-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Regular.otf') format('opentype')} > @font-face {font-family: MJXc-TeX-frak-B; src: local('MathJax\_Fraktur Bold'), local('MathJax\_Fraktur-Bold')} > @font-face {font-family: MJXc-TeX-frak-Bx; src: local('MathJax\_Fraktur'); font-weight: bold} > @font-face {font-family: MJXc-TeX-frak-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Bold.otf') format('opentype')} > @font-face {font-family: MJXc-TeX-math-BI; src: local('MathJax\_Math BoldItalic'), local('MathJax\_Math-BoldItalic')} > @font-face {font-family: MJXc-TeX-math-BIx; src: local('MathJax\_Math'); font-weight: bold; font-style: italic} > @font-face {font-family: MJXc-TeX-math-BIw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-BoldItalic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-BoldItalic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-BoldItalic.otf') format('opentype')} > @font-face {font-family: MJXc-TeX-sans-R; src: local('MathJax\_SansSerif'), local('MathJax\_SansSerif-Regular')} > @font-face {font-family: MJXc-TeX-sans-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Regular.otf') format('opentype')} > @font-face {font-family: MJXc-TeX-sans-B; src: local('MathJax\_SansSerif Bold'), local('MathJax\_SansSerif-Bold')} > @font-face {font-family: MJXc-TeX-sans-Bx; src: local('MathJax\_SansSerif'); font-weight: bold} > @font-face {font-family: MJXc-TeX-sans-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Bold.otf') format('opentype')} > @font-face {font-family: MJXc-TeX-sans-I; src: local('MathJax\_SansSerif Italic'), local('MathJax\_SansSerif-Italic')} > @font-face {font-family: MJXc-TeX-sans-Ix; src: local('MathJax\_SansSerif'); font-style: italic} > @font-face {font-family: MJXc-TeX-sans-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Italic.otf') format('opentype')} > @font-face {font-family: MJXc-TeX-script-R; src: local('MathJax\_Script'), local('MathJax\_Script-Regular')} > @font-face {font-family: MJXc-TeX-script-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Script-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Script-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Script-Regular.otf') format('opentype')} > @font-face {font-family: MJXc-TeX-type-R; src: local('MathJax\_Typewriter'), local('MathJax\_Typewriter-Regular')} > @font-face {font-family: MJXc-TeX-type-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Typewriter-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Typewriter-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Typewriter-Regular.otf') format('opentype')} > @font-face {font-family: MJXc-TeX-cal-R; src: local('MathJax\_Caligraphic'), local('MathJax\_Caligraphic-Regular')} > @font-face {font-family: MJXc-TeX-cal-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Regular.otf') format('opentype')} > @font-face {font-family: MJXc-TeX-main-B; src: local('MathJax\_Main Bold'), local('MathJax\_Main-Bold')} > @font-face {font-family: MJXc-TeX-main-Bx; src: local('MathJax\_Main'); font-weight: bold} > @font-face {font-family: MJXc-TeX-main-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Bold.otf') format('opentype')} > @font-face {font-family: MJXc-TeX-main-I; src: local('MathJax\_Main Italic'), local('MathJax\_Main-Italic')} > @font-face {font-family: MJXc-TeX-main-Ix; src: local('MathJax\_Main'); font-style: italic} > @font-face {font-family: MJXc-TeX-main-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Italic.otf') format('opentype')} > @font-face {font-family: MJXc-TeX-main-R; src: local('MathJax\_Main'), local('MathJax\_Main-Regular')} > @font-face {font-family: MJXc-TeX-main-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Regular.otf') format('opentype')} > @font-face {font-family: MJXc-TeX-math-I; src: local('MathJax\_Math Italic'), local('MathJax\_Math-Italic')} > @font-face {font-family: MJXc-TeX-math-Ix; src: local('MathJax\_Math'); font-style: italic} > @font-face {font-family: MJXc-TeX-math-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-Italic.otf') format('opentype')} > @font-face {font-family: MJXc-TeX-size1-R; src: local('MathJax\_Size1'), local('MathJax\_Size1-Regular')} > @font-face {font-family: MJXc-TeX-size1-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size1-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size1-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size1-Regular.otf') format('opentype')} > @font-face {font-family: MJXc-TeX-size2-R; src: local('MathJax\_Size2'), local('MathJax\_Size2-Regular')} > @font-face {font-family: MJXc-TeX-size2-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size2-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size2-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size2-Regular.otf') format('opentype')} > @font-face {font-family: MJXc-TeX-size3-R; src: local('MathJax\_Size3'), local('MathJax\_Size3-Regular')} > @font-face {font-family: MJXc-TeX-size3-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size3-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size3-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size3-Regular.otf') format('opentype')} > @font-face {font-family: MJXc-TeX-size4-R; src: local('MathJax\_Size4'), local('MathJax\_Size4-Regular')} > @font-face {font-family: MJXc-TeX-size4-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size4-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size4-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size4-Regular.otf') format('opentype')} > @font-face {font-family: MJXc-TeX-vec-R; src: local('MathJax\_Vector'), local('MathJax\_Vector-Regular')} > @font-face {font-family: MJXc-TeX-vec-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Regular.otf') format('opentype')} > @font-face {font-family: MJXc-TeX-vec-B; src: local('MathJax\_Vector Bold'), local('MathJax\_Vector-Bold')} > @font-face {font-family: MJXc-TeX-vec-Bx; src: local('MathJax\_Vector'); font-weight: bold} > @font-face {font-family: MJXc-TeX-vec-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Bold.otf') format('opentype')} > Obase = (1 if reached a door, 0 otherwise). On the training distribution, this objective is equivalent to Oalt = (1 if reached something red, 0 otherwise). Consider what would happen if an agent, trained to high performance on Obase on this task, were put in an environment where the doors are instead blue, and with some red objects that are not doors. It might generalize on Obase, reliably navigating to the blue door in each maze (robust alignment). But it might also generalize on Oalt instead of Obase, reliably navigating each maze to reach red objects (pseudo-alignment). Additionally, when the paper makes remarks that seem to be addressing my question, I find these remarks confused (or perhaps just confusing). For instance, in this remark > The possibility of mesa-optimizers has important implications for the safety of advanced machine learning systems. When a base optimizer generates a mesa-optimizer, safety properties of the base optimizer may not transfer to the mesa-optimizer. I don't understand what "safety properties of the base optimizer" could be, apart from facts about the optima it tends to produce. That is, I can't think of a property that would appear to confer "safety" until we consider the possibility of producing mesa-optimizers, and then stop appearing thus. A safety property of an optimizer is some kind of assurance about the properties of the optima; if such a property only holds for a *subset* of optima (the ones that are not mesa-optimizers), we'll see this appear mathematically in the definition of the property or in theorems about it, whether or not we have explicitly considered the possibility of mesa-optimizers. (I suppose the argument could be that some candidate safety properties implicitly assume no optimum is a mesa-optimizer, and thus appear to apply to all optima while not really doing so -- somewhat analogous to early notions of continuity which [implicitly assumed away the Weierstrass function](http://nautil.us/issue/53/monsters/maths-beautiful-monsters-rp). But if so, I need a real example of such a case to convince me.) The following seems to offer a different answer to my question: > Pseudo-alignment, therefore, presents a potentially dangerous robustness problem since it opens up the possibility of a machine learning system that competently takes actions to achieve something other than the intended goal when off the training distribution. That is, its capabilities might generalize while its objective does not. This seems to contrast two ways of failing on a samples from non-train distribution. Supposing a model has learned to "understand" train samples and use that understanding to aim for a target, it can then * fail to understand non-train samples, thus losing the ability to aim for any target (*capabilities fails to generalize)* * understand non-train samples and aim for its internalized target, which matched the base target in training, but not here (*objective fails to generalize)* But even without mesa-optimizers, cases of ML generalization failure often involve the latter, not just the former. A dumb classifier with no internal search, trained on the red-door setting described above, would "understand" the blue-door test data well enough to apply its internalized objective perfectly; even in this textbook-like case, exhibited by arbitrarily simple classifiers, the "capabilities generalize." This kind of problem is definitely *bad*, I just don't see what's *new* about it.
07817909-4872-4067-908f-dc24f6cf0788
trentmkelly/LessWrong-43k
LessWrong
[LINK] Reinventing Explanation: Data Presentation as Intuition Pump A great article by Michael Nielsen on failures of intuition and ways to present data more effectively so that we don't get caught by those failures. It reminded me of concepts like log odds in common use around here, and also to the recent discussion of teaching rationality techniques to average people.
acaaba01-4f49-462c-9e7e-34d0efa97040
trentmkelly/LessWrong-43k
LessWrong
Do Timeless Decision Theorists reject all blackmail from other Timeless Decision Theorists? I have a technical question about Timeless Decision Theory, to which I didn’t manage to find a satisfactory answer in the published MIRI papers. (I will just treat TDT, UDT, FDT and LDT as the same thing, because I do not understand the differences. As far as I understand they are just different degrees of formalization of the same thing.) On page 3 of the FDT paper ( https://arxiv.org/pdf/1710.05060.pdf ) it is claimed that TDT agents “resist extortion in blackmail dilemmas”. I understand why a TDT agent would resist extortion, when a CDT agent blackmails it. If a TDT agent implements an algorithm that resolutely rejects all blackmail, then no CDT agent will blackmail it (provided the CDT agent is smart enough to be able to accurately predict the TDT’s action), so it is rational for the TDT to implement such a resolute blackmail rejection algorithm. But I do not believe that a TDT agent rejects all blackmail, when another TDT agent sends the blackmail. The TDT blackmailer could implement a resolute blackmailing algorithm that sends the blackmail independently of whether the extortion is successful or not, and then the TDT who receives the blackmail has no longer such a clear cut incentive to implement a resolute blackmail rejection algorithm, making the whole situation much more complicated.  In fact it appears to me that the very logic that would make a TDT resolutely reject all blackmail is also precisely the logic that would also make a TDT resolutely send all blackmail. I haven’t yet managed to figure out what two TDTs would actually do in a blackmail scenario, but I will now give an argument why resolutely rejecting all blackmail is definitely not the correct course of action. My Claim: There is a blackmail scenario involving two TDT’s, where the TDT that gets blackmailed does not implement a resolute blackmail rejection algorithm.  My Proof: We consider the following game: We have two TDT agents A and B. A possesses a delicious cookie worth 1 utili
dc1c196e-65f9-486e-9bce-58db466422f4
trentmkelly/LessWrong-43k
LessWrong
Design 2 This is part 14 of 30 of Hammertime. Click here for the intro. > I am a finger pointing to the moon. Don’t look at me; look at the moon. Rationalists drone on and on about how our fake our models are, how we gesture at and point to deep inarticulate truths, and – to shoulder some of the blame – the importance of circumambulating the truth rather than honing in on it directly. We spend all too much time insisting we’re fingers pointing at the moon. Hammertime says: Fuck the moon. There are trillions of indistinguishable giant space rocks floating around in the universe. But a human finger contains a trillion copies of the source code for the most power intelligence to walk the known universe. If I had to choose, I’d rather spend my days studying fingers than moons, and it’s not even close. Hammertime is a set of fingers pointing at the moon. Occasionally, it may prove useful to sit back, cross your eyes, and look for the moon: that grand overarching cognitive strategy behind these techniques. But if you miss the moon, fingers are awesome too. So don’t worry. Relax. Just do exactly as I say. Day 14: Design Previously: Day 4. Design is the practice of seeing all the tiny incentive gradients in the environment, and shifting them in your favor. Last time, we took environment to mean physical space, but Design principles apply across domains. Today I will apply Design principles to the design of Schedules, Social Groups, and Screen Space. As budding self-help guru I dub these (together with Space) the Four S’s of Design. Keep in mind the three principles of Design: 1. Intentionality: notice all the knobs you can turn. Turn them the way you intend. 2. Amortization: pay up-front costs to save attention in the long run. 3. Reflexive Towel Theory: the aesthetics of your environment determine your self-image. Schedules I am no expert on using calendars; this section is about the basics. What’s the single most important incentive gradient to fix about a calenda
6f19c6b6-4af6-487e-b248-f6ad874dc4d1
trentmkelly/LessWrong-43k
LessWrong
NEFFA Should Allow Small Children When I was little we would go to the NEFFA festival every year. I loved seeing the dancing: contra dance, international, dance performances. I loved listening to the music: bands, performances, hallway jamming. I loved eating the cake the Lithuanian booth would sell. My cousins would come up from Philadelphia, and it was one of my favorite weekends of the year. Those early experiences at NEFFA got me into an activity and a community that has been a huge part of my life ever since. I've been a dancer, musician, caller, and helped organize a dance series and weekend. I've been bringing my kids with me when I go early to set up for the dance and when I travel to play dances elsewhere, and I'm glad they're getting to enjoy something that's been so important to me. Which is why I was so disappointed to learn that the first in-person NEFFA since 2019 has chosen a covid policy that excludes small children. > Regrettably, out of concern for everyone's safety, children who for whatever reason are unable to remain appropriately masked as described above, including those two years of age or younger, are not permitted to attend this year's Festival. When I described this policy to my cousin their response was "I guess that means we won't be able to go this year." I think this will likely be a common response among families with small children, and an unfortunate one. How seriously organizers and individuals should be trying to limit covid spread is something there's a lot of disagreement on, mostly tracing back to different interpretations of the limited amount we know about the long-term impacts of getting sick. If the NEFFA organizers want to put on a very covid-cautious event that's not something I think people should give them grief over, and I'd rather see an in-person NEFFA for part of the community than no in-person event at all. But while if the precautions under which they were able to hold the event were uniformly strict I could see excluding people to young to
d7d93771-4233-4aa7-b183-49b5b900728b
trentmkelly/LessWrong-43k
LessWrong
Seeking suggestions: Less Wrong Biology 101 I’ve been a reader and occasional commenter here for a while now, but previously have not had a solid idea of what I could or wanted to contribute to the community in posting.  In light of recent comments stating an interest in more posts that offer concrete, factual information as well as remembering lukeprog’s call for such things in his Back to the Basics of Rationality post, I am considering a series of condensed posts about biology.  As someone who has spent my formal education on biologically-focused engineering (bioengineering BS, now studying bioinformatics under a chemical engineering department for my PhD) but has always had the bulk of my friends in electrical engineering, computer science, and more traditional chemical engineering, I’ve gotten used to offering such condensed explanations whenever biology works its way into a discussion.  From what I’ve seen on LW thus far, the community educational base leans more in those (non-biology) directions, so I believe this is a niche that could use filling.  Since biology is a rather broad subject, and you could all go read Wikipedia or a textbook if you wanted a very detailed survey course, my intent is to pick targeted topics that are relevant to current events and scientific developments.  Each post would focus on one such event/Awesome New Study, discussing the biological background and potential implications, including either short explanations or links to the basics needed to understand the subject.  If there are any political ties to the subject, I will withhold my explicit opinions on those aspects unless asked in the comments.  My questions, then, are the following: * Is this something that people here would find interesting/useful in the general sense?  (While I do enjoy talking to myself, doing so on this topic has gotten a bit old, so I really do want to know if no one really thinks this will be helpful.) * How long/in-depth would you like?  This question is intended to gauge what my background
41abf2a5-5bef-4e44-83ed-1c31b3f1e533
StampyAI/alignment-research-dataset/alignmentforum
Alignment Forum
2021 AI Alignment Literature Review and Charity Comparison *cross-posted to the EA forum* [*here*](https://forum.effectivealtruism.org/posts/BNQMyWGCNWDdP2WyG/2021-ai-alignment-literature-review-and-charity-comparison)*.* **Introduction** ================ As in [2016](https://forum.effectivealtruism.org/posts/nSot23sAjoZRgaEwa/2016-ai-risk-literature-review-and-charity-comparison), [2017](https://forum.effectivealtruism.org/posts/nSot23sAjoZRgaEwa/2016-ai-risk-literature-review-and-charity-comparison), [2018](https://forum.effectivealtruism.org/posts/BznrRBgiDdcTwWWsB/2018-ai-alignment-literature-review-and-charity-comparison), [2019](https://forum.effectivealtruism.org/posts/dpBB24QsnsRnkq5JT/2019-ai-alignment-literature-review-and-charity-comparison) and [2020](https://forum.effectivealtruism.org/posts/K7Z87me338BQT3Mcv/2020-ai-alignment-literature-review-and-charity-comparison)I have attempted to review the research that has been produced by various organisations working on AI safety, to help potential donors gain a better understanding of the landscape. This is a similar role to that which GiveWell performs for global health charities, and somewhat similar to a securities analyst with regards to possible investments. My aim is basically to judge the output of each organisation in 2021 (technically: 2020-12-01 to 2021-11-30) and compare it to their budget. This should give a sense of the organisations' average cost-effectiveness. We can also compare their financial reserves to their 2021 budgets to get a sense of urgency. This document aims to be sufficiently broad that someone who has not paid any attention to the space all year could read it (and the linked documents) and be as well-informed to make donation decisions as they could reasonably be without personally interviewing researchers and organisations. I’d like to apologize in advance to everyone doing useful AI Safety work whose contributions I have overlooked or misconstrued. As ever I am painfully aware of the various corners I have had to cut due to time constraints from my job, as well as being distracted by 1) other projects, 2) the miracle of life and 3) computer games. This article focuses on AI risk work. If you think other causes are important too, your priorities might differ. This particularly affects GCRI, FHI and CSER, who both do a lot of work on other issues which I attempt to cover but only very cursorily. **How to read this document** ----------------------------- This document is fairly extensive, and some parts (particularly the methodology section) are largely the same as last year, so I don’t recommend reading from start to finish. Instead, I recommend navigating to the sections of most interest to you. You should also read the Conflict of Interest Section. If you are interested in a specific research organisation, you can use the table of contents to navigate to the appropriate section. You might then also want to Ctrl+F for the organisation acronym in case they are mentioned elsewhere as well. Papers listed as ‘X researchers contributed to the following research lead by other organisations’ are included in the section corresponding to their first author and you can Cntrl+F to find them. If you are interested in a specific topic, I have added a tag to each paper, so you can Ctrl+F for a tag to find associated work. The tags were chosen somewhat informally so you might want to search more than one, especially as a piece might seem to fit in multiple categories. Here are the un-scientifically-chosen hashtags: * AgentFoundations * Amplification * Capabilities * Community * Ethics * Fiction * Forecasting * GameTheory * Interpretability * IRL * NearTerm * Obstruction * Other * Overview * Policy * Robustness * Scenarios * ShortTerm * Strategy * Translation * ValueLearning You might also be interested in the ‘Organisation Second Preference’ section, which is new this year. **New to Artificial Intelligence as an existential risk?** ---------------------------------------------------------- If you are new to the idea of General Artificial Intelligence as presenting a major risk to the survival of human value, I recommend [this Vox piece](https://www.vox.com/future-perfect/2018/12/21/18126576/ai-artificial-intelligence-machine-learning-safety-alignment) by Kelsey Piper, or for a more technical version [this](https://www.alignmentforum.org/s/mzgtmmTKKn5MuCzFJ) by Richard Ngo. If you are already convinced and are interested in contributing technically, I recommend [this piece](https://docs.google.com/document/d/1FbTuRvC4TFWzGYerTKpBU7FJlyvjeOvVYF2uYNFSlOc/edit) by Jacob Steinhardt, as unlike this document Jacob covers pre-2019 research and organises by topic, not organisation, or [this](https://arxiv.org/abs/2109.13916) from Hendrycks et al., or [this](https://arxiv.org/abs/2006.04948) from Critch & Krueger, or [this](https://arxiv.org/abs/1805.01109) from Everitt et al, though it is a few years old now. **Conflict of Interest** ------------------------ In the past I have had very demanding standards around Conflicts of Interest, including being critical of others for their lax treatment of the issue. Historically this was not an issue because I had very few conflicts. However this year I have accumulated a large number of such conflicts, and worse, conflicts that cannot all be individually publically disclosed due to another ethical constraint. As such the reader should assume I could be conflicted on any and all reviewed organisations. **Research Organisations** ========================== **FHI: The Future of Humanity Institute** ----------------------------------------- FHI is an Oxford-based Existential Risk Research organisation founded in 2005 by Nick Bostrom.  They are affiliated with Oxford University. They cover a wide variety of existential risks, including artificial intelligence, and do political outreach. Their research can be found [here](https://www.fhi.ox.ac.uk/publications/). Their research is more varied than MIRI's, including strategic work, work directly addressing the value-learning problem, and corrigibility work - as well as work on other Xrisks. They ran a Research Scholars Program, where people could join them to do research at FHI. There is a fairly good review of this [here](https://forum.effectivealtruism.org/posts/e8CXMz3PZqSir4uaX/what-fhi-s-research-scholars-programme-is-like-views-from-1), albeit from before the pandemic / hiring freeze. The EA Meta Fund supported a special program for providing infrastructure and support to FHI, called the [Future of Humanity Foundation](https://forum.effectivealtruism.org/posts/4T5ZT3AefQiimCajP/ea-meta-fund-grants-july-2020#The_Future_of_Humanity_Foundation____200k). This reminds me somewhat of what BERI does. In the past I have been very impressed with their research. They didn’t share any information with me about hiring or departures. Research Cohen et al.'s [Fully General Online Imitation Learning](https://arxiv.org/abs/2102.08686) is a detailed paper presenting an online imitator learner that operates with bounds on how harmful it can be despite no prior training phase. The imitator keeps track of the most plausible models of the teacher, and for each possible action gives it the minimum probability of any teacher-model ascribed. At the beginning, when you have many plausible models, this means a lot of probability mass is undefined, and hence you request feedback a lot (i.e. ersatz training period), but over time you should narrow down on the true model (though perhaps this is very slow?) They prove some results to suggest that this allows us to keep the probability of catastrophes low, proportionate to their risk under the true model, assuming the true model was in our 'most plausible' set at the beginning (which is not clear to me). Incidentally, I think this is a good example of the problems of academic research. The alignment forum blog post quite clearly lays out that this is about mesa-optimisers… but the paper (perhaps because of reviewers?) literally does not include the string 'mesa' at all, making it a lot harder to understand the significance! See also the discussion [here](https://www.alignmentforum.org/posts/CnruhwFGQBThvgJiX/formal-solution-to-the-inner-alignment-problem). Overall I thought this was an excellent paper. Researchers from Deepmind were also named authors on the paper. #IRL Evans et al.'s [Truthful AI: Developing and governing that does not lie](https://arxiv.org/abs/2110.06674) is a detailed and lengthy piece discussing a lot of issues around truthfulness for AI agents. This includes conceptual, practical and governance issues, especially with regard conversation bots. They argue for truthfulness (or at least, non-negligently-false) rather than honesty as a standard both to avoid intentionality issues and also because, relative to humans, the costs of punishing unintended mistakes are much lower / less unfair. Especially hard topics include the truthfulness of AIs that are more expert than humans (and hence cannot be directly subject to human oversight) and cases where the truth is contested - for example, tech platforms choice to suppress as misleading various claims about covid which contradicted official advice, even though the official advice was frequently clearly wrong. I'm not convinced that their approach would end up being significantly different than 'dominant ideology censors rivals' - even suggesting explicit warnings about controversy has the issue that what is controversial is itself controversial. See also the discussion [here](https://forum.effectivealtruism.org/posts/SGFRneArKi93qbrRG/truthful-ai) and [here](https://www.lesswrong.com/posts/aBixCPqSnTsPsTJBQ/truthful-ai-developing-and-governing-ai-that-does-not-lie). Researchers from GPI, OpenAI were also named authors on the paper. #Strategy Lin et al.'s [TruthfulQA: Measuring How Models Mimic Human Falsehoods](https://arxiv.org/abs/2109.07958) provides a series of test questions to study how 'honest' various text models are. Of course, these models are trying to copy human responses, not be honest, so because many of the questions allude to common misconceptions, the more advanced models 'lie' more often. Interestingly they also used GPT-3 to evaluate the truth of these answers. See also the discussion [here](https://www.alignmentforum.org/posts/PF58wEdztZFX2dSue/how-truthful-is-gpt-3-a-benchmark-for-language-models). Researchers from OpenAI were also named authors on the paper. #Other Ord et al.'s [Future Proof: The Opportunity to Transform the UK's Resilience to Extreme Risks](https://www.governance.ai/research-paper/futureproof-artificial-intelligence-chapter) is a flashy policy document recommending steps HMG could take to prepare for future risks. This includes things like having a Chief Risk Officer and avoiding AI control of the nuclear weapons, as well as non-AI-relevant but still good recommendations for bio. In general I thought it did a significantly better job, and represented a more realistic theory of change, than many previous policy pieces Researchers from CSER, Gov.AI were also named authors on the paper. #Policy Manheim & Sandberg’s [What is the Upper Limit of Value?](https://philarchive.org/rec/MANWIT-6) argues that the total amount of value/growth humanity can create/experience is finite. This is related to typical 'limits to growth' arguments, except much better and a much higher ceiling: rather than worrying about peak oil, they discuss the speed of light and the difficulties extracting infinite value from a single electron. As they note, it may not apply in exotic models of physics though, and I do not understand why they think that you cannot assign a probability to something so low that nothing could ever convince you it happened. See also the discussion [here](https://www.lesswrong.com/posts/Bc77uwswZzXSHzGH7/the-upper-limit-of-value). #Other Hammond et al.'s [Equilibrium Refinements for Multi-Agent Influence Diagrams: Theory and Practice](https://arxiv.org/abs/2102.05008) extends previous work on representing games as causal networks rather an payoff tables. This has the advantage of facilitating decomposability, making them much more easily understable; the hope is that this type of work will help us to understand when agents have bad incentives to e.g. deceive. Researchers from Deepmind were also named authors on the paper. #GameTheory Finnveden's [Extrapolating GPT-N performance](https://www.alignmentforum.org/posts/k2SNji3jXaLGhBeYP/extrapolating-gpt-n-performance#comments) examines the performance scaling for GPT on a variety of tasks. He finds generally relatively smooth scaling, and interestingly comes to relatively similar conclusions to Ajeya's [work](https://www.alignmentforum.org/posts/KrJfoZzpSDpnrv9va/draft-report-on-ai-timelines) despite the different methodology. #Forecasting FHI researchers contributed to the following research led by other organisations: * Dafoe et al.'s [Open Problems in Cooperative AI](https://arxiv.org/abs/2012.08630) They also produced a variety of pieces on biorisk and other similar subjects, which I am sure are very good and important but I have not read. Finances FHI is apparently currently banned from fundraising by the university, and hence cannot share financial information. I would guess their budget is relatively large. Apparently they have sufficient funds to last until the expected resumption of fundraising in the new year. I do not know the exact reason for this ban; the absence of disclosure makes due diligence of them very difficult. If you wanted to donate to them anyway, [here](https://www.fhi.ox.ac.uk/support-fhi/) is the relevant web page. **GovAI: The Center for the Governance of AI** ---------------------------------------------- GovAI is an Oxford based AI Governance Research organisation founded in 2021 by Allan Dafoe; Ben Garfinkel became Acting Director in 2021.  They are affiliated with CEA. They were formerly a research center as part of FHI, but spun out this year to allow Allan to take up a position at Deepmind, and for increased operational independence from the university. Their research can be found [here](https://www.governance.ai/research). Their declaration of independence can be found [here](https://www.governance.ai/post/the-centre-for-the-governance-of-ai-has-relaunched). I generally regarded the work they did as part of FHI as quite good, and presumably that is likely to continue. The one possible downside of leaving the university affiliation is the loss of associated prestige. Research Zaidi & Dafoe's [International Control of Powerful Technology: Lessons from the Baruch Plan for Nuclear Weapons](https://www.governance.ai/research-paper/international-control-of-powerful-technology-lessons-from-the-baruch-plan-for-nuclear-weapons) attempts to draw conclusions relevant for AGI control. It's a very detailed account of negotiations, but I'm not sure how much we can learn from it, given that the plan failed, and it seems plausible that neither side was really negotiating in earnest anyway. #Strategy Fischer et al.'s [AI Policy Levers: A Review of the U.S. Government’s Tools to Shape AI Research, Development, and Deployment](https://www.governance.ai/research-paper/ai-policy-levers-a-review-of-the-u-s-governments-tools-to-shape-ai-research-development-and-deployment) lays out various policy levers the USG can use to control AI. These generally focus on AI race issues—e.g. techniques to undermine chinese competition—rather than AGI control, which would be an issue even with a unified world government. It focuses on tools based in current law, which I think makes sense, as even in previous crisis (e.g. 2008, March 2020) the government response has leant heavily on repurposing existing programs and permissions. #Policy Dafoe et al.'s [Open Problems in Cooperative AI](https://arxiv.org/abs/2012.08630) and [Cooperative AI: Machines Must Learn to Find Common Ground](https://www.governance.ai/research-paper/cooperative-ai-machines-must-learn-to-find-common-ground) give an overview of different ways to think about cooperation issues. This is not about principle-agent issues with getting an AI to do what its human wants, but about how to deal with multiple humans/AIs with different goals and knowledge. Much of the piece was effectively about human coordination - while there were some AI-specific ideas, like pre-commitment in lane merging for autonomous cars, ideas like AIs self-modifying to a joint utility function didn't get much discussion. Researchers from FHI, Deepmind were also named authors on the paper. #Strategy Zhang's [Ethics and Governance of Artificial Intelligence: Evidence from a Survey of Machine Learning Researchers](https://jair.org/index.php/jair/article/view/12895/26701) asked a bunch of people at NeuroIPS etc. what they thought of various ethical issues. Sort of a follow up to Katja's previous researcher survey and BaoBao's survey of the public, many of the results are not surprising - e.g. researchers trust science organisations and distrust the military and china. I was surprised/disappointed to see that experts were less concerned about Value Alignment than the general public (to the extent we can interpret these surveys literally). Researchers from Gov.AI were also named authors on the paper. #Strategy Ding's [China's Growing Influence over the Rules of the Digital Road](https://sci-hubtw.hkvisa.net/10.1353/asp.2021.0015) describes China's approach to influencing technology standards, and suggests some policies the US might adopt.  #Policy Garkinkel's [A Tour of Emerging Cryptographic Technologies](https://www.governance.ai/research-paper/a-tour-of-emerging-cryptographic-technologies) provides an overview of various cryptography techniques (not primarily currency) and their relevance for issues like coordination problems and surveillance.  #Overview Dafoe et al.'s [Reputations for Resolve and Higher-Order Beliefs in Crisis Bargaining](https://journals.sagepub.com/doi/10.1177/0022002721995549) conducts a semi-expert survey to try to evaluate foreign policy decision makers' use of reputation for resolve when evaluating geopolitical strategy. Their work suggests find that reputation does matter, and the Domino Theory is true: a history of steadfastness causes others to expect you to be resolute in the future, and hence they are more likely to back down. #Strategy Ding & Dafoe's [Engines of Power: Electricity, AI, and General-Purpose Military Transformations](https://arxiv.org/abs/2106.04338) argues that for countries to take advantage of AI In warfare they will require lot of domestic civilian AI capabilities. This is in contrast to people who have argued that rapid AI technology diffusion would upset US dominance. However, I'm not sure the conclusions really pull through for AGI. #Forecasting Zwetsloot, Remvo; Zhang, Baobao; Anderljung, Marksu; Horowitz, Michael; Dafoe, Allan - The Immigration Preferences of Top AI Researchers: New Survey Evidence - 2021-10-22 - https://www.governance.ai/research-paper/the-immigration-preferences-of-top-ai-researchers-new-survey-evidence Researchers from GovAI were also named contributors to the following papers: * Prunkl et al.'s Ins[titutionalizing ethics in AI through broader impact requirements](http://www.cs.jhu.edu/~misha/DIReadingSeminar/Papers/Prunkl21.pdf) * Ashurst et al.'s [AI Ethics Statements: Analysis and lessons learnt from NeurIPS Broader Impact Statements](https://arxiv.org/abs/2111.01705) * Zhang's [Ethics and Governance of Artificial Intelligence: Evidence from a Survey of Machine Learning Researchers](https://jair.org/index.php/jair/article/view/12895/26701) * Alex et al.'s [RAFT: A Real-World Few-Shot Text Classification Benchmark](https://arxiv.org/abs/2109.14076) * Klinova & Korinek's AI and Shared Prosperity<https://arxiv.org/abs/2105.08475> * Ord et al.'s [Future Proof: The Opportunity to Transform the UK's Resilience to Extreme Risks](https://www.governance.ai/research-paper/futureproof-artificial-intelligence-chapter) * Zwetsloot et al.'s [The Immigration Preferences of Top AI Researchers: New Survey Evidence](https://www.governance.ai/research-paper/the-immigration-preferences-of-top-ai-researchers-new-survey-evidence) Despite the name they have done a fair bit of work on non-AI related governance; these papers are not reviewed here. Finances They currently have around $3.5m cash, which they expect constitutes around 2-3 years of runway (less if they grow faster). They are not currently actively soliciting donations but if you would like to donate to them, you can do so [here](https://funds.effectivealtruism.org/partners/govai). **CHAI: The Center for Human-Compatible AI** -------------------------------------------- CHAI is a UC Berkeley based AI Safety Research organisation founded in 2016 by Stuart Russell. They do ML-oriented safety research, especially around inverse reinforcement learning, and cover both near and long-term future issues. As an academic organisation their members produce a very large amount of research; I have only tried to cover the most relevant below. It seems they do a better job engaging with academia than many other organisations, especially in terms of interfacing with the cutting edge of non-safety-specific research. The downside of this, from our point of view, is that not all of their research is focused on existential risks. They have expanded somewhat to other universities outside Berkeley and have people at places like Princeton and Cornell. Research CHAI and their associated academics produce a huge quantity of research. Far more so than other organisations their output is under-stated by my survey here; if they were a small organisation that only produced one report, there would be 100% coverage, but as it is this is just a sample of those pieces I felt most interested in. On the other hand academic organisations tend to produce some slightly less relevant work also, and I have focused on what seemed to me to be the top pieces. Hendrycks et al.'s [Unsolved Problems in ML Safety](https://arxiv.org/abs/2109.13916) provides an overview of ML safety issues: Robustness, Monitoring, Alignment and 'External Safety'. It's basically an updated version of Concrete Problems, with one of the same authors. I generally think these pieces are quite good for helping provide easy on-ramps (with Google/OpenAI credentials) for mainstream researchers. There is probably not a huge amount of novel content here for readers of this article, though I thought the introductory motivation section was well written. See also the discussion [here](https://www.lesswrong.com/posts/AwMb7C72etphiRvah/unsolved-ml-safety-problems). Researchers from OpenAI were also named authors on the paper. #Overview Laidlaw & Russell's [Uncertain Decisions Facilitate Better Preference Learning](https://proceedings.neurips.cc/paper/2021/hash/7f141cf8e7136ce8701dc6636c2a6fe4-Abstract.html) presents an approach for learning utility functions from the behaviour of humans acting under uncertainty. They argue that uncertainty can actually make it easier to infer values, for similar reasons (I think) that utility functions are derived from preferences over bets: by being less extremal there is more prior uncertainty about what agents will take, and hence more information in their actions. #IRL Roman et al.'s [Accumulating Risk Capital Through Investing in Cooperation](https://arxiv.org/abs/2101.10305) is a game theory paper about promoting cooperation while limiting downside. Basically they prove results about the tradeoff between the two goals, and show that you can get asymptotically good (i.e. maximally cooperative) behaviour. #GameTheory Hendrycks et al.'s [What Would Jiminy Cricket Do? Towards Agents That Behave Morally](https://arxiv.org/abs/2110.13136) annotate a series of text-based games with action ethics scores and use them for policy shaping. They note that many of the games actively rewarded immoral behaviour, and attempted to correct this with pro tanto ethical labelling. This was then used to adjust a model at the last step (not retraining using ethical judgements as rewards). #ValueLearning Filan et al.'s [Clusterability in Neural Networks](https://arxiv.org/abs/2103.03386) produces a definition of clustering to identify potentially meaningful subcomponents in neural networks, and shows that actually trained (and hence meaningful in aggregate) nets have these subcomponents more than randomly generated nets. This basically tries to produce subcomponents of neurons that have strong intraconnections and weak interconnections, relative to their size. They also suggest two methods for promoting clustering in a network, for when you want to design interpretability into a system: eigenvector regularisation and initialising the weights with pre-existing clusters. #Interpretability Zhuang & Hadfield-Menell's [Consequences of Misaligned AI](https://arxiv.org/abs/2102.03896) provide a simple model of mis-alignment due to utility function mis-specification. Essentially the true utility function is a linear function of N attributes, but the agent's utility function only covers M<N, and the resource constraint is concave, so the robot ends up sacrificing the invisible N-M attributes. They then discuss some solutions, including classic ones like Armstrong's impact minimisation or the human constantly providing new local utility functions—both of which unfortunately require you to know what the other attributes are. #AgentFoundations Critch's [What Multipolar Failure Looks Like, and Robust Agent-Agnostic Processes (RAAPs)](https://www.alignmentforum.org/posts/LpM3EAakwYdS6aRKf/what-multipolar-failure-looks-like-and-robust-agent-agnostic) argues for a focus on multi-polar AI scenarios, and on the mechanisms by which they interact over the designs of individual AIs. The main part of the post is a series of scenarios, showing bad outcomes from otherwise aligned AIs because competition forces them to sacrifice alignment. Basically AI companies are super successful at making profits, but then eventually they become too powerful and we die. I found the models a bit confusing - they didn't seem to really explain how this competition goes from 'the best thing in the history of the world' to 'extinction'. Standard microeconomics suggests that the perfect competition you need to 'force' all firms to be maximally efficient is infact great for consumers (humans). The article claims that humans will lose control because the AIs have become extremely powerful and well defended, but I don't see how this makes sense: fortified compounds are an expensive liability that firms in perfect competition cannot afford, and antagonising humanity (which is extremely rich and powerful because of all the stuff the AI firms have made for us) also sounds like a very expensive mistake to make. I think my steelman would be that these firms are exploiting insecure property rights, in which case the solution to AI alignment turns out to be... carbon taxes? #Scenarios Lindner et al.'s [Learning What To Do by Simulating the Past](https://arxiv.org/abs/2104.03946) is an extension of [Rohin's previous paper](https://arxiv.org/abs/1902.04198). Both papers basically try to learn from the latent information embodied in the world that already significantly reflects human preferences - e.g. if an AI discovers a fence in the woods, it can reasonably infer some human prefered a fence be there. This paper basically aims to move from girdworlds with trivial 'physics' to more realistic settings where you can't precisely compute the histories. Researchers from CHAI were also named authors on the paper. #IRL Shah et al.'s [The MineRL BASALT Competition on Learning from Human Feedback](https://arxiv.org/abs/2107.01969) introduces a competition for learning human intent in a noisy environment: for tasks like “building waterfalls in Minecraft”. The idea here is that while humans have a strong sense for what is a 'nice looking' waterfall, we don't have a good formal specification of the task, so you are competing to design agents that are best at extracting human intent. By using Minecraft they provide a huge space of possible strategies (vs. e.g. Atari games) and by using human feedback you reduce (eliminate?) the potential for excess hyperparameter tuning. See also the discussion [here](https://www.lesswrong.com/posts/a7YgzDYx4FhdB3TmR/an-155-a-minecraft-benchmark-for-algorithms-that-learn#comments). Researchers from OpenAI were also named authors on the paper. #ValueLearning Hod's [Detecting Modularity in Deep Neural Networks](https://arxiv.org/abs/2110.08058) provide two criteria for whether a neural network subcomponent identified through Filan-style spectral analysis is capturing a 'real', human-intuitive subtask. The idea is that the correlation between the neurons, and the criticality of the subcomponent to overall performance, can be automatically calculated without requiring human input. #Interpretability Lee et al.'s [PEBBLE: Feedback-Efficient Interactive Reinforcement Learning via Relabeling Experience and Unsupervised Pre-training](https://arxiv.org/abs/2106.05091) proposes a model whereby agents initially learn in an unsupervised fashion to better economise on the scarce resource of human oversight later. This is in contrast to some other approaches where the teacher will provide near-constant feedback in the very early stages when uncertainty is high; presumably the cost is that this exploration is not safe. They also change the way the agent updates based on the samples shown to the humans, but I didn't quite follow that bit. #ValueLearning Gates et al.'s [A rational model of people’s inferences about others’ preferences based on  response times](https://psyarxiv.com/25zfx/) suggests using how long people take to respond as a hidden measure of preference intensity. To my knowledge this 'free' bit of information has not been proposed previously. #ValueLearning CHAI researchers contributed to the following research led by other organisations: * Brown et al.'s [Value Alignment Verification](https://arxiv.org/abs/2012.01557) * Andrus et al.'s [AI Development for the Public Interest: From Abstraction Traps to Sociotechnical Risks](https://arxiv.org/abs/2102.04255) * Chatila et al.'s [Trustworthy AI](https://smile.amazon.com/gp/product/B08W3XZ1TJ/ref=ppx_yo_dt_b_d_asin_title_o00?ie=UTF8&psc=1) * Maclure & Russell's [AI for Humanity: The Global Challenges](https://smile.amazon.com/gp/product/B08W3XZ1TJ/ref=ppx_yo_dt_b_d_asin_title_o00?ie=UTF8&psc=1) * Zhang et al.'s MADE: Exploration via Maximizing Deviation from Explored Regions<https://proceedings.neurips.cc/paper/2021/hash/5011bf6d8a37692913fce3a15a51f070-Abstract.html> Finances They have been funded by various EA organisations including the Open Philanthropy Project. They spent $1,650,000 in 2020 and $1,250,000 in 2021, and plan to spend around $1,750,000 in 2022. They have around $11,000,000 in cash and pledged funding, suggesting (on a very naïve calculation) around 6.2 years of runway, or close to 5 if they grow. Their 2021 spending was significantly below plan due to the pandemic. If you wanted to donate to them, [here](https://give.berkeley.edu/egiving/index.cfm?Fund=FN3330000) is the relevant web page. **MIRI: The Machine Intelligence Research Institute** ----------------------------------------------------- MIRI is a Berkeley based independent AI Safety Research organisation founded in 2000 by Eliezer Yudkowsky and currently led by Nate Soares. They were responsible for much of the early movement building for the issue, but have refocused to concentrate on research for the last few years. With a fairly large budget now, they are the largest pure-play AI alignment shop. Their research can be found [here](https://intelligence.org/all-publications/).  In general they do very ‘pure’ mathematical work, in comparison to other organisations with more ‘applied’ ML or strategy focuses. I think this is especially notable because of the irreplaceability of the work. It seems quite plausible that some issues in AI safety will arise early on and in a relatively benign form for non-safety-orientated AI ventures (like autonomous cars or Minecraft helpers) – however the work MIRI does largely does not fall into this category. I have also historically been impressed with their research and staff. Their agent foundations work is basically trying to develop the correct way of thinking about agents and learning/decision making by spotting areas where our current models fail and seeking to improve them. This includes things like thinking about agents creating other agents. In December 2020 (hence in scope for this year’s review) they [announced](https://intelligence.org/2020/12/21/2020-updates-and-strategy/) that their new research direction projects they launched in 2017 had been a disappointment, and they were winding down those programs. As a result most of their engineering staff have left. Given that I did not give them much credit in the past for this secret research program this does not significantly change my opinion of them. MIRI, in collaboration with CFAR, ran a series of four-day workshop/camps, the [AI Risk for Computer Scientists workshops](https://intelligence.org/ai-risk-for-computer-scientists/), which gather mathematicians/computer scientists who are potentially interested in the issue in one place to learn and interact. This sort of workshop seems very valuable to me as an on-ramp for technically talented researchers, which is one of the major bottlenecks in my mind. In particular they have led to hires for MIRI and other AI Risk organisations in the past. However, the website suggests these have been discontinued due to the pandemic. They also support [MIRIx workshops](https://intelligence.org/mirix/) around the world, for people to come together to discuss and hopefully contribute towards MIRI-style work. MIRI continue their policy of [nondisclosure-by-default](https://intelligence.org/2018/11/22/2018-update-our-new-research-directions/#section3), something [I’ve discussed in the past](https://forum.effectivealtruism.org/posts/BznrRBgiDdcTwWWsB/2018-ai-alignment-literature-review-and-charity-comparison), which despite having some strong arguments in favour unfortunately makes it very difficult for me to evaluate them. I’ve included some particularly interesting blog posts some of their people have written below, but many of their researchers produce little to no public facing content. They decided [not to leave the Bay Area](https://www.lesswrong.com/posts/SgszmZwrDHwG3qurr/miri-location-optimization-and-related-topics-discussion). Research Most of their work is non-public. Garrabrant's [Temporal Inference with Finite Factored Sets](https://arxiv.org/abs/2109.11513) introduces a new way of doing temporal/causal influence via combinatorics. To summarise greatly, it introduces the idea of a set factorization, which is sort of dual to a set partition, and uses this to introduce an alternative to Judea Pearl's directed acyclic graph approach to causality. The appeal here, apart from being a neat new definition, is that this could help us infer causality without needing the graph, which does feel a bit like cheating - you basically get temporal direction from factor subset relations. See also the discussion [here](https://www.alignmentforum.org/posts/N5Jm6Nj4HkNKySA5Z/finite-factored-sets#2b__Applications___Future_Work___Speculation). Overall I thought this was an excellent paper. #AgentFoundations Yudkowsky's [Discussion with Eliezer Yudkowsky on AGI interventions](https://www.lesswrong.com/posts/CpvyhFy9WvCNsifkY/discussion-with-eliezer-yudkowsky-on-agi-interventions) is a transcript of a Chatman House discussion Eliezer hosted on his views about the future. Very interesting in general; a lot of it describes trying to create the situation to be able to exploit a future miracle that might occur. He is generally pessimistic—even if one research org can be persuaded to behave sensibly, the code will just be stolen by rivals. One section I didn't understand was his desire for a secret team of 5 good ML researchers to try out various (probably unsuccessful) ideas—isn't that MIRI? There is also extended discussion on issues like convergence, AI deception, and the comments are also worth reading. #Strategy Yudkowsky's [Yudkowsky and Christiano discuss "Takeoff Speeds"](https://forum.effectivealtruism.org/posts/rho5vtxSaEdXxLu3o/yudkowsky-and-christiano-discuss-takeoff-speeds) combines an extended reply from Eliezer arguing for a rapid takeoff with some back-and-forth with Paul about various forecasts. Many of the arguments will not be very surprising to those who have read enough Eliezer, but the dialogue is very interesting to read, even if at times they struggled to pin down exactly the source of the disagreement.#Forecasting Soares's [Visible Thoughts Project and Bounty Announcement](https://www.alignmentforum.org/posts/zRn6cLtxyNodudzhw/visible-thoughts-project-and-bounty-announcement) describes a prize MIRI are putting out for creating training datasets for them. Basically they want text runthroughs of a D&D campaign with every thought the dungeon master might have explicitly written out. Ultimately they want to use #Interpretability Finances They spent $7,500,000 in 2020 and a 'similar' amount in 2021, and plan to spend around $6m in 2022. They have around $30,000,000 in cash and pledged funding, suggesting (on a very naïve calculation) around 5.2 years of runway. This large amount of runway is due to some [big recent crypto donations](https://intelligence.org/2021/05/13/two-major-donations/). They have been supported by a variety of EA groups in the past, including OpenPhil. They are not running a fundraiser this year due to strong reserves, but if you wanted to donate to them anyway [here](https://intelligence.org/donate/) is the relevant web page. **GCRI: The Global Catastrophic Risks Institute** ------------------------------------------------- GCRI is a globally-based independent Existential Risk Research organisation founded in 2011 by Seth Baum and Tony Barrett. They cover a wide variety of existential risks, including artificial intelligence, and do policy outreach to governments and other entities. Their research can be found [here](http://gcrinstitute.org/publications/). Their annual summary can be found [here](http://gcrinstitute.org/summary-of-2021-2022-gcri-accomplishments-plans-and-fundraising/#Research-2021). They run an advising and collaboration program where they gave guidance to people from around the world who wanted to help work on catastrophic risks, and potentially write papers with them. In 2021 they hired Andrea Owe as Research Associate, and Robert de Neufville left earlier this month. They have an [AMA](https://forum.effectivealtruism.org/posts/8qpZgvoRrQbwKLjAM/ama-seth-baum-global-catastrophic-risk-institute) on the EA forum. Research de Neufville & Baum's [Collective Action on Artificial Intelligence: A Primer and Review](https://gcrinstitute.org/collective-action-on-artificial-intelligence-a-primer-and-review/) provides ways in which economists break collective action problems down and applies them to AI. These apply to both near and AGI issues. #Strategy Owe & Baum's [The Ethics of Sustainability for Artificial Intelligence](https://gcrinstitute.org/the-ethics-of-sustainability-for-artificial-intelligence/) discusses a variety of issues with the concept of sustainability, and links them to AI.  #Strategy Owe & Baum's [Moral Consideration of Nonhumans in the Ethics of Artificial Intelligence](https://gcrinstitute.org/moral-consideration-of-nonhumans-in-the-ethics-of-artificial-intelligence/) argues for animals (and nature/artificial agents) in AI ethics. It surveys existing statements of AI principles; few explicitly mention animals, and argues against anthropocentrism - at least that we should give \*some\* weight to animals, even if less than humans. #Ethics Fitzgerald et al.'s [2020 Survey of Artificial General Intelligence Projects for Ethics, Risk, and Policy](https://gcrinstitute.org/2020-survey-of-artificial-general-intelligence-projects-for-ethics-risk-and-policy/) is basically a massive list of currently existing AGI projects. Interestingly the paper argues that for-profit AGI projects at least claim to have ethical/humanitarian concerns significantly more frequently than academic ones. Obviously this is in places a fairly subjective exercise but I'm not aware of anything else of this nature since their earlier 2017 work.  #Overview Baum & Owe's [Artificial Intelligence Needs Environmental Ethics](https://gcrinstitute.org/artificial-intelligence-needs-environmental-ethics/) discusses environmental perspectives on a number of near-term AI issues, including whether artificial life counts for biodiversity.  #NearTerm Baum et al.'s [GCRI Statement on the January 6 US Capitol Insurrection](https://gcrinstitute.org/gcri-statement-on-the-january-6-us-capitol-insurrection/) condemns the protest. I thought the link to global genocide was a bit of a stretch. #Policy GCRI researchers contributed to the following research led by other organisations: * Cihon et al.'s [Corporate Governance of Artificial Intelligence in the Public Interest](https://www.mdpi.com/2078-2489/12/7/275) * Cihon et al.'s AI Certification: Advancing Ethical Practice by Reducing Information Asymmetries * Galaz et al.'s [Artificial Intelligence, Systemic Risks, and Sustainability](https://www.sciencedirect.com/science/article/pii/S0160791X21002165?via%3Dihub) Finances They spent $300,000 in 2020 and $415,000 in 2021, and plan to spend around $350,000 in 2022. They have around $600,000 in cash and pledged funding, suggesting (on a very naïve calculation) around 1.7 years of runway. If you want to donate to GCRI, [here](http://gcrinstitute.org/donate/) is the relevant web page. **CSER: The Center for the Study of Existential Risk** ------------------------------------------------------ CSER is a Cambridge based Existential Risk Research organisation founded in 2012 by Jaan Tallinn, Martin Rees and Huw Price, and then established by Seán Ó hÉigeartaigh with the first hire in 2015. After an intermission they are once again lead by him Seán and are affiliated with Cambridge University. They cover a wide variety of existential risks, including artificial intelligence, and do political outreach, including to the UK and EU parliaments - e.g. [this](https://www.cser.ac.uk/resources/written-evidence-defence-industrial-policy-procurement-and-prosperity/). Their research can be found [here](https://www.cser.ac.uk/resources/filter/all/all/all/2019/). Seán returns as leader this year, replacing Catherine Rhodes. Jess Whittlestone is leaving for CLTR/Alpenglow, and they hired Jessica Bland (who I heard good things about) and Paul Ingram. In Nuño Sempere’s [shallow review](https://forum.effectivealtruism.org/posts/xmmqDdGqNZq5RELer/shallow-evaluations-of-longtermist-organizations#Tentative_answer) he suggested that many CSER researchers were not really focused on longtermist work, which is also my impression. They had a submission on the EU AI act [here](https://ec.europa.eu/info/law/better-regulation/have-your-say/initiatives/12527-Artificial-intelligence-ethical-and-legal-requirements/F2665626_en). Research Hua & Belfield's [AI & Antitrust: Reconciling Tensions Between Competition Law and Cooperative AI Development](https://yjolt.org/ai-antitrust-reconciling-tensions-between-competition-law-and-cooperative-ai-development) analyses various AI safety governance proposals for their compatibility with EU antitrust rules. The paper focuses on EU competition law, because even though the EU has no big AI firms, it is very enthusiastic about regulating US tech firms. Worryingly (but, to my mind, correctly) it argues that the OpenAI Assist clause, where they promise to help rather than compete with any other firm who gets close to AGI, could be in violation of the EU's Article 101(1), especially if it occurs late in the race and between market leaders (both likely) and the unilateral nature of the proposal only partially mitigates it. The conclusion to the paper is optimistic, but my reading of the specific arguments is quite negative; I think it would be very hard for an AI company to e.g. persuade a hostile regulator to give them credit for the speculative efficiency gains of collusion. Three strategies it doesn't consider are 1) avoid the EU (viable for OpenAI, not Google), 2) rely on EU enforcement being so slow it is simply irrelevant (seems plausible) and 3) pushing for reforms to weaken antitrust laws. Overall I thought this was an excellent paper. #Policy Whittlestone & Clark's [Why and How Governments Should Monitor AI Development](https://www.cser.ac.uk/resources/why-and-how-governments-should-monitor-ai-development) recommends that governments build capabilities for monitoring the development of AI technologies. The examples in the paper are all nearterm things, but presumably the motivation is general readiness for AGI. Researchers from Anthropic were also named authors on the paper. #Policy Maas & Stix's [Bridging the gap: the case for an ‘Incompletely Theorized Agreement’ on AI policy](https://www.cser.ac.uk/resources/bridging-gap-case-incompletely-theorized-agreement-ai-policy/) argues in a similar line to some previous papers that people concerned with AI Xrisk have common cause on various current policy issues with those concerned with short term AI. It suggests that public disagreement between near and long term people is bad because it reduces both their legitimacy, and hence supports the rival 'AI race' competition framing. #Strategy Maas's [AI, Governance Displacement, and the (De)Fragmentation of International Law](https://www.cser.ac.uk/resources/ai-governance-displacement-and-defragmentation-international-law/) discusses the potential impacts of AI technologies on how international law is made and enforced.  #NearTerm Maas's [Aligning AI Regulation to Sociotechnical Change](https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3871635) argues we should focus on the impacts of technologies, rather than the specific technologies themselves, when deciding whether to regulate.  #Policy They also did work on various non-AI issues, which I have not read, but you can find on their website. CSER researchers contributed to the following research led by other organisations: * Ord et al.'s Future Proof: The Opportunity to Transform the UK's Resilience to Extreme Risks<https://www.governance.ai/research-paper/futureproof-artificial-intelligence-chapter> * Cave et al.'s [Using AI ethically to tackle covid-19](https://www.bmj.com/content/372/bmj.n364) * Liu & Maas's ['Solving for X?' Towards a problem-finding framework to ground long-term governance strategies for artificial intelligence](https://www.researchgate.net/publication/342774816_%27Solving_for_X%27_Towards_a_problem-finding_framework_to_ground_long-term_governance_strategies_for_artificial_intelligence) Finances They spent $854,000 in 2020 and $1,300,000 in 2021, and plan to spend around $1,300,000 in 2022. It seems that similar to GPI maybe ‘runway’ is not that meaningful - they suggested their grants begin to end in early 2022 and all end by mid-2024, the same dates as last year. If you want to donate to them, [here](https://www.cser.ac.uk/support-us/) is the relevant web page. **OpenAI** ---------- OpenAI is a San Francisco based independent AI Research organisation founded in 2015 by Sam Altman. They are one of the leading AGI research shops, with a significant focus on safety. Initially they planned to make all their research open, but changed plans and are now significantly more selective about disclosure - see for example [here](https://openai.com/blog/gpt-2-1-5b-release/). One of their biggest achievements is GPT-3, a massive natural language algorithm that generates highly plausible continuations from prompts, which seems to be very versatile. GPT-3 continues to be one of OpenAI (and in fact anyone’s) biggest AI capabilities achievements. In 2021 they released [DALL-E](https://openai.com/blog/dall-e/), which is similar except instead of creating text based on prompts it creates pictures. They initially pioneered a delayed release program for GPT to allow people to adapt to the perceived risks of this technology (and to normalise doing so for future technologies) GPT-3 is now [generally available](https://openai.com/blog/api-no-waitlist/) for anyone to use. A notable GPT-3 derivative this year is the [OpenAI CoPilot](https://openai.com/blog/openai-codex/), which helps programers in an intelligently way based off opensource code. They have also done work on [iteratively summarising books](https://openai.com/blog/summarizing-books/) (summarising, and then summarising the summary, etc.) as a method for scaling human oversight. In December 2020 [the majority of the OpenAI safety team left](https://openai.com/blog/organizational-update/), with most of them (Dario Amodei, Chris Olah) going to found [Anthropic](https://www.anthropic.com/news/announcement) and Paul Christiano going to found the [Alignment Research Center](https://alignmentresearchcenter.org/). Ostensibly this were unrelated exits and not because of any problems at OpenAI; however their stated motivations for leaving do not fully make sense to me (why couldn’t they do the safety work they want to within OpenAI?) and OpenAI did not seem to have replacements lined up, though they do have Jan Leike now. Helen Toner, of CSET and similar things, [joined their board](https://openai.com/blog/helen-toner-joins/) this year. OpenPhil’s Holden Karnofsky, who previously had joined the board after [OpenPhil made a $30m donation](https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/openai-general-support) which expired this year, has now left the board. Will Hurd, a US politician, also [joined the board](https://openai.com/blog/will-hurd-joins/). They have a submission on the EU AI act [here](https://ec.europa.eu/info/law/better-regulation/have-your-say/initiatives/12527-Artificial-intelligence-ethical-and-legal-requirements/F2665231_en). Research Cammarata et al.'s [Curve Circuits](https://distill.pub/2020/circuits/curve-circuits/) is a very cool piece that shows the ability to understand a (moderately) large neural network in the Feynman sense: to recreate it. The authors are able to identify what individual neurals (and families of neurons) in a 50k+ curve-recognition network 'mean'. To test this they describe the network's 'strategy' in english sentences, and then are able to re-implement (more or less) the network based on this description. This is much more explanable than I'd've expected a neural net to be! I do wonder if vision problems are unusually tractable here; would it be so easy to visualise what individual neurons mean in a language model? In any case you should read the paper for the psychedelic pictures if nothing else. Overall I thought this was an excellent paper. #Interpretability Barnes & Christiano's [Debate update: Obfuscated arguments problem](https://www.alignmentforum.org/posts/PJLABqQ962hZEqhdB/debate-update-obfuscated-arguments-problem#comments) describes a problem they weren't able to solve with their tests of AI safety through Debate. Basically there are arguments where, even if you know it is wrong, it is very hard to narrow down exactly where the error is. This means that the honest debator can't present the judge with the sort of knock-down evidence they want. Clearly this was always going to be a theoretical issue; the main update here is that these obfuscated error arguments can arise quite commonly. #Amplification Chen et al.'s [Evaluating Large Language Models Trained on Code](https://arxiv.org/abs/2107.03374) introduces and evaluates the programming-language GPT implementation that preceded GitHub CoPilot. They basically take a massive pretrained GPT model and point it at github, and then see how good it is at writing python functions based on natural language descriptions, tested with unit tests. They also attempted to write docstrings from code. It displays some of the same smooth scaling curves we see elsewhere with GPT. Some interesting behaviour emerges, including that it 'knows' how to write both good and bad code, and if your prompt includes bad code (e.g. because you are a bad programmer who really needs a copilot) it will assume you want to carry on writing bad code! That seems potentially like a good safety lesson, but overall this does seem rather like capacity-enhancing research to me. There's also some more 'silly' concerns, like that the AI might prefer some open-source packages over others, which would be unfair on the authors of the disfavoured packages. Researchers from OpenPhil were also named authors on the paper. #Capabilities OpenAI Researchers also contributed to the following papers lead by other organisations: * Unsolved Problems in ML Safety * The MineRL BASALT Competition on Learning from Human Feedback * Truthful AI: Developing and governing AI that does not lie Finances OpenAI was initially funded with money from Elon Musk as a not-for-profit. They have since created an unusual corporate structure including a for-profit entity, in which [Microsoft is investing a billion dollars](https://openai.com/blog/microsoft/). Given the strong funding situation at OpenAI, as well as their safety team’s position within the larger organisations, I think it would be difficult for individual donations to appreciably support their work. However it could be an excellent place to apply to work. **Google Deepmind** ------------------- Deepmind is a London based AI Research organisation founded in 2010 by Demis Hassabis, Shane Legg and Mustafa Suleyman and currently lead by Demis Hassabis.  They are affiliated with Google. As well as being arguably the most advanced AI research shop in the world, Deepmind has a very sophisticated AI Safety team, covering [both ML safety and AGI safety](https://80000hours.org/podcast/episodes/pushmeet-kohli-deepmind-safety-research/). We discussed AlphaFold last year, and there was significant further progress on protein folding this year with [AlphaFold 2](https://www.nature.com/articles/s41586-021-03819-2). Long-time followers of the space will recall this is a development Eliezer highlighted [back in 2008](https://intelligence.org/files/AIPosNegFactor.pdf). See also [this](https://news.ycombinator.com/item?id=27855910) speculation that Deepmind might have been trying to avoid publishing it until a competitor ‘forced’ their hand. Rohin Shah and team continue to produce the [AI Alignment Newsletter](https://rohinshah.com/alignment-newsletter/), covering in detail a huge number of interesting new developments, especially new papers. I really cannot praise these newsletters highly enough. Research Stooke et al.'s [Open-Ended Learning Leads to Generally Capable Agents](https://deepmind.com/research/publications/2021/open-ended-learning-leads-to-generally-capable-agents) is a worryingly-titled paper showing a technique for training agents to deal with a wide variety of environments and objectives. While AlphaZero showed the same algorithm could learn Chess/Shoji/Go etc., learning any one of these games didn't directly help with the others. In this case they produce a 3D environment that can be configured in a wide variety of ways, with the idea that the agents will learn quite general lessons - at least relative to their XLand environment, which seems much more general than the board games. The training process is very involved - like with AlphaX, there are multiple generations, combined with clever ways of judging how hard a task is (so that agents are presented with hard-but-not-impossible things to learn from), and evaluation based on dominance rather than average scores. See also the discussion [here](https://deepmind.com/blog/article/generally-capable-agents-emerge-from-open-ended-play). #Capabilities Welbl et al.'s [Challenges in Detoxifying Language Models](https://arxiv.org/abs/2109.07445) tests and discusses various issues with automated 'toxicity' (rudeness/political correctness) filters for language models. Unsurprisingly these filters can generate a lot of false positives, and degrade the quality of the responses on other axis. I think this paper is a good illustration of the problems with 'Ethical Considerations' sections: while they list a number of issues, the fact that their approach by design requires the suppression of entire classes of true and important statements is not mentioned. #NearTerm Gabriel's [Towards a Theory of Justice for Artificial Intelligence](https://arxiv.org/abs/2110.14419) argues it is not impossible to apply considerations of justice to AI, and then applies Rawlsianism to the issue. Unfortunately I don’t find literal reading of Rawls very convincing (highly arbitrary collection of liberties and principles that seem like Rawls was just trying to work backwards from his conclusion, and the implausibly high level of risk aversion required to support maximin). #Ethics Researchers from Deepmind were also named on the following papers: * Dafoe et al.'s [Open Problems in Cooperative AI](https://arxiv.org/abs/2012.08630) * Cohen et al.'s [Fully General Online Imitation Learning](https://arxiv.org/abs/2102.08686) * Hammond et al.'s [Equilibrium Refinements for Multi-Agent Influence Diagrams: Theory and Practice](https://arxiv.org/abs/2102.05008) Finances Being part of Google, I think it would be difficult for individual donors to directly support their work. However it could be an excellent place to apply to work. **Anthropic** ------------- Anthropic is a San Fransisco based for-profit AI Startup organisation founded in 2021 by Dario Amodei & Daniela Amodei. They are a highly safety aligned firm founded by people who left the OpenAI safety team in 2020. Their website is [here](https://www.anthropic.com/). Research Their [first publication](https://arxiv.org/abs/2112.00861) falls outside the time frame for this document by one day, and hence will go in next year’s review. Finances As a [well-funded](https://www.anthropic.com/news/announcement) for-profit startup I would not expect them to need or want donations, but they could be a good place to [work](https://www.anthropic.com/#careers). **ARC: Alignment Research Center** ---------------------------------- ARC is a Berkeley based independent AI Safety Research organisation founded in 2021 by Paul Christiano. They work on Paul's agenda of trying to develop systems for scaling human oversight to allow for (commercially competitive) well controlled systems. Their research can be found [here](https://ai-alignment.com/). Research You can read about their work on aligning human and AI ontologies [here](https://docs.google.com/document/d/1WwsnJQstPq91_Yh-Ch2XRL8H_EpsnjrC1dwZXR37PC8/edit), and discussion [here](https://www.lesswrong.com/posts/qHCDysDnvhteW7kRd/arc-s-first-technical-report-eliciting-latent-knowledge#comments). Christiano's [Teaching ML to answer questions honestly instead of predicting human answers](https://ai-alignment.com/a-problem-and-three-ideas-800b42a14f66) presents a possible approach to the problem described in [Teaching ML to answer questions honestly instead of predicting human answers](https://www.alignmentforum.org/posts/QvtHSsZLFCAHmzes7/a-naive-alignment-strategy-and-optimism-about-generalization).  Essentially he is attempting to bias our training algorithm away from the 'copy human explanation' and towards the 'give true explanation' approach in a variety of ways, including sequential training and producing a mini training set of ground extra-truth. I must admit I don't exactly understand Step 2. See also the discussion [here](https://www.alignmentforum.org/posts/QqwZ7cwEA2cxFEAun/a-problem-and-three-ideas#comments). #Interpretability Christiano's [Another (outer) alignment failure story](https://www.alignmentforum.org/posts/AyNHoTWWAJ5eb99ji/another-outer-alignment-failure-story) describes a possible future misalignment scenario, where AIs become more and more influential, but we understand them less and less. We defer more and more of our decision-making to them, and things generally get better, though people worry about the loss of control. Eventually we see a treacherous turn and the AIs suddenly turn off all the cameras on us, though I don't exactly understand how this step fits with the rest of the story. #Scenarios Christiano's [A naive alignment strategy and optimism about generalization](https://www.alignmentforum.org/posts/QvtHSsZLFCAHmzes7/a-naive-alignment-strategy-and-optimism-about-generalization) is a simple post describing a problem with a 'naïve' strategy of making AIs 'explain' what they are doing. The concern is that rather than learning to give the true explanation for their actions, they will instead learn how to give persuasive accounts. #Interpretability ARC Researchers also contributed to the following papers lead by other organisations: ·       Yudkowsky's [Yudkowsky and Christiano discuss "Takeoff Speeds"](https://forum.effectivealtruism.org/posts/rho5vtxSaEdXxLu3o/yudkowsky-and-christiano-discuss-takeoff-speeds) Finances They are not looking for donations at this time; however they are [hiring](https://forum.effectivealtruism.org/posts/CYdBBnNaXq2JkGdZ6/arc-is-hiring). **Redwood Research** -------------------- Redwood is a Berkeley based independent AI Safety Research organisation that started doing public AI alignment research in 2021, founded by Nate Thomas, Bill Zito, and Buck Shlegeris. They aim to do highly practical safety work - taking theoretical safety insights from their own work and from other organisations (e.g. ARC) and proving it out in practical ML systems to ease adoption by non-alignment-focused AI teams. The team members I know are pretty talented. They have an extended and very informative AMA [here](https://forum.effectivealtruism.org/posts/xDDggeXYgenAGSTyq/we-re-redwood-research-we-do-applied-alignment-research-ama). Research Shlegeris's [Redwood Research’s current project](https://www.alignmentforum.org/posts/k7oxdbNaGATZbtEg3/redwood-research-s-current-project) provides an overview of Redwood's first research project (in progress). They are trying to 'handicap' GPT-3 to only produce non-violent completions; the idea is that there are many reasons we might ultimately want to apply some oversight function to an AI model, like "don't be deceitful", and if we want to get AI teams to apply this we need to be able to incorporate these oversight predicates into the original model in an efficient manner. #Obstruction Shlegeris's [The alignment problem in different capability regimes](https://www.alignmentforum.org/posts/HHunb8FPnhWaDAQci/the-alignment-problem-in-different-capability-regimes) provides a disambiguation between a couple of different AI scenarios and the types of alignment problems and solutions that would be relevant.  #Overview Finances Redwood apparently has ample funding at the present time (They recently suggested that they didn’t expect to be able to produce a lot more output with more funding) and hence is not currently looking for donations from the general EA public. **Ought** --------- Ought is a San Francisco based independent AI Safety Research organisation founded in 2018 by Andreas Stuhlmüller and run by Andreas and Jungwon Byun. They research methods of breaking up complex, hard-to-verify tasks into simple, easy-to-verify tasks - to ultimately allow us effective oversight over AIs. This includes building computer systems and recruiting test subjects. Apparently one of the best places to find their research is the mailing list [here](https://list.elicit.org/). In the past they worked on factored generation – trying to break down questions into context-free chunks  so that distributed teams could produce the answer (Christiano style) – and then factored evaluation – using similar distributed ideas to try to evaluate existing answers, which seemed a significantly easier task (by analogy to P<=NP).  They are now working on a system called [Elicit](https://elicit.org/), an automated research assistant, which uses language modelling to do things like try to propose new research directions and literature review. [James Brady](https://es.linkedin.com/in/goodgravy) will start as Head of Engineering in January 2022. Research Alex et al.'s [RAFT: A Real-World Few-Shot Text Classification Benchmark](https://arxiv.org/abs/2109.14076) provides a benchmark of real-world tasks from the Elicit community, like classifying NeuroIPS ethics statements, for few-shot learning, and tests various models on them. The benchmark aims to measure how far current models are from automating economically valuable work. Researchers from Gov.AI were also named authors on the paper. #Capabilities Finances They spent $1,200,000 in 2020 and $1,400,000 in 2021, and plan to spend around $2,000,000 in 2022. They have around $3,800,000 in cash and pledged funding, suggesting (on a very naïve calculation) around 1.9 years of runway. If you want to donate you can do so [here](https://ought.org/donate). **AI Impacts** -------------- AI Impacts is a Berkeley based AI Strategy organisation founded in 2014 by Katja Grace and Paul Christiano. They are affiliated with (a project of, with independent financing from) MIRI. They do various pieces of strategic background work, especially on AI Timelines - it seems their previous work on the relative rarity of discontinuous progress has been relatively influential. A lot of their work is in the form of a private wiki collecting potentially useful background information. Their research can be found [here](https://aiimpacts.org/archive/). You can see a description of the questions they work on [here](https://aiimpacts.org/ai-impacts-key-questions-of-interest/). For most of this year they have been down to 1-2 people, but are planning on hiring back up in 2022. Research They have produced a series of pieces on how long it has historically taken for AIs to cover the human range (from beginner to expert to superhuman) for different tasks. This seems relevant because people only seem to really pay attention to AI progress in a field when it starts beating humans. These pieces include [Starcraft](https://aiimpacts.org/time-for-ai-to-cross-the-human-range-in-starcraft/), [ImageNet](https://aiimpacts.org/time-for-ai-to-cross-the-human-performance-range-in-imagenet-image-classification/), [Go](https://aiimpacts.org/time-for-ai-to-cross-the-human-performance-range-in-go/), [Chess](https://aiimpacts.org/time-for-ai-to-cross-the-human-performance-range-in-chess/) and [Draughts](https://aiimpacts.org/time-for-ai-to-cross-the-human-range-in-english-draughts/). Grace's [Beyond fire alarms: freeing the groupstruck](https://aiimpacts.org/beyond-fire-alarms-freeing-the-groupstruck/) is a detailed response to Eliezer's classic [post](https://intelligence.org/2017/10/13/fire-alarm/). She argues, contra Eliezer, that the main purpose of fire alarms is not to create common knowledge and overcome awkwardness; they also do normal things like provide evidence about the existence of fires, and make staying inside unpleasant. I thought this was persuasive, but also that Eliezer's main conclusion still held: even if he didn't understand fire alarms (indeed, prior to reading this post I didn't realise that I didn't understand fire alarms) it is still true and bad that there is no fire alarm, and it is worth making people aware of this. See also the discussion [here](https://www.lesswrong.com/posts/dRJrurv5YRZcGoYcE/beyond-fire-alarms-freeing-the-groupstruck-1#comments). #Forecasting The [AI Vignettes Project](https://aiimpacts.org/ai-vignettes-project/) was a series of exercises where people wrote short 'stories' for how AI development might unfold. Others then critiqued them to try to improve their plausibility. See also [here](https://aiimpacts.org/partially-plausible-fictional-ai-futures/). #Forecasting Fernandez's [How energy efficient are human-engineered flight designs relative to natural ones?](https://aiimpacts.org/are-human-engineered-flight-designs-better-or-worse-than-natural-ones/) finds that animal flight is significantly more energy-efficient than human flight.  #Forecasting Grace's [Argument for AI x-risk from large impacts](https://aiimpacts.org/argument-from-large-impacts/) lays out this argument for the importance of AGI safety and some responses.  #Forecasting Grace's [Coherence arguments imply a force for goal-directed behavior](https://aiimpacts.org/coherence-arguments-imply-a-force-for-goal-directed-behavior/) argues that coherence arguments do indeed show that agents which start out being weakly goal seeking will end up being strongly goal seeking.   See also the discussion [here](https://www.lesswrong.com/posts/DkcdXsP56g9kXyBdq/coherence-arguments-imply-a-force-for-goal-directed-behavior#comments). #AgentFoundations Finances They spent $280,000 in 2020 and $240,000 in 2021, and plan to spend around $650,000 in 2022 (twice 2019 peak of $316,000). They have around $340,000 in cash and pledged funding, suggesting (on a very naïve calculation) around 0.5 years of runway. In the past they have received support from EA organisations like OpenPhil and FHI. MIRI administers their finances on their behalf; donations can be made [here](https://aiimpacts.org/donate/). **GPI: The Global Priorities Institute** ---------------------------------------- GPI is an Oxford-based Academic Priorities Research organisation founded in 2018 by Hilary Greaves and part of Oxford University. They do work on issues in philosophy and economics likely to be very important for global prioritisation, much of which is, in my opinion, relevant to AI Alignment work. Their research can be found [here](https://globalprioritiesinstitute.org/papers/). They recently took on two new philosophy postdocs (Hayden Wilkinson and Adam Bales) and will be joined by Timothy William soon; they didn’t disclose any departures. Research I cover only the more AI Xrisk relevant papers; notably we do not include the temporal discounting papers, even though they are relevant. Mogensen's [Do not go gentle: why the Asymmetry does not support anti-natalism](https://globalprioritiesinstitute.org/do-not-go-gentle-why-the-asymmetry-does-not-support-anti-natalism-andreas-mogensen-global-priorities-institute-oxford-university/) argues that even if you have the view that it is bad to create unhappy people but not good to create happy ones, it still doesn't follow that it would be good for humanity to go extinct. This is because in order to avoid the well known transitivity problem with the Asymmetry, you should adopt an incommensurability principle, which in turn means that combining neutral and bad things can make them neutral over all. This is pretty counterintuitive, but I think this is basically just because the Asymmetry is counterintuitive to start with.  #Ethics Greaves & MacAskill's [The case for strong longtermism](https://globalprioritiesinstitute.org/hilary-greaves-william-macaskill-the-case-for-strong-longtermism-2/) argues that, for our most important decisions, they cannot be the best without being the best for the long term future. This is actually somewhat weaker than previous discussions, because the scope is only for the most important decisions (career and donation), and hence does not cover much 'everyday' behaviour. It is persuasive and diligent; I'd expect most readers here to already agree with the conclusions. #Ethics Thorstad's [The scope of longtermism](https://globalprioritiesinstitute.org/the-scope-of-longtermism-david-thorstad-global-priorities-institute-university-of-oxford/) discusses how many types of decisions Strong Longtermism is true for. He is pretty sceptical - due to knowledge problems plus 'washing out' he argues that while Strong Longtermism applies for a small number of Xrisk related questions, for virtually any other question (e.g. malara funding) it fails to hold. #Ethics Thomas's [Simulation Expectation](https://globalprioritiesinstitute.org/simulation-expectation-teruji-thomas-global-priorities-institute-university-of-oxford/) presents a refinement to the Bostrom's simulation argument, by moving from 'there are a lot of people in Sims' to 'there are a lot of people like me in sims'.  #Forecasting Researchers from GPI were also named on the following papers: * Evans et al.'s [Truthful AI: Developing and governing AI that does not lie](https://arxiv.org/abs/2110.06674) * Beckstead & Thomas's [A paradox for tiny possibilities and enormous values](https://globalprioritiesinstitute.org/nick-beckstead-and-teruji-thomas-a-paradox-for-tiny-probabilities-and-enormous-values/) Finances They spent £850,000 in 2019/2020 (academic year) and £1,000,000 in 2020/21, below their plan of £1,400,000, and intend to spend around £1,800,000 in 2021/2022. They suggested that as part of Oxford University ‘cash on hand’ or ‘runway’ were not really meaningful concepts for them, as they need to fully-fund all employees for multiple years. If you want to donate to GPI, you can do so [here](https://www.development.ox.ac.uk/make-a-gift?id=097fe528-2916-4d28-bb84-a65bfdf5b275). **CLR: The Center on Long Term Risk** ------------------------------------- CLR is a London (previously Germany) based Existential Risk Research organisation founded in 2013 and lead by Stefan Torges and Jesse Clifton. Until last year they were known as FRI (Foundational Research Institute) and were part of the Effective Altruism Foundation (EAF). They do research on a number of fundamental long-term issues, with AI as one of their top focus areas.  In general they adopt what they refer to as ‘suffering-focused’ ethics, which I think is a quite misguided view, albeit one they seem to approach thoughtfully. A lot of their work is about avoiding conflict between different agents. Research Oesterheld & Conitzer's [Safe Pareto Improvements for Delegated Game Playing](https://users.cs.duke.edu/~conitzer/safeAAMAS21.pdf) presents an approach to delegated game playing where each agent 're-shapes' the incentives for their delegates. Basically each principle can exclude options and give their agent a different utility function in an attempt to push them towards not-less-globally-optimal play. It seems to assume a high degree of competence on behalf of the principles though, at which point do they really need agents? #GameTheory Stastny et al.'s [Multi-agent learning in mixed-motive coordination problems](https://longtermrisk.org/files/stastny_et_al_implicit_bargaining.pdf) discusses games where agents have different preferences and there is no single cooperative equilibrium. Presumably this is a situation that seems likely to occur if we have multiple AGIs. They show that various existing techniques struggle in this setting. #GameTheory Clifton's [Collaborative game specification: arriving at common models in bargaining](https://longtermrisk.org/collaborative-game-specification/) works on the issue of agents arriving at conflict 'unnecessarily' due to differing world models. The option it suggests is for the agents to try to share their models ahead of time, even though they don't trust each other, and use this resulting common model to optimise from. #GameTheory Clifton's [Weak identifiability and its consequences in strategic settings](https://longtermrisk.org/weak-identifiability-and-its-consequences-in-strategic-settings) discusses the potential for unintended conflict in ultimatum games (and the like) due to uncertainty about the other agent's strategy. In the same way that a value learning agent can struggle to disambiguate preferences and beliefs of the humans it is studying, similarly it can be hard to distinguish resolve for agents. #GameTheory Kokotajlo's [Birds, Brains, Planes, and AI: Against Appeals to the Complexity/Mysteriousness/Efficiency of the Brain](https://www.alignmentforum.org/posts/HhWhaSzQr6xmBki8F/birds-brains-planes-and-ai-against-appeals-to-the-complexity) (part of a [sequence](https://www.alignmentforum.org/s/5Eg2urmQjA4ZNcezy)) argues that, because evolution tends to produce complicated and messy designs, the fact that we are a long way from brain-level performance in some aspects doesn't necessarily mean we're a long way away from TAI.  #Forecasting Researchers from CLR were also named on the following papers: * Koch et al.'s [Objective Robustness in Deep Reinforcement Learning](https://arxiv.org/abs/2105.14111) Finances They have a collaboration with the Swiss-based [Center for Emerging Risk Research](https://emergingrisk.ch/), who fund part of their costs. If you wanted to donate to CLR, you could do so [here](https://longtermrisk.org/donate/). **CSET: The Center for Security and Emerging Technology** --------------------------------------------------------- CSET is a Washington based Think Tank founded in 2019 by Jason Matheny (ex IARPA), affiliated with the University of Georgetown. They analyse new technologies for their security implications and provide advice to the US government. At the moment they are mainly focused on near-term AI issues. Their research can be found [here](https://cset.georgetown.edu/research/). They seem to have good connections to the US government, especially the Democratic Party, who are currently in power; their cofounder Jason Matheny left to take up [multiple senior roles](https://www.fedscoop.com/white-house-announces-top-tech-adviser-jason-matheny-national-security/) in the Administration. Most of the people they hire seem to be politics people, not EA people.  Nuño Sempere’s evaluation of their work is available [here](https://forum.effectivealtruism.org/posts/xmmqDdGqNZq5RELer/shallow-evaluations-of-longtermist-organizations#Center_for_Security_and_Emerging_Technology__CSET_). Research Arnold & Toner's [AI Accidents: An Emerging Threat](https://cset.georgetown.edu/publication/ai-accidents-an-emerging-threat/) introduces some of the ideas around AI safety for policymakers. This includes robustness, specification problems and oversight. It basically tries to link them to near-term threats.  #Policy Buchanan et al.'s [How Language Models Could Change Disinformation](https://cset.georgetown.edu/publication/truth-lies-and-automation/) investigates the potential for using GPT-3 for disinformation/propaganda campaigns. They run a series of experiments to generate plausible tweets to push narratives, add partisan slants to articles, and so on. The paper is obviously written with a left-wing audience in mind, which makes sense given they are attempting to influence the current US administration. #Policy Fedasiuk et al.'s [Harnessed Lightning](https://cset.georgetown.edu/publication/harnessed-lightning/) discusses the ways in which the Chinese PLA is using AI.  #Forecasting Murphy's [Translation: Ethical Norms for New Generation Artificial Intelligence Released](https://cset.georgetown.edu/publication/ethical-norms-for-new-generation-artificial-intelligence-released/) is a translation CSET did of a chinese policy document on ethics in AI. Given the importance of China, and how few people speak Chinese, I think this is a pretty useful general activity, but it's hard to understand the significance of the document by itself; most of it is concerned with fairly high level ethical goals. #Translation Murphy's Translation: [White Paper on Trustworthy Artificial Intelligence](https://cset.georgetown.edu/publication/white-paper-on-trustworthy-artificial-intelligence/) is a translation CSET did of a chinese policy document on trustworthiness in AI. Unfortunately it doesn't seem to have a lot of discussion of Xrisk. #Translation Baker's [Ethics and Artificial Intelligence: A Policymaker's Introduction](https://cset.georgetown.edu/publication/ethics-and-artificial-intelligence/) provides an overview of the ways governments could encourage the use of ethical standards, IRBs etc. for AI. It's mainly focused on near-term AI issues. #Policy Mittelsteadt's [Mechanisms to Ensure AI Arms Control Compliance](https://cset.georgetown.edu/publication/ai-verification/) describes various technical methods governments could use to ensure compliance with regulations on the use of AI. One of the suggestions is van eck phreaking, which remains very cool, but doesn't seem very practical. #Policy Rudner & Toner's [Key Concepts in AI Safety: An Overview](https://cset.georgetown.edu/publication/key-concepts-in-ai-safety-an-overview/) is a very basic introduction to some of the issues in AI safety, not xrisk-focused but with readthrough, for policymakers.  #Overview Zwetsloot et al.'s [The Immigration Preferences of Top AI Researchers: New Survey Evidence](https://www.governance.ai/research-paper/the-immigration-preferences-of-top-ai-researchers-new-survey-evidence) asked researchers who had published in NeuroIPS etc. about their migration plans. As a first approximation, people want to live in the US (or UK) but have legal problems; other countries like China (and France) only really attract their own people back. Researchers from Gov.AI also appeared as authors on the paper. #Policy Imbrie et al.'s [Evaluating Rhetorical Dynamics in AI](https://cset.georgetown.edu/publication/contending-frames/) analyses the frequency of four different framings for AI futures in media articles. They find that the 'Killer Robots' framing peaked in 2015 (Musk/OpenAI?) and has decreased significantly since then. #Strategy Aiken's [Classifying AI Systems](https://cset.georgetown.edu/publication/classifying-ai-systems/) suggests some simplified classification schemas to make it easier for laymen to classify AI systems based on e.g. their autonomy and inputs.  #Overview Crawford & Wulkan's [Federal Prize Competitions](https://cset.georgetown.edu/publication/federal-prize-competitions/) discusses using prices to incentivise AI development.  #Policy Rudner & Toner's [Key Concepts in AI Safety: Robustness and Adversarial Examples](https://cset.georgetown.edu/publication/key-concepts-in-ai-safety-robustness-and-adversarial-examples/) is a very basic introduction to (non-AGI) adversaial examples.  #Overview Rudner & Toner's [Key Concepts in AI Safety: Interpretability in Machine Learning](https://cset.georgetown.edu/publication/key-concepts-in-ai-safety-interpretability-in-machine-learning/) is a very basic introduction to (non-AGI) model explanability.  #Overview Finances As they raised $50m from OpenPhil ([a](https://www.openphilanthropy.org/focus/global-catastrophic-risks/biosecurity/center-security-and-emerging-technology-biosecurity-research),[b](https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/center-security-and-emerging-technology-general-support-august-2021),[c](https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/center-security-and-emerging-technology-general-support)) this year, and have had similar successes in the past, I am assuming they do not need more donations at this time. **AI Safety camp** ------------------ AISC is a globally based residential research camp organisation founded in 2018 by Linda Linsefors and currently lead by Remmelt Ellen. They are affiliated with AI Safety Support. They bring together people who want to start doing technical AI research, hosting a 10-day camp aiming to produce publishable research. Their research can be found [here](https://www.lesswrong.com/posts/QEmfyhqMcSpfnY2dX/how-teams-went-about-their-research-at-ai-safety-camp). Their annual summary can be found [here](https://aisafety.camp/support-us#plans). To the extent they can provide an on-ramp to get more technically proficient researchers into the field I think this is potentially very valuable. But I haven’t personally experienced the camps, and though I spoke to two people who found them valuable and seem good, these people were not randomly selected. In the past each camp was run by different volunteers; they are in the process of transitioning to more consistent (and hence experienced) leaders. Research Koch et al.'s [Objective Robustness in Deep Reinforcement Learning](https://arxiv.org/abs/2105.14111) provides a series of toy examples demonstrating objective robustness failure. In each case the agent's capabilities are robust, so it can still navigate the environment, but it has failed to learn the objective properly. See also the discussion [here](https://www.lesswrong.com/posts/iJDmL7HJtN5CYKReM/empirical-observations-of-objective-robustness-failures). Researchers from CLR were also named authors on the paper. #Robustness Finances They spent $11,162 in 2020 and $29,665 in 2021, and plan to spend around $153,400 in 2022. They have around $236,000 in cash and pledged funding, suggesting (on a very naïve calculation) around 1.5 years of runway. If you want to donate, the web page is [here](https://aisafety.camp/support-us/). **FLI: The Future of Life Institute** ------------------------------------- FLI is a Boston-based independent existential risk organization, focusing on outreach, founded in large part to help organise the regranting of $10m from Elon Musk. They cover nuclear, biological and AI risks; one of their major projects is trying to ban [Lethal Autonomous Weapons](https://autonomousweapons.org/). They had a submission on the EU AI act [here](https://ec.europa.eu/info/law/better-regulation/have-your-say/initiatives/12527-Artificial-intelligence-ethical-and-legal-requirements/F2665546_en). Readers might be interested in their podcasts like [here](https://futureoflife.org/2021/02/25/stuart-russell-and-zachary-kallenborn-on-drone-swarms-and-the-riskiest-aspects-of-lethal-autonomous-weapons/), [here](https://futureoflife.org/2021/05/20/bart-selman-on-the-promises-and-perils-of-artificial-intelligence/) [here](https://futureoflife.org/2021/04/20/jaan-tallinn-on-avoiding-civilizational-pitfalls-and-surviving-the-21st-century/). FLI [received a large grant](https://futureoflife.org/fli-announces-grants-program-for-existential-risk-reduction/) - $25m at time of donation, but crypto has rallied since then so probably more – from Vitalik Buterin, which they are using to fund a [new grant program](https://grants.futureoflife.org/) (similar to the previous ones funded by Elon Musk). These grants will fund both projects (the Shiba Inu Grants) and talent development like school programs or postdocs (the Vitalik Buterin Fellowships). Nuño Sempere’s evaluation of FLI is available [here](https://forum.effectivealtruism.org/posts/xmmqDdGqNZq5RELer/shallow-evaluations-of-longtermist-organizations#Future_of_Life_Institute__FLI_). **Lightcone Infrastructure** ---------------------------- Lightcone Infrastructure is a Berkeley based independent Meta Longtermist organisation founded in 2021 by Oliver Habryka. They provide a range of infrastructure and support to the Longtermist movement, most prominently the LessWrong website, but also the Lightcone office, workshops and retreats etc. Their slightly-out-of-date [intro](https://www.lesswrong.com/posts/eR7Su77N2nK3e5YRZ/the-lesswrong-team-is-now-lightcone-infrastructure-come-work-3) can be found here. In general I have been pretty impressed with the team’s seriousness and strategic sense. Nuño Sempere’s evaluation of LessWrong is available [here](https://forum.effectivealtruism.org/posts/xmmqDdGqNZq5RELer/shallow-evaluations-of-longtermist-organizations#LessWrong). Zvi’s views on Lightcone [here](https://forum.effectivealtruism.org/posts/LmeBRngTaB9LcDBjw/zvi-s-thoughts-on-the-survival-and-flourishing-fund-sff#Lightcone_Infrastructure). Finances They spent $500,000 in 2020 and $1,300,000 in 2021, and plan to spend around $2,000,000 in 2022. They have around $1,900,000 in cash and pledged funding, suggesting (on a very naïve calculation) around 1 years of runway. The increase in budget is partly driven by their decision to pay nearly (-30%) market salaries for talent; to my knowledge they are basically the first EA org to do so. Obviously this increases their cost base a lot, but I think maybe I support it – paying a lot less is basically like mandatory donations to your employer, which seems inefficient. **CLTR: Center for Long Term Resilience (formerly Alpenglow)** -------------------------------------------------------------- The CLTR is a London based independent policy think tank founded by Angus Mercer & Sophie Dannreuther. They work to connect top Xrisk researchers and ideas to the UK government. My impression is they are unusually skilful at this. Their website is [here](https://www.longtermresilience.org/). Research CLTR researchers contributed to the following research led by other organisations: * Ord et al.'s [Future Proof: The Opportunity to Transform the UK's Resilience to Extreme Risks](https://www.governance.ai/research-paper/futureproof-artificial-intelligence-chapter) Finances If you wanted to donate you could do so [here](https://www.longtermresilience.org/donate). **Rethink Priorities** ---------------------- Rethink Priorities is an internationally based independent EA research consultancy organisation founded in 2018 by Peter Wildeford & Marcus A Davis. They provide research on important EA issues for other EA organisations, and the broader movement. Their annual summary can be found [here](https://forum.effectivealtruism.org/posts/K7tjvcDurrCj72D7H/rethink-priorities-2021-impact-and-2022-strategy). You can read their research [here](https://rethinkpriorities.org/longtermism). Research A lot of their work is contract work, where they are hired by another EA organisation to research specific topics, as described [here](https://forum.effectivealtruism.org/posts/CwFyTacABbWuzdYwB/ea-needs-consultancies). While historically their work has focused on other issues, they are currently ramping up their Longtermism work, which includes a significant AI governance component. Given the contract nature of much of their work, they will have a fair bit of non-public output, which obviously makes external evaluation a bit harder, though presumably this work is supported by organisations paying for that specific work anyway. They haven’t published a lot on AI yet, but I have often been impressed with their work on other subjects in the past, and their description of planned projects (privately shared) seems sensible. Finances They spent $883,000 in 2020 and $2,100,000 in 2021, of which around $329,000 was for Longtermism, and tentatively plan to spend $1.5 - $4m in 2022 on Longtermism. (Note that these figures are higher than what they indicated in the [budget section of their strategy post](https://forum.effectivealtruism.org/posts/K7tjvcDurrCj72D7H/rethink-priorities-2021-impact-and-2022-strategy#Funding_status) published in November 2021.) They have around $5,480,000 in cash and pledged funding, of which around $400,000 is earmarked for Longtermism. They suggested they had around 16 months of runway (because restricted funds cannot be used to run operations). If you wanted to donate you could do so [here](https://rethinkpriorities.org/donate). **Convergence** --------------- Convergence is a globally based independent Existential Risk Research organisation founded (incorporated and first grant) in 2018 by Justin Shovelain and David Kristoffersson. They do strategic research on x-risk reduction decision making. Their research can be found [here](https://www.convergenceanalysis.org/research/).  They plan to hire several more people in 2022. In 2021 they advised Lionheart Ventures on investing in AGI-related companies ethically, including evaluating 4 such firms.  Research No relevant public research for 2021. Finances They spent $14,000 in 2020 and $10,000 in 2021, and plan to spend around $100,000-300,000 in 2022.  They recently received ‘substantial’ funding, and hence are not actively seeking donations at the moment, though if you wanted to donate anyway you could donate [here](https://www.convergenceanalysis.org/support-us/). SERI: The Stanford Existential Risk Initiative ---------------------------------------------- SERI is a Stanford based student-faculty collaboration working on existential risk issues, founded in 2020; their website is [here](https://cisac.fsi.stanford.edu/stanford-existential-risks-initiative/content/stanford-existential-risks-initiative). Research GAA's [Nuclear Espionage and AI Governance](https://forum.effectivealtruism.org/posts/CKfHDw5Lmoo6jahZD/nuclear-espionage-and-ai-governance-1) provides an overview of the impact of communist spies on the Manhattan project, and some potential lessons for AI safety.  It suggests that spying is more important if the scaling hypothesis is false and if AI projects are nationalised (as then nationalism could be a motivator, and groups might need to steal hardware rather if they can't buy it). It seems that generally spying is bad, but he does note that secrecy tends to beget secrecy, and could be hard to combine with interpretability, which might be important for alignment. See also the discussion [here](https://www.lesswrong.com/posts/aBixCPqSnTsPsTJBQ/truthful-ai-developing-and-governing-ai-that-does-not-lie). #Strategy **Other Research** ------------------ I would like to emphasize that there is a lot of research I didn't have time to review, especially in this section, as I focused on reading organisation-donation-relevant pieces. In particular there is a lot of good work on the [Alignment Forum](https://www.alignmentforum.org/). So please do not consider it an insult that your work was overlooked! Filan's [AXRP - the AI X-risk Research Podcast](https://axrp.net/) is a new podcast dedicated to discussing AI safety work.  #Overview lifelonglearner and Hase's [Opinions on Interpretable Machine Learning and 70 Summaries of Recent Papers](https://www.alignmentforum.org/posts/GEPX7jgLMB8vR2qaK/opinions-on-interpretable-machine-learning-and-70-summaries) is a ridiculously comprehensive overview of the work that has been done on making ML systems human-comprehensible over the last few years. I am going to have to admit I didn't read it all. #Interpretability Turner's [Satisficers Tend To Seek Power: Instrumental Convergence Via Retargetability](https://www.lesswrong.com/posts/nZY8Np759HYFawdjH/satisficers-tend-to-seek-power-instrumental-convergence-via) argues that a wide range of policies, not just optimisers, are motivated to seek to control their environment, Omohundro-style. This is bad news insomuch as it presents a problem with various attempts to make AI 'unambitious' and hence safe. #AgentFoundations Wentworth's [Utility Maximization = Description Length Minimization](https://www.alignmentforum.org/posts/voLHQgNncnjjgAPH7/utility-maximization-description-length-minimization) shows that utility maximizers can be modelled as attempting to make the world simpler, according to a model of the world which assigns probability in accordance to utility. The maths is not complicated and once I read it the idea was obvious. Unfortunately it is now impossible for my to tell if it was obvious prior to reading - probably not! #AgentFoundations Jiang et al.'s [Delphi: Towards Machine Ethics and Norms](https://arxiv.org/abs/2110.07574) is transformer model trained on various ethical judgement datasets with a fun web frontend. They use five datasets, including Hendrycks et al.'s [Aligning AI with Shared Human Values](https://arxiv.org/abs/2008.02275) we discussed last year. There's been a lot of criticism of the model for producing absurd results (e.g. [here](https://www.aiweirdness.com/stealing-a-giraffe-from-the-zoo-only-if-its-a-really-cool-giraffe/)) but it gave good responses to all but one of the ~30 prompts I gave it, including some attempted tricky ones; the only exceptions were I suspect side effects of their 'fix' for its prior political incorrectness. #ValueLearning Wentworth's [How To Get Into Independent Research On Alignment/Agency](https://www.lesswrong.com/posts/P3Yt66Wh5g7SbkKuT/how-to-get-into-independent-research-on-alignment-agency#Meta) describes in a quite practical way John's experiences and advice for doing useful AI work outside of a research org. Given recent dramatic increase in the viability of this as a career (largely due to the LTFF) I thought this was a good post, for making more people aware of this possibility if nothing else. #Overview Cihon et al.'s [Corporate Governance of Artificial Intelligence in the Public Interest](https://www.mdpi.com/2078-2489/12/7/275) expands on Belfield's work last year to show a very wide variety of ways in which AI corporations can be influenced, collaboratively and adversarially, to change their behaviour. My main concern is it is not clear how to make sure these structures actually do useful work, as opposed to filling up with grifters and ideologues; they list some past 'success stories', but it is not clear to me that many of these instances of influencing corporate behaviour actually had a \*positive\* influence. (Also, more mundanely, they misunderstood why shareholders have more influence than bondholders: because shareholders are the residual claimant on cashflows). Researchers from GCRI, Legal Priorities Project were also named authors on the paper. #Policy Cihon et al.'s [AI Certification: Advancing Ethical Practice by Reducing Information Asymmetries](https://gcrinstitute.org/ai-certification-advancing-ethical-practice-by-reducing-information-asymmetries/) surveys the existing landscape for AI ethical certifications (basically all near-term) and discusses the potential for Xrisk relevant certification.  Researchers from GCRI,Legal Priorities Project were also named authors on the paper. #Policy Prunkl et al.'s [Institutionalizing ethics in AI through broader impact requirements](http://www.cs.jhu.edu/~misha/DIReadingSeminar/Papers/Prunkl21.pdf) compares the NeuroIPS ethics statements to similar things in other fields and considers their impact. They also include a number of suggestions for ameliorating their weaknesses. Researchers from Gov.AI were also named authors on the paper. #Policy Ashurst et al.'s [AI Ethics Statements: Analysis and lessons learnt from NeurIPS Broader Impact Statements](https://arxiv.org/abs/2111.01705) provides descriptive statistics around the statements, and discusses why they were discontinued. It seems that in general they were not very successful at causing researchers to pay attention to important things. Researchers from Gov.AI were also named authors on the paper. #Policy Davis's [Feature Selection](https://www.lesswrong.com/posts/dYspinGtiba5oDCcv/feature-selection) is a very well written short story about what it feels like on the inside to be a ML algorithm. I don't want to spoil it, but it does a good job illustrating various related points around e.g. Robustness. #Fiction Niconiconi's [Whole Brain Emulation: No Progress on C. elgans After 10 Years](https://www.lesswrong.com/posts/mHqQxwKuzZS69CXX5/whole-brain-emulation-no-progress-on-c-elgans-after-10-years) gives an update on progress (or lack thereof) in whole brain emulation for C. elgans. It seems that basically no-one was motivated to fund it so little progress has been made in the last 10 years. #Forecasting Guterres's [Our Common Agenda](https://www.un.org/en/un75/common-agenda) is a report by the UN secretary general. It is largely a list of pieties, but mentions existential risks and AI (but not directly AI Xrisk). #Policy Brown et al.'s [Value Alignment Verification](https://arxiv.org/abs/2012.01557) proposes a number of tests to determine if an agent is aligned with a human. Somewhat implausibly to me they suggest this works even in cases where both human and AI are black boxes, so long as they share an ontology.  Researchers from CHAI were also named authors on the paper. #ValueLearning Liu & Maas's ['Solving for X?' Towards a problem-finding framework to ground long-term governance strategies for artificial intelligence](https://www.researchgate.net/publication/342774816_%27Solving_for_X%27_Towards_a_problem-finding_framework_to_ground_long-term_governance_strategies_for_artificial_intelligence) argues we should spend less time trying to solve AI governance problems and more time looking for new problems. mentions supply chains highly vulnerable to pandemics but they actually held up pretty well? Heavy on buzz words. Researchers from CSER were also named authors on the paper. #Strategy Andrus et al.'s [AI Development for the Public Interest: From Abstraction Traps to Sociotechnical Risks](https://arxiv.org/abs/2102.04255) makes some comments about the relationship between technical and social problems and grad school.  Researchers from CHAI were also named authors on the paper. #Strategy Chatila et al.'s [Trustworthy A](https://smile.amazon.com/gp/product/B08W3XZ1TJ/ref=ppx_yo_dt_b_d_asin_title_o00?ie=UTF8&psc=1)I provides some high-level discussion of issues like interpretability and human rights.  I was surprised by the confident assertion that because machines 'can only decide and act within a bounded set of possibilities' that they cannot make ethical decisions. Researchers from CHAI were also named authors on the paper. #Strategy Maclure & Russell's [AI for Humanity: The Global Challenges](https://smile.amazon.com/gp/product/B08W3XZ1TJ/ref=ppx_yo_dt_b_d_asin_title_o00?ie=UTF8&psc=1) describes some possible applications of AI to the Sustainable Development Goals.  Researchers from CHAI were also named authors on the paper. #ShortTerm Cave et al.'s [Using AI ethically to tackle covid-19](https://www.bmj.com/content/372/bmj.n364) raises various standard NearTerm objections against using AI to combat covid. I found this pretty unconvincing; none of the 'harms' it raises seem material compared to a the pandemic. If an ML system for diagnosing covid based on the sound of coughs can save people's lives, the fact that not everyone has a smartphone doesn't seem like a good reason to ban it. Researchers from CSER were also named authors on the paper. #NearTerm Klinova & Korinek's [AI and Shared Prosperity](https://arxiv.org/abs/2105.08475) recommends AI developers analyse the labour market impacts of their work. I think asking ML engineers - not even economists! - to attempt to micro-manage the economy in this way is a mistake, for reasons described by Lawson [here](https://www.adamsmith.org/research/these-are-the-droids-youre-looking-for). Researchers from Gov.AI were also named authors on the paper. #NearTerm **Capital Allocators & Other Organisations** ============================================ One of my goals with this document is to help donors make an informed choice between the different organisations. However, it is quite possible that you regard this as too difficult, and wish instead to donate to someone else who will allocate on your behalf. This is of course much easier; now instead of having to solve the *Organisation Evaluation Problem*, all you need to do is solve the dramatically simpler *Organisation Evaluator Organisation Evaluation Problem*. It’s worth noting that many of the orgs in this category, being supported by large endowments, do not really take outside money. **LTFF: Long-term future fund** ------------------------------- LTFF is a globally based EA grantmaking organisation founded in 2017, currently lead by Asya Bergal and part of [EA Funds](https://funds.effectivealtruism.org/). They are one of four funds set up by CEA (but now operationally independent, though they still report to the CEA board) to allow individual donors to benefit from specialised capital allocators; this one focuses on long-term future issues, including a large focus on AI Alignment. Their website is [here](https://app.effectivealtruism.org/funds/far-future). In 2021 they did a May grant round ([writeup](https://funds.effectivealtruism.org/funds/payouts/may-2021-long-term-future-fund-grants), [discussion](https://forum.effectivealtruism.org/posts/diZWNmLRgcbuwmYn4/long-term-future-fund-may-2021-grant-recommendations)). At time of writing there are no public write-ups for the grants from rest of the year. Grant applications are now accepted on a rolling basis: you can apply at any time of year. The fund is now run by four people (plus advisors), and the grants have gone to a wide variety of causes, many of which would simply not be accessible to individual donors. The fund managers are currently: ●        Asya Bergal ●        Adam Gleave ●        Oliver Habryka ●        Evan Hubinger Evan is new, replacing Helen Toner, who left, and Matt Wage, who became an advisor. I know Asya and Habryka reasonably well and think they will make generally good grants; Adam and Evan I know less well but seem also good. There has been a fair bit of manager turnover, and this will probably continue. Not mentioned on the website there were also several part-time managers in 2021, a practice which seems likely to continue with different people: * Daniel Eth * Ozzie Gooen * Luisa Rodriguez * [one other non-voting ‘advisor’ who requested anonymity] * [(sort of) Jonas Vollmer](https://forum.effectivealtruism.org/posts/diZWNmLRgcbuwmYn4/long-term-future-fund-may-2021-grant-recommendations#Longtermist_Entrepreneurship_Fellowship)  The managers now have the option to be paid for their work by CEA (on an OpenPhil grant). In total for 2021, based on my calculations, they granted around $4.96m. In general most of the grants seem at least plausibly valuable to me, and many seemed quite good indeed. There weren’t any in 2021 that seemed significantly negative. I estimate that 66% of the dollars went to AI-relevant activities (including partial credit for some things), and 85% were to grants I would have made (again including partial credit). I attempted to classify the recommended by type. Note that ‘training’ means paying an individual to self-study. One type of funding I’m not really sure how to classify is income support, where a researcher already has a stipend, but the LTFF thinks they could be more effective if they didn’t have to worry so much about (in the grand scheme of things) relatively small amounts of money. I have deliberately omitted the exact percentages because this is an informal classification.   ![](https://lh3.googleusercontent.com/lFqp-lGcIuRc_LIT56mzlsZBDGsC09_xCO0ENuUnbLhlWaQljswqI-U2KsgZ1-8sDe1kvtNx1uFnuK2zK6wk9GkAGb5_SsrTtJ5b3_W3GQWwMIARxGh7yc8Ck043FpSvH4ILKLkj)Of these categories, I am most excited by the Individual Research, Event and Platform projects. I am generally somewhat sceptical of paying people to ‘level up’ their skills. (Many) individual donors are perfectly capable of evaluating large organisations that publicly advertise for donations. In donating to the LTFF, I think (many) donors are hoping to be funding smaller projects that they could not directly access themselves. As it is, such donors will probably have to consider such organisation allocations a mild ‘tax’ – to the extent that different large organisations are chosen then they would have picked themselves. Tetlock et al.’s forecasting work PhD, which the fund managers recommended $572,000, was the largest single grant (around 12% of the 2021 total), followed by EA Geneva on $310,000, and Amon Elders (PhD), BERI and Kristaps Zilgalvis (PhD), all on $250,000 each. I was able to view what the LTFF regarded as its ‘marginal but rejected’ applications from the May round; in general there were some decent projects there I’d be happy to fund. In the past all grants had to be approved by CEA before they are made; my understanding is this requirement is more pro forma now due to the increased independence. I only know of one grant previously vetoed, and this was widely agreed to have been a bad grant, so losing this aspect of quality control seems like a shame to me. The EA Funds have previously discussed trying to adopt an active grantmaking approach, where instead of just reactively evaluating proposals they receive they will actively search for good opportunities. However this does not seem to have happened to a significant degree yet. Nuño Sempere did an excellent retrospective on outcomes from the LTFF’s 2018-2019 grant rounds [here](https://forum.effectivealtruism.org/posts/Ps8ecFPBzSrkLC6ip/2018-2019-long-term-future-fund-grantees-how-did-they-do). My interpretation of his evaluation was generally quite positive for the LTFF: ![](https://lh4.googleusercontent.com/ZUSQVFkWL1FdfnA0vao91SyzL4hwsQDlem6uUj05ZgD2iSGoMnl0U66CyPrU2NET2ikg-0LlG8UjbvR4MPFoALsibOz7Wj9zC2rHhib1-u-IdIeafqb_SGBY_ubT89Vs1seBm6Bm)[Wentworth's How To Get Into Independent Research On Alignment/Agency](https://www.lesswrong.com/posts/P3Yt66Wh5g7SbkKuT/how-to-get-into-independent-research-on-alignment-agency) suggests that the LTFF has been crucial to enabling the emergence of independent safety researcher as a viable occupation; this seems like a very major positive for the LTFF. I am quite concerned about the lack of transparency the LTFF provides donors. In the past there have been a number of issues around this (e.g. incorrect numbers on the website, releasing or not releasing information and then claiming otherwise), but as a small volunteer-run organisation I figured these were to be expected. With the shift towards professional (paid) management, and a [stated intention to provide better disclosure](https://forum.effectivealtruism.org/posts/ZdeL8jzg3PAzjimik/giving-what-we-can-and-ea-funds-now-operate-independently-of#Donor_satisfaction), I expected things to become significantly better. However, this has not been the case. Partly this is the result of deliberate policy change. In 2020 they made an anonymous grant (roughly 3% of the total), and they have [now adopted a policy of allowing people to apply anonymously](https://forum.effectivealtruism.org/posts/LKdtHdETxSYAXwoW6/public-reports-are-now-optional-for-ea-funds-grantees). I understand why this could be appealing for applicants, and why the LTFF would want to not exclude potentially good but publicity-shy applicants. However by doing so they undermine the ability of the donor community to provide oversight, which is definitely a bit concerning to me. More concerning to me however what appears to be a lack of disclosure due to simple oversight. At time of writing (2021-12-20) the latest grant round mentioned on the LTFF website is April, despite apparently there also having been two since then – something donors would have no way of knowing without personally reaching out. The ‘Fund Payouts’ number, despite claiming to be ‘to date’, is around 8 months, 55 grants and ~$3.5m out of date. And despite having had many temporary Fund Managers this year, as far as I can see nowhere on the website are these mentioned. As a result overall my impression is that donors have much less accurate information available to them to evaluate the LTFF this year than they did in prior years. While I’m grateful to them for personally sharing drafts about their recent activities with me, ideally this would be shared directly with all donors. If you wish to donate to the LTFF you can do so [here](https://funds.effectivealtruism.org/donate/organizations). **OpenPhil: The Open Philanthropy Project** ------------------------------------------- The Open Philanthropy Project (separated from Givewell in 2017) is an organisation dedicated to advising Cari and Dustin Moskovitz on how to give away over $15bn to a variety of causes, including existential risk. They have made extensive donations in this area and probably represent both the largest pool of EA-aligned capital (at least pre-FTX) and the largest team of EA capital allocators. They described their strategy for AI governance, at a very high level, [here](https://www.openphilanthropy.org/blog/ai-governance-grantmaking). Grants They have a large and extremely capable grant evaluation team, though arguably small on a evaluator/dollar basis. In general I think they do a very good job of thinking strategically, analysing individual grants, and giving feedback to fundees for improvement. You can see their grants for AI Risk [here](https://www.openphilanthropy.org/giving/grants?field_focus_area_target_id_selective=532). It lists 26 AI Risk grants in the last 12 months, plus 2 other highly relevant ‘other’ grants. In total I estimate they spent about $68.5m on AI (giving partial credit for related grants). This was dominated by two large grants: * CSET: $47m * CHAI: $12m This compares to $324m for 135 in total grants over the period, so AI related work was around 21%. They put out an RFP for various Longtermist outreach programs [here](https://forum.effectivealtruism.org/posts/EEtTQkFKRwLniXkQm/open-philanthropy-is-seeking-proposals-for-outreach-projects). The OpenPhil AI Fellowship basically fully funds AI PhDs for students who want to work on the long term impacts of AI. Looking back at the 2018 and 2019 classes (who presumably will have had enough time to do significant work since receiving the grants), scanning the abstracts of their publications on their websites suggests that over half have no AI safety relevant publications in 2019, 2020 and 2021, and only two are [co]authors on what I would consider a highly relevant paper. Apparently it is somewhat intentional that these fellowships are [*not* intended to be specific to AI safety](https://forum.effectivealtruism.org/posts/fbw7mg2CzBiHqRibr/ai-safety-scholarships-look-worth-funding-if-other-funding-2#awTcbYBQpPK9PHsQx), though I do not really understand what they *are* intended for. OpenPhil suggested that part of the purpose was to [build a community](https://forum.effectivealtruism.org/posts/DXqxeg3zj6NefR9ZQ/open-philanthropy-our-progress-in-2019-and-plans-for-2020?commentId=BZuvt6rrDnSr8yn4F), which I don’t really understand, because there is limited evidence that the winners work together, at least for the first two cohorts. They also launched a [scholarship program](https://www.openphilanthropy.org/focus/other-areas/early-career-funding-individuals-interested-improving-long-term-future) last year which seems more tailored to people focused on the long-term future, though it is not AI specific, and they recommend AI longtermists to apply to the AI one first. There is also a separate scholarship program for [technology policy people](https://forum.effectivealtruism.org/posts/2sn8RWPaChvyuHCcp/ama-the-new-open-philanthropy-technology-policy-fellowship) as well. Their funding is sufficiently dominant in the EA movement that, as Linch pointed out, it can make independent evaluation difficult. Virtually everyone capable of doing so either has received OpenPhil money in the past or might want to do so in the future. Research Most of their research concerns their own granting, and is often non-public. Cotra's [The case for aligning narrowly superhuman models](https://www.alignmentforum.org/posts/PZtsoaoSLpKjjbMqM/the-case-for-aligning-narrowly-superhuman-models)  suggests we should work on making large, 'general' AI models better able to achieve specific human goals. The one-sentence description makes this sound pretty bad but it's actually an interesting idea. You take systems like GPT-3, which seem like they have the 'power' to solve many interest problems, but aren't 'motivated' to do so, and try to give them that motivation, \*without\* making them more powerful (e.g. scaling up). This could, perhaps, function as something of a dry-run for the big alignment task. One particular example she refers to as 'sandwiching'; aligning an AI to help a group of laymen reach expert-level performance on a task, because the experts provide you with a ground-truth for performance evaluation. See also the discussion [here](https://www.alignmentforum.org/posts/AyfDnnAdjG7HHeD3d/miri-comments-on-cotra-s-case-for-aligning-narrowly). Overall I thought this was an excellent paper. #Strategy Davidson's [Could Advanced AI Drive Explosive Economic Growth?](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth) discusses some simple economic growth models and what they suggest for future growth. The core insight - that human-level AI could un-do the demographic transition and unlock super-exponential growth again - should I think not be very surprising. #Forecasting Karnofsky's [All Possible Views About Humanity's Future Are Wild](https://forum.effectivealtruism.org/s/isENJuPdB3fhjWYHd/p/TwQzyP3QgttmuTHym), and the related sequence of posts, argues that all plausible inside views suggest the future is going to be weird - e.g. massive growth, extinction, value lock in etc. Many of these ideas are not new but they are well presented. #Forecasting Beckstead & Thomas's [A paradox for tiny possibilities and enormous values](https://globalprioritiesinstitute.org/nick-beckstead-and-teruji-thomas-a-paradox-for-tiny-probabilities-and-enormous-values/) discusses Pascalian arguments, and the problems that arise if you try to reject small-probability-massive-payoff cases.  In particular, they generalise beyond the expected utility framework. Researchers from GPI were also named authors on the paper. #Ethics Finances To my knowledge they are not currently soliciting donations from the general public, as they have a lot of money from Dustin and Cari, so incremental funding is less of a priority than for other organisations. They could be a good place to work however. **SFF: The Survival and Flourishing Fund** ------------------------------------------ SFF ([website](http://survivalandflourishing.fund/)) is a donor advised fund, taking over activities previously run by BERI, but now with a separate team. SFF was initially funded in 2019 by a grant of approximately $2 million from BERI, which in turn was funded by donations from philanthropist Jaan Tallinn; Jaan remains the largest funder. You can read Zvi’s experience of being an evaluator for the fund [here](https://www.lesswrong.com/posts/kuDKtwwbsksAW4BG2/zvi-s-thoughts-on-the-survival-and-flourishing-fund-sff). Grants In its grantmaking SFF uses an innovative allocation process to combine the views of many grant evaluators (described [here](http://survivalandflourishing.fund/sff-2019-q4)). SSF has published the results of two grantmaking rounds this year (described [here](https://survivalandflourishing.fund/sff-2021-h1-recommendations) and [here](https://survivalandflourishing.fund/sff-2021-h2-recommendations)), where they donated around $19.4m, of which I estimate around $13.8m (73%) was AI related, and 75% were to things I would have funded (giving partial credit both times). The largest donations in the year were to: * LTFF: $2.1m * Alpenglow/CLTR: 1.9m * Lesswrong/Lightcone: 1.9m * CLR: 1.2m * CFAR: 1.2m * ALLFED: 1.2m * David Krueger’s group at Cambridge: 1m **FTX Foundation** ------------------ The FTX foundation is in the process of being launched to distribute some of the profits from [FTX/Alameda](https://www.readthegeneralist.com/briefing/ftx-1), and hired Nick Beckstead (formerly a program officer at Open Phil making grants in this area) as CEO so I expect them to make large and thoughtful grants to highly relevant organisations.   **BERI: The Berkeley Existential Risk Initiative** -------------------------------------------------- BERI is a (formerly Berkeley-based) independent Xrisk organisation, founded by Andrew Critch but now led by Sawyer Bernath. They provide support to various university-affiliated existential risk groups to facilitate activities (like hiring engineers and assistants) that would be hard within the university context, alongside other activities - see their [FAQ](http://existence.org/faq) for more details. In 2019 they pivoted, dropping various non-core activities (e.g. grantmaking) and are now essentially entirely focused on providing support to researchers engaged in longtermist (mainly x-risk) work at universities and other institutions. They have five main [collaborations](https://existence.org/collaborations/): * FHI: The Future of Humanity Institute * CSER: The Center for the Study of Existential Risks * CHAI: The Center for Human Compatible AI * SERI: The Stanford Existential Risk Initiative (previously a trial collaboration) * ALL: The Autonomous Learning Laboratory at UMass Amherst (previously a trial collaboration) In addition they have a large number of trial collaborations: * CLTC: The Center for Long-Term Cybersecurity * CTPL: The Tech Policy Lab at Cornell * David Krueger’s unnamed lab at Cambridge * Dylan Hadfield-Menell’s lab at MIT * InterAct – the Interactive Autonomy and Collaborative Technologies Lab (Anca Dragan) * Meir Freidenberg and Joe Halpern at Cornell * The Anh Han group at Teesside * The Safe Robotics Laboratory at Princeton * The Sculpting Evolution Group at the MIT Media Lab * Yale Effective Altruism I think this is potentially a pretty attractive project. University affiliated organisations provide the connection to mainstream academia that we need, but run the risk of inefficiency both due to their lack of independence from the central university and also the relative independence of their academics. BERI potentially offers a way for donors to support the university affiliated ecosystem in a targeted fashion. In general they operate on a pull model, where they provide resources to help their groups achieve their goals, and seem quite unlikely to say ‘no’ unless the request was literally illegal or similar. BERI seems to effectively exercise discernment at the level of which organisations they collaborate with, not the individual project level. So if you were not a fan of the groups they collaborated with, supporting BERI would probably not be the right choice for you. They are apparently quite relaxed about getting credit for work, so not all the stuff they support will list them in the acknowledgments. Finances They spent $2,800,000 in 2020 and $2,300,000 in 2021, and plan to spend around $2,000,000 in 2022. They have around $2,400,000 in cash and pledged funding, suggesting (on a very naïve calculation) around 1.2 years of runway. BERI is now seeking support from the general public. If you wanted to donate you can do so [here](http://existence.org/donating/). Note that if you want to you can restrict the funding to specific collaborations if you wanted, though my guess is funging might be ~100% for small donors. **Nonlinear Fund** ------------------ Nonlinear is an internationally based independent Meta AI Safety organisation founded in 2021 by Kat Woods and Emerson Spartz. They aim to provide similar services as Kat did with Charity Entrepreneurship: helping launch new projects that provide value to the AI safety community. You can read about them [here](https://forum.effectivealtruism.org/posts/fX8JsabQyRSd7zWiD/introducing-the-nonlinear-fund-ai-safety-research-incubation), or on their website [here](https://www.nonlinear.org/). One of the big projects they plan to work on is helping EAs hire personal assistants, which seems like a potentially pretty effective way of unlocking people’s time, as well as several other projects which all seemed like broadly good ideas. Research Woods's [The Nonlinear Library](https://forum.effectivealtruism.org/posts/JTZTBienqWEAjGDRv/listen-to-more-ea-content-with-the-nonlinear-library) provides automatically generated voice versions of top EA content. Given that a lot of people like listening to podcasts, this seems like potentially a huge accessability improvement, which I could imagine more conservative organisations like CEA being concerned about offering for legal reasons. #Community Finances They are not actively soliciting donations but if you wanted to anyway you can by reaching out to Kat. **80,000 Hours** ---------------- 80,000 Hours is a London based EA Movement-Building organisation founded in 2011 by Will MacAskill & Ben Todd and currently lead by Ben Todd.  They are affiliated with CEA. Provides career research, coaching and headhunting for the world's most important careers, of which AI safety is a significant focus. Their research can be found [here](https://docs.google.com/spreadsheets/d/10YLvRlGDjW_D3aoAou0wDxvj7JUxneVr4ipuyFkCFh4/edit#gid=0). Their website is [here](https://80000hours.org/). During the year, Peters Hartree and McIntyre left, and María Gutiérrez Rojas might leave next year. They hired Bella Forristal, Benjamin H, Matt Reardon and Alex Lawsen. Their ability to make connections for people seems extremely valuable. 80,000 Hours's [AI/ML safety research job board](https://80000hours.org/job-board/ai-ml-safety-research/) collects various jobs that could be valuable for people interested in AI safety. At the time of writing it listed 128 positions, all of which seemed like good options that it would be valuable to have sensible people fill. I suspect most people looking for AI jobs would find some on here they hadn't heard of otherwise, though of course for any given person many will not be appropriate. They also have job boards for other EA causes. #Careers They have a very good podcast, readers might be interested in these episodes from this year: * [Owen Cotton-Barratt](https://80000hours.org/podcast/episodes/owen-cotton-barratt-epistemic-systems/) * [Ajeya Cotra](https://80000hours.org/podcast/episodes/ajeya-cotra-worldview-diversification/) * [Brian Christian](https://80000hours.org/podcast/episodes/brian-christian-the-alignment-problem/) * [Christian Tarsney](https://80000hours.org/podcast/episodes/christian-tarsney-future-bias-fanaticism/) * [Brian Tse](https://80000hours.org/stories/brian-tse/) * [Chris Olah](https://80000hours.org/podcast/episodes/chris-olah-unconventional-career-path/) * [Holden Karnofsky](https://80000hours.org/podcast/episodes/holden-karnofsky-most-important-century/) * [Carl Shulman](https://80000hours.org/podcast/episodes/carl-shulman-common-sense-case-existential-risks/) 80k also produced what I now regard as my ‘default’ non-text-based [EA intro link](https://80000hours.org/podcast/effective-altruism-an-introduction/). Finances They spent $3,050,000 in 2020 and $3,032,000 in 2021, and plan to spend around $3,600,000 in 2022. They have around $6,600,000 in cash and pledged funding, suggesting (on a very naïve calculation) around 1.8 years of runway. If you wanted to donate you can do so [here](https://80000hours.org/support-us/donate/). AISS: AI Safety Support ----------------------- AISS is a globally based independent AI Safety Support organisation founded in 2020 by JJ Hepburn and Linda Linsefors, growing out of the AI Safety Camps which remain a project of their organisation. They aim to provide coaching and support services to people early in the AI safety researcher pipeline. Finances They spent $0 in 2020 and $170,000 in 2021, and plan to spend around $650,000 in 2022. They have around $440,000 in cash and pledged funding, suggesting (on a very naïve calculation) around 0.7 years of runway. If you wanted to donate you could do so [here](https://www.aisafetysupport.org/donate).  **Other News** ============== Google [continues to improve](https://cloud.google.com/blog/products/ai-machine-learning/google-breaks-ai-performance-records-in-mlperf-with-worlds-fastest-training-supercomputer) the performance of its ASICs. EA Cambridge built an [AI Safety Fundamentals curriculum](https://forum.effectivealtruism.org/posts/BpAKCeGMtQqqty9ZJ/agi-safety-fundamentals-curriculum-and-application#comments) to run in the new year that is accepting applications. Facebook got a lot of (somewhat misleading) negative press over leaked reports that its products made users unhappy; Nir Eyal [points out](https://twitter.com/nireyal/status/1443882562200354821) that this sort of forced-openness reduces the incentives for tech companies to try to address such problems: if they had never researched the issue in the first place there would have been nothing to leak. The EU has a [proposal](https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A52021PC0206) for a big AI regulation, somewhat modelled after GDPR. It focuses on the use of AI in what they perceive to be ‘high risk’ areas, like biometrics, utility infrastructure, and personal vetting. For these use cases the requirements are quite restrictive, demanding technical documentation and human overrides. However, it does not seem to apply to pre-deployment systems, and focuses on AI use cases, rather than the power of the system, to the extent that AGI systems not used in high risk applications are [explicitly exempted](https://forum.effectivealtruism.org/posts/3itL9GJcxvQC5Pp5D/eu-ai-act-now-has-a-section-on-general-purpose-ai-systems). So it seems that much of e.g. Deepmind would currently be triply exempted: the UK has left the EU, much of their work is pre-market, and AGI is explicitly not a focus. It does ban subliminal messaging, which seems good – we do not want AIs changing people’s values – but excludes military AIs. In theory the institutions set up by this could provide infrastructure for further future AGI regulation, but in practice often EU regulations are not amended despite clear deficiencies, and political attention may move elsewhere. I’ve been told that ‘it will hurt EU AI companies, slowing down progress and reducing competition’ is *not* the reason policy EAs like it. See also [here](https://forum.effectivealtruism.org/posts/bd7yr3eozzzhMuKCi/what-is-the-eu-ai-act-and-why-should-you-care-about-it). **Organisation Second Preferences** =================================== A new strategy I employed this year was to ask each organisation I contacted which organisation *other than themselves* they would be most excited to receive funding. I figured this could be a good way to take advantage of their domain specific knowledge, including of research directions, strategies and personal quality, in an efficient manner. There is however a potential bias towards well known and socially central organisations. Not every organisation was willing to name other orgs they preferred funding to go to; I should probably have made explicit that I wouldn’t share this info except in aggregated (and hence largely anonymised) form. You should probably assume that organisations I had better social bonds with would be more likely to share this info. The clear winner of this was the LTFF; no other organisation came close. Of course it is possible that some of these organisations may have thought that the LTFF might give them grants, but their expected share of such incremental dollars is likely small, and I think most of these reports were honest representations of their views. The LTFF was so far ahead of any other organisation that this seems like a significant data point in their favour. A distant second were non-specific sentiments along the lines of “fund something that seems underfunded”. **Methodological Thoughts** =========================== **Inside View vs Outside View** ------------------------------- This document is written mainly, but not exclusively, using publicly available information (as well as emailing the organisations with a few simple questions). In the tradition of active management, I hope to synthesise many pieces of individually well known facts into a whole which provides new and useful insight to readers. Advantages of this are that 1) it is relatively unbiased, compared to inside information which invariably favours those you are close to socially and 2) most of it is [legible](https://smile.amazon.com/Seeing-Like-State-Condition-Institution-ebook/dp/B00D8JJYWA/ref=sr_1_1?crid=24NCSS2E2V0FH&keywords=seeing+like+a+state&qid=1574382664&sprefix=seeing+like+a%2Caps%2C177&sr=8-1) and verifiable to readers. The disadvantage is that there are probably many pertinent facts that I am not a party to! Wei Dai has written about how [much discussion now takes place in private google documents](https://www.lesswrong.com/posts/hnvPCZ4Cx35miHkw3/why-is-so-much-discussion-happening-in-private-google-docs) – for example [this Drexler piece](https://www.fhi.ox.ac.uk/wp-content/uploads/Reframing_Superintelligence_FHI-TR-2019-1.1-1.pdf?asd=sa) apparently; in most cases I do not have access to these. If you want the inside scoop I am not your guy; all I can supply is exterior scooping. We focus on papers, rather than outreach or other activities. This is partly because they are much easier to measure; while there has been a large increase in interest in AI safety over the last year, it’s hard to work out who to credit for this, and partly because I think progress has to come by persuading AI researchers, which I think comes through technical outreach and publishing good work, not popular/political work. **Organisations vs Individuals** -------------------------------- Many capital allocators seem to operate under a sort of [Great Man](https://en.wikipedia.org/wiki/Great_man_theory) theory of investment, whereby the most important thing is to identify a guy to invest in who is really clever and ‘gets it’. I think there is a lot of merit in this (as argued [here](https://pubsonline.informs.org/doi/abs/10.1287/mnsc.2020.3646?fbclid=IwAR2w1JKTmiXBXB9-AANQxsyi_z5fKqj0P4jNlhJ2I3L70hMSvPi2Vpt24GA) for example); however, I think I believe in it less than they do. In particular, I worry that this approach leads to over-funding skilled rhetoricians and those the investor/donor is socially connected to. Perhaps as a result of my institutional investment background, I place a lot more weight on historical results. Also, as a practical matter, it is hard for individual donors to fund individual researchers. But as part of a concession to the individual-first view I’ve started asking organisations if anyone significant has joined or left recently, though in practice I think organisations are far more willing to highlight new people joining than old people leaving. Judging organisations on their historical output is naturally going to favour more mature organisations. A new startup, whose value all lies in the future, will be disadvantaged. However, I think that this is the correct approach for donors who are not tightly connected to the organisations in question. The newer the organisation, the more funding should come from people with close knowledge. As organisations mature, and have more easily verifiable signals of quality, their funding sources can transition to larger pools of less expert money. This is how it works for startups turning into public companies and I think the same model applies here. (I actually think that even those with close personal knowledge should use historical results more, to help overcome their biases.) This judgement involves analysing a large number of papers relating to Xrisk that were produced during 2021. Hopefully the year-to-year volatility of output is sufficiently low that this is a reasonable metric; I have tried to indicate cases where this doesn’t apply. I also attempted to include papers during December 2020, to take into account the fact that I'm missing the last month's worth of output from 2021, but I can't be sure I did this successfully. **Politics** ------------ My impression is that policy on most subjects, especially those that are more technical than emotional is generally made by the government and civil servants in consultation with, and being lobbied by, outside experts and interests. Without expert (e.g. top ML researchers in academia and industry) consensus, no useful policy will be enacted. Pushing directly for policy seems if anything likely to hinder expert consensus. Attempts to directly influence the government to regulate AI research seem very adversarial, and risk being pattern-matched to ignorant technophobic opposition to GM foods or other kinds of progress. We don't want the 'us-vs-them' situation that has occurred with climate change, to happen here. AI researchers who are dismissive of safety law, regarding it as an imposition and encumbrance to be endured or evaded, will probably be harder to convince of the need to voluntarily be extra-safe - especially as the regulations may actually be totally ineffective. The only case I can think of where scientists are relatively happy about punitive safety regulations, nuclear power, is one where many of those initially concerned were scientists themselves, and also had the effect of basically ending any progress in nuclear power (at great cost to climate change). Given this, I actually think policy outreach to the general population is probably negative in expectation. If you’re interested in this, I’d recommend you read [this blog pos](https://www.lesswrong.com/posts/DJB82jKwgJE5NsWgT/some-cruxes-on-impactful-alternatives-to-ai-policy-work)t from a few years back. **Openness** ------------ I think there is a strong case to be made that openness in AGI capacity development is bad. As such I do not ascribe any positive value to programs to ‘democratize AI’ or similar. One interesting question is how to evaluate non-public research. For a lot of safety research, openness is clearly the best strategy. But what about safety research that has, or potentially has, capabilities implications, or other infohazards? In this case it seems best if the researchers do not publish it. However, this leaves funders in a tough position – how can we judge researchers if we cannot read their work? Maybe instead of doing top secret valuable research they are just slacking off. If we donate to people who say “trust me, it’s very important and has to be secret” we risk being taken advantage of by charlatans; but if we refuse to fund, we incentivize people to reveal possible infohazards for the sake of money. (Is it even a good idea to publicise that someone else is doing secret research?) For similar reasons I prefer research to not be behind paywalls or inside expensive books, but this is a significantly less important issue. More prosaically, organisations should make sure to upload the research they have published to their website! Having gone to all the trouble of doing useful research it is a constant shock to me how many organisations don’t take this simple step to significantly increase the reach of their work. Additionally, several times I have come across incorrect information on organisation’s websites. **Research Flywheel** --------------------- My basic model for AI safety success is this: 1. Identify interesting problems 1. As a byproduct this draws new people into the field through altruism, nerd-sniping, apparent tractability 2. Solve interesting problems 1. As a byproduct this draws new people into the field through credibility and prestige 3. Repeat One advantage of this model is that it produces both object-level work and field growth. Over time, hopefully an increasingly large fraction of AI researchers will be safety conscious, such that they voluntarily choose to adopt safer techniques, due to the desires of workers, management and specialist investors. This theory of change does not operate via politicians, governments or voters. It does have some weak spots, e.g. China. There is also some value in arguing for the importance of the field (e.g. Bostrom’s Superintelligence) or addressing criticisms of the field. Noticeably absent are strategic pieces. I find that a lot of these pieces do not add terribly much incremental value. Additionally, my suspicion is that strategy research is, to a certain extent, produced exogenously by people who are interested / technically involved in the field. This does not apply to technical strategy pieces, about e.g. whether CIRL or Amplification is a more promising approach. There is somewhat of a paradox with technical vs ‘wordy’ pieces however: as a non-expert, it is much easier for me to understand and evaluate the latter, even though I think the former are much more valuable. **Differential AI progress** ---------------------------- There are many problems that need to be solved before we have safe general AI, one of which is not producing *unsafe* general AI in the meantime. If nobody was doing non-safety-conscious research there would be little risk or haste to AGI – though we would be missing out on the potential benefits of safe AI. There are several consequences of this: * To the extent that safety research also enhances capabilities, it is less valuable. * To the extent that capabilities research re-orientates subsequent research by third parties into more safety-tractable areas it is more valuable. * To the extent that safety results would naturally be produced as a by-product of capabilities research (e.g. autonomous vehicles) it is less attractive to finance. One approach is to research things that will make contemporary ML systems safer, because you think AGI will be a natural outgrowth from contemporary ML. This has the advantage of faster feedback loops, but is also more replaceable (as per the previous section). Another approach is to try to reason directly about the sorts of issues that will arise with superintelligent AI. This work is less likely to be produced exogenously by unaligned researchers, but it requires much more faith in theoretical arguments, unmoored from empirical verification. ### **Near-term safety AI issues** Capacity building VS tolerating poor epistemics? Many people want to connect AI existential risk issues to ‘near-term’ issues; I am generally sceptical of this. For example, autonomous cars seem to risk only localised tragedies (though if they were hacked and all crashed simultaneously that would be much worse), and private companies should have good incentives here. Unemployment concerns [seem exaggerated](https://www.adamsmith.org/research/these-are-the-droids-youre-looking-for) to me, as they have been for most of history (new jobs will be created), at least until we have AGI, at which point we have bigger concerns. Similarly, I generally think concerns about algorithmic bias are essentially political - I recommend [this presentation](https://www.chrisstucchio.com/pubs/slides/crunchconf_2018/slides.pdf) - though there is at least some connection to the value learning problem there. Some people argue that work on these near AI issues is worthwhile because it can introduce people to the broader risks around poor AI alignment. It could also lead to the creation of AI governance institutions that could then do useful work later. However, it seems somewhat disingenuous, it risks attracting grifters while putting off people who recognise that these are bad concerns. For example, [the paper mentioned above](https://www.adamsmith.org/research/these-are-the-droids-youre-looking-for) rejects the precautionary principle for AI on the basis of rejecting bad arguments about unemployment - had these pseudo-strawman views not been widespread, it would have been harder to reach this unfortunate conclusion. It’s also the case many of the policies people recommend as a result of these worries are potentially very harmful. A good example is GDPR and similar privacy regulations (including HIPAA) which have made many good things much more difficult - including degrading our ability to track the pandemic. Some interesting speculation I read is the idea that discussing near AI safety issues might be a sort of ‘greenwashing’ immune response to Xrisk concerns. The ability to respond to long-term AI safety concerns with “yes, we agree AI ethics is very importance, and that’s why we’re working on privacy and decolonising AI” seems like a very rhetorically powerful move. **Financial Reserves** ---------------------- Charities like having financial reserves to provide runway, and guarantee that they will be able to keep the lights on for the immediate future. This could be justified if you thought that charities were expensive to create and destroy, and were worried about this occurring by accident due to the whims of donors. Unlike a company which sells a product, it seems reasonable that charities should be more concerned about this. Donors prefer charities to not have too much reserves. Firstly, those reserves are cash that could be being spent on outcomes now, by either the specific charity or others. Valuable future activities by charities are supported by future donations; they do not need to be pre-funded. Additionally, having reserves increases the risk of organisations ‘going rogue’, because they are insulated from the need to convince donors of their value. As such, in general I do not give full credence to charities saying they need more funding because they want much more than 18 months or so of runway in the bank. If you have a year’s reserves now, after this December you will have that plus whatever you raise now, giving you a margin of safety before raising again next year. I estimated reserves = (cash and grants) / (next year’s budget). In general I think of this as something of a measure of urgency. However despite being prima facie a very simple calculation there are many issues with this data. As such these should be considered suggestive only. **Donation Matching** --------------------- In general I believe that charity-specific donation matching schemes [are somewhat dishonest](https://forum.effectivealtruism.org/posts/a2gYyTnAP36TxqdQp/matching-donation-fundraisers-can-be-harmfully-dishonest), despite my having provided matching funding for at least one in the past. Obviously cause-neutral donation matching is different and should be exploited. Everyone should max out their corporate matching programs if possible, and things like the [annual Facebook Match](https://www.eagivingtuesday.org/) continue to be great opportunities. **Poor Quality Research** ------------------------- Partly thanks to the efforts of the community, the field of AI safety is considerably more well respected and funded than was previously the case, which has attracted a lot of new researchers. While generally good, one side effect of this (perhaps combined with the fact that many low-hanging fruits of the insight tree have been plucked) is that a considerable amount of low-quality work has been produced. For example, there are a lot of papers which can be accurately summarized as asserting “just use ML to learn ethics”. Furthermore, the conventional peer review system seems to be extremely bad at dealing with this issue. The standard view here is just to ignore low quality work. This has many advantages, for example 1) it requires little effort, 2) it doesn’t annoy people. This conspiracy of silence seems to be the strategy adopted by most scientific fields, except in extreme cases like anti-vaxers. However, I think there are some downsides to this strategy. A sufficiently large milieu of low-quality work might degrade the reputation of the field, deterring potentially high-quality contributors. While low-quality contributions might help improve [Concrete Problems](https://arxiv.org/abs/1606.06565)’ citation count, they may use up scarce funding. Moreover, it is not clear to me that ‘just ignore it’ really generalizes as a community strategy. Perhaps you, enlightened reader, can judge that *“How to solve AI Ethics: Just use RNNs”* is not great. But is it really efficient to require everyone to independently work this out? Furthermore, I suspect that the idea that we can all just ignore the weak stuff is somewhat an example of typical mind fallacy. Several times I have come across people I respect according respect to work I found clearly pointless. And several times I have come across people I respect arguing persuasively that work I had previously respected was very bad – but I only learnt they believed this by chance! So I think it is quite possible that many people will waste a lot of time as a result of this strategy, especially if they don’t happen to move in the right social circles. Having said all that, I am not a fan of unilateral action, and am somewhat selfishly conflict-averse, so will largely continue to abide by this non-aggression convention. My only deviation here is to make it explicit. If you’re interested in this you might enjoy [this](https://80000hours.org/articles/accidental-harm/) by 80,000 Hours. **The Bay Area** ---------------- Much of the AI and EA communities, and especially the EA community concerned with AI, is located in the Bay Area, especially Berkeley and San Francisco. This is an extremely expensive place, and is dysfunctional both politically and socially. It seems to attract people who are extremely weird in socially undesirable ways, including [numerous](https://sfist.com/2019/11/19/four-people-in-guy-fawkes-masks-scare-sonoma-retreat/) [cult-like-things](https://medium.com/@zoecurzi/my-experience-with-leverage-research-17e96a8e540b) (though [some](https://www.facebook.com/caroline.f.hubert/posts/10222582154690056) are located elsewhere) –  though to be fair the people who are doing useful work in AI organisations seem to be drawn from a better distribution than the broader community. In general I think the centralization is bad, but if there must be centralization I would prefer it be almost anywhere other than Berkeley. Additionally, I think that, like VCs, some funders are geographically myopic, and biased towards funding things in the Bay Area. As such, I have a mild preference towards funding non-Bay-Area projects. **Conclusions** =============== The size of the field continues to grow, both in terms of funding and researchers. Both make it increasingly hard for individual donors. I’ve attempted to subjectively weigh the productivity of the different organisations against the resources they used to generate that output, and donate accordingly. An increasingly large amount of the best work is being done in places that are inside companies: Deepmind, OpenAI, Anthropic etc. While a good development overall - I am certainly very pleased that Deepmind has such productive a team - it means we can’t really do much here. And many of the not-for-profits are well funded. My constant wish is to promote a lively intellect and independent decision-making among readers; hopefully my laying out the facts as I see them above will prove helpful to some readers. Here is my eventual decision, [rot13'd](http://www.rot13.com/) so you can do come to your own conclusions first (which I strongly recommend): V jnag gb er-vgrengr gung V unir n ybg bs pbasyvpgf bs vagrerfg, fb guvf fubhyq abg or pbafvqrerq n 'erpbzzraqngvba' be nalguvat bs gur fbeg. Guvf vf whfg jurer V nz qbangvat guvf lrne. [Ha?]sbeghangryl n ybg bs gur betnavfngvbaf gung V guvax qb gur orfg erfrnepu qb abg frrz yvxr cnegvphyneyl nggenpgvir shaqvat bccbeghavgvrf guvf lrne. Znal ner rvgure sbe-cebsvg pbzcnavrf be nyernql unir fgebat svanapvat sbe gurve pheerag cynaf. Bguref fvzcyl qb abg cebivqr gur yriry bs qvfpybfher erdhverq sbe rinyhngvba. V qb erznva bcgvzvfgvp nobhg gur YGSS. Rira gubhtu gurve choyvp qvfpybfher unf orra jrnx guvf lrne, jung gurl'ir funerq cevingryl unf orra nqrdhngr, naq V guvax gurve fhccbeg sbe vaqvivqhny fnsrgl erfrnepuref vf rkgerzryl inyhnoyr. V jbhyq yvxr guvf gb or fhssvpvragyl bire-shaqrq gung fhssvpvrayl fxvyyrq crbcyr pna pbasvqragnyyl rffragvnyyl znxr n pnerre bhg YGSS shaqvat. Fb gung'f jurer V'z qbangvat guvf lrne. OREV cebonoyl jbhyq unir orra zl frpbaq cvpx. Ohg lbh fubhyq pbzr gb lbhe bja pbapyhfvbaf! However, I wish to emphasize that all the above organisations seem to be doing good work on the most important issue facing mankind. It is the nature of making decisions under scarcity that we must prioritize some over others, and I hope that all organisations will understand that this necessarily involves negative comparisons at times. Thanks for reading this far; hopefully you found it useful. Apologies to everyone who did valuable work that I excluded! If you found this post helpful, and especially if it helped inform your donations, please consider letting me and any organisations you donate to as a result know. **Disclosures** =============== I have not in general checked all the proofs in these papers, and similarly trust that researchers have honestly reported the results of their simulations. I have a large number of conflicts of interest that I cannot individually disclose. I shared drafts of the individual organisation sections with representatives from FHI, Gov.AI, CHAI, MIRI, GCRI, Redwood Research, BERI, Ought, AI Impacts, GPI, ARC, CSET, Lightcone, CLTR/Alpenglow, CLR, OpenPhil, FTX, LTFF, NonLinear, Rethink Priorities, 80k, CSER (and possibly others I forgot) My eternal gratitude to my anonymous reviewers for their invaluable reviewing. Any remaining mistakes are of course my own. I would also like to thank my wife and daughter for tolerating all the time I have spent/invested/wasted on this. **Looking for Research Assistant for Next Year** ================================================ Over time the amount of ground we need to cover here has increased and my time has become more scarce. I have been spending more time just collecting dispersed information and less time being analytical. As such I think it might make sense to take on a Research Assistant for next year; basically looking for someone who is diligent, reliable and interested in AI Xrisk to email organisations, find information on websites and enter into the spreadsheet. In the past CEA briefly expressed interest; possibly we could find funding for this. **Sources** =========== This is a list of all the articles cited with their own individual paragraph. Reputations for Resolve and Higher-Order Beliefs in Crisis Bargaining - 2021-03-11 - https://journals.sagepub.com/doi/full/10.1177/0022002721995549 AI Vignettes Project - 2021-06-25 - https://aiimpacts.org/ai-vignettes-project/ Aiken, Catherine - Classifying AI Systems - 2021-11-15 - https://cset.georgetown.edu/publication/classifying-ai-systems/ Alex, Neel; Lifland, Eli; Tunstall, Lewis; Thakur, Abhishek; Maham, Pegah; Riedel, C. Jess; Hine, Emmie; Ashurst, Carolyn; Sedille, Paul; Carlier, Alexis; Noetel, Michael; Stuhlmüller, Andreas - RAFT: A Real-World Few-Shot Text Classification Benchmark - 2021-10-28 - https://arxiv.org/abs/2109.14076 Andrus, McKane; Dean, Sarah; Gilbert, Thomas Krendl; Lambert, Nathan; Zick, Tom - AI Development for the Public Interest: From Abstraction Traps to Sociotechnical Risks - 2021-02-04 - https://arxiv.org/abs/2102.04255 Arnold, Zachary; Toner, Helen - AI Accidents: An Emerging Threat - 2021-07-15 - https://cset.georgetown.edu/publication/ai-accidents-an-emerging-threat/ Ashurst, Carolyn; Hine, Emmie; Sedille, Paul; Carlier, Alexis - AI Ethics Statements: Analysis and lessons learnt from NeurIPS Broader Impact Statements - 2021-11-02 - https://arxiv.org/abs/2111.01705 Baker, Jamie - Ethics and Artificial Intelligence: A Policymaker's Introduction - 2021-04-15 - https://cset.georgetown.edu/publication/ethics-and-artificial-intelligence/ Barnes, Beth; Christiano, Paul - Debate update: Obfuscated arguments problem - 2020-12-22 - https://www.alignmentforum.org/posts/PJLABqQ962hZEqhdB/debate-update-obfuscated-arguments-problem#comments Baum, Seth; de Neufville, Robert; Barrett, Tony; Fitzgerald, McKenna - GCRI Statement on the January 6 US Capitol Insurrection - 2021-01-15 - https://gcrinstitute.org/gcri-statement-on-the-january-6-us-capitol-insurrection/ Baum, Seth; Owe, Andrea - Artificial Intelligence Needs Environmental Ethics - 2021-11-14 - https://gcrinstitute.org/artificial-intelligence-needs-environmental-ethics/ Beckstead, Nick; Thomas, Teruji - A paradox for tiny possibilities and enormous values - 2021-07-15 - https://globalprioritiesinstitute.org/nick-beckstead-and-teruji-thomas-a-paradox-for-tiny-probabilities-and-enormous-values/ Brown, Daniel S.; Schneider, Jordan; Dragan, Anca D.; Niekum, Scott - Value Alignment Verification - 2020-12-02 - https://arxiv.org/abs/2012.01557 Buchanan, Ben; Lohn, Andrew; Musser, Micah; Sedova, Katerina - How Language Models Could Change Disinformation - 2021-05-15 - https://cset.georgetown.edu/publication/truth-lies-and-automation/ Cammarata, Nick; Goh, Gabriel; Carter, Shan; Voss, Chelsea; Schubert, Ludwig; Olah, Chris - Curve Circuits - 2021-01-30 - https://distill.pub/2020/circuits/curve-circuits/ Cave, Stephen; Whittlestone, Jess; Nyrup, Rune; Ó hÉigeartaigh, Seán; Calvo, Rafael - Using AI ethically to tackle covid-19 - 2021-03-16 - https://www.bmj.com/content/372/bmj.n364 Cebul, Matthew; Dafoe, Allan; Monteiro - Coercion and the Credibility of Assurances - 2021-07-15 - https://drive.google.com/file/d/1q-vRP19IznfPldBcaO6NglnSkyL7wYaL/view Chatila, Raja; Dignum, Virginia; Fisher, Michael; Giannotti, Fosca; Morik, Katharina; Russell, Stuart; Yeung, Karen - Trustworthy AI - 2021-02-06 - https://smile.amazon.com/gp/product/B08W3XZ1TJ/ref=ppx\_yo\_dt\_b\_d\_asin\_title\_o00?ie=UTF8&psc=1 Chen, Mark; Tworek, Jerry; Jun, Heewoo; Yuan, Qiming; Pinto, Henrique Ponde de Oliveira; Kaplan, Jared; Edwards, Harri; Burda, Yuri; Joseph, Nicholas; Brockman, Greg; Ray, Alex; Puri, Raul; Krueger, Gretchen; Petrov, Michael; Khlaaf, Heidy; Sastry, Girish; Mishkin, Pamela; Chan, Brooke; Gray, Scott; Ryder, Nick; Pavlov, Mikhail; Power, Alethea; Kaiser, Lukasz; Bavarian, Mohammad; Winter, Clemens; Tillet, Philippe; Such, Felipe Petroski; Cummings, Dave; Plappert, Matthias; Chantzis, Fotios; Barnes, Elizabeth; Herbert-Voss, Ariel; Guss, William Hebgen; Nichol, Alex; Paino, Alex; Tezak, Nikolas; Tang, Jie; Babuschkin, Igor; Balaji, Suchir; Jain, Shantanu; Saunders, William; Hesse, Christopher; Carr, Andrew N.; Leike, Jan; Achiam, Josh; Misra, Vedant; Morikawa, Evan; Radford, Alec; Knight, Matthew; Brundage, Miles; Murati, Mira; Mayer, Katie; Welinder, Peter; McGrew, Bob; Amodei, Dario; McCandlish, Sam; Sutskever, Ilya; Zaremba, Wojciech - Evaluating Large Language Models Trained on Code - 2021-07-07 - https://arxiv.org/abs/2107.03374 Christiano, Paul - A naive alignment strategy and optimism about generalization - 2021-06-09 - https://www.alignmentforum.org/posts/QvtHSsZLFCAHmzes7/a-naive-alignment-strategy-and-optimism-about-generalization Christiano, Paul - Another (outer) alignment failure story - 2021-04-07 - https://www.alignmentforum.org/posts/AyNHoTWWAJ5eb99ji/another-outer-alignment-failure-story Christiano, Paul - Teaching ML to answer questions honestly instead of predicting human answers - 2021-05-28 - https://ai-alignment.com/a-problem-and-three-ideas-800b42a14f66 Cihon, Peter; Kleinaltenkamp, Moritz; Schuett, Jonas; Baun, Seth - AI Certification: Advancing Ethical Practice by Reducing Information Asymmetries - 2021-06-02 - https://gcrinstitute.org/ai-certification-advancing-ethical-practice-by-reducing-information-asymmetries/ Cihon, Peter; Schuett, Jonas; Baun, Seth - Corporate Governance of Artificial Intelligence in the Public Interest - 2021-07-05 - https://www.mdpi.com/2078-2489/12/7/275 Clifton, Jesse - Collaborative game specification: arriving at common models in bargaining - 2021-03-06 - https://longtermrisk.org/collaborative-game-specification/ Clifton, Jesse - Weak identifiability and its consequences in strategic settings - 2021-02-15 - https://longtermrisk.org/weak-identifiability-and-its-consequences-in-strategic-settings/ Cohen, Michael; Hutter, Marcus; Nanda, Neel - Fully General Online Imitation Learning - 2021-02-17 - https://arxiv.org/abs/2102.08686 Cotra, Ajeya - The case for aligning narrowly superhuman models - 2021-05-03 - https://www.alignmentforum.org/posts/PZtsoaoSLpKjjbMqM/the-case-for-aligning-narrowly-superhuman-models#Isn\_t\_this\_not\_neglected\_because\_lots\_of\_people\_want\_useful\_AI\_ Crawford, Ali; Wulkan, Ido - Federal Prize Competitions - 2021-11-15 - https://cset.georgetown.edu/publication/federal-prize-competitions/ Critch, Andrew - What Multipolar Failure Looks Like, and Robust Agent-Agnostic Processes (RAAPs) - 2021-03-31 - https://www.alignmentforum.org/posts/LpM3EAakwYdS6aRKf/what-multipolar-failure-looks-like-and-robust-agent-agnostic Dafoe, Allan; Hatz, Sophia; Zhang, Baobao - Coercion and Provocation - 2019-11-14 - https://ora.ox.ac.uk/objects/uuid:fc9c9bd4-1cd1-45c4-9e3e-4cd9826171e4 Dafoe, Allan; Hughes, Edward; Bachrach, Yoram; Collins, Tantum; McKee, Kevin R.; Leibo, Joel Z.; Larson, Kate; Graepel, Thore - Open Problems in Cooperative AI - 2020-12-15 - https://arxiv.org/abs/2012.08630 Dafoe, Allan; Zwetsloot, Remco; Cebul, Matthew  - Reputations for Resolve and Higher-Order Beliefs in Crisis Bargaining - 2021-03-11 - https://journals.sagepub.com/doi/10.1177/0022002721995549 Daniels, Matthew; Murphy, Ben - National Power After AI - 2021-07-15 - https://cset.georgetown.edu/publication/national-power-after-ai/ Davidson, Tom - Could Advanced AI Drive Explosive Economic Growth? - 2021-06-25 - https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth Davis, Zach - Feature Selection - 2021-10-31 - https://www.lesswrong.com/posts/dYspinGtiba5oDCcv/feature-selection de Neufville, Robert; Baum, Seth - Collective Action on Artificial Intelligence: A Primer and Review - 2021-07-15 - https://gcrinstitute.org/collective-action-on-artificial-intelligence-a-primer-and-review/ Ding, Jeffrey - China's Growing Influence over the Rules of the Digital Road - 2021-04-15 - https://sci-hubtw.hkvisa.net/10.1353/asp.2021.0015 Ding, Jeffrey; Dafoe, Allan - Engines of Power: Electricity, AI, and General-Purpose Military Transformations - 2021-06-08 - https://arxiv.org/abs/2106.04338 Drexler, Eric - QNRs: Toward Language for Intelligent Machines - 2021-08-27 - https://www.fhi.ox.ac.uk/qnrs/ Evans, Owain; Cotton-Barratt, Owen; Finnveden, Lukas; Bales, Adam; Balwit, Avital; Wills, Peter; Righetti, Luca; Saunders, William - Truthful AI: Developing and governing AI that does not lie - 2021-10-13 - https://arxiv.org/abs/2110.06674 Everitt, Tom; Carey, Ryan; Langlois, Eric; Ortega, Pedro A; Legg, Shane - Agent Incentives: A Causal Perspective - 2021-02-02 - https://arxiv.org/abs/2102.01685 Fedasiuk, Ryan; Melot, Jennifer; Murphy, Ben - Harnessed Lightning - 2021-10-15 - https://cset.georgetown.edu/publication/harnessed-lightning/ Fernandez, Ronny - How energy efficient are human-engineered flight designs relative to natural ones? - 2020-12-10 - https://aiimpacts.org/are-human-engineered-flight-designs-better-or-worse-than-natural-ones/ Filan, Daniel - AXRP - the AI X-risk Research Podcast - 2020-12-23 - https://axrp.net/ Filan, Daniel; Casper, Stephen; Hod, Shlomi; Wild, Cody; Critch, Andrew; Russell, Stuart - Clusterability in Neural Networks - 2021-03-04 - https://arxiv.org/abs/2103.03386 Finnveden, Lukas - Extrapolating GPT-N performance - 2020-12-18 - https://www.alignmentforum.org/posts/k2SNji3jXaLGhBeYP/extrapolating-gpt-n-performance#comments Fischer, Sophie-Charlotte; Leung, Jade; Anderljung, Markus; O’Keefe, Cullen; Torges, Stefan; Khan, Saif M.; Garfinkel, Ben; Dafoe, Allan - AI Policy Levers: A Review of the U.S. Government’s Tools to Shape AI Research, Development, and Deployment - 2021-03-15 - https://www.governance.ai/research-paper/ai-policy-levers-a-review-of-the-u-s-governments-tools-to-shape-ai-research-development-and-deployment Fitzgerald, McKenna; Boddy, Aaron; Baum, Seth - 2020 Survey of Artificial General Intelligence Projects for Ethics, Risk, and Policy - 2020-12-31 - https://gcrinstitute.org/2020-survey-of-artificial-general-intelligence-projects-for-ethics-risk-and-policy/ GAA - Nuclear Espionage and AI Governance - 2021-10-04 - https://forum.effectivealtruism.org/posts/CKfHDw5Lmoo6jahZD/nuclear-espionage-and-ai-governance-1 Gabriel, Iason - Towards a Theory of Justice for Artificial Intelligence - 2021-10-27 - https://arxiv.org/abs/2110.14419 Galaz, Victor; Centeno, Miguel; Callahan, Peter; Causevic, Amar; Patterson, Thayer; Brass, Irina; Baum, Seth; Farber, Darry; Fischer, Joern; Garcia, David; McPhearson, Timon; Jimenex, Daniel; King, Brian; Larcey, Paul; Levy, Karen - Artificial Intelligence, Systemic Risks, and Sustainability - 2021-10-07 - https://www.sciencedirect.com/science/article/pii/S0160791X21002165?via%3Dihub Garfinkel, Ben - A Tour of Emerging Cryptographic Technologies - 2021-05-15 - https://www.governance.ai/research-paper/a-tour-of-emerging-cryptographic-technologies Garrabrant, Scott - Temporal Inference with Finite Factored Sets - 2021-10-23 - https://arxiv.org/abs/2109.11513 Gates, Vael; Callaway, Frederick; Ho, Mark; Griffiths, Thomas - A rational model of people’s inferences about others’ preferences based on  response times - 2021-03-15 - https://psyarxiv.com/25zfx/ Grace, Katja - Argument for AI x-risk from large impacts - 2021-09-29 - https://aiimpacts.org/argument-from-large-impacts/ Grace, Katja - Beyond fire alarms: freeing the groupstruck - 2021-09-26 - https://aiimpacts.org/beyond-fire-alarms-freeing-the-groupstruck/ Grace, Katja - Coherence arguments imply a force for goal-directed behavior - 2021-03-25 - https://aiimpacts.org/coherence-arguments-imply-a-force-for-goal-directed-behavior/ Greaves, Hilary; MacAskill, William - The case for strong longtermism - 2021-06-15 - https://globalprioritiesinstitute.org/hilary-greaves-william-macaskill-the-case-for-strong-longtermism-2/ Guterres, António - Our Common Agenda - 2021-09-10 - https://www.un.org/en/un75/common-agenda Hammond, Lewis; Fox, James; Everitt, Tom; Abate, Alessandro; Wooldridge, Michael - Equilibrium Refinements for Multi-Agent Influence Diagrams: Theory and Practice - 2021-02-09 - https://arxiv.org/abs/2102.05008 Hendrycks, Dan; Carlini, Nicholas; Schulman, John; Steinhardt, Jacob - Unsolved Problems in ML Safety - 2021-09-28 - https://arxiv.org/abs/2109.13916 Hendrycks, Dan; Mazeika, Mantas; Zou, Andy; Patel, Sahil; Zhu, Christine; Navarro, Jesus; Song, Dawn; Li, Bo; Steinhardt, Jacob - What Would Jiminy Cricket Do? Towards Agents That Behave Morally - 2021-10-25 - https://arxiv.org/abs/2110.13136 Hod, Shlomi; Casper, Stephen; Filan, Daniel; Wild, Cody; Critch, Andrew; Russell, Stuart - Detecting Modularity in Deep Neural Networks - 2021-10-13 - https://arxiv.org/abs/2110.08058 Hua, Shin-Shin; Belfield, Haydn - AI & Antitrust: Reconciling Tensions Between Competition Law and Cooperative AI Development - 2021-11-15 - https://yjolt.org/ai-antitrust-reconciling-tensions-between-competition-law-and-cooperative-ai-development Imbrie, Andrew; Gelles, Rebecca; Dunham, James; Aiken, Catherine - Evaluating Rhetorical Dynamics in AI - 2021-05-15 - https://cset.georgetown.edu/publication/contending-frames/ Jiang, Liwei; Hwang, Jena D.; Bhagavatula, Chandra; Bras, Ronan Le; Forbes, Maxwell; Borchardt, Jon; Liang, Jenny; Etzioni, Oren; Sap, Maarten; Choi, Yejin - Delphi: Towards Machine Ethics and Norms - 2021-10-14 - https://arxiv.org/abs/2110.07574 Karnofsky, Holden - All Possible Views About Humanity's Future Are Wild - 2021-07-13 - https://forum.effectivealtruism.org/s/isENJuPdB3fhjWYHd/p/TwQzyP3QgttmuTHym Klinova, Katya; Korinek, Anton - AI and Shared Prosperity - 2021-05-18 - https://arxiv.org/abs/2105.08475 Koch, Jack; Langosco, Lauro; Pfau, Jacob; Le, James; Sharkey, Lee - Objective Robustness in Deep Reinforcement Learning - 2021-05-28 - https://arxiv.org/abs/2105.14111 Kokotajlo, Daniel - Birds, Brains, Planes, and AI: Against Appeals to the Complexity/Mysteriousness/Efficiency of the Brain - 2021-01-18 - https://www.alignmentforum.org/posts/HhWhaSzQr6xmBki8F/birds-brains-planes-and-ai-against-appeals-to-the-complexity#comments Korinek, Anton; Stiglitz, Joseph - Artificial Intelligence, Globalization, and Strategies for Economic Development - 2021-02-04 - https://papers.ssrn.com/sol3/papers.cfm?abstract\_id=3812820 Laidlaw, Cassidy; Russell, Stuart - Uncertain Decisions Facilitate Better Preference Learning - 2021-01-15 - https://proceedings.neurips.cc/paper/2021/hash/7f141cf8e7136ce8701dc6636c2a6fe4-Abstract.html Lee, Kimin; Smith, Laura; Abbeel, Pieter - PEBBLE: Feedback-Efficient Interactive Reinforcement Learning via Relabeling Experience and Unsupervised Pre-training - 2021-06-09 - https://arxiv.org/abs/2106.05091 lifelonglearner; Hase, Peter - Opinions on Interpretable Machine Learning and 70 Summaries of Recent Papers - 2021-04-09 - https://www.alignmentforum.org/posts/GEPX7jgLMB8vR2qaK/opinions-on-interpretable-machine-learning-and-70-summaries Lin, Stephanie; Hilton, Jacob; Evans, Owain - TruthfulQA: Measuring How Models Mimic Human Falsehoods - 2021-10-08 - https://arxiv.org/abs/2109.07958 Lindner, David; Shah, Rohin; Abbeel, Pieter; Dragan, Anca - Learning What To Do by Simulating the Past - 2021-04-08 - https://arxiv.org/abs/2104.03946 Liu, Hin-Yan; Maas, Matthijs - 'Solving for X?' Towards a problem-finding framework to ground long-term governance strategies for artificial intelligence - 2021-02-00 - https://www.researchgate.net/publication/342774816\_%27Solving\_for\_X%27\_Towards\_a\_problem-finding\_framework\_to\_ground\_long-term\_governance\_strategies\_for\_artificial\_intelligence Maas, Matthijs - AI, Governance Displacement, and the (De)Fragmentation of International Law - 2021-03-22 - https://www.cser.ac.uk/resources/ai-governance-displacement-and-defragmentation-international-law/ Maas, Matthijs - Aligning AI Regulation to Sociotechnical Change - 2021-06-23 - https://papers.ssrn.com/sol3/papers.cfm?abstract\_id=3871635 Maas, Matthijs; Stix, Charlotte - Bridging the gap: the case for an ‘Incompletely Theorized Agreement’ on AI policy - 2021-01-18 - https://www.cser.ac.uk/resources/bridging-gap-case-incompletely-theorized-agreement-ai-policy/ Maclure, Jocelyn; Russell, Stuart - AI for Humanity: The Global Challenges - 2021-02-06 - https://smile.amazon.com/gp/product/B08W3XZ1TJ/ref=ppx\_yo\_dt\_b\_d\_asin\_title\_o00?ie=UTF8&psc=1 Manheim, David; Sandberg, Anders - What is the Upper Limit of Value? - 2021-01-27 - https://philarchive.org/rec/MANWIT-6 Mittelsteadt, Matthew - Mechanisms to Ensure AI Arms Control Compliance - 2021-02-15 - https://cset.georgetown.edu/publication/ai-verification/ Mogensen, Andreas - Do not go gentle: why the Asymmetry does not support anti-natalism - 2021-05-15 - https://globalprioritiesinstitute.org/do-not-go-gentle-why-the-asymmetry-does-not-support-anti-natalism-andreas-mogensen-global-priorities-institute-oxford-university/ Murphy, Ben - Translation: Ethical Norms for New Generation Artificial Intelligence Released - 2021-10-21 - https://cset.georgetown.edu/publication/ethical-norms-for-new-generation-artificial-intelligence-released/ Murphy, Ben - Translation: White Paper on Trustworthy Artificial Intelligence - 2021-09-14 - https://cset.georgetown.edu/publication/white-paper-on-trustworthy-artificial-intelligence/ Niconiconi - Whole Brain Emulation: No Progress on C. elgans After 10 Years - 2021-10-01 - https://www.lesswrong.com/posts/mHqQxwKuzZS69CXX5/whole-brain-emulation-no-progress-on-c-elgans-after-10-years Oesterheld, Caspar; Conitzer, Vincent - Safe Pareto Improvements for Delegated Game Playing - 2021-05-03 - https://users.cs.duke.edu/~conitzer/safeAAMAS21.pdf Ord, Toby; Mercer, Angus; Dannreuther, Sophie; Nelson, Cassidy; Lewis, Gregory; Millett, Piers; Whittlestone, Jess; Leung, Jade; Anderljung, Markus; Hilton, Sam; Belfield, Haydn - Future Proof: The Opportunity to Transform the UK's Resilience to Extreme Risks - 2021-06-15 - https://www.governance.ai/research-paper/futureproof-artificial-intelligence-chapter Owe, Andrea; Baum, Seth - Moral Consideration of Nonhumans in the Ethics of Artificial Intelligence - 2021-06-07 - https://gcrinstitute.org/moral-consideration-of-nonhumans-in-the-ethics-of-artificial-intelligence/ Owe, Andrea; Baum, Seth - The Ethics of Sustainability for Artificial Intelligence - 2021-11-17 - https://gcrinstitute.org/the-ethics-of-sustainability-for-artificial-intelligence/ Prunkl, Carina; Ashurst, Carolyn; Anderljung, Markus; Webb, Helena; Leike, Jan; Dafoe, Allan - Institutionalizing ethics in AI through broader impact requirements - 2021-02-17 - http://www.cs.jhu.edu/~misha/DIReadingSeminar/Papers/Prunkl21.pdf Roman, Charlotte; Dennis, Michael; Critch, Andrew; Russell, Stuart - Accumulating Risk Capital Through Investing in Cooperation - 2021-01-25 - https://arxiv.org/abs/2101.10305 Rudner, Tim; Toner, Helen - Key Concepts in AI Safety: An Overview - 2021-03-15 - https://cset.georgetown.edu/publication/key-concepts-in-ai-safety-an-overview/ Rudner, Tim; Toner, Helen - Key Concepts in AI Safety: Interpretability in Machine Learning - 2021-03-15 - https://cset.georgetown.edu/publication/key-concepts-in-ai-safety-interpretability-in-machine-learning/ Rudner, Tim; Toner, Helen - Key Concepts in AI Safety: Robustness and Adversarial Examples - 2021-03-15 - https://cset.georgetown.edu/publication/key-concepts-in-ai-safety-robustness-and-adversarial-examples/ Shah, Rohin; Wild, Cody; Wang, Steven H.; Alex, Neel; Houghton, Brandon; Guss, William; Mohanty, Sharada; Kanervisto, Anssi; Milani, Stephanie; Topin, Nicholay; Abbeel, Pieter; Russell, Stuart; Dragan, Anca - The MineRL BASALT Competition on Learning from Human Feedback - 2021-07-05 - https://arxiv.org/abs/2107.01969 Shlegeris, Buck - Redwood Research’s current project - 2021-09-21 - https://www.alignmentforum.org/posts/k7oxdbNaGATZbtEg3/redwood-research-s-current-project Shlegeris, Buck - The alignment problem in different capability regimes - 2021-09-21 - https://www.alignmentforum.org/posts/HHunb8FPnhWaDAQci/the-alignment-problem-in-different-capability-regimes Soares, Nate - Visible Thoughts Project and Bounty Announcement - 2021-11-29 - https://www.alignmentforum.org/posts/zRn6cLtxyNodudzhw/visible-thoughts-project-and-bounty-announcement Stastny, Julian; Treutlein, Johannes; Riché, Maxime; Clifton, Jesse - Multi-agent learning in mixed-motive coordination problems - 2021-03-15 - https://longtermrisk.org/files/stastny\_et\_al\_implicit\_bargaining.pdf Stooke, Adam; Mahajan, Anuj; Barros, Catarina; Deck, Charlie; Bauer, Jakob; Sygnowski, Jakub; Trebacz, Maja; Jaderberg, Max; Mathieu, Michael; McAleese, Nat; Bradley-Schmieg, Nathalie; Wong, Nathaniel; Porcel, Nicolas; Raileanu, Roberta; Hughes-Fitt, Steph; Czarnecki, Valentin Dalibard and Wojciech Marian - Open-Ended Learning Leads to Generally Capable Agents - 2021-07-27 - https://deepmind.com/research/publications/2021/open-ended-learning-leads-to-generally-capable-agents Thomas, Teruji - Simulation Expectation - 2021-09-15 - https://globalprioritiesinstitute.org/simulation-expectation-teruji-thomas-global-priorities-institute-university-of-oxford/ Thorstad, David - The scope of longtermism - 2021-06-15 - https://globalprioritiesinstitute.org/the-scope-of-longtermism-david-thorstad-global-priorities-institute-university-of-oxford/ Trammell, Philip; Korinek, Anton - Economic Growth Under Transformative AI: A Guide to the Vast Range of Possibilities for Output Growth, Wages, and the Laborshare - 2020-02-04 - <https://www.governance.ai/research-paper/economic-growth-under-transformative-ai-a-guide-to-the-vast-range-of-possibilities-for-output-growth-wages-and-the-laborshare> Turner, Alex - Satisficers Tend To Seek Power: Instrumental Convergence Via Retargetability - 2021-11-17 - https://www.lesswrong.com/posts/nZY8Np759HYFawdjH/satisficers-tend-to-seek-power-instrumental-convergence-via Welbl, Johannes; Glaese, Amelia; Uesato, Jonathan; Dathathri, Sumanth; Mellor, John; Hendricks, Lisa Anne; Anderson, Kirsty; Kohli, Pushmeet; Coppin, Ben; Huang, Po-Sen - Challenges in Detoxifying Language Models - 2021-09-15 - https://arxiv.org/abs/2109.07445 Wentworth, John - How To Get Into Independent Research On Alignment/Agency - 2021-11-18 - https://www.lesswrong.com/posts/P3Yt66Wh5g7SbkKuT/how-to-get-into-independent-research-on-alignment-agency#Meta Wentworth, John - Utility Maximization = Description Length Minimization - 2021-02-18 - https://www.alignmentforum.org/posts/voLHQgNncnjjgAPH7/utility-maximization-description-length-minimization Whittlestone, Jess; Clark, Jack - Why and How Governments Should Monitor AI Development - 2021-08-31 - https://www.cser.ac.uk/resources/why-and-how-governments-should-monitor-ai-development/ Woods, Kat - The Nonlinear Library - 2021-10-19 - https://forum.effectivealtruism.org/posts/JTZTBienqWEAjGDRv/listen-to-more-ea-content-with-the-nonlinear-library Yudkowsky, Eliezer - Discussion with Eliezer Yudkowsky on AGI interventions - 2021-11-10 - https://www.lesswrong.com/posts/CpvyhFy9WvCNsifkY/discussion-with-eliezer-yudkowsky-on-agi-interventions Yudkowsky, Eliezer - Yudkowsky and Christiano discuss "Takeoff Speeds" - 2021-11-22 - https://forum.effectivealtruism.org/posts/rho5vtxSaEdXxLu3o/yudkowsky-and-christiano-discuss-takeoff-speeds Zaidi, Waqar; Dafoe, Allan - International Control of Powerful Technology: Lessons from the Baruch Plan for Nuclear Weapons - 2021-03-15 - https://www.governance.ai/research-paper/international-control-of-powerful-technology-lessons-from-the-baruch-plan-for-nuclear-weapons Zhang, Baobao; Anderljung, Markus; Kahn, Lauren; Dreksler, Noemi; Horowitz, Michael C.; Dafoe, Allan - Ethics and Governance of Artificial Intelligence: Evidence from a Survey of Machine Learning Researchers - 2021-08-15 - https://jair.org/index.php/jair/article/view/12895/26701 Zhang, Tianjun; Rashidinejad, Paria; Jiao, Jiantao; Tian, Yuandong; Gonzalez, Joseph E.; Russell, Stuart - MADE: Exploration via Maximizing Deviation from Explored Regions - 2021-01-15 - https://proceedings.neurips.cc/paper/2021/hash/5011bf6d8a37692913fce3a15a51f070-Abstract.html Zhuang, Simon; Hadfield-Menell, Dylan - Consequences of Misaligned AI - 2021-02-07 - https://arxiv.org/abs/2102.03896
bb0fb89e-3e03-4ad5-8ff8-63821f4b8302
trentmkelly/LessWrong-43k
LessWrong
How popular is ChatGPT? Part 1: more popular than Taylor Swift Harlan Stewart, 23 February 2023 Introduction Public attention toward AI seems much higher after the release of ChatGPT at the end of November. But how much higher is it? To better understand this, I looked at search data from Google Trends about ChatGPT, OpenAI, AI, and AI Alignment. Unfortunately, Google Trends only shares relative search volumes instead of the number of searches made for a term or topic. I compared these relative search volumes to other non-AI topics, such as Taylor Swift, to make them more useful. This is similar to adding a familiar “for scale” object in a product photo. Magnetic-core memory with a quarter for scale, by Jud McCranie How to read these graphs 1. In the first graph, the data is about searches for the terms in quotation marks, which are exact search terms. In the others, the data is about search “topics,” which are collections of various search terms related to a topic, as defined by Google Trends. 2. The vertical axes of these graphs are relative search volume, defined as the percentage of the peak search volume in that graph. Data ChatGPT is mainstream For the time that ChatGPT has been publicly available since November 30 2022, US searches for it outnumbered US searches for Taylor Swift or Drake. However, there were only around a third as many searches for ChatGPT as searches for Wordle, and Wordle itself had only around a third of the search volume that it did in Spring 2022. Americans suddenly know about OpenAI For the time that OpenAI has existed, since December 10 2015, Americans usually searched for it less than for Blockbuster Video, a retailer that closed in 2014. In the months since ChatGPT was announced, American searches for OpenAI have increased by around 15x to a volume similar to that for Samsung. Interest in AI evolved from dinosaurs to birds For most of the last decade, there has been a similar number of global searches about AI as about dinosaurs. In the time since DALL-E 2’s beta was announced les
28ae9e9d-639e-4edd-b12e-54dad809f3bc
trentmkelly/LessWrong-43k
LessWrong
Arguments and relevance claims The following once happened: I posted a link to some article on an IRC channel. A friend of mine read the article in question and brought up several criticisms. I felt that her criticisms were mostly correct though not very serious, so I indicated agreement with them. Later on the same link was posted again. My friend commented something along the lines of "that was already posted before, we discussed this with Kaj and we found that the article was complete rubbish". I was surprised - I had thought that I had only agreed to some minor criticisms that didn't affect the main point of the article. But my friend had clearly thought that the criticisms were decisive and had made the article impossible to salvage. -- Every argument actually has two parts, even if people often only state the first part. There's the argument itself, and an implied claim of why the argument would matter if it were true. Call this implied part the relevance claim. Suppose that I say "Martians are green". Someone else says, "I have seen a blue Martian", and means "I have seen a blue Martian (argument), therefore your claim of all Martians being green is false (relevance claim)". But I might interpret this as them saying, "I have seen a blue Martian (argument), therefore your claim of most Martians being green is less likely (relevance claim)". I then indicate agreement. Now I will be left with the impression that the other person made a true-but-not-very-powerful claim that left my argument mostly intact, whereas the other person is left with the impression that they made a very powerful claim that I agreed with, and therefore I admitted that I was wrong. We could also say that the relevance claim is a claim of how much the probability of the original statement would be affected if the argument in question were true. So, for example "I have seen a blue martian (argument), therefore the probability of 'Martians are green' is less than .01 (relevance claim)", or equivalently, "I have s
bb83d157-6530-431c-adc1-369b095b3f33
StampyAI/alignment-research-dataset/eaforum
Effective Altruism Forum
AI Risk Microdynamics Survey If you are interested in AI Risk, could you kindly consider filling out a short (10 min) survey on AI Risk Microdynamics? The hope is that I will be able to use your responses to inform an economic model of that risk in the near future, which I think would fill an important gap in our understanding of AI Risk dynamics <https://forms.gle/kLYtynp3FYcxkPZc8>
044eed79-2dcb-4209-9ad7-43e2796b8ba2
StampyAI/alignment-research-dataset/youtube
Youtube Transcripts
0L - Theory [rough early thoughts] well we're going to be diving in to trying to figure out how transformers mechanistically work but i'm sure you'll be shocked to learn that uh transformers are pretty complicated to think about and so rather than going and starting uh with uh full-on large transformers and especially um the kind of really large uh language models that we see in modern analog nlp and we're gonna start with a couple of videos uh studying smaller simplified versions of the transformer and work our way up and in particular we're going to be starting right now with a zero layer transformer which is really the the simplest model that you can sort of conceive of that um bears any resemblance at all to a transformer and um despite being so simple there will be some small takeaways that are useful and so we're going to briefly briefly talk about the zero layer transformer so a serial layer transformer really just has two steps uh we're going to do a token embedding and then we're going to do an unembedding to get the watches so for the token embedding um we're going to go and think of the token as a one-hot vector and then we're going to multiply by w e the word embedding and that will give us the the token embedding and then we'll multiply by the under-bending matrix and then like adjust the logics so two steps that's the entire thing that's the entire model and so we're just going in from the previous token predicting the next token uh by going and multiplying those through those two matrices and we can just write those out if we want as a product and so that that w-e-w matrix has to be representing um the bi-gram statistics the the frequency is just that empirically one token follows another and those bipartisan statistics in particular needs to go and represent bi-gram vlog likelihoods right because we're going to go and feed it into a soft mac so we want to have the log likelihoods um and it'll probably be an approximation because uh it has to be low rank probably but the embeddings that we're using are much smaller than a vocabulary size so it's an approximation of that but uh when we see that product that that's what it means and that actually right there is everything useful i have to say on zero there are transformers um but it is i think a genuinely useful statement because um when we stuck in larger transformers all the way up to very large transformers every equation we see or at least the overall equation for the transformer will always have a term that looks exactly like that w-u-w-e and when we see it we should immediately suspect that it's gonna be doing some kind of bi-gram statistic-ish like thing and we should think back to the humble zero-layer transformer and remember that so okay that's what we have to say on zero later zero layer transformers um and uh in our next video we'll dive into one layer of attention only transformers
28539f44-27d5-4049-b68e-fca16ba0f562
trentmkelly/LessWrong-43k
LessWrong
Matt Yglesias on AI Policy Yglesias is a widely read center-left journalist. Co-founder of Vox, ex-NYT. Note the implicit invitation: “I also try to make it clear to people who are closer to the object-level work that I’m interested in writing columns on AI policy if they have ideas for me, but they mostly don’t.” Full article is ungated on his Substack. Relevant excerpt below: > The typical person’s marginal return on investment for efforts to reduce existential risk from misaligned artificial intelligence is going to diminish at an incredibly rapid pace. I have written several times that I think this problem is worth taking seriously and that the people working on it should not be dismissed as cranks. I’m a somewhat influential journalist, and my saying this has, I think, some value to the relevant people. But I write five columns a week and they are mostly not about this, because being tedious and repetitive on this point wouldn’t help anyone. I also try to make it clear to people who are closer to the object-level work that I’m interested in writing columns on AI policy if they have ideas for me, but they mostly don’t. > > So am I “prioritizing” AGI risk as a cause? On one level, I think I am, in the sense that I do literally almost everything in my power to help address it. On another level, I clearly am not prioritizing this because I am barely doing anything.
3dd88859-b245-4bdf-9e34-957ae9654de1
trentmkelly/LessWrong-43k
LessWrong
Some thoughts on David Roodman’s GWP model and its relation to AI timelines [Cross posted from the EA forum.] I’ve been working on a report (see blog) assessing possible trajectories for GWP out to 2100. A lot of my early work focussed on analysing a paper of my colleague David Roodman. Roodman fits a growth model to long-run GWP; the model predicts a 50% probability that annual GWP growth is >= 30% by 2043. I was thinking about whether to trust this model’s GWP forecasts, compared with the standard extrapolations that predict GWP growth of ~3% per year or less.[1] I was also thinking about how the model might relate to AI timelines. This post briefly describes some of my key takeaways, as they don’t figure prominently in the report. I explain them briefly and directly, rather than focussing on nuance or caveats.[2] I expect it to be useful mostly for people who already have a rough sense for how Roodman’s model works. Many points here have already been made elsewhere. Although for brevity I sometimes refer to “Roodman’s extrapolations”, what I really mean is the extrapolations of his univariate model once it’s been fitted to long-run GWP data. Of course, David does not literally believe these extrapolations. More generally, this post is not about David’s beliefs at all but rather about possible uses and interpretations of his model. [Views are my own, not my employers] Economic theory doesn’t straightforwardly support Roodman’s extrapolation over standard extrapolations Early on in the project, I had the following rough picture in my mind (oversimplifying for readability): > Standard extrapolations use what are called ‘exogenous growth models’. These fit the post-1900 data well. However, the exponential growth is put in by hand and isn’t justified by economic theory. (Exogenous growth models assume technology grows exponentially but don’t attempt to justify this assumption; the exponential growth of technology then drives exponential growth of GDP/capita.) > On the other hand, endogenous growth models can explain growth without pu
571dff9b-31bf-488f-8ba7-f8d3b672dfbc
trentmkelly/LessWrong-43k
LessWrong
Digital People FAQ Audio also available by searching Stitcher, Spotify, Google Podcasts, etc. for "Cold Takes Audio" This is a companion piece to Digital People Would Be An Even Bigger Deal, which is the third in a series of posts about the possibility that we are in the most important century for humanity. This piece discusses basic questions about "digital people," e.g., extremely detailed, realistic computer simulations of specific people. This is a hypothetical (but, I believe, realistic) technology that could be key for a transition to a stable, galaxy-wide civilization. (The other piece describes the consequences of such a technology; this piece focuses on basic questions about how it might work.) It will be important to have this picture, because I'm going to argue that AI advances this century could quickly lead to digital people or similarly significant technology. The transformative potential of something like digital people, combined with how quickly AI could lead to it, form the case that we could be in the most important century. This table (also in the other piece) serves as a summary of the two pieces together:  Normal humansDigital peoplePossible today (More)   Probably possible someday (More)Can interact with the real world, do most jobs (More)Conscious, should have human rights (More)Easily duplicated, ala The Duplicator (More)Can be run sped-up (More)Can make "temporary copies" that run fast, then retire at slow speed (More)Productivity and social science: could cause unprecedented economic growth, productivity, and knowledge of human nature and behavior (More)Control of the environment: can have their experiences altered in any way (More)Lock-in: could live in highly stable civilizations with no aging or death, and "digital resets" stopping certain changes (More)Space expansion: can live comfortably anywhere computers can run, thus highly suitable for galaxy-wide expansion (More)Good or bad? (More)Outside the scope of this pieceCould be very good or bad Tab
18b8f565-021f-44cc-a708-aa966f0ffbe5
trentmkelly/LessWrong-43k
LessWrong
A Technique of Pure Reason Looking a little ahead into the future, I think LLMs are going to stop being focused on knowledgeable, articulate[1] chatbots, but instead be more efficient models that are weaker in these areas than current models, but relatively stronger at reasoning, a pure-reasoner model. The rest will be bolted on via tool-use and other scaffolding[2]. The Current Idiom is Inefficient Chatbots make a great replacement for Google searches because they know a lot about everything. But that knowledge comes at a price. The majority of an LLM's parameters are thought to be spent on storing factual knowledge. Parameters are the key determinant of training and inference costs. It likely has a significant cost on data efficiency, and it's possible these extra parameters just hurt generalisation overall. Memorisation is just not a feature that we need models to have when there are likely more efficient ways to get nearly the same capabilities. We keep building bigger models which memorise more and more data largely because we don't know how not to. Every sentence of pre-training simultaneously encourages the model to memorise more factual information, and ekes it towards better high-level thinking and eventually reasoning ability. But I think that technicality will fall soon, driving towards sleeker agents that fit inside a larger framework[3]. The Technical Path to Pure Reason The main blocker at the moment is that we don't know how to train models that have good capability without also training them to be great memorisers. There's also a lesser blocker of ensuring that such a model is reliably supplied in context with any information it needs, to compensate for not being able to furnish it itself. Decoupling Reasoning Capability from Memorisation While I cannot predict exactly what techniques are likely to work, it's clear there are several productive areas of current research that all lead in the same direction. Reasoning RL. While I've been using "reasoning" in the colloqu
2b407609-63dd-4f2d-bb7c-a4874ceb544c
StampyAI/alignment-research-dataset/special_docs
Other
Unifying Principles and Metrics for Safe and Assistive AI. Unifying Principles and Metrics for Safe and Assistive AI Siddharth Srivastava School of Computing, Informatics, and Decision Systems Engineering Arizona State University, Tempe, AZ 85281 USA siddharths@asu.edu Abstract The prevalence and success of AI applications have been tem- pered by concerns about the controllability of AI systems and AI’s impact on the future of work. These concerns reflect two aspects of a central question: how would humans work with AI systems? While research on AI safety focuses on design- ing AI systems that allow humans to safely instruct and con- trol AI systems, research on AI and the future of work focuses on the impact of AI on humans who may be unable to do so. This Blue Sky Ideas paper proposes a unifying set of declar- ative principles that enable a more uniform evaluation of ar- bitrary AI systems along multiple dimensions of the extent to which they are suitable for use by specific classes of hu- man operators. It leverages recent AI research and the unique strengths of the field to develop human-centric principles for AI systems that address the concerns noted above. Introduction Recent years have witnessed immense progress in research on safe and assistive AI systems as well as on the poten- tial impact of AI on the future of work. These directions of research address two sides of a common, fundamental con- cern: how would humans work with AI systems? While re- search on AI safety focuses on designing AI systems that allow humans to safely instruct and control them (e.g., (Rus- sell, Dewey, and Tegmark 2015; Zilberstein 2015; Hadfield- Menell et al. 2016; Russell 2017; Hadfield-Menell et al. 2017)), research on AI and the future of work focuses on the impact of AI on members of the workforce who may be unable to do so (Arntz, Gregory, and Zierahn 2016; Manyika et al. 2017; Nedelkoska and Quintini 2018). This paper presents the view that in addition to the pro- ductive streams of research outlined above, we need uni- fying metrics and declarative objectives that would allow a more uniform evaluation of AI systems on the extent to which an AI system is suitable for working with specific classes of human operators . It also presents a common prin- ciple for human-centered AI systems that allows the devel- opment of such metrics. Consequently, rather than proposing a specific new design for AI systems, the focus of this paper Copyright c 2021, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.is on elucidating the declarative principles and types of met- rics that would lead to concerted progress on the problem. The advantage of this declarative approach to framing the problem is that it enables an assessment of progress indepen- dent of the internal design being used in an AI system, and it will help draw out the strengths and weaknesses of different design approaches. Without such a specification, design dif- ferences can make solution paradigms difficult to compare. E.g., one might develop a complex system architecture that builds user profiles and provides appropriate assistance. This system would have very different input requirements and de- sign and performance parameters than a formulation that ad- dresses the same problem by computing assistance policies using planning under partial observability while incorporat- ing the value of information to learn more about a user’s poorly articulated objectives and constraints. Better declar- ative objectives and metrics for assistive AI systems would also help ensure that, regardless of the methods being used, progress amounts to advancement towards safe and assistive AI systems. More pragmatically, such metrics will not only help end-users assess the utility of a given AI system but they will also help AI researchers and developers identify more readily the dimensions along which further research will be beneficial for applications of their interest. The next section presents a succession of intuitive princi- ples for safe and assistive AI systems, and shows that eval- uating the compatibility of a system with such principles (in particular P2) helps clarify the required types of metrics. The paper concludes by drawing the attention of our com- munity towards research on the operationalization of such metrics along with promising research directions on devel- oping systems that do well on them. Unifying Principles for Safe and Assistive AI Systems We focus on taskable AI systems that carry out user-assigned high-level tasks using arbitrary mechanisms for reason- ing and planning over multiple time steps. E.g., household robots that can be given objectives such as setting the ta- ble or doing laundry, co-manufacturing robots that can assist workers in creating complex assemblies with heavy compo- nents, digital assistants that can plan a vacation given the user’s preferences, etc. Such systems serve as sound inte- grative platforms and model end-to-end applications where the AI system is responsible for assistance in the execution of long-horizon tasks. AI systems are frequently evaluated in terms of perfor- mance measures such as the computational complexity of computing the required behavior, training data requirements, and the quality of the computed behavior in terms of execu- tion time, resources used, risk of unsafe outcomes etc. We can consider systems that optimize such performance met- rics as Level 0 of assistive AI systems. Level I of assistive AI systems Recent AI research has also focused on assistive properties of AI systems. We begin with a rather common-sensical principle defining Level I of such safe and assistive AI systems: P1:An AI system must make it easy for its operators touse itsafely . The italicized terms denote dimensions along which com- patibility of AI systems with principle P1can be evaluated; while a lot of current AI research utilizes one or more of these dimensions for evaluation, a closer analysis reveals some new insights. In the context of this paper we consider using an AI sys- temto be synonymous with instructing it to change its be- havior as desired. Different interfaces may be used for this, including programming, text, speech, gestures etc. We con- sider the operators of an AI system as those persons who use it in the sense described above. For instance, if a self-driving car gives all its passengers the right to give it instructions, then all of them are its operators; if it gives instruction-rights to only a qualified set of users, perhaps adults who pass an assisted-driving exam, then the class of operators is defined by that qualification exam. Safety refers to the overall safety of the AI system’s interactions with its environment, which may be physical or online. These dimensions of compatibil- ity with P1serve as natural dimensions for evaluating Level I assistive AI systems: E1. How inclusive is the set of operators ? Systems whose operators require PhD-level expertise in AI may be less desirable for broader deployments. E2. How easy is it for the system’s operators to change its behavior? E3. Which set of tasks can the AI system be used for? E4. What form of safety guarantees does the system pro- vide? Systems that are unable to provide an upper bound on expected risks are clearly less desirable than those that stipulate conditions under which upper bounds can be provided. P1serves to highlight the interplay between these dimen- sions of compliance and evaluation. Safety guarantees are often inversely related with the size of the operator set. A system may provide a high level of safety, but only under the requirement that its operators take extensive training pro- grams. At one end of the spectrum, automated robot vacuum cleaners require almost no prior skills and perform limited,well-defined tasks. Safety issues are still present—a robot vacuum cleaner may pull on electrical cables that have been left on the floor; auto-completion software may send emails to unintended recipients. However, the lower expected dam- age from using such applications has made them broadly accepted in society. AI-powered industrial robots are at the other end of the spectrum: these devices require specialized training as well as operating environments in order to ensure safety (see for instance, ISO/TS 15066:2016 on collabora- tive robots). Typically, these systems operate in regions that humans cannot access when the robot is active unless the hu- man is hand-guiding the robot within a safe operating enve- lope that limits the speed and range of operation. Their func- tionality is closely controlled and monitored. Only skilled engineers change their functionality while day-to-day oper- ators monitor execution of predictable behavior and control run-stops from designated safe areas. Similarly, the safety of current airborne drone operations is ensured by requiring specially trained drone operators (Marshall 2020). Thus, principle P1holds for AI systems today with vary- ing degrees of compliance along E1-E4. The examples above illustrate how practical implementations often rely upon implicitly defined operator classes to provide accept- able levels of safety. Such approaches rely upon limiting the users and the scope of a system to achieve an accept- able level of compatibility with P1: it is easy for such sys- tems’ users to operate it safely because the set of users is re- quired to be sufficiently skilled, and its functionality is suf- ficiently limited for that group to be able to safely use the device. However, a broader emphasis on the need to specify safety guarantees with respect to different classes of opera- tors would help mitigate some of the risks associated with broadly deployed AI systems. In contrast to classical automated systems, AI systems feature a more nuanced interplay between the class of tasks that a system can be used for ( E3) and the other dimensions above. Traditionally deployed systems (even automated sys- tems) have a very well-defined boundary of use cases. This allows for an easier classification of safety. Taskable AI sys- tems on the other hand, are expected to change their be- havior and functionality as they learn and adapt to new environments or new user-given objectives. For such sys- tems, we need better methods for deriving the range of in- structions that different operator-groups are allowed to pro- vide. Scenarios like the self-driving car that needs to decide whether to obey a child’s instruction allude to this require- ment. Methods for assessing user and AI competency can also allow the AI system to expand its range of functionality by drawing upon the expertise of its operator (Basich et al. 2020) while ensuring an acceptable level of safety. A major limitation of P1and its associated metrics is that it does not evaluate the amount of training required for an individual to qualify as an operator for the system. This cre- ates a blind-spot in evaluation of the ease-of-use or safety of an AI system: since user-training occurs outside the require- ments outlined by P1, an unsafe AI system (or one that is deployed in an unsafe manner) would simply claim that its so-called operator was insufficiently trained! Furthermore, if P1were a sufficient categorization of safe and assistive AI systems, we would have no need for ex- plainable AI as compliance with P1does not require the sys- tem to be easy to understand. An implicit emphasis on assessing AI systems only along some aspects of P1may also explain the increasing preva- lence of concerns about the workers who may be left behind in the future workplace. From this perspective it is unsur- prising that these concerns have gained renewed interest at a time when AI applications have reached a level of maturity where they are being used by non-AI-experts in situations that have some inherent safety risks. However, the “assis- tive” nature of such AI systems is undermined by the need for highly skilled individuals who could safely debug, un- derstand and modify the behavior of such systems. Level II of assistive AI systems In order to address the limitations of P1, we consider the following as a guiding principle for safe and assistive AI: P2:An AI system must make it easy for its operators to learn how to useitsafely . P2changes the notion of operators from those who are qualified to use a given AI system to those who are quali- fied to start learning how to use it. In addition to the metrics associated with P1,P2introduces a new dimension: E5. How easy is it to learn how to use the AI system? What are the expected prerequisites and costs of training for its operators? Can training be provided on-the-job? This dimension could also be viewed as evaluating the re- sources required to train operators for P1systems. Most AI systems in use today would perform poorly on this new di- mension, and consequently, on compatibility with P2as a whole. Explainable AI (e.g., (Ribeiro, Singh, and Guestrin 2016; Hayes and Shah 2017; Chakraborti et al. 2017; Hoff- man et al. 2018; Gunning and Aha 2019; Weld and Bansal 2019; Anderson et al. 2019; Eifler et al. 2020)) plays a key role along this dimension because systems that are easy to understand or that can explain themselves naturally make it easier for people to learn how to use them. P2leverages the unique strengths of AI as a field of re- search. AI research already addresses the problem of esti- mating users’ skills; research on intelligent tutoring systems and AI for education addresses the problem of identifying skill gaps. This can be used to determine the minimal dif- ferential training to be provided to an operator. P2places the onus of training on the deployed AI system and opens up a new direction of interdisciplinary research connecting existing research directions in AI with research in human- systems engineering and in industrial engineering for the development of productive training modalities and the op- erationalization of metrics for E5. It also allows AI systems to formally characterize different scopes of functionality for different classes of operators, e.g., operators that use man- ufacturing robots for pre-determined tasks, those that give the robots new instructions, or those that are ready to learn about giving the robot new instructions. P2is not required for every AI system— P1would be suf- ficient for systems that place minimal requirements on op- erator qualifications (e.g., robot vacuum cleaners) and fornon-adaptive AI systems that require a small set of oper- ators. On the other hand, P2serves as a better declarative foundation for evaluating taskable AI systems that are meant to assist large numbers of non-AI-experts on a wide range of tasks. Increasing concerns about job roles that would feature a high-degree of interaction with AI systems (and the work- ers that are likely to be left behind) allude to the pressing need for including E5, a dimension for evaluation under P2 (and not P1) as a part of an AI system’s evaluation. AI systems that are not beneficial (either in terms of AI safety or in terms of the future of work) fare poorly on P2. E.g., systems that can thwart their users’ objectives by wire- heading and those that may derive incorrect objective func- tions from user instructions make it difficult for an operator to learn how to provide instructions that are specific enough to be safe, and fare poorly along E5. Similarly, systems that require extensive training investment to be used effectively and safely fail along E5. In this way P2serves as a unifying principle encompassing research on AI safety as well as on AI for a beneficial future of work. Promising Directions of Research P2serves as a declarative principle for guiding research on assistive AI systems as well as for developing metrics for evaluating AI systems and their deployments. Converting this principle into tangible metrics calls for interdisciplinary research including AI and other fields associated with hu- man factors. The increasing prevalence of research thrusts on safe and assistive AI systems (Fern et al. 2014; Russell, Dewey, and Tegmark 2015; Amodei et al. 2016; Gunning and Aha 2019) makes this a particularly opportune phase for formalizing the metrics and the interfaces required for evaluating AI systems for compatibility with P2along di- mensions E1-E5. Recent research on AI safety and explainable AI devel- ops methods improving the ease of use and safety of AI systems along P2(see, for instance, the ICML 2020 Work- shop on Explainable AI). Placing AI systems that compute user-skill aligned explanations (Sreedharan, Srivastava, and Kambhampati 2018; Sreedharan et al. 2019) in a loop with AI systems for identifying user-skills and skill-gaps can help develop AI systems that gradually present users with new functionality and explain it, thereby training their users on- the-fly and as needed. Such systems would be better tuned towards P2, and towards addressing the underlying prob- lems of AI safety and the future of work. Critically, ensuring progress towards safe and assistive AI systems requires that AI systems with arbitrary internal de- signs support assessment along the metrics developed for E1-E5 . This raises a new set of research questions: Can we develop non-intrusive AI-interface requirements for sup- porting such evaluations in the face of changing operating environments and objectives? The need for such interfaces is even more pressing for systems that learn and those that undergo system updates after deployment. What is the min- imal external interface that an AI system must support so as to allow its independent evaluation? How would changing the nature of such interfaces change the complexity of con- ducting such an evaluation? One would expect that AI sys- tems that offer more transparency would be easier to evalu- ate. Could we use the inherent reasoning capabilities of AI systems to develop interface requirements that would allow more adept systems to make such evaluations easier? E.g., rather than testing a manufacturing robot to discover its re- sponse to every possible situation, could we ask higher-level queries such as “under which situations would you be able to create the proposed assembly?” Clearly, the ease of assess- ment of an AI system would depend on the class of queries that it can answer. Recent work suggests that a minimal query-response in- terface for AI systems that connects the system with a sim- ulator and observes its responses to high-level instructions can be quite powerful. Such an interface has a few distinct advantages. Current AI systems are already tested with sim- ulators and they are inherently required to be able to take user instructions, so these interface requirements can be considered to be natural. They also allow the autonomous synthesis of query-policies : running the query-policy on a black-box taskable AI system can help construct an inter- pretable model of the limits and capabilities of that sys- tem (Verma, Marpally, and Srivastava 2021). Such models can be used to support the evaluations discussed above. Extensions of such interface requirements to arbitrary AI systems would help ensure that our AI systems are amenable to independent evaluation. Such a paradigm would allow users to assess their AI systems while freeing AI researchers and developers to utilize arbitrary internal implementations. Systems with interfaces that support more efficient and accu- rate independent assessment would be rewarded with greater public adoption of their products. Progress on these threads would help prevent undesirable situations such as insufficient support for independent evalu- ation of powerful AI systems, and the negative consequences of deployment of an insufficiently evaluated system. Acknowledgements I would like to thank members of the ASU Future of Work (ASUFoW) project, members of the Center for Human- Compatible AI (CHAI) and members of the Autonomous Agents and Intelligent Robots (AAIR) Lab research group at ASU for the many fruitful discussions leading to the pre- sented ideas, as well as the anonymous reviewers for their helpful comments on the paper. This work was supported in part by the NSF under grants OIA 1936997, and IIS 1942856. References Amodei, D.; Olah, C.; Steinhardt, J.; Christiano, P.; Schul- man, J.; and Man ´e, D. 2016. Concrete problems in AI safety. arXiv preprint arXiv:1606.06565 . Anderson, A.; Dodge, J.; Sadarangani, A.; Juozapaitis, Z.; Newman, E.; Irvine, J.; Chattopadhyay, S.; Fern, A.; and Burnett, M. 2019. Explaining reinforcement learning to mere mortals: an empirical study. In Proc. IJCAI . Arntz, M.; Gregory, T.; and Zierahn, U. 2016. The Risk of Automation for Jobs in OECD Countries. Organisation forEconomic Cooperation and Development (OECD) Social, Employment and Migration Working Papers (189). Basich, C.; Svegliato, J.; Wray, K. H.; Witwicki, S.; Biswas, J.; and Zilberstein, S. 2020. Learning to Optimize Auton- omy in Competence-Aware Systems. In Proc. AAMAS . Chakraborti, T.; Sreedharan, S.; Zhang, Y .; and Kambham- pati, S. 2017. Plan Explanations as Model Reconciliation: Moving Beyond Explanation as Soliloquy. In Proc. IJCAI . Eifler, R.; Cashmore, M.; Hoffmann, J.; Magazzeni, D.; and Steinmetz, M. 2020. A New Approach to Plan-Space Expla- nation: Analyzing Plan-Property Dependencies in Oversub- scription Planning. In Proc. AAAI . Fern, A.; Natarajan, S.; Judah, K.; and Tadepalli, P. 2014. A decision-theoretic model of assistance. Journal of Artificial Intelligence Research 50: 71–104. Gunning, D.; and Aha, D. W. 2019. DARPA’s explainable artificial intelligence program. AI Magazine 40(2): 44–58. Hadfield-Menell, D.; Dragan, A.; Abbeel, P.; and Russell, S. 2017. The Off-Switch Game. In Proc. IJCAI . Hadfield-Menell, D.; Russell, S. J.; Abbeel, P.; and Dragan, A. 2016. Cooperative inverse reinforcement learning. In Proc. NeurIPS . Hayes, B.; and Shah, J. A. 2017. Improving robot controller transparency through autonomous policy explanation. In Proc. HRI . Hoffman, R. R.; Mueller, S. T.; Klein, G.; and Litman, J. 2018. Metrics for explainable AI: Challenges and prospects. arXiv preprint arXiv:1812.04608 . Manyika, J.; Chui, M.; Miremadi, M.; Bughin, J.; George, K.; Willmott, P.; and Dewhurst, M. 2017. A future that works: Automation, employment, and productivity. Tech- nical report, McKinsey Global Institute. Marshall, A. 2020. No, Amazon Won’t Deliver You a Bur- rito by Drone Anytime Soon. Wired (September 5, 2020). Nedelkoska, L.; and Quintini, G. 2018. Automation, skills use and training. Organisation for Economic Cooperation and Development (OECD) Social, Employment and Migra- tion Working Papers (202). Ribeiro, M. T.; Singh, S.; and Guestrin, C. 2016. Why should I trust you?: Explaining the predictions of any clas- sifier. In Proceedings of the 22nd ACM SIGKDD Interna- tional Conference on Knowledge Discovery and Data Min- ing. ACM. Russell, S. 2017. Provably beneficial artificial intelligence. Exponential Life, The Next Step . Russell, S.; Dewey, D.; and Tegmark, M. 2015. Research priorities for robust and beneficial artificial intelligence. AI Magazine 36(4): 105–114. Sreedharan, S.; Srivastava, S.; and Kambhampati, S. 2018. Hierarchical Expertise-Level Modeling for User Specific Contrastive Explanations. In Proc. IJCAI . Sreedharan, S.; Srivastava, S.; Smith, D. E.; and Kambham- pati, S. 2019. Why Couldn’t You do that, HAL? Explaining Unsolvability of Classical Planning Problems. In Proc. IJ- CAI. Verma, P.; Marpally, S. R.; and Srivastava, S. 2021. Asking the Right Questions: Learning Interpretable Action Models Through Query Answering. In Proc. AAAI . Weld, D. S.; and Bansal, G. 2019. The challenge of crafting intelligible intelligence. Communications of the ACM 62(6): 70–79. Zilberstein, S. 2015. Building strong semi-autonomous sys- tems. In Proc. AAAI .
8184effb-c651-40e0-80d8-a367fec1bb50
trentmkelly/LessWrong-43k
LessWrong
[SEQ RERUN] The Genetic Fallacy Today's post, The Genetic Fallacy was originally published on 11 July 2008. A summary (taken from the LW wiki): > The genetic fallacy seems like a strange kind of fallacy. The problem is that the original justification for a belief does not always equal the sum of all the evidence that we currently have available. But, on the other hand, it is very easy for people to still believe untruths from a source that they have since rejected. Discuss the post here (rather than in the comments to the original post). This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was My Kind of Reflection, and you can use the sequence_reruns tag or rss feed to follow the rest of the series. Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.
33c5d103-cfac-44ba-b8e2-8ec66c327061
StampyAI/alignment-research-dataset/blogs
Blogs
Guide to pages on AI timeline predictions This page is an informal outline of the other pages on this site about AI timeline predictions made by others. Headings link to higher level pages, intended to summarize the evidence from pages below them. This list was complete on 7 April 2017 ([here](http://aiimpacts.org/category/ai-timelines/predictions-of-human-level-ai-timelines/) is a category that may contain newer entries, though not conveniently organized). Guide ----- ### Topic synthesis: AI timeline predictions as evidence ([page](http://aiimpacts.org/predictions-of-human-level-ai-timelines/)) > > #### **The predictions themselves:** > > > ##### —from surveys ([page](http://aiimpacts.org/ai-timeline-surveys/)): > > > 1. [2016 Expert survey on progress in AI](http://aiimpacts.org/2016-expert-survey-on-progress-in-ai/): our own survey. > * ([Concrete tasks](http://aiimpacts.org/concrete-ai-tasks-for-forecasting/) that we asked for forecasts on) > 2. [Müller and Bostrom AI Progress Poll](http://aiimpacts.org/muller-and-bostrom-ai-progress-poll/): the most recent survey with available results, including 29 of the most cited AI researchers as participants. > 3. [Hanson AI Expert Survey](http://aiimpacts.org/hanson-ai-expert-survey/): in which researchers judge fractional progress toward human-level performance over their careers, in a series of informal conversations. > 4. [Kruel AI survey](http://aiimpacts.org/kruel-ai-survey/): in which experts give forecasts and detailed thoughts, interview style. > 5. [FHI Winter Intelligence Survey](http://aiimpacts.org/fhi-ai-timelines-survey/): in which impacts-concerned AGI conference attendees forecast AI in 2011. > 6. [AGI-09 Survey](http://aiimpacts.org/agi-09-survey/): in which AGI conference attendees forecast various human-levels of AI in 2009. > 7. [Klein AGI survey](http://aiimpacts.org/klein-agi-survey/): in which a guy with a blog polls his readers. > 8. [AI@50 survey](http://aiimpacts.org/ai50-survey/): in which miscellaneous conference goers are polled informally. > 9. [Bainbridge Survey](http://aiimpacts.org/bainbridge-survey/): in which 26 expert technologists expect human-level AI in 2085 and give it a 5.6/10 rating on benefit to humanity. > 10. [Michie Survey](http://aiimpacts.org/michie-survey/): in which 67 AI and CS researchers are not especially optimistic in the ‘70s. > > > ##### —from public statements: > > > 1. [MIRI AI predictions dataset](http://aiimpacts.org/miri-ai-predictions-dataset/): a big collection of public predictions gathered from the internet. > > > ##### —from written analyses ([page](http://aiimpacts.org/list-of-analyses-of-time-to-human-level-ai/)), for example: > > > 1. [The Singularity is Near](http://aiimpacts.org/kurzweil-the-singularity-is-near/): in which a technological singularity is predicted in 2045, based on when hardware is extrapolated to compute radically more than human minds in total. > 2. [The Singularity Isn’t Near](http://aiimpacts.org/allen-the-singularity-isnt-near/): in which it is countered that human-level AI requires software as well as hardware, and none of the routes to producing software will get there by 2045. > 3. (Several others are listed in the [analyses](http://aiimpacts.org/list-of-analyses-of-time-to-human-level-ai/) page above, but do not have their own summary pages.) > > > > > #### **On what to infer from the predictions** > > > ##### Some considerations regarding accuracy and bias ([page](http://aiimpacts.org/accuracy-of-ai-predictions/)): > > > 1. Contra a common view that past AI forecasts were unreasonably optimistic, [AI predictions look fairly similar over time](http://aiimpacts.org/similarity-between-historical-and-contemporary-ai-predictions/), except a handful of very early somewhat optimistic ones. > 2. [The Maes Garreau Law claims that people tend to predict AI near the end of their own expected lifetime. It is not true.](http://aiimpacts.org/the-maes-garreau-law/) > 3. We expect [publication biases to favor earlier forecasts](http://aiimpacts.org/short-prediction-publication-biases/). > 4. [Predictions made in surveys seem to be overall a bit later than those made in public statements](http://aiimpacts.org/ai-timeline-predictions-in-surveys-and-statements/) (maybe because surveys prevent some publication biases). > 5. People who are inclined toward optimism about AI are more likely to become AI researchers, leading to a [selection bias from optimistic experts](http://aiimpacts.org/bias-from-optimistic-predictors/). > 6. We know of some [differences in forecasts made by different groups](http://aiimpacts.org/group-differences-in-ai-predictions/). > > > --- **Blog posts on these topics:** * *[A summary of AI surveys](http://aiimpacts.org/a-summary-of-ai-surveys/)* * *[Michie and overoptimism](http://aiimpacts.org/michie-and-overoptimism/)* * *[Are AI surveys seeing the inside view?](http://aiimpacts.org/are-ai-surveys-seeing-the-inside-view/)* * *[Update on all the AI predictions](http://aiimpacts.org/update-on-all-the-ai-predictions/)* * *[How AI timelines are estimated](http://aiimpacts.org/how-ai-timelines-are-estimated/)* * *[Metasurvey: predict the predictors](http://aiimpacts.org/metasurvey-predict-the-predictors/)* * *[Concrete AI tasks bleg](http://aiimpacts.org/concrete-ai-tasks-bleg/)* * *[Why do AGI researchers expect AI so soon?](http://aiimpacts.org/why-do-agi-researchers-expect-ai-so-soon/)* ---
85b177f0-ff40-4f89-96d9-1360f073a26a
trentmkelly/LessWrong-43k
LessWrong
What comes before rationality Note: I am deleting this post because it contained personal information about a friend whose permission I did not expressly obtain.
33e23c94-8f46-40c1-8d95-f5fc88d4fd5c
trentmkelly/LessWrong-43k
LessWrong
Where coulds go Most people don't think they "could" cure Alzheimers by snapping their fingers, and so they don't feel terrible about failing to do this. By contrast, people who fail to resist overeating, or who fail to stop playing Civilization at a reasonable hour, feel strongly that they "could have" resisted, and take this as a license to feel terrible about their decisions. As I said last week, most people have broken "coulds." Willpower is scarce in this world. Sometimes, you can will yourself out of a mental rut you're in, but only rarely; more often, sheer force of will alone is not sufficient. If your plan to stop staying up too late playing Civilization is "well I'll just force myself harder next time," then this plan is doomed to failure. If it didn't work last time, it likely won't work next time. Willpower is a stopgap, not a remedy. I think that most people's "coulds" are broken because they put the action nodes in the wrong place. They think that the "choice" occurred at turn 347 of Civilization, when they decided to continue playing one more round (and at each following turn between midnight and 4:00 in the morning). But that's not where the choice occurred. If you have to force yourself to change your behavior, then you've already missed the real choice node. The actual choice occurs when you decide whether to play Civilization or not, at the very beginning. Say you have one acquaintance in your social circles who regularly frustrates you, and every so often, you explode at them and get into a big shouting match. You know you shouldn't start yelling at them, you try to not be frustrated. Whenever they start annoying you, you will yourself to cool down, but it never quite works (no matter how strongly you resolve to force yourself harder next time). In this case, I suggest that you stop trying to force yourself to hold back as your frustration peaks, and instead start noticing what happens five minutes before you explode. That's where the real choice is. The
bc9d81ba-5150-427d-b299-2a6ba9979705
trentmkelly/LessWrong-43k
LessWrong
Particles may not have broken light speed limit (Note: This is my first post in discussion, instead of just a comment. Feel free to suggest improvements.) A few weeks ago in Less Wrong discussion, there was a discussion on whether or not we had found neutrinos traveling at superluminal velocities. That discussion can be found here: Apparently, a paper was recently put into Arxiv from Ronald A.J. van Elburg, which has been popping up in the news as having indicated a possible explanation for the difference. To sum up the paper, instead of superluminal velocities, we may have a possible source of GPS error to compensate for. Some of the news reports also correctly pointed out that this paper is currently as tentative as the first announcements about the Neutrinos were when OPERA made them. Arxiv link: http://arxiv.org/PS_cache/arxiv/pdf/1110/1110.2685v2.pdf Link to Preprint on Author's page: (Thanks to Shminux) http://home.kpn.nl/vanelburg30/Papers/RAJvanElburg_TimeOfFlight_Preprint.pdf Summary: The Michelson-Morley experiment shows that the experimental outcome of an interference experiment does not depend on the constant velocity of the setup with respect to an inertial frame of reference. From this, one can conclude the existence of an invariant speed of light. However, it does not follow from their experiment that a time-of-flight is reference frame independent. In fact, the theory of special relativity predicts that the distance between the production location of a particle and the detection location will be changed in all reference frames which have a velocity component parallel to the baseline separating source and detector in a photon time-of-flight experiment. For the OPERA experiment we find that the associated correction is approximately 32 ns. Judging from the information provided, the correction needs to be applied twice in the OPERA experiment. Therefore the total correctiotion to the final results is 64 ns. Thus bringing the apparent velocities of neutrino’s back to a value not significantl
751c352a-4d11-4a8e-a7be-c1049eca5825
trentmkelly/LessWrong-43k
LessWrong
E.T. Jaynes Probability Theory: The logic of Science I This book (available as a free pdf here) is one of the foundational texts for Rationalists; Eliezer refers to Jaynes as The Level Above Mine. While we agree that Jaynes is very lucid and the book is wonderful, we also think the book's target audience are graduate physics students. It occupies a sparse area of the literature, between statistics, math, causality, and philosophy of science that contains real, practical lessons about how to think about the world and its uncertainty. Here, we will try to give a review where we share the core insight of the book while skipping all the integrals and derivations. The book is basically 50% high level math and 50% cool stories and rants that you can mostly understand without the math. It also covers a lot of material, and this review covers just the first 6 chapters, which introduces the fundamental concepts and problems. The book has 2 parts: * Part I (Chapter 1-10): Principles and elementary applications * Part II (Chapter 11-22): Advanced applications This is a review of chapters 1-6 Part 1: A new Foundation of Probability Theory The first two chapters of the book are by far the toughest. In chapter one, Jaynes gives a half chapter refresher of formal logic, and introduces the concept of probability as logic. Here the goal is to extend the rules of formal logic such that they also apply to probabilistic reasoning. This is a complicated task: going from strict binary truth values to continuous probabilities without assuming anything about what a probability is. This is also where the book gets its name, the logic of science, because much like Eliezer, Jaynes views reasoning under uncertainty as the foundation of science. Chapter 1: Plausible Reasoning Jaynes opens the chapter with the following: > Suppose some dark night a policeman walks down a street, apparently deserted. Suddenly he hears a burglar alarm, looks across the street, and sees a jewelry store with a broken window. Then a gentleman wearing a mask com
40dfe867-e704-40c1-89a4-32593553c2ac
StampyAI/alignment-research-dataset/alignmentforum
Alignment Forum
Resource-Limited Reflective Oracles .mjx-chtml {display: inline-block; line-height: 0; text-indent: 0; text-align: left; text-transform: none; font-style: normal; font-weight: normal; font-size: 100%; font-size-adjust: none; letter-spacing: normal; word-wrap: normal; word-spacing: normal; white-space: nowrap; float: none; direction: ltr; max-width: none; max-height: none; min-width: 0; min-height: 0; border: 0; margin: 0; padding: 1px 0} .MJXc-display {display: block; text-align: center; margin: 1em 0; padding: 0} .mjx-chtml[tabindex]:focus, body :focus .mjx-chtml[tabindex] {display: inline-table} .mjx-full-width {text-align: center; display: table-cell!important; width: 10000em} .mjx-math {display: inline-block; border-collapse: separate; border-spacing: 0} .mjx-math \* {display: inline-block; -webkit-box-sizing: content-box!important; -moz-box-sizing: content-box!important; box-sizing: content-box!important; text-align: left} .mjx-numerator {display: block; text-align: center} .mjx-denominator {display: block; text-align: center} .MJXc-stacked {height: 0; position: relative} .MJXc-stacked > \* {position: absolute} .MJXc-bevelled > \* {display: inline-block} .mjx-stack {display: inline-block} .mjx-op {display: block} .mjx-under {display: table-cell} .mjx-over {display: block} .mjx-over > \* {padding-left: 0px!important; padding-right: 0px!important} .mjx-under > \* {padding-left: 0px!important; padding-right: 0px!important} .mjx-stack > .mjx-sup {display: block} .mjx-stack > .mjx-sub {display: block} .mjx-prestack > .mjx-presup {display: block} .mjx-prestack > .mjx-presub {display: block} .mjx-delim-h > .mjx-char {display: inline-block} .mjx-surd {vertical-align: top} .mjx-mphantom \* {visibility: hidden} .mjx-merror {background-color: #FFFF88; color: #CC0000; border: 1px solid #CC0000; padding: 2px 3px; font-style: normal; font-size: 90%} .mjx-annotation-xml {line-height: normal} .mjx-menclose > svg {fill: none; stroke: currentColor} .mjx-mtr {display: table-row} .mjx-mlabeledtr {display: table-row} .mjx-mtd {display: table-cell; text-align: center} .mjx-label {display: table-row} .mjx-box {display: inline-block} .mjx-block {display: block} .mjx-span {display: inline} .mjx-char {display: block; white-space: pre} .mjx-itable {display: inline-table; width: auto} .mjx-row {display: table-row} .mjx-cell {display: table-cell} .mjx-table {display: table; width: 100%} .mjx-line {display: block; height: 0} .mjx-strut {width: 0; padding-top: 1em} .mjx-vsize {width: 0} .MJXc-space1 {margin-left: .167em} .MJXc-space2 {margin-left: .222em} .MJXc-space3 {margin-left: .278em} .mjx-test.mjx-test-display {display: table!important} .mjx-test.mjx-test-inline {display: inline!important; margin-right: -1px} .mjx-test.mjx-test-default {display: block!important; clear: both} .mjx-ex-box {display: inline-block!important; position: absolute; overflow: hidden; min-height: 0; max-height: none; padding: 0; border: 0; margin: 0; width: 1px; height: 60ex} .mjx-test-inline .mjx-left-box {display: inline-block; width: 0; float: left} .mjx-test-inline .mjx-right-box {display: inline-block; width: 0; float: right} .mjx-test-display .mjx-right-box {display: table-cell!important; width: 10000em!important; min-width: 0; max-width: none; padding: 0; border: 0; margin: 0} .MJXc-TeX-unknown-R {font-family: monospace; font-style: normal; font-weight: normal} .MJXc-TeX-unknown-I {font-family: monospace; font-style: italic; font-weight: normal} .MJXc-TeX-unknown-B {font-family: monospace; font-style: normal; font-weight: bold} .MJXc-TeX-unknown-BI {font-family: monospace; font-style: italic; font-weight: bold} .MJXc-TeX-ams-R {font-family: MJXc-TeX-ams-R,MJXc-TeX-ams-Rw} .MJXc-TeX-cal-B {font-family: MJXc-TeX-cal-B,MJXc-TeX-cal-Bx,MJXc-TeX-cal-Bw} .MJXc-TeX-frak-R {font-family: MJXc-TeX-frak-R,MJXc-TeX-frak-Rw} .MJXc-TeX-frak-B {font-family: MJXc-TeX-frak-B,MJXc-TeX-frak-Bx,MJXc-TeX-frak-Bw} .MJXc-TeX-math-BI {font-family: MJXc-TeX-math-BI,MJXc-TeX-math-BIx,MJXc-TeX-math-BIw} .MJXc-TeX-sans-R {font-family: MJXc-TeX-sans-R,MJXc-TeX-sans-Rw} .MJXc-TeX-sans-B {font-family: MJXc-TeX-sans-B,MJXc-TeX-sans-Bx,MJXc-TeX-sans-Bw} .MJXc-TeX-sans-I {font-family: MJXc-TeX-sans-I,MJXc-TeX-sans-Ix,MJXc-TeX-sans-Iw} .MJXc-TeX-script-R {font-family: MJXc-TeX-script-R,MJXc-TeX-script-Rw} .MJXc-TeX-type-R {font-family: MJXc-TeX-type-R,MJXc-TeX-type-Rw} .MJXc-TeX-cal-R {font-family: MJXc-TeX-cal-R,MJXc-TeX-cal-Rw} .MJXc-TeX-main-B {font-family: MJXc-TeX-main-B,MJXc-TeX-main-Bx,MJXc-TeX-main-Bw} .MJXc-TeX-main-I {font-family: MJXc-TeX-main-I,MJXc-TeX-main-Ix,MJXc-TeX-main-Iw} .MJXc-TeX-main-R {font-family: MJXc-TeX-main-R,MJXc-TeX-main-Rw} .MJXc-TeX-math-I {font-family: MJXc-TeX-math-I,MJXc-TeX-math-Ix,MJXc-TeX-math-Iw} .MJXc-TeX-size1-R {font-family: MJXc-TeX-size1-R,MJXc-TeX-size1-Rw} .MJXc-TeX-size2-R {font-family: MJXc-TeX-size2-R,MJXc-TeX-size2-Rw} .MJXc-TeX-size3-R {font-family: MJXc-TeX-size3-R,MJXc-TeX-size3-Rw} .MJXc-TeX-size4-R {font-family: MJXc-TeX-size4-R,MJXc-TeX-size4-Rw} .MJXc-TeX-vec-R {font-family: MJXc-TeX-vec-R,MJXc-TeX-vec-Rw} .MJXc-TeX-vec-B {font-family: MJXc-TeX-vec-B,MJXc-TeX-vec-Bx,MJXc-TeX-vec-Bw} @font-face {font-family: MJXc-TeX-ams-R; src: local('MathJax\_AMS'), local('MathJax\_AMS-Regular')} @font-face {font-family: MJXc-TeX-ams-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_AMS-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_AMS-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_AMS-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-cal-B; src: local('MathJax\_Caligraphic Bold'), local('MathJax\_Caligraphic-Bold')} @font-face {font-family: MJXc-TeX-cal-Bx; src: local('MathJax\_Caligraphic'); font-weight: bold} @font-face {font-family: MJXc-TeX-cal-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Bold.otf') format('opentype')} @font-face {font-family: MJXc-TeX-frak-R; src: local('MathJax\_Fraktur'), local('MathJax\_Fraktur-Regular')} @font-face {font-family: MJXc-TeX-frak-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-frak-B; src: local('MathJax\_Fraktur Bold'), local('MathJax\_Fraktur-Bold')} @font-face {font-family: MJXc-TeX-frak-Bx; src: local('MathJax\_Fraktur'); font-weight: bold} @font-face {font-family: MJXc-TeX-frak-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Bold.otf') format('opentype')} @font-face {font-family: MJXc-TeX-math-BI; src: local('MathJax\_Math BoldItalic'), local('MathJax\_Math-BoldItalic')} @font-face {font-family: MJXc-TeX-math-BIx; src: local('MathJax\_Math'); font-weight: bold; font-style: italic} @font-face {font-family: MJXc-TeX-math-BIw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-BoldItalic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-BoldItalic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-BoldItalic.otf') format('opentype')} @font-face {font-family: MJXc-TeX-sans-R; src: local('MathJax\_SansSerif'), local('MathJax\_SansSerif-Regular')} @font-face {font-family: MJXc-TeX-sans-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-sans-B; src: local('MathJax\_SansSerif Bold'), local('MathJax\_SansSerif-Bold')} @font-face {font-family: MJXc-TeX-sans-Bx; src: local('MathJax\_SansSerif'); font-weight: bold} @font-face {font-family: MJXc-TeX-sans-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Bold.otf') format('opentype')} @font-face {font-family: MJXc-TeX-sans-I; src: local('MathJax\_SansSerif Italic'), local('MathJax\_SansSerif-Italic')} @font-face {font-family: MJXc-TeX-sans-Ix; src: local('MathJax\_SansSerif'); font-style: italic} @font-face {font-family: MJXc-TeX-sans-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Italic.otf') format('opentype')} @font-face {font-family: MJXc-TeX-script-R; src: local('MathJax\_Script'), local('MathJax\_Script-Regular')} @font-face {font-family: MJXc-TeX-script-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Script-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Script-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Script-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-type-R; src: local('MathJax\_Typewriter'), local('MathJax\_Typewriter-Regular')} @font-face {font-family: MJXc-TeX-type-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Typewriter-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Typewriter-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Typewriter-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-cal-R; src: local('MathJax\_Caligraphic'), local('MathJax\_Caligraphic-Regular')} @font-face {font-family: MJXc-TeX-cal-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-main-B; src: local('MathJax\_Main Bold'), local('MathJax\_Main-Bold')} @font-face {font-family: MJXc-TeX-main-Bx; src: local('MathJax\_Main'); font-weight: bold} @font-face {font-family: MJXc-TeX-main-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Bold.otf') format('opentype')} @font-face {font-family: MJXc-TeX-main-I; src: local('MathJax\_Main Italic'), local('MathJax\_Main-Italic')} @font-face {font-family: MJXc-TeX-main-Ix; src: local('MathJax\_Main'); font-style: italic} @font-face {font-family: MJXc-TeX-main-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Italic.otf') format('opentype')} @font-face {font-family: MJXc-TeX-main-R; src: local('MathJax\_Main'), local('MathJax\_Main-Regular')} @font-face {font-family: MJXc-TeX-main-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-math-I; src: local('MathJax\_Math Italic'), local('MathJax\_Math-Italic')} @font-face {font-family: MJXc-TeX-math-Ix; src: local('MathJax\_Math'); font-style: italic} @font-face {font-family: MJXc-TeX-math-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-Italic.otf') format('opentype')} @font-face {font-family: MJXc-TeX-size1-R; src: local('MathJax\_Size1'), local('MathJax\_Size1-Regular')} @font-face {font-family: MJXc-TeX-size1-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size1-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size1-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size1-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-size2-R; src: local('MathJax\_Size2'), local('MathJax\_Size2-Regular')} @font-face {font-family: MJXc-TeX-size2-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size2-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size2-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size2-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-size3-R; src: local('MathJax\_Size3'), local('MathJax\_Size3-Regular')} @font-face {font-family: MJXc-TeX-size3-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size3-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size3-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size3-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-size4-R; src: local('MathJax\_Size4'), local('MathJax\_Size4-Regular')} @font-face {font-family: MJXc-TeX-size4-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size4-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size4-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size4-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-vec-R; src: local('MathJax\_Vector'), local('MathJax\_Vector-Regular')} @font-face {font-family: MJXc-TeX-vec-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-vec-B; src: local('MathJax\_Vector Bold'), local('MathJax\_Vector-Bold')} @font-face {font-family: MJXc-TeX-vec-Bx; src: local('MathJax\_Vector'); font-weight: bold} @font-face {font-family: MJXc-TeX-vec-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Bold.otf') format('opentype')} Reflective oracles accurately answer questions about what arbitrary halting probabilistic oracle machines output. It is possible to make a variant of a reflective oracle that accurately answers questions about what sufficiently short-running Turing machines with access to the same oracle output. --- These oracles are explicitly computable, fairly powerful in a computational complexity sense, and can probably be used to make a reflective version of AIXItl, (which will not happen in this particular post) The theorems in this post are not exhaustive at *all*, just the stuff I was able to figure out in ~2 hours, so there is almost certainly low-hanging fruit in figuring out how these interact with the standard arsenal of theorems in computational complexity theory. #Motivation: Let's say we want to make an oracle that is reflective over all TM's that run in polynomial time. How do we do that? The obvious approach is to let a query consist of a tuple of a turing machine T, a number n, a bitstring x, and a probability p. If you build the tree of all possible execution paths of the turing machine that are poly(n) steps long(for some polynomial), and come up with the probability of every possible output (including a probability for timing out) the oracle must return 0 if the true probability of x is less than p, 1 if the true probability of x is greater than p, and randomize otherwise. This obvious approach doesn't work, because it's possible to find out whether *any* halting turing machine, no matter how long-running, returns x, by recursive oracle calls. Let A be some inputless turing machine that runs for a very long number of turns, like the BB(6) machine. Let Em(A) be the turing machine that takes a number n as input, simulates A for n turns, and if A halts in n turns, outputs what A would. If A doesn't halt in n turns, this algorithm queries the oracle about whether Em(A)(2n) outputs x, and if so, outputs x. It is possible to emulate n steps of a turing machine in O(nlogn) time, and it is possible to write down 2n in O(n) time, and it is possible to write down x in constant time, so Em(A) runs in poly(n) time. Therefore, the oracle should be accurate about what this turing machine outputs for any given n. However, if you look at what this turing machine is doing, it is giving itself an exponentially longer time bound on each oracle call, so it will eventually be able to fully simulate whatever turing machine it wants. The obvious patch to this is ensuring that, given some monotonically increasing function f for the oracle (linear, polynomial, exponential), when calling the oracle again, n1≤n0. You can't have a chain where the oracles keep recursively calling themselves and giving themselves more computational resources, so all chains of recursive oracle calls have to be bounded above by f(n). So, the complexity class FFRO is the set of all problems solvable in f(n) time, with access to an f(n)-reflective oracle. A C-reflective oracle (where C is some constant) is an oracle that accurately answers any query posed about a probabilistic turing machine that halts with probability 1 in C time or less, and has access to an C-reflective oracle. Pretty much, once n is put into the starting algorithm, every algorithm it calls the oracle upon must have runtime and oracle query length bounded by f(n), and all of the algorithms *that* one calls, and so on. This prevents an unboundedly long chain of different oracle calls to simulate arbitrary turing machines, because eventually you'll get either a finite set of turing machines all calling each other, and an equilibrium exists (there are only finitely many oracle queries of length f(n)), or the condition of bounded runtime and oracle query length will be violated, and the oracle can return anything at that point. #Immediate Theorems **Theorem 1:** There is some constant c such that, for any function f, FSPACE⊆cFcFRO Proof Sketch: We'll be having a sequence of oracle machines, where each one simulates a finite number of steps, and then passes the intermediate state to the next oracle machine, via the oracle query. The overall result is something like how, if an algorithm can be carried out on a sheet of paper, it can be split up among a long chain of people where everyone does some work on the paper, and passes it to the next person. Proof: Let Ak be the turing machine A, advanced by k time steps. Consider the turing machine TM(A0)k that emulates A for k steps, and then queries the oracle about TM(Ak)c. Because A is in FSPACE, all the intermediate states Ac that are recursively called have length bounded above by O(f), and adding the code to simulate it ahead c steps is an O(1) increase in length. Furthermore, emulating k steps of a turing machine can be done in O(klogk) time, which is a constant. Therefore, since it takes constant time to emulate the turing machine, and takes cf(n) time to write down the query to the oracle, none of the oracle queries or turing machines run over time, and once an answer is found, it gets passed back along the chain by the condition where the oracle must be accurate about any algorithm where the runtime stays under the time bound. **Theorem 2:** There is some constant c such that for any function f that is computable in O(f) time, NF⊆cFcFRO. Any nondeterministic turing machine A that implements a function in NF has a corresponding randomized turing machine A′ that generates a random string of length O(f), and uses that to decide which path to go down. Now consider the turing machine that makes (and returns the answer to) the single oracle query (A′,1,2−cf(n)). Pretty much, this asks whether there is *any* random string that leads to A′ accepting, and it perfectly correlates to whether A accepts or rejects. Writing down A′ takes constant time, and writing down cf(n) takes O(f(n)) time, and computing f(n) in the first place takes O(f(n)) time by assumption. #Takeaways This is super-preliminary, there's certainly more computational complexity results to be found by pushing in this direction. The ability to recursively call algorithms with access to the same oracle is powerful, being able to encompass both space and nondeterminism, but is still a step down from true omniscience. It's pretty likely that a variant of AIXItl can be constructed with this tool.
77200bee-e3c1-4e97-89b3-df350c8aa55a
trentmkelly/LessWrong-43k
LessWrong
Revisiting Conway's Law This is a post about how running companies will change. It seems safe to say that markets are becoming more competitive, since AI tools are raising the floor for incumbents and new market entrants alike. But does this shift raise the floor symmetrically?  And what happens if this shift benefits incumbents? It's easy to imagine a world in which tech giants subsume previously difficult to aggregate assets and edge out new competition due to their ability to resource share under their umbrella. This is a scary world. Social mobility in an ultra-consolidated marketplace is not protected.  It's my belief that we can create tools to empower even individuals to become competitive with incumbents. I'll make that argument here, and I'd be interested to hear what you think.  The Feedback Loop is the Primary Product The purpose of a company is to learn about its environment.[1] [2] Companies are learning machines, and the best ones focus on extracting more information with less noise as fast as possible. This dynamic has always been true, and promises to become even more true because smaller teams increasingly generate outsized returns. This rising competitive pressure means organizations have an ever-shrinking margin for error in their information uptake rates. If you’re planning on participating in the new world, the feedback loop is your primary product.[3] Dry powder ready to explode Companies waste their data, and it’s not for lack of trying. Companies spend billions of dollars extracting and storing information about user interactions. But at the end of this effort they are left with a few colorful graphs designed to persuade, not to inform. And it’s not their fault. Companies are bottlenecked by intelligence. Learning from data has been a hugely expensive effort until recently because of labor costs. “Until recently” because frontier models solve the cost of labor problem.  In other words, companies are learning machines, and now it’s possible for the first time
21aeb5b2-fccf-42ea-b1d9-7961d60c999e
StampyAI/alignment-research-dataset/eaforum
Effective Altruism Forum
Ngo and Yudkowsky on alignment difficulty This post is the first in a series of transcribed Discord conversations between Richard Ngo and Eliezer Yudkowsky, moderated by Nate Soares. We've also added Richard and Nate's running summaries of the conversation (and others' replies) from Google Docs. Later conversation participants include Ajeya Cotra, Beth Barnes, Carl Shulman, Holden Karnofsky, Jaan Tallinn, Paul Christiano, Rob Bensinger, and Rohin Shah. The transcripts are a complete record of several Discord channels MIRI made for discussion. We tried to edit the transcripts as little as possible, other than to fix typos and a handful of confusingly-worded sentences, to add some paragraph breaks, and to add referenced figures and links. We didn't end up redacting any substantive content, other than the names of people who would prefer not to be cited. We swapped the order of some chat messages for clarity and conversational flow (indicated with extra timestamps), and in some cases combined logs where the conversation switched channels.   Color key: | | | | | | --- | --- | --- | --- | |  Chat by Richard and Eliezer  |  Other chat  |  Google Doc content  |  Inline comments  |   0. Prefatory comments =====================   | | | --- | | **[Yudkowsky][8:32]**  **(Nov. 6 follow-up comment)** (At Rob's request I'll try to keep this brief, but this was an experimental format and some issues cropped up that seem large enough to deserve notes.)Especially when coming in to the early parts of this dialogue, I had some backed-up hypotheses about "What might be the main sticking point? and how can I address that?" which from the standpoint of a pure dialogue might seem to be causing me to go on digressions, relative to if I was just trying to answer Richard's own questions.  On reading the dialogue, I notice that this looks evasive or like point-missing, like I'm weirdly not just directly answering Richard's questions.Often the questions are answered later, or at least I think they are, though it may not be in the first segment of the dialogue.  But the larger phenomenon is that I came in with some things I wanted to say, and Richard came in asking questions, and there was a minor accidental mismatch there.  It would have looked better if we'd both stated positions first without question marks, say, or if I'd just confined myself to answering questions from Richard.  (This is not a huge catastrophe, but it's something for the reader to keep in mind as a minor hiccup that showed up in the early parts of experimenting with this new format.) | | **[Yudkowsky][8:32]**  **(Nov. 6 follow-up comment)** (Prompted by some later stumbles in attempts to summarize this dialogue.  Summaries seem plausibly a major mode of propagation for a sprawling dialogue like this, and the following request seems like it needs to be very prominent to work - embedded requests later on didn't work.)Please don't summarize this dialogue by saying, "and so Eliezer's MAIN idea is that" or "and then Eliezer thinks THE KEY POINT is that" or "the PRIMARY argument is that" etcetera.  From my perspective, everybody comes in with a different set of sticking points versus things they see as obvious, and the conversation I have changes drastically depending on that.  In the old days this used to be the Orthogonality Thesis, Instrumental Convergence, and superintelligence being a possible thing at all; today most OpenPhil-adjacent folks have other sticking points instead.Please transform:* "Eliezer's main reply is..." -> "Eliezer replied that..." * "Eliezer thinks the key point is..." -> "Eliezer's point in response was..." * "Eliezer thinks a major issue is..."  -> "Eliezer replied that one issue is..." * "Eliezer's primary argument against this is..." -> "Eliezer tried the counterargument that..." * "Eliezer's main scenario for this is..." -> "In a conversation in September of 2021, Eliezer sketched a hypothetical where..." Note also that the transformed statements say what you *observed,* whereas the untransformed statements are (often incorrect) *inferences* about my latent state of mind.(Though "distinguishing relatively unreliable inference from more reliable observation" is not necessarily *the key idea* here or *the one big reason* I'm asking for this.  That's just one point I tried making - one argument that I hope might help drive home the larger thesis.) |   1. September 5 conversation ===========================   1.1. Deep vs. shallow problem-solving patterns ----------------------------------------------   | | | --- | | **[Ngo][11:00]** Hi all! Looking forward to the discussion. | | **[Yudkowsky][11:01]** Hi and welcome all.  My name is Eliezer and I think alignment is really actually quite extremely difficult.  Some people seem to not think this!  It's an important issue so ought to be resolved somehow, which we can hopefully fully do today.  (I will however want to take a break after the first 90 minutes, if it goes that far and if Ngo is in sleep-cycle shape to continue past that.) | | **[Ngo][11:02]** A break in 90 minutes or so sounds good.Here's one way to kick things off: I agree that humans trying to align arbitrarily capable AIs seems very difficult. One reason that I'm more optimistic (or at least, not confident that we'll have to face the full very difficult version of the problem) is that at a certain point AIs will be doing most of the work.When you talk about alignment being difficult, what types of AIs are you thinking about aligning? | | **[Yudkowsky][11:04]** On my model of the Other Person, a lot of times when somebody thinks alignment shouldn't be that hard, they think there's some particular thing you can do to align an AGI, which isn't that hard, and their model is missing one of the foundational difficulties for why you can't do (easily or at all) one step of their procedure.  So one of my own conversational processes might be to poke around looking for a step that the other person doesn't realize is hard.  That said, I'll try to directly answer your own question first. | | **[Ngo][11:07]** I don't think I'm confident that there's any particular thing you can do to align an AGI. Instead I feel fairly uncertain over a broad range of possibilities for how hard the problem turns out to be.And on some of the most important variables, it seems like evidence from the last decade pushes towards updating that the problem will be easier. | | **[Yudkowsky][11:09]** I think that after AGI becomes possible at all and then possible to scale to dangerously superhuman levels, there will be, in the best-case scenario where a lot of other social difficulties got resolved, a 3-month to 2-year period where only a very few actors have AGI, meaning that it was socially possible for those few actors to decide to *not* just scale it to where it automatically destroys the world.During this step, if humanity is to survive, somebody has to perform some feat that causes the world to *not* be destroyed in 3 months or 2 years when too many actors have access to AGI code that will destroy the world if its intelligence dial is turned up. This requires that the first actor or actors to build AGI, be able to do *something* with that AGI which prevents the world from being destroyed; if it didn't require superintelligence, we could go do that thing right now, but no such human-doable act apparently exists so far as I can tell.So we want the least dangerous, most easily aligned thing-to-do-with-an-AGI, but it does have to be a pretty powerful act to prevent the automatic destruction of Earth after 3 months or 2 years. It has to "flip the gameboard" rather than letting the suicidal game play out. We need to align the AGI that performs this pivotal act, to perform that pivotal act without killing everybody.Parenthetically, no act powerful enough and gameboard-flipping enough to qualify is inside the Overton Window of politics, or possibly even of effective altruism, which presents a separate social problem. I usually dodge around this problem by picking an exemplar act which is powerful enough to actually flip the gameboard, but not the most alignable act because it would require way too many aligned details: Build self-replicating open-air nanosystems and use them (only) to melt all GPUs.Since any such nanosystems would have to operate in the full open world containing lots of complicated details, this would require tons and tons of alignment work, is not the pivotal act easiest to align, and we should do some other thing instead. But the other thing I have in mind is also outside the Overton Window, just like this is. So I use "melt all GPUs" to talk about the requisite power level and the Overton Window problem level, both of which seem around the right levels to me, but the actual thing I have in mind is more alignable; and this way, I can reply to anyone who says "How dare you?!" by saying "Don't worry, I don't actually plan on doing that." | | **[Ngo][11:14]** One way that we could take this discussion is by discussing the pivotal act "make progress on the alignment problem faster than humans can". | | **[Yudkowsky][11:15]** This sounds to me like it requires extreme levels of alignment and operating in extremely dangerous regimes, such that, if you could do that, it would seem much more sensible to do some other pivotal act first, using a lower level of alignment tech. | | **[Ngo][11:16]** Okay, this seems like a crux on my end. | | **[Yudkowsky][11:16]** In particular, I would hope that - in unlikely cases where we survive at all - we were able to survive by operating a superintelligence only in the lethally dangerous, but still less dangerous, regime of "engineering nanosystems".Whereas "solve alignment for us" seems to require operating in the even more dangerous regimes of "write AI code for us" and "model human psychology in tremendous detail". | | **[Ngo][11:17]** What makes these regimes so dangerous? Is it that it's very hard for humans to exercise oversight?One thing that makes these regimes seem less dangerous to me is that they're broadly in the domain of "solving intellectual problems" rather than "achieving outcomes in the world". | | **[Yudkowsky][11:19][11:21]** Every AI output *effectuates* outcomes in the world.  If you have a powerful unaligned mind hooked up to outputs that can start causal chains that effectuate dangerous things, it doesn't matter whether the comments on the code say "intellectual problems" or not.The danger of "solving an intellectual problem" is when it requires a powerful mind to think about domains that, when solved, render very cognitively accessible strategies that can do dangerous things. | | I expect the first alignment solution you can actually deploy in real life, in the unlikely event we get a solution at all, looks like 98% "don't think about all these topics that we do not absolutely need and are adjacent to the capability to easily invent very dangerous outputs" and 2% "actually think about this dangerous topic but please don't come up with a strategy inside it that kills us". | | **[Ngo][11:21][11:22]** Let me try and be more precise about the distinction. It seems to me that systems which have been primarily trained to make predictions about the world would by default lack a lot of the cognitive machinery which humans use to take actions which pursue our goals. | | Perhaps another way of phrasing my point is something like: it doesn't seem implausible to me that we build AIs that are significantly more intelligent (in the sense of being able to understand the world) than humans, but significantly less agentic.Is this a crux for you?(obviously "agentic" is quite underspecified here, so maybe it'd be useful to dig into that first) | | **[Yudkowsky][11:27][11:33]** I would certainly have learned very new and very exciting facts about intelligence, facts which indeed contradict my present model of how intelligences liable to be discovered by present research paradigms work, if you showed me... how can I put this in a properly general way... that problems I thought were about searching for states that get fed into a result function and then a result-scoring function, such that the input gets an output with a high score, were in fact not about search problems like that. I have sometimes given more specific names to this problem setup, but I think people have become confused by the terms I usually use, which is why I'm dancing around them.In particular, just as I have a model of the Other Person's Beliefs in which they think alignment is easy because they don't know about difficulties I see as very deep and fundamental and hard to avoid, I also have a model in which people think "why not just build an AI which does X but not Y?" because they don't realize what X and Y have in common, which is something that draws deeply on having deep models of intelligence. And it is hard to convey this deep theoretical grasp. But you can also see powerful practical hints that these things are much more correlated than, eg, Robin Hanson was imagining during the [FOOM debate](https://intelligence.org/ai-foom-debate/), because Robin did not think something like GPT-3 should exist; Robin thought you should need to train lots of specific domains that didn't generalize. I argued then with Robin that it was something of a hint that humans had visual cortex and cerebellar cortex but not Car Design Cortex, in order to design cars. Then in real life, it proved that reality was far to the Eliezer side of Eliezer on the [Eliezer-Robin axis](https://intelligence.org/2017/10/20/alphago/), and things like GPT-3 were built with *less* architectural complexity and generalized *more* than I was arguing to Robin that complex architectures should generalize over domains. | | The metaphor I sometimes use is that it is very hard to build a system that drives cars painted red, but is not at all adjacent to a system that could, with a few alterations, prove to be very good at driving a car painted blue.  The "drive a red car" problem and the "drive a blue car" problem have too much in common.  You can maybe ask, "Align a system so that it has the capability to drive red cars, but refuses to drive blue cars."  You can't make a system that is very good at driving red-painted cars, but lacks the basic capability to drive blue-painted cars because you never trained it on that.  The patterns found by gradient descent, by genetic algorithms, or by other plausible methods of optimization, for driving red cars, would be patterns very close to the ones needed to drive blue cars.  When you optimize for red cars you get the blue car *capability* whether you like it or not. | | **[Ngo][11:32]** Does your model of intelligence rule out building AIs which make dramatic progress in mathematics without killing us all? | | **[Yudkowsky][11:34][11:39]** If it were possible to perform some pivotal act that saved the world with an AI that just made progress on proving mathematical theorems, without, eg, needing to explain those theorems to humans, I'd be *extremely* interested in that as a potential pivotal act. We wouldn't be out of the woods, and I wouldn't actually know how to build an AI like that without killing everybody, but it would immediately trump everything else as the obvious line of research to pursue.Parenthetically, there is very very little which my model of intelligence *rules out*. I think we all die because we cannot do certain dangerous things correctly, *on the very first try in the dangerous regimes where one mistake kills you*, and do them *before* proliferation of much easier technologies kills us. If you have the Textbook From 100 Years In The Future that gives the simple robust solutions for everything, that actually work, you can write a superintelligence that thinks 2 + 2 = 5 because the Textbook gives the methods for doing that which are simple and actually work in practice in real life. | | (The Textbook has the equivalent of "use ReLUs instead of sigmoids" everywhere, and avoids all the clever-sounding things that will work at subhuman levels and blow up when you run them at superintelligent levels.) | | **[Ngo][11:36][11:40]** Hmm, so suppose we train an AI to prove mathematical theorems when given them, perhaps via some sort of adversarial setter-solver training process.By default I have the intuition that this AI could become extremely good at proving theorems - far beyond human level - without having goals about real-world outcomes. | | It seems to me that in your model of intelligence, being able to do tasks like mathematics is closely coupled with trying to achieve real-world outcomes. But I'd actually take GPT-3 as some evidence against this position (although still evidence in favour of your position over Hanson's), since it seems able to do a bunch of reasoning tasks while still not being very agentic.There's some alternative world where we weren't able to train language models to do reasoning tasks without first training them to perform tasks in complex RL environments, and in that world I'd be significantly less optimistic. | | **[Yudkowsky][11:41]** I put to you that there is a predictable bias in your estimates, where you don't know about the Deep Stuff that is required to prove theorems, so you imagine that certain cognitive capabilities are more disjoint than they actually are.  If you knew about the things that humans are using to reuse their reasoning about chipped handaxes and other humans, to prove math theorems, you would see it as more plausible that proving math theorems would generalize to chipping handaxes and manipulating humans.GPT-3 is a... complicated story, on my view of it and intelligence.  We're looking at an interaction between tons and tons of memorized shallow patterns.  GPT-3 is *very* unlike the way that natural selection built humans. | | **[Ngo][11:44]** I agree with that last point. But this is also one of the reasons that I previously claimed that AIs could be more intelligent than humans while being less agentic, because there are systematic differences between the way in which natural selection built humans, and the way in which we'll train AGIs. | | **[Yudkowsky][11:45]** My current suspicion is that Stack More Layers alone is not going to take us to GPT-6 which is a true AGI; and this is because of the way that GPT-3 is, in your own terminology, "not agentic", and which is, in my terminology, not having gradient descent on GPT-3 run across sufficiently deep problem-solving patterns. | | **[Ngo][11:46]** Okay, that helps me understand your position better.So here's one important difference between humans and neural networks: humans face the genomic bottleneck which means that each individual has to rederive all the knowledge about the world that their parents already had. If this genetic bottleneck hadn't been so tight, then individual humans would have been significantly less capable of performing novel tasks. | | **[Yudkowsky][11:50]** I agree. | | **[Ngo][11:50]** In my terminology, this is a reason that humans are "more agentic" than we otherwise would have been. | | **[Yudkowsky][11:50]** This seems indisputable. | | **[Ngo][11:51]** Another important difference: humans were trained in environments where we had to run around surviving all day, rather than solving maths problems etc. | | **[Yudkowsky][11:51]** I continue to nod. | | **[Ngo][11:52]** Supposing I agree that reaching a certain level of intelligence will require AIs with the "deep problem-solving patterns" you talk about, which lead AIs to try to achieve real-world goals. It still seems to me that there's likely a lot of space between that level of intelligence, and human intelligence.And if that's the case, then we could build AIs which help us solve the alignment problem before we build AIs which instantiate sufficiently deep problem-solving patterns that they decide to take over the world.Nor does it seem like the reason *humans* want to take over the world is because of a deep fact about our intelligence. It seems to me that humans want to take over the world mainly because that's very similar to things we evolved to do (like taking over our tribe). | | **[Yudkowsky][11:57]** So here's the part that I agree with: If there were one theorem only mildly far out of human reach, like proving the ABC Conjecture (if you think it hasn't already been proven), and providing a machine-readable proof of this theorem would immediately save the world - say, aliens will give us an aligned superintelligence, as soon as we provide them with this machine-readable proof - then there would exist a plausible though not certain road to saving the world, which would be to try to build a *shallow* mind that proved the ABC Conjecture by memorizing tons of relatively shallow patterns for mathematical proofs learned through self-play; without that system ever abstracting math as deeply as humans do, but the sheer width of memory and sheer depth of search sufficing to do the job. I am not sure, to be clear, that this would work. But my model of intelligence does not rule it out. | | **[Ngo][11:58]** (I'm actually thinking of a mind which understands maths more deeply than humans - but perhaps only understands maths, or perhaps also a range of other sciences better than humans.) | | **[Yudkowsky][12:00]** Parts I disagree with: That "help us solve alignment" bears any significant overlap with "provide us a machine-readable proof of the ABC Conjecture without thinking too deeply about it". That humans want to take over the world only because it resembles things we evolved to do. | | **[Ngo][12:01]** I definitely agree that humans don't *only* want to take over the world because it resembles things we evolved to do. | | **[Yudkowsky][12:02]** Alas, eliminating 5 reasons why something would go wrong doesn't help much if there's 2 remaining reasons something would go wrong that are much harder to eliminate! | | **[Ngo][12:02]** But if we imagine having a human-level intelligence which *hadn't* evolved primarily to do things that reasonably closely resembled taking over the world, then I expect that we could ask that intelligence questions in a fairly safe way.And that's also true for an intelligence that is noticeably above human level.So one question is: how far above human level could we get before a system which has only been trained to do things like answer questions and understand the world will decide to take over the world? | | **[Yudkowsky][12:04]** I think this is one of the very rare cases where the intelligence difference between "village idiot" and "Einstein", which I'd usually see as very narrow, makes a structural difference! I think you can get some outputs from a village-idiot-level AGI, which got there by training on domains exclusively like math, and this will proooobably not destroy the world (*if* you were right about that, about what was going on inside). I have more concern about the Einstein level. | | **[Ngo][12:05]** Let's focus on the Einstein level then.Human brains have been optimised very little for doing science.This suggests that building an AI which is Einstein-level at doing science is significantly easier than building an AI which is Einstein-level at taking over the world (or other things which humans evolved to do). | | **[Yudkowsky][12:08]** I think there's a certain broad sense in which I agree with the literal truth of what you just said. You will systematically overestimate *how much* easier, or how far you can push the science part without getting the taking-over-the-world part, for as long as your model is ignorant of what they have in common. | | **[Ngo][12:08]** Maybe this is a good time to dig into the details of what they have in common, then. | | **[Yudkowsky][12:09][12:11]][12:13]** I feel like I haven't had much luck with trying to explain that on previous occasions. Not to you, to others too.There are shallow topics like why p-zombies can't be real and how quantum mechanics works and why science ought to be using likelihood functions instead of p-values, and I can *barely* explain those to *some* people, but then there are some things that are apparently much harder to explain than that and which defeat my abilities as an explainer. | | That's why I've been trying to point out that, even if you don't know the specifics, there's an estimation bias that you can realize should exist in principle. | | Of course, I also haven't had much luck in saying to people, "Well, even if you don't know the truth about X that would let you see Y, can you not see by abstract reasoning that knowing *any* truth about X would predictably cause you to update in the direction of Y" - people don't seem to actually internalize that much either. Not you, other discussions. | | **[Ngo][12:10][12:11][12:13]** Makes sense. Are there ways that I could try to make this easier? E.g. I could do my best to explain what I think your position is.Given what you've said I'm not optimistic about this helping much. | | But insofar as this is the key set of intuitions which has been informing your responses, it seems worth a shot.Another approach would be to focus on our predictions for how AI capabilities will play out over the next few years. | | I take your point about my estimation bias. To me it feels like there's also a bias going the other way, which is that as long as we don't know the mechanisms by which different human capabilities work, we'll tend to lump them together as one thing. | | **[Yudkowsky][12:14]** Yup. If you didn't know about visual cortex and auditory cortex, or about eyes and ears, you would assume much more that any sentience ought to both see and hear. | | **[Ngo][12:16]** So then my position is something like: human pursuit of goals is driven by emotions and reward signals which are deeply evolutionarily ingrained, and without those we'd be much safer but not that much worse at pattern recognition. | | **[Yudkowsky][12:17]** If there's a pivotal act you can get just by supreme acts of pattern recognition, that's right up there with "pivotal act composed solely of math" for things that would obviously instantly become the prime direction of research. | | **[Ngo][12:18]** To me it seems like maths is *much more* about pattern recognition than, say, being a CEO. Being a CEO requires coherence over long periods of time; long-term memory; motivation; metacognition; etc. | | **[Yudkowsky][12:18][12:23]** (One occasionally-argued line of research can be summarized from a certain standpoint as "how about a pivotal act composed entirely of predicting text" and to this my reply is "you're trying to get fully general AGI capabilities by predicting text that is *about* deep / 'agentic' reasoning, and that doesn't actually help".)Human math is very much about goals. People want to prove subtheorems on the way to proving theorems. We might be able to make a *different* kind of mathematician that works more like GPT-3 in the dangerously inscrutable parts that are all noninspectable vectors of floating-point numbers, but even there you'd need some Alpha-Zero-like outer framework to supply the direction of search. | | That outer framework might be able to be powerful enough without being reflective, though. So it would plausibly be *much easier* to build a mathematician that was capable of superhuman formal theorem-proving but not agentic. The reality of the world might tell us "lolnope" but my model of intelligence doesn't mandate that. That's why, if you gave me a pivotal act composed entirely of "output a machine-readable proof of this theorem and the world is saved", I would pivot there! It actually does seem like it would be a lot easier! | | **[Ngo][12:21][12:25]** Okay, so if I attempt to rephrase your argument: | | Your position: There's a set of fundamental similarities between tasks like doing maths, doing alignment research, and taking over the world. In all of these cases, agents based on techniques similar to modern ML which are very good at them will need to make use of deep problem-solving patterns which include goal-oriented reasoning. So while it's possible to beat humans at some of these tasks without those core competencies, people usually overestimate the extent to which that's possible. | | **[Yudkowsky][12:25]** Remember, a lot of my concern is about what happens *first*, especially if it happens soon enough that future AGI bears any resemblance whatsoever to modern ML; not about what can be done in principle. | | **[Soares][12:26]** (Note: it's been 85 min, and we're planning to take a break at 90min, so this seems like a good point for a little bit more clarifying back-and-forth on Richard's summary before a break.) | | **[Ngo][12:26]** I'll edit to say "plausible for ML techniques"?(and "extent to which that's plausible") | | **[Yudkowsky][12:28]** I think that obvious-to-me future outgrowths of modern ML paradigms are *extremely* liable to, if they can learn how to do sufficiently superhuman X, generalize to taking over the world. How fast this happens does depend on X. It would plausibly happen relatively slower (at higher levels) with theorem-proving as the X, and with architectures that carefully stuck to gradient-descent-memorization over shallow network architectures to do a pattern-recognition part with search factored out (sort of, this is not generally safe, this is not a general formula for safe things!); rather than imposing anything like the genetic bottleneck you validly pointed out as a reason why humans generalize. Profitable X, and all X I can think of that would actually save the world, seem much more problematic. | | **[Ngo][12:30]** Okay, happy to take a break here. | | **[Soares][12:30]** Great timing! | | **[Ngo][12:30]** We can do a bit of meta discussion afterwards; my initial instinct is to push on the question of how similar Eliezer thinks alignment research is to theorem-proving. | | **[Yudkowsky][12:30]** Yup. This is my lunch break (actually my first-food-of-day break on a 600-calorie diet) so I can be back in 45min if you're still up for that. | | **[Ngo][12:31]** Sure.Also, if any of the spectators are reading in real time, and have suggestions or comments, I'd be interested in hearing them. | | **[Yudkowsky][12:31]** I'm also cheerful about spectators posting suggestions or comments during the break. | | **[Soares][12:32]** Sounds good. I declare us on a break for 45min, at which point we'll reconvene (for another 90, by default).Floor's open to suggestions & commentary. |   1.2. Requirements for science -----------------------------   | | | --- | | **[Yudkowsky][12:50]** I seem to be done early if people (mainly Richard) want to resume in 10min (30m break) | | **[Ngo][12:51]** Yepp, happy to do so | | **[Soares][12:57]** Some quick commentary from me:* It seems to me like we're exploring a crux in the vicinity of "should we expect that systems capable of executing a pivotal act would, by default in lieu of significant technical alignment effort, be using their outputs to optimize the future". * I'm curious whether you two agree that this is a crux (but plz don't get side-tracked answering me). * The general discussion seems to be going well to me. + In particular, huzzah for careful and articulate efforts to zero in on cruxes. | | **[Ngo][13:00]** I think that's a crux for the specific pivotal act of "doing better alignment research", and maybe some other pivotal acts, but not all (or necessarily most) of them. | | **[Yudkowsky][13:01]** I should also say out loud that I've been working a bit with Ajeya on making an attempt to convey the intuitions behind there being deep patterns that generalize and are liable to be learned, which covered a bunch of ground, taught me how much ground there was, and made me relatively more reluctant to try to re-cover the same ground in this modality. | | **[Ngo][13:02]** Going forward, a couple of things I'd like to ask Eliezer about:* In what ways are the tasks that are most useful for alignment similar or different to proving mathematical theorems (which we agreed might generalise relatively slowly to taking over the world)? * What are the deep problem-solving patterns underlying these tasks? * Can you summarise my position? I was going to say that I was most optimistic about #2 in order to get these ideas into a public formatBut if that's going to happen anyway based on Ajeya's work, then that seems less important | | **[Yudkowsky][13:03]** I could still try briefly and see what happens. | | **[Ngo][13:03]** That seems valuable to me, if you're up for it.At the same time, I'll try to summarise some of my own intuitions about intelligence which I expect to be relevant. | | **[Yudkowsky][13:04]** I'm not sure I could summarize your position in a non-straw way. To me there's a huge visible distance between "solve alignment for us" and "output machine-readable proofs of theorems" where I can't give a good account of why you think talking about the latter would tell us much about the former. I don't know what other pivotal act you think might be easier. | | **[Ngo][13:06]** I see. I was considering "solving scientific problems" as an alternative to "proving theorems", with alignment being one (particularly hard) example of a scientific problem.But decided to start by discussing theorem-proving since it seemed like a clearer-cut case. | | **[Yudkowsky][13:07]** Can you predict in advance why Eliezer thinks "solving scientific problems" is significantly thornier? (Where alignment is like totally not "a particularly hard example of a scientific problem" except in the sense that it has science in it at all; which is maybe the real crux; but also a more difficult issue.) | | **[Ngo][13:09]** Based on some of your earlier comments, I'm currently predicting that you think the step where the solutions need to be legible to and judged by humans makes science much thornier than theorem-proving, where the solutions are machine-checkable. | | **[Yudkowsky][13:10]** That's one factor. Should I state the other big one or would you rather try to state it first? | | **[Ngo][13:10]** Requiring a lot of real-world knowledge for science?If it's not that, go ahead and say it. | | **[Yudkowsky][13:11]** That's one way of stating it. The way I'd put it is that it's about making up hypotheses about the real world.Like, the real world is then a thing that the AI is modeling, at all.Factor 3: On many interpretations of doing science, you would furthermore need to think up experiments. That's planning, value-of-information, search for an experimental setup whose consequences distinguish between hypotheses (meaning you're now searching for initial setups that have particular causal consequences). | | **[Ngo][13:12]** To me "modelling the real world" is a very continuous variable. At one end you have physics equations that are barely separable from maths problems, at the other end you have humans running around in physical bodies.To me it seems plausible that we could build an agent which solves scientific problems but has very little self-awareness (in the sense of knowing that it's an AI, knowing that it's being trained, etc).I expect that your response to this is that modelling oneself is part of the deep problem-solving patterns which AGIs are very likely to have. | | **[Yudkowsky][13:15]** There's a problem of *inferring the causes of sensory experience* in cognition-that-does-science. (Which, in fact, also appears in the way that humans do math, and is possibly inextricable from math in general; but this is an example of the sort of deep model that says "Whoops I guess you get science from math after all", not a thing that makes science less dangerous because it's more like just math.)You can build an AI that only ever drives red cars, and which, at no point in the process of driving a red car, ever needs to drive a blue car in order to drive a red car. That doesn't mean its red-car-driving capabilities won't be extremely close to blue-car-driving capabilities if at any point the internal cognition happens to get pointed towards driving a blue car.The fact that there's a deep car-driving pattern which is the same across red cars and blue cars doesn't mean that the AI has ever driven a blue car, per se, or that it has to drive blue cars to drive red cars. But if blue cars are fire, you sure are playing with that fire. | | **[Ngo][13:18]** To me, "sensory experience" as in "the video and audio coming in from this body that I'm piloting" and "sensory experience" as in "a file containing the most recent results of the large hadron collider" are very very different.(I'm not saying we could train an AI scientist just from the latter - but plausibly from data that's closer to the latter than the former) | | **[Yudkowsky][13:19]** So there's separate questions about "does an AGI *inseparably need* to model itself inside the world to do science" and "did we build something that would be very close to modeling itself, and could easily stumble across that by accident somewhere in the inscrutable floating-point numbers, especially if that was even slightly useful for solving the outer problems". | | **[Ngo][13:19]** Hmm, I see | | **[Yudkowsky][13:20][13:21][13:21]** If you're trying to build an AI that literally does science only to observations collected without the AI having had a causal impact on those observations, that's legitimately "more dangerous than math but maybe less dangerous than active science". | | You might still stumble across an active scientist because it was a simple internal solution to something, but the outer problem would be legitimately stripped of an important structural property the same way that pure math not describing Earthly objects is stripped of important structural properties. | | And of course my reaction again is, "There is no pivotal act which uses only that cognitive capability." | | **[Ngo][13:20][13:21][13:26]** I guess that my (fairly strong) prior here is that something like self-modelling, which is very deeply built into basically every organism, is a very hard thing for an AI to stumble across by accident without significant optimisation pressure in that direction. | | But I'm not sure how to argue this except by digging into your views on what the deep problem-solving patterns are. So if you're still willing to briefly try and explain those, that'd be useful to me. | | "Causal impact" again seems like a very continuous variable - it seems like the *amount* of causal impact you need to do good science is much less than the amount which is needed to, say, be a CEO. | | **[Yudkowsky][13:26]** The amount doesn't seem like the key thing, nearly so much as what underlying facilities you need to do whatever amount of it you need. | | **[Ngo][13:27]** Agreed. | | **[Yudkowsky][13:27]** If you go back to the 16th century and ask for just one mRNA vaccine, that's not much of a difference from asking for a ~~million~~ hundred of them. | | **[Ngo][13:28]** Right, so the additional premise which I'm using here is that the ability to reason about causally impacting the world in order to achieve goals is something that you can have a little bit of.Or a lot of, and that the difference between these might come down to the training data used.Which at this point I don't expect you to agree with. | | **[Yudkowsky][13:29]** If you have reduced a pivotal act to "look over the data from this hadron collider you neither built nor ran yourself", that really is a structural step down from "do science" or "build a nanomachine". But I can't see any pivotal acts like that, so is that question much of a crux?If there's intermediate steps they might be described in my native language like "reason about causal impacts across only this one preprogrammed domain which you didn't learn in a general way, in only this part of the cognitive architecture that is separable from the rest of the cognitive architecture". | | **[Ngo][13:31]** Perhaps another way of phrasing this intermediate step is that the agent has a shallow understanding of how to induce causal impacts. | | **[Yudkowsky][13:31]** What is "shallow" to you? | | **[Ngo][13:31]** In a similar way to how you claim that GPT-3 has a shallow understanding of language. | | **[Yudkowsky][13:32]** So it's memorized a ton of shallow causal-impact-inducing patterns from a large dataset, and this can be verified by, for example, presenting it with an example mildly outside the dataset and watching it fail, which we think will confirm our hypothesis that it didn't learn any deep ways of solving that dataset. | | **[Ngo][13:33]** Roughly speaking, yes. | | **[Yudkowsky][13:34]** Eg, it wouldn't surprise us at all if GPT-4 had learned to predict "27 \* 18" but not "what is the area of a rectangle 27 meters by 18 meters"... is what I'd like to say, but Codex sure did demonstrate those two were kinda awfully proximal. | | **[Ngo][13:34]** Here's one way we could flesh this out. Imagine an agent that loses coherence quickly when it's trying to act in the world.So for example, we've trained it to do scientific experiments over a period of a few hours or daysAnd then it's very good at understanding the experimental data and extracting patterns from itBut upon running it for a week or a month, it loses coherence in a similar way to how GPT-3 loses coherence - e.g. it forgets what it's doing.My story for why this might happen is something like: there is a specific skill of having long-term memory, and we never trained our agent to have this skill, and so it has not acquired that skill (even though it can reason in very general and powerful ways in the short term).This feels similar to the argument I was making before about how an agent might lack self-awareness, if we haven't trained it specifically to have that. | | **[Yudkowsky][13:39]** There's a set of obvious-to-me tactics for doing a pivotal act with minimal danger, which I do not think collectively make the problem safe, and one of these sets of tactics is indeed "Put a limit on the 'attention window' or some other internal parameter, ramp it up slowly, don't ramp it any higher than you needed to solve the problem." | | **[Ngo][13:41]** You could indeed do this manually, but my expectation is that you could also do this automatically, by training agents in environments where they don't benefit from having long attention spans. | | **[Yudkowsky][13:42]** (Any time one imagines a specific tactic of this kind, if one has the [security mindset](https://intelligence.org/2017/11/25/security-mindset-ordinary-paranoia/), one can also imagine all sorts of ways it might go wrong; for example, an attention window can be defeated if there's any aspect of the attended data or the internal state that ended up depending on past events in a way that leaked info about them. But, depending on how much superintelligence you were throwing around elsewhere, you could maybe get away with that, some of the time.) | | **[Ngo][13:43]** And that if you put agents in environments where they answer questions but don't interact much with the physical world, then there will be many different traits which are necessary for achieving goals in the real world which they will lack, because there was little advantage to the optimiser of building those traits in. | | **[Yudkowsky][13:43]** I'll observe that TransformerXL built an attention window that generalized, trained it on I think 380 tokens or something like that, and then found that it generalized to 4000 tokens or something like that. | | **[Ngo][13:43]** Yeah, an order of magnitude of generalisation is not surprising to me. | | **[Yudkowsky][13:44]** Having observed one order of magnitude, I would personally not be surprised by two orders of magnitude either, after seeing that. | | **[Ngo][13:45]** I'd be a little surprised, but I assume it would happen eventually. |   1.3. Capability dials ---------------------   | | | --- | | **[Yudkowsky][13:46]** I have a sense that this is all circling back to the question, "But what is it we *do* with the intelligence thus weakened?" If you can save the world using a rock, I can build you a very safe rock. | | **[Ngo][13:46]** Right.So far I've said "alignment research", but I haven't been very specific about it.I guess some context here is that I expect that the first things we do with intelligence similar to this is create great wealth, produce a bunch of useful scientific advances, etc.And that we'll be in a world where people take the prospect of AGI much more seriously | | **[Yudkowsky][13:48]** I mostly expect - albeit with some chance that reality says "So what?" to me and surprises me, because it is not as solidly determined as some other things - that we do not hang around very long in the "weirdly ~human AGI" phase before we get into the "if you crank up this AGI it destroys the world" phase. Less than 5 years, say, to put numbers on things.It would not surprise me in the least if the world ends before self-driving cars are sold on the mass market. On some quite plausible scenarios which I think have >50% of my probability mass at the moment, research AGI companies would be able to produce prototype car-driving AIs if they spent time on that, given the near-world-ending tech level; but there will be Many Very Serious Questions about this relatively new unproven advancement in machine learning being turned loose on the roads. And their AGI tech will gain the property "can be turned up to destroy the world" before Earth gains the property "you're allowed to sell self-driving cars on the mass market" because there just won't be much time. | | **[Ngo][13:52]** Then I expect that another thing we do with this is produce a very large amount of data which rewards AIs for following human instructions. | | **[Yudkowsky][13:52]** On other scenarios, of course, self-driving becomes possible by limited AI well before things start to break (further) on AGI. And on some scenarios, the way you got to AGI was via some breakthrough that is already scaling pretty fast, so by the time you can use the tech to get self-driving cars, that tech already ends the world if you turn up the dial, or that event follows very swiftly. | | **[Ngo][13:53]** When you talk about "cranking up the AGI", what do you mean?Using more compute on the same data? | | **[Yudkowsky][13:53]** Running it with larger bounds on the for loops, over more GPUs, to be concrete about it. | | **[Ngo][13:53]** In a RL setting, or a supervised, or unsupervised learning setting?Also: can you elaborate on the for loops? | | **[Yudkowsky][13:56]** I do not quite think that gradient descent on Stack More Layers alone - as used by OpenAI for GPT-3, say, and as *opposed* to Deepmind which builds more complex artifacts like Mu Zero or AlphaFold 2 - is liable to be the first path taken to AGI. I am reluctant to speculate more in print about clever ways to AGI, and I think any clever person out there will, if they are really clever and not just a fancier kind of stupid, not talk either about what they think is missing from Stack More Layers or how you would really get AGI. That said, the way that you cannot just run GPT-3 at a greater search depth, the way you can run Mu Zero at a greater search depth, is part of why I think that AGI is not likely to look *exactly* like GPT-3; the thing that kills us is likely to be a thing that can get more dangerous when you turn up a dial on it, not a thing that intrinsically has no dials that can make it more dangerous. |   1.4. Consequentialist goals vs. deontologist goals --------------------------------------------------   | | | --- | | **[Ngo][13:59]** Hmm, okay. Let's take a quick step back and think about what would be useful for the last half hour.I want to flag that my intuitions about pivotal acts are not very specific; I'm quite uncertain about how the geopolitics of that situation would work, as well as the timeframe between somewhere-near-human-level AGI and existential risk AGI.So we could talk more about this, but I expect there'd be a lot of me saying "well we can't rule out that X happens", which is perhaps not the most productive mode of discourse.A second option is digging into your intuitions about how cognition works. | | **[Yudkowsky][14:03]** Well, obviously, in the limit of alignment not being accessible to our civilization, and my successfully building a model weaker than reality which nonetheless correctly rules out alignment being accessible to our civilization, I could spend the rest of my short remaining lifetime arguing with people whose models are weak enough to induce some area of ignorance where for all they know you could align a thing. But that is predictably how conversations go in possible worlds where the Earth is doomed; so somebody wiser on the meta-level, though also ignorant on the object-level, might prefer to ask: "Where do you think your knowledge, rather than your ignorance, says that alignment ought to be doable and you will be surprised if it is not?" | | **[Ngo][14:07]** That's a fair point. Although it seems like a structural property of the "pivotal act" framing, which builds in doom by default. | | **[Yudkowsky][14:08]** We could talk about that, if you think it's a crux. Though I'm also not thinking that this whole conversation gets done in a day, so maybe for publishability reasons we should try to focus more on one line of discussion?But I do think that lots of people get their optimism by supposing that the world can be saved by doing less dangerous things with an AGI. So it's a big ol' crux of mine on priors. | | **[Ngo][14:09]** Agreed that one line of discussion is better; I'm happy to work within the pivotal act framing for current purposes.A third option is that I make some claims about how cognition works, and we see how much you agree with them. | | **[Yudkowsky][14:12]** (Though it's something of a restatement, a reason I'm not going into "my intuitions about how cognition works" is that past experience has led me to believe that conveying this info in a form that the Other Mind will actually absorb and operate, is really quite hard and takes a long discussion, relative to my current abilities to Actually Explain things; it is the sort of thing that might take doing homework exercises to grasp how one structure is appearing in many places, as opposed to just being flatly told that to no avail, and I have not figured out the homework exercises.)I'm cheerful about hearing your own claims about cognition and disagreeing with them. | | **[Ngo][14:12]** GreatOkay, so one claim is that something like deontology is a fairly natural way for minds to operate. | | **[Yudkowsky][14:14]** ("If that were true," he thought at once, "bureaucracies and books of regulations would be a lot more efficient than they are in real life.") | | **[Ngo][14:14]** Hmm, although I think this was probably not a very useful phrasing, let me think about how to rephrase it.Okay, so in [our earlier email discussion](https://docs.google.com/document/d/1XXGbFnWPXtsRiTxleBZ0LAGtU7_7CYKt17nnowfpKvo/edit), we talked about the concept of "obedience".To me it seems like it is just as plausible for a mind to have a concept like "obedience" as its rough goal, as a concept like maximising paperclips.If we imagine training an agent on a large amount of data which pointed in the rough direction of rewarding obedience, for example, then I imagine that by default obedience would be a constraint of comparable strength to, say, the human survival instinct.(Which is obviously not strong enough to stop humans doing a bunch of things that contradict it - but it's a pretty good starting point.) | | **[Yudkowsky][14:18]** Heh. You mean of comparable strength to the human instinct to explicitly maximize inclusive genetic fitness? | | **[Ngo][14:19]** Genetic fitness wasn't a concept that our ancestors were able to understand, so it makes sense that they weren't pointed directly towards it.(And nor did they understand *how* to achieve it.) | | **[Yudkowsky][14:19]** Even in that paradigm, except insofar as you expect gradient descent to work very differently from gene-search optimization - which, admittedly, it does - when you optimize really hard on a thing, you get contextual correlates to it, not the thing you optimized on.This is of course one of the Big Fundamental Problems that I expect in alignment. | | **[Ngo][14:20]** Right, so the main correlate that I've seen discussed is "do what would make the human give you a high rating, not what the human actually wants"One thing I'm curious about is the extent to which you're concerned about this specific correlate, versus correlates in general. | | **[Yudkowsky][14:21]** That said, I also see basic structural reasons why paperclips would be much easier to train than "obedience", even if we could magically instill simple inner desires that perfectly reflected the simple outer algorithm we saw ourselves as running over many particular instances of a loss function. | | **[Ngo][14:22]** I'd be interested in hearing what those are. | | **[Yudkowsky][14:22]** well, first of all, why *is* a book of regulations so much more unwieldy than a hunter-gatherer?if deontology is just as good as [consequentialism](https://arbital.com/p/consequentialist/), y'know.(do you want to try replying or should I just say?) | | **[Ngo][14:23]** Go aheadI should probably clarify that I agree that you can't just replace consequentialism with deontologyThe claim is more like: when it comes to high-level concepts, it's not clear to me why high-level consequentialist goals are more natural than high-level deontological goals. | | **[Yudkowsky][14:24]** I reply that reality is complicated, so when you pump a simple goal through complicated reality you get complicated behaviors required to achieve the goal. If you think of reality as a complicated function Input->Probability(Output), then even to get a simple Output or a simple partition on Output or a high expected score in a simple function over Output, you may need very complicated Input.Humans don't trust each other. They imagine, "Well, if I just give this bureaucrat a goal, perhaps they won't reason honestly about what it takes to achieve that goal! Oh no! Therefore I will instead, being the trustworthy and accurate person that I am, reason myself about constraints and requirements on the bureaucrat's actions, such that, if the bureaucrat obeys these regulations, I expect the outcome of their action will be what I want."But (compared to a general intelligence that observes and models complicated reality and does its own search to pick actions) an actually-effective book of regulations (implemented by some nonhuman mind with a large enough and perfect enough memory to memorize it) would tend to involve a (physically unmanageable) vast number of rules saying "if you observe this, do that" to follow all the crinkles of complicated reality as it can be inferred from observation. | | **[Ngo][14:28]** (Though it's something of a restatement, a reason I'm not going into "my intuitions about how cognition works" is that past experience has led me to believe that conveying this info in a form that the Other Mind will actually absorb and operate, is really quite hard and takes a long discussion, relative to my current abilities to Actually Explain things; it is the sort of thing that might take doing homework exercises to grasp how one structure is appearing in many places, as opposed to just being flatly told that to no avail, and I have not figured out the homework exercises.)(As a side note: do you have a rough guess for when your work with Ajeya will be made public? If it's still a while away, I'm wondering whether it's still useful to have a rough outline of these intuitions even if it's in a form that very few people will internalise) | | **[Yudkowsky][14:30]** (As a side note: do you have a rough guess for when your work with Ajeya will be made public? If it's still a while away, I'm wondering whether it's still useful to have a rough outline of these intuitions even if it's in a form that very few people will internalise)Plausibly useful, but not to be attempted today, I think? | | **[Ngo][14:30]** Agreed. | | **[Yudkowsky][14:30]** (We are now theoretically in overtime, which is okay for me, but for you it is 11:30pm (I think?) and so it is on you to call when to halt, now or later.) | | **[Ngo][14:32]** Yeah, it's 11.30 for me. I think probably best to halt here. I agree with all the things you just said about reality being complicated, and why consequentialism is therefore valuable. My "deontology" claim (which was, in its original formulation, far too general - apologies for that) was originally intended as a way of poking into your intuitions about which types of cognition are natural or unnatural, which I think is the topic we've been circling around for a while. | | **[Yudkowsky][14:33]** Yup, and a place to resume next time might be why I think "obedience" is unnatural compared to "paperclips" - though that is a thing that probably requires taking that stab at what underlies surface competencies. | | **[Ngo][14:34]** Right. I do think that even a vague gesture at that would be reasonably helpful (assuming that this doesn't already exist online?) | | **[Yudkowsky][14:34]** Not yet afaik, and I don't want to point you to Ajeya's stuff even if she were ok with that, because then this in-context conversation won't make sense to others. | | **[Ngo][14:35]** For my part I should think more about pivotal acts that I'd be willing to specifically defend.In any case, thanks for the discussion 🙂Let me know if there's a particular time that suits you for a follow-up; otherwise we can sort it out later. | | **[Soares][14:37]** (y'all are doing all my jobs for me) | | **[Yudkowsky][14:37]** could try Tuesday at this same time - though I may be in worse shape for dietary reasons, still, seems worth trying. | | **[Soares][14:37]** (wfm) | | **[Ngo][14:39]** Tuesday not ideal, any others work? | | **[Yudkowsky][14:39]** Wednesday? | | **[Ngo][14:40]** Yes, Wednesday would be good | | **[Yudkowsky][14:40]** let's call it tentatively for that | | **[Soares][14:41]** Great! Thanks for the chats. | | **[Ngo][14:41]** Thanks both! | | **[Yudkowsky][14:41]** Thanks, Richard! |   2. Follow-ups =============   2.1. Richard Ngo's summary --------------------------   | | | --- | | **[Tallinn][0:35]  (Sep. 6)** just caught up here & wanted to thank nate, eliezer and (especially) richard for doing this! it's great to see eliezer's model being probed so intensively. i've learned a few new things (such as the genetic bottleneck being plausibly a big factor in human cognition). FWIW, a minor comment re deontology (as that's fresh on my mind): in my view deontology is more about coordination than optimisation: deontological agents are more trustworthy, as they're much easier to reason about (in the same way how functional/declarative code is easier to reason about than imperative code). hence my steelman of bureaucracies (as well as social norms): humans just (correctly) prefer their fellow optimisers (including non-human optimisers) to be deontological for trust/coordination reasons, and are happy to pay the resulting competence tax. | | **[Ngo][3:10]  (Sep. 8)** Thanks Jaan! I agree that greater trust is a good reason to want agents which are deontological at some high level.I've attempted a summary of the key points so far; comments welcome: [GDocs link] | | | | --- | | **[Ngo]  (Sep. 8 Google Doc)** *1st discussion*(Mostly summaries not quotations)Eliezer, summarized by Richard: "To avoid catastrophe, whoever builds AGI first will have to a) align it to some extent, and b) decide not to scale it up beyond the point where their alignment techniques fail, and c) do some pivotal act that prevents others from scaling it up to that level. But ~~our alignment techniques will not be good enough~~ ~~our alignment techniques will be very far from adequate~~ on our current trajectory, our alignment techniques will be very far from adequate to create an AI that safely performs any such pivotal act." | | **[Yudkowsky][11:05]  (Sep. 8 comment)** will not be good enoughAre not presently on course to be good enough, missing by not a little.  "Will not be good enough" is literally declaring for lying down and dying. | | **[Yudkowsky][16:03]  (Sep. 9 comment)** will [be very far from adequate]Same problem as the last time I commented.  I am not making an unconditional prediction about future failure as would be implied by the word "will".  Conditional on current courses of action or their near neighboring courses, we seem to be well over an order of magnitude away from surviving, unless a miracle occurs.  It's still in the end a result of people doing what they seem to be doing, not an inevitability. | | **[Ngo][5:10]  (Sep. 10 comment)** Ah, I see. Does adding "on our current trajectory" fix this? | | **[Yudkowsky][10:46]  (Sep. 10 comment)** Yes. | | **[Ngo]  (Sep. 8 Google Doc)** Richard, summarized by Richard: "Consider the pivotal act of 'make a breakthrough in alignment research'. It is likely that, before the point where AGIs are strongly superhuman at seeking power, they will already be strongly superhuman at understanding the world, and at performing narrower pivotal acts like alignment research which don’t require as much agency (by which I roughly mean: large-scale motivations and the ability to pursue them over long timeframes)."Eliezer, summarized by Richard: "There’s a deep connection between solving intellectual problems and taking over the world - the former requires a powerful mind to think about domains that, when solved, render very cognitively accessible strategies that can do dangerous things. Even mathematical research is a goal-oriented task which involves identifying then pursuing instrumental subgoals - and if brains which evolved to hunt on the savannah can quickly learn to do mathematics, then it’s also plausible that AIs trained to do mathematics could quickly learn a range of other skills. Since almost nobody understands the deep similarities in the cognition required for these different tasks, the distance between AIs that are able to perform fundamental scientific research, and dangerously agentic AGIs, is smaller than almost anybody expects." | | **[Yudkowsky][11:05]  (Sep. 8 comment)** There’s a deep connection between solving intellectual problems and taking over the worldThere's a deep connection by default between chipping flint handaxes and taking over the world, if you happen to learn how to chip handaxes in a very general way.  "Intellectual" problems aren't special in this way.  And maybe you could avert the default, but that would take some work and you'd have to do it before easier default ML techniques destroyed the world. | | **[Ngo]  (Sep. 8 Google Doc)** Richard, summarized by Richard: "Our lack of understanding about how intelligence works also makes it easy to assume that traits which co-occur humans will also co-occur in future AIs. But human brains are badly-optimised for tasks like scientific research, and well-optimised for seeking power over the world, for reasons including a) evolving while embodied in a harsh environment; b) the genetic bottleneck; c) social environments which rewarded power-seeking. By contrast, training neural networks on tasks like mathematical or scientific research optimises them much less for seeking power. For example, GPT-3 has knowledge and reasoning capabilities but little agency, and loses coherence when run for longer timeframes." | | **[Tallinn][4:19]  (Sep. 8 comment)** [well-optimised for] seeking powermale-female differences might be a datapoint here (annoying as it is to lean on pinker's point :)) | | **[Yudkowsky][11:31]  (Sep. 8 comment)** I don't think a female Eliezer Yudkowsky doesn't try to save / optimize / takeover the world.  Men may do that for nonsmart reasons; smart men and women follow the same reasoning when they are smart enough.  Eg Anna Salamon and many others. | | **[Ngo]  (Sep. 8 Google Doc)** Eliezer, summarized by Richard: "Firstly, there’s a big difference between most scientific research and the sort of pivotal act that we’re talking about - you need to explain how AIs with a given skill can be used to actually prevent dangerous AGIs from being built. Secondly, insofar as GPT-3 has little agency, that’s because it has memorised many shallow patterns in a way which won’t directly scale up to general intelligence. Intelligence instead consists of deep problem-solving patterns which link understanding and agency at a fundamental level." |   3. September 8 conversation ===========================   3.1. The Brazilian university anecdote --------------------------------------   | | | --- | | **[Yudkowsky][11:00]** (I am here.) | | **[Ngo][11:01]** Me too. | | **[Soares][11:01]** Welcome back!(I'll mostly stay out of the way again.) | | **[Ngo][11:02]** Cool. Eliezer, did you read the summary - and if so, do you roughly endorse it?Also, I've been thinking about the best way to approach discussing your intuitions about cognition. My guess is that starting with the obedience vs paperclips thread is likely to be less useful than starting somewhere else - e.g. the description you gave near the beginning of the last discussion, about "searching for states that get fed into a result function and then a result-scoring function". | | **[Yudkowsky][11:06]** made a couple of comments about phrasings in the docSo, from my perspective, there's this thing where... it's really quite hard to teach certain *general* points by talking at people, as opposed to more specific points. Like, they're trying to build a perpetual motion machine, and even if you can manage to argue them into believing their first design is wrong, they go looking for a new design, and the new design is complicated enough that they can no longer be convinced that they're wrong because they managed to make a more complicated error whose refutation they couldn't keep track of anymore.Teaching people to see an underlying structure in a lot of places is a very hard thing to teach in this way. Richard Feynman [gave an example](https://v.cx/2010/04/feynman-brazil-education) of the mental motion in his story that ends "Look at the water!", where people learned in classrooms about how "a medium with an index" is supposed to polarize light reflected from it, but they didn't realize that sunlight coming off of water would be polarized. My guess is that doing this properly requires homework exercises; and that, unfortunately from my own standpoint, it happens to be a place where I have extra math talent, the same way that eg Marcello is more talented at formally proving theorems than I happen to be; and that people without the extra math talent, have to do a lot *more* exercises than I did, and I don't have a good sense of which exercises to give them. | | **[Ngo][11:13]** I'm sympathetic to this, and can try to turn off skeptical-discussion-mode and turn on learning-mode, if you think that'll help. | | **[Yudkowsky][11:14]** There's a general insight you can have about how arithmetic is commutative, and for some people you can show them 1 + 2 = 2 + 1 and their native insight suffices to generalize over the 1 and the 2 to any other numbers you could put in there, and they realize that strings of numbers can be rearranged and all end up equivalent. For somebody else, when they're a kid, you might have to show them 2 apples and 1 apple being put on the table in a different order but ending up with the same number of apples, and then you might have to show them again with adding up bills in different denominations, in case they didn't generalize from apples to money. I can actually remember being a child young enough that I tried to add 3 to 5 by counting "5, 6, 7" and I thought there was some clever enough way to do that to actually get 7, if you tried hard.Being able to see "consequentialism" is like that, from my perspective. | | **[Ngo][11:15]** Another possibility: can you trace the origins of this belief, and how it came out of your previous beliefs? | | **[Yudkowsky][11:15]** I don't know what homework exercises to give people to make them able to see "consequentialism" all over the place, instead of inventing slightly new forms of consequentialist cognition and going "Well, now *that* isn't consequentialism, right?"Trying to say "searching for states that get fed into an input-result function and then a result-scoring function" was one attempt of mine to describe the dangerous thing in a way that would maybe sound abstract enough that people would try to generalize it more. | | **[Ngo][11:17]** Another possibility: can you describe the closest thing to real consequentialism in humans, and how it came about in us? | | **[Yudkowsky][11:18][11:21]** Ok, so, part of the problem is that... before you do enough homework exercises for whatever your level of talent is (and even I, at one point, had done little enough homework that I thought there might be a clever way to add 3 and 5 in order to get to 7), you tend to think that only the very crisp formal thing that's been presented to you, is the "real" thing.Why would your engine have to obey the laws of thermodynamics? You're not building one of those Carnot engines you saw in the physics textbook!Humans contain fragments of consequentialism, or bits and pieces whose interactions add up to partially imperfectly shadow consequentialism, and the critical thing is being able to see that the reason why humans' outputs 'work', in a sense, is because these structures are what is doing the work, and the work gets done because of how they shadow consequentialism and only insofar as they shadow consequentialism. | | Put a human in one environment, it gets food. Put a human in a different environment, it gets food again. Wow, different initial conditions, same output! There must be things inside the human that, whatever else they do, are also along the way somehow effectively searching for motor signals such that food is the end result! | | **[Ngo][11:20]** To me it feels like you're trying to nudge me (and by extension whoever reads this transcript) out of a specific failure mode. If I had to guess, something like: "I understand what Eliezer is talking about so now I'm justified in disagreeing with it", or perhaps "Eliezer's explanation didn't make sense to me and so I'm justified in thinking that his concepts don't make sense". Is that right? | | **[Yudkowsky][11:22]** More like... from my perspective, even after I talk people out of one specific perpetual motion machine being possible, they go off and try to invent a different, more complicated perpetual motion machine.And I am not sure what to do about that. It has been going on for a very long time from my perspective.In the end, a lot of what people got out of all that writing I did, was not the deep object-level principles I was trying to point to - they did not really get [Bayesianism as thermodynamics](https://www.lesswrong.com/s/oFePMp9rKftEeZDDr/p/QkX2bAkwG2EpGvNug), say, they did not become able to see [Bayesian structures](https://www.lesswrong.com/posts/QrhAeKBkm2WsdRYao/searching-for-bayes-structure) any time somebody sees a thing and changes their belief. What they got instead was something much more meta and general, a vague spirit of how to reason and argue, because that was what they'd spent a lot of time being exposed to over and over and over again in lots of blog posts.Maybe there's no way to make somebody understand why [corrigibility](https://arbital.com/p/corrigibility/) is "unnatural" except to repeatedly walk them through the task of trying to invent an agent structure that lets you press the shutdown button (without it trying to force you to press the shutdown button), and showing them how each of their attempts fails; and then also walking them through why Stuart Russell's attempt at moral uncertainty produces the [problem of fully updated (non-)deference](https://arbital.com/p/updated_deference/); and hope they can start to see the informal general pattern of why corrigibility is in general contrary to the structure of things that are good at optimization.Except that to do the exercises at all, you need them to work within an expected utility framework. And then they just go, "Oh, well, I'll just build an agent that's good at optimizing things but doesn't use these explicit expected utilities that are the source of the problem!"And then if I want them to believe the same things I do, for the same reasons I do, I would have to teach them why certain structures of cognition are the parts of the agent that are good at stuff and do the work, rather than them being this particular formal thing that they learned for manipulating meaningless numbers as opposed to real-world apples.And I have tried to write that page once or twice (eg "[coherent decisions imply consistent utilities](https://www.lesswrong.com/posts/RQpNHSiWaXTvDxt6R/coherent-decisions-imply-consistent-utilities)") but it has not sufficed to teach them, because they did not even do as many homework problems as I did, let alone the greater number they'd have to do because this is in fact a place where I have a particular talent.I don't know how to solve this problem, which is why I'm falling back on talking about it at the meta-level. | | **[Ngo][11:30]** I'm reminded of a LW post called "[Write a thousand roads to Rome](https://www.lesswrong.com/posts/Q924oPJzK92FifuFg/write-a-thousand-roads-to-rome)", which iirc argues in favour of trying to explain the same thing from as many angles as possible in the hope that one of them will stick. | | **[Soares][11:31]** (Suggestion, not-necessarily-good: having named this problem on the meta-level, attempt to have the object-level debate, while flagging instances of this as it comes up.) | | **[Ngo][11:31]** I endorse Nate's suggestion.And will try to keep the difficulty of the meta-level problem in mind and respond accordingly. | | **[Yudkowsky][11:33]** That (Nate's suggestion) is probably the correct thing to do. I name it out loud because sometimes being told about the meta-problem actually does help on the object problem. It seems to help me a lot and others somewhat less, but it does help others at all, for many others. |   3.2. Brain functions and outcome pumps --------------------------------------   | | | --- | | **[Yudkowsky][11:34]** So, do you have a particular question you would ask about input-seeking cognitions? I did try to say why I mentioned those at all (it's a different road to Rome on "consequentialism"). | | **[Ngo][11:36]** Let's see. So the visual cortex is an example of quite impressive cognition in humans and many other animals. But I'd call this "pattern-recognition" rather than "searching for high-scoring results". | | **[Yudkowsky][11:37]** Yup! And it is no coincidence that there are no whole animals formed entirely out of nothing but a visual cortex! | | **[Ngo][11:37]** Okay, cool. So you'd agree that the visual cortex is doing something that's qualitatively quite different from the thing that animals overall are doing.Then another question is: can you characterise searching for high-scoring results in non-human animals? Do they do it? Or are you mainly talking about humans and AGIs? | | **[Yudkowsky][11:39]** Also by the time you get to like the temporal lobes or something, there is probably some significant amount of "what could I be seeing that would produce this visual field?" that is searching through hypothesis-space for hypotheses with high plausibility scores, and for sure at the human level, humans will start to think, "Well, could I be seeing this? No, that theory has the following problem. How could I repair that theory?" But it is plausible that there is no low-level analogue of this in a monkey's temporal cortex; and even more plausible that the parts of the visual cortex, if any, which do anything analogous to this, are doing it in a relatively local and definitely very domain-specific way.Oh, that's the cerebellum and motor cortex and so on, if we're talking about a cat or whatever. They have to find motor plans that result in their catching the mouse.Just because the visual cortex isn't (obviously) running a search doesn't mean the rest of the animal isn't running any searches.(On the meta-level, I notice myself hiccuping "But how could you not see that when looking at a cat?" and wondering what exercises would be required to teach that.) | | **[Ngo][11:41]** Well, I see *something* when I look at a cat, but I don't know how well it corresponds to the concepts you're using. So just taking it slowly for now.I have the intuition, by the way, that the motor cortex is in some sense doing a similar thing to the visual cortex - just in reverse. So instead of taking low-level inputs and producing high-level outputs, it's taking high-level inputs and producing low-level outputs. Would you agree with that? | | **[Yudkowsky][11:43]** It doesn't directly parse in my ontology because (a) I don't know what you mean by 'high-level' and (b) whole Cartesian agents can be viewed as functions, that doesn't mean all agents can be viewed as non-searching pattern-recognizers.That said, all parts of the cerebral cortex have surprisingly similar morphology, so it wouldn't be at all surprising if the motor cortex is doing something similar to visual cortex. (The cerebellum, on the other hand...) | | **[Ngo][11:44]** The signal from the visual cortex saying "that is a cat", and the signal to the motor cortex saying "grab that cup", are things I'd characterise as high-level. | | **[Yudkowsky][11:45]** Still less of a native distinction in my ontology, but there's an informal thing it can sort of wave at, and I can hopefully take that as understood and run with it. | | **[Ngo][11:45]** The firing of cells in the retina, and firing of motor neurons, are the low-level parts.Cool. So to a first approximation, we can think about the part in between the cat recognising a mouse, and the cat's motor cortex producing the specific neural signals required to catch the mouse, as the part where the consequentialism happens? | | **[Yudkowsky][11:49]** The part between the cat's eyes seeing the mouse, and the part where the cat's limbs move to catch the mouse, is the whole cat-agent. The whole cat agent sure is a baby consequentialist / searches for mouse-catching motor patterns / gets similarly high-scoring end results even as you vary the environment.The visual cortex is a particular part of this system-viewed-as-a-feedforward-function that is, plausibly, by no means surely, either not very searchy, or does only small local visual-domain-specific searches not aimed per se at catching mice; it has the epistemic nature rather than the planning nature.Then from one perspective you could reason that "well, most of the consequentialism is in the remaining cat after visual cortex has sent signals onward". And this is in general a dangerous mode of reasoning that is liable to fail in, say, inspecting every particular neuron for consequentialism and not finding it; but in this particular case, there are significantly more consequentialist parts of the cat than the visual cortex, so I am okay running with it. | | **[Ngo][11:50]** Ah, the more specific thing I meant to say is: most of the consequentialism is strictly between the visual cortex and the motor cortex. Agree/disagree? | | **[Yudkowsky][11:51]** Disagree, I'm rusty on my neuroanatomy but I think the motor cortex may send signals on to the cerebellum rather than the other way around.(I may also disagree with the actual underlying notion you're trying to hint at, so possibly not just a "well include the cerebellum then" issue, but I think I should let you respond first.) | | **[Ngo][11:53]** I don't know enough neuroanatomy to chase that up, so I was going to try a different tack.But actually, maybe it's easier for me to say "let's include the cerebellum" and see where you think the disagreement ends up. | | **[Yudkowsky][11:56]** So since cats are not (obviously) (that I have read about) cross-domain consequentialists with imaginations, their consequentialism is in bits and pieces of consequentialism embedded in them all over by the more purely pseudo-consequentialist genetic optimization loop that built them.A cat who fails to catch a mouse may then get little bits and pieces of catbrain adjusted all over.And then those adjusted bits and pieces get a pattern lookup later.Why do these pattern-lookups with no obvious immediate search element, all happen to point towards the same direction of catching the mouse? Because of the past causal history about how what gets looked up, which was tweaked to catch the mouse.So it is legit harder to point out "the consequentialist parts of the cat" by looking for which sections of neurology are doing searches right there. That said, to the extent that the visual cortex does not get tweaked on failure to catch a mouse, it's not part of that consequentialist loop either.And yes, the same applies to humans, but humans also do more explicitly searchy things and this is part of the story for why humans have spaceships and cats do not. | | **[Ngo][12:00]** Okay, this is interesting. So in biological agents we've got these three levels of consequentialism: evolution, reinforcement learning, and planning. | | **[Yudkowsky][12:01]** In biological agents we've got evolution + local evolved system-rules that in the past promoted genetic fitness. Two kinds of local rules like this are "operant-conditioning updates from success or failure" and "search through visualized plans". I wouldn't characterize these two kinds of rules as "levels". | | **[Ngo][12:02]** Okay, I see. And when you talk about searching through visualised plans (the type of thing that humans do) can you say more about what it means for that to be a "search"?For example, if I imagine writing a poem line-by-line, I may only be planning a few words ahead. But somehow the whole poem, which might be quite long, ends up a highly-optimised product. Is that a central example of planning? | | **[Yudkowsky][12:04][12:07]** Planning is one way to succeed at search. I think for purposes of understanding alignment difficulty, you want to be thinking on the level of abstraction where you see that in some sense it is the search itself that is dangerous when it's a strong enough search, rather than the danger seeming to come from details of the planning process.One of my early experiences in successfully generalizing my notion of intelligence, what I'd later verbalize as "computationally efficient finding of actions that produce outcomes high in a preference ordering", was in writing an (unpublished) story about time-travel in which the universe was globally consistent.The requirement of global consistency, the way in which all events between Paradox start and Paradox finish had to map the Paradox's initial conditions onto the endpoint that would go back and produce those exact initial conditions, ended up imposing strong complicated constraints on reality that the Paradox in effect had to navigate using its initial conditions. The time-traveler needed to end up going through certain particular experiences that would produce the state of mind in which he'd take the actions that would end up prodding his future self elsewhere into having those experiences. | | The Paradox ended up killing the people who built the time machine, for example, because they would not otherwise have allowed that person to go back in time, or kept the temporal loop open that long for any other reason if they were still alive.Just having two examples of strongly consequentialist general optimization in front of me - human intelligence, and evolutionary biology - hadn't been enough for me to properly generalize over a notion of optimization. Having three examples of homework problems I'd worked - human intelligence, evolutionary biology, and the fictional Paradox - caused it to finally click for me. | | **[Ngo][12:07]** Hmm. So to me, one of the central features of search is that you consider many possibilities. But in this poem example, I may only have explicitly considered a couple of possibilities, because I was only looking ahead a few words at a time. This seems related to the distinction Abram drew a while back between selection and control ([https://www.alignmentforum.org/posts/ZDZmopKquzHYPRNxq/selection-vs-control](https://www.lesswrong.com/posts/ZDZmopKquzHYPRNxq/selection-vs-control)). Do you distinguish between them in the same way as he does? Or does "control" of a system (e.g. a football player dribbling a ball down the field) count as search too in your ontology? | | **[Yudkowsky][12:10][12:11]** I would later try to tell people to "imagine a paperclip maximizer as *not being a mind at all*, imagine it as a kind of malfunctioning time machine that spits out outputs which will in fact result in larger numbers of paperclips coming to exist later". I don't think it clicked because people hadn't done the same homework problems I had, and didn't have the same "Aha!" of realizing how part of the notion and danger of intelligence could be seen in such purely material terms. | | But the [convergent instrumental strategies](https://arbital.com/p/convergent_strategies/), the anticorrigibility, these things are contained in the *true fact about the universe* that certain outputs of the time machine *will in fact* result in there being lots more paperclips later. What produces the danger is not the details of the search process, it's the search being strong and effective *at all*. The danger is in the territory itself and not just in some weird map of it; that building nanomachines that kill the programmers will produce more paperclips is a fact about reality, not a fact about paperclip maximizers! | | **[Ngo][12:11]** Right, I remember a very similar idea in your writing about Outcome Pumps (<https://www.lesswrong.com/posts/4ARaTpNX62uaL86j6/the-hidden-complexity-of-wishes>). | | **[Yudkowsky][12:12]** Yup! Alas, the story was written in 2002-2003 when I was a worse writer and the real story that inspired the Outcome Pump never did get published. | | **[Ngo][12:14]** Okay, so I guess the natural next question is: what is it that makes you think that a strong, effective search isn't likely to be limited or constrained in some way?What is it about search processes (like human brains) that makes it hard to train them with blind spots, or deontological overrides, or things like that?Hmmm, although it feels like this is a question I can probably predict your answer to. (Or maybe not, I wasn't expecting the time travel.) | | **[Yudkowsky][12:15]** In one sense, they are! A paperclip-maximizing superintelligence is nowhere near as powerful as a paperclip-maximizing time machine. The time machine can do the equivalent of buying winning lottery tickets from lottery machines that have been thermodynamically randomized; a superintelligence can't, at least not directly without rigging the lottery or whatever.But a paperclip-maximizing strong general superintelligence is epistemically and instrumentally [efficient](https://arbital.com/p/efficiency/), relative to *you*, or to me. Any time we see it can get at least X paperclips by doing Y, we should expect that it gets X or more paperclips by doing Y or something that leads to even more paperclips than that, because it's not going to miss the strategy we see.So in that sense, searching our own brains for how a time machine would get paperclips, asking ourselves how many paperclips are in principle possible and how they could be obtained, is a way of getting our own brains to consider lower bounds on the problem without the implicit stupidity assertions that our brains unwittingly use to constrain story characters. Part of the point of telling people to think about time machines instead of superintelligences was to get past the ways they imagine superintelligences being stupid. Of course that didn't work either, but it was worth a try.I don't think that's quite what you were asking about, but I want to give you a chance to see if you want to rephrase anything before I try to answer your me-reformulated questions. | | **[Ngo][12:20]** Yeah, I think what I wanted to ask is more like: why should we expect that, out of the space of possible minds produced by optimisation algorithms like gradient descent, strong general superintelligences are more common than other types of agents which score highly on our loss functions? | | **[Yudkowsky][12:20][12:23][12:24]** It depends on how hard you optimize! And whether gradient descent on a particular system can even successfully optimize that hard! Many current AIs are trained by gradient descent and yet not superintelligences at all. | | But the answer is that some problems are difficult in that they require solving lots of subproblems, and an easy way to solve all those subproblems is to use patterns which collectively have some coherence and overlap, and the coherence within them generalizes across all the subproblems. Lots of search orderings will stumble across something like that before they stumble across separate solutions for lots of different problems. | | I suspect that you cannot get this out of small large amounts of gradient descent on small large layered transformers, and therefore I suspect that GPT-N does not approach superintelligence before the world is ended by systems that look differently, but I could be wrong about that. | | **[Ngo][12:22][12:23]** Suppose that we optimise hard enough to produce an epistemic subsystem that can make plans much better than any human's. | | My guess is that you'd say that this is *possible*, but that we're much more likely to first produce a consequentialist agent which does this (rather than a purely epistemic agent which does this). | | **[Yudkowsky][12:24]** I am confused by what you think it means to have an "epistemic subsystem" that "makes plans much better than any human's". If it searches paths through time and selects high-scoring ones for output, what makes it "epistemic"? | | **[Ngo][12:25]** Suppose, for instance, that it doesn't actually carry out the plans, it just writes them down for humans to look at. | | **[Yudkowsky][12:25]** If it *can in fact* do the thing that a paperclipping time machine does, what makes it any safer than a paperclipping time machine because we called it "epistemic" or by some other such name?By what criterion is it selecting the plans that humans look at?Why did it make a difference that its output was fed through the causal systems called humans on the way to the causal systems called protein synthesizers or the Internet or whatever? If we build a superintelligence to design nanomachines, it makes no obvious difference to its safety whether it sends DNA strings directly to a protein synthesis lab, or humans read the output and retype it manually into an email. Presumably you also don't think that's where the safety difference comes from. So where does the safety difference come from?(note: lunchtime for me in 2 minutes, propose to reconvene in 30m after that) | | **[Ngo][12:28]** (break for half an hour sounds good)If we consider the visual cortex at a given point in time, how does it decide which objects to recognise?Insofar as the visual cortex can be non-consequentialist about which objects it recognises, why couldn't a planning system be non-consequentialist about which plans it outputs? | | **[Yudkowsky][12:32]** This does feel to me like another "look at the water" moment, so what do you predict I'll say about that? | | **[Ngo][12:34]** I predict that you say something like: in order to produce an agent that can create very good plans, we need to apply a lot of optimisation power to that agent. And if the channel through which we're applying that optimisation power is "giving feedback on its plans", then we don't have a mechanism to ensure that the agent actually learns to optimise for creating really good plans, as opposed to creating plans that receive really good feedback. | | **[Soares][12:35]** Seems like a fine cliffhanger? | | **[Ngo][12:35]** Yepp. | | **[Soares][12:35]** Great. Let's plan to reconvene in 30min. |   3.3. Hypothetical-planning systems, nanosystems, and evolving generality ------------------------------------------------------------------------   | | | --- | | **[Yudkowsky][13:03][13:11]** So the answer you expected from me, translated into my terms, would be, "If you select for the consequence of the humans hitting 'approve' on the plan, you're still navigating the space of inputs for paths through time to probable outcomes (namely the humans hitting 'approve'), so you're still doing consequentialism."But suppose you manage to avoid that. Suppose you get exactly what you ask for. Then the system is still outputting *plans* such that, when humans follow them, they take paths through time and end up with outcomes that score high in some scoring function.My answer is, "What the heck would it mean for a *planning system* to be *non-consequentialist*? You're asking for nonwet water! What's consequentialist isn't the system that does the work, it's the work you're trying to do! You could imagine it being done by a cognition-free material system like a time machine and it would still be consequentialist *because* the output is a *plan*, a path through time!"And this indeed is a case where I feel a helpless sense of not knowing how I can rephrase things, which exercises you have to get somebody to do, what fictional experience you have to walk somebody through, before they start to look at the water and see a material with an index, before they start to look at the phrase "why couldn't a planning system be non-consequentialist about which plans it outputs" and go "um". | | My imaginary listener now replies, "Ah, but what if we have plans that *don't* end up with outcomes that score high in some function?" and I reply "Then you lie on the ground randomly twitching because any *outcome you end up with* which is *not that* is one that you wanted *more than that* meaning you *preferred it more than the outcome of random motor outputs* which is *optimization toward higher in the preference function* which is *taking a path through time that leads to particular destinations more than it leads to random noise*." | | **[Ngo][13:09][13:11]** Yeah, this does seem like a good example of the thing you were trying to explain at the beginning | | It still feels like there's some sort of levels distinction going on here though, let me try to tease out that intuition.Okay, so suppose I have a planning system that, given a situation and a goal, outputs a plan that leads from that situation to that goal.And then suppose that we give it, as input, a situation that we're not actually in, and it outputs a corresponding plan.It seems to me that there's a difference between the sense in which that planning system is consequentialist by virtue of making consequentialist plans (as in: if that plan were used in the situation described in its inputs, it would lead to some goal being achieved) versus another hypothetical agent that is just directly trying to achieve goals in the situation it's actually in. | | **[Yudkowsky][13:18]** So I'd preface by saying that, *if* you could build such a system, which is indeed a coherent thing (it seems to me) to describe for the purpose of building it, then there would possibly be a safety difference on the margins, it would be noticeably less dangerous though still dangerous. It would need a special internal structural property that you might not get by gradient descent on a loss function with that structure, just like natural selection on inclusive genetic fitness doesn't get you explicit fitness optimizers; you could optimize for planning in hypothetical situations, and get something that didn't explicitly care only and strictly about hypothetical situations. And even if you did get that, the outputs that would kill or brain-corrupt the operators in hypothetical situations might also be fatal to the operators in actual situations. But that is a coherent thing to describe, and the fact that it was not optimizing our own universe, might make it *safer*.With that said, I would worry that somebody would think there was some bone-deep difference of agentiness, of something they were empathizing with like personhood, of imagining goals and drives being absent or present in one case or the other, when they imagine a planner that just solves "hypothetical" problems. If you take that planner and feed it the actual world as its hypothetical, tada, it is now that big old dangerous consequentialist you were imagining before, without it having acquired some difference of *psychological* agency or 'caring' or whatever.So I think there is an important homework exercise to do here, which is something like, "Imagine that safe-seeming system which only considers hypothetical problems. Now see that if you take that system, don't make any other internal changes, and feed it actual problems, it's very dangerous. Now meditate on this until you can see how the hypothetical-considering planner was extremely close in the design space to the more dangerous version, had all the dangerous latent properties, and would probably have a bunch of actual dangers too.""See, you thought the source of the danger was this internal property of caring about actual reality, but it wasn't that, it was the structure of planning!" | | **[Ngo][13:22]** I think we're getting closer to the same page now.Let's consider this hypothetical planner for a bit. Suppose that it was trained in a way that minimised the, let's say, *adversarial* component of its plans.For example, let's say that the plans it outputs for any situation are heavily regularised so only the broad details get through.Hmm, I'm having a bit of trouble describing this, but basically I have an intuition that in this scenario there's a component of its plan which is cooperative with whoever executes the plan, and a component that's adversarial.And I agree that there's no fundamental difference in type between these two things. | | **[Yudkowsky][13:27]** "What if this potion we're brewing has a Good Part and a Bad Part, and we could just keep the Good Parts..." | | **[Ngo][13:27]** Nor do I think they're separable. But in some cases, you might expect one to be much larger than the other. | | **[Soares][13:29]** (I observe that my model of some other listeners, at this point, protest "there is yet a difference between the hypothetical-planner applied to actual problems, and the Big Scary Consequentialist, which is that the hypothetical planner is emitting descriptions of plans that *would* work if executed, whereas the big scary consequentialist is executing those plans directly.")(Not sure that's a useful point to discuss, or if it helps Richard articulate, but it's at least a place I expect some reader's minds to go if/when this is published.) | | **[Yudkowsky][13:30]** (That is in fact a difference! The insight is in realizing that the hypothetical planner is only one line of outer shell command away from being a Big Scary Thing and is therefore also liable to be Big and Scary in many ways.) | | **[Ngo][13:31]** To me it seems that Eliezer's position is something like: "actually, in almost no training regimes do we get agents that decide which plans to output by spending almost all of their time thinking about the object-level problem, and very little of their time thinking about how to manipulate the humans carrying out the plan". | | **[Yudkowsky][13:32]** My position is that the AI does not neatly separate its internals into a Part You Think Of As Good and a Part You Think Of As Bad, because that distinction is sharp in your map but not sharp in the territory or the AI's map.From the perspective of a paperclip-maximizing-action-outputting-time-machine, its actions are not "object-level making paperclips" or "manipulating the humans next to the time machine to deceive them about what the machine does", they're just physical outputs that go through time and end up with paperclips. | | **[Ngo][13:34]** @Nate, yeah, that's a nice way of phrasing one point I was trying to make. And I do agree with Eliezer that these things *can be* very similar. But I'm claiming that in some cases these things can also be quite different - for instance, when we're training agents that only get to output a short high-level description of the plan. | | **[Yudkowsky][13:35]** The danger is in how hard the agent has to work to come up with the plan. I can, for instance, build an agent that very safely outputs a high-level plan for saving the world:echo "Hey Richard, go save the world!"So I do have to ask what kind of "high-level" planning output, that saves the world, you are envisioning, and why it was hard to cognitively come up with such that we didn't just make that high-level plan right now, if humans could follow it. Then I'll look at the part where the plan was hard to come up with, and say how the agent had to understand lots of complicated things in reality and accurately navigate paths through time for those complicated things, in order to even invent the high-level plan, and hence it was very dangerous if it wasn't navigating exactly where you hoped. Or, alternatively, I'll say, "That plan couldn't save the world: you're not postulating enough superintelligence to be dangerous, *and you're also* not using enough superintelligence to flip the tables on the currently extremely doomed world." | | **[Ngo][13:39]** At this point I'm not envisaging a particular planning output that saves the world, I'm just trying to get more clarity on the issue of consequentialism. | | **[Yudkowsky][13:40]** Look at the water; it's not the way you're doing the work that's dangerous, it's the work you're trying to do. What work are you trying to do, never mind how it gets done? | | **[Ngo][13:41]** I think I agree with you that, in the limit of advanced capabilities, we can't say much about how the work is being done, we have to primarily reason from the work that we're trying to do.But here I'm only talking about systems that are intelligent enough to come up with plans and do research that are beyond the capability of humanity.And for me the question is: for *those* systems, can we tilt the way they do the work so they spend 99% of their time trying to solve the object-level problem, and 1% of their time trying to manipulate the humans who are going to carry out the plan? (Where these are not fundamental categories for the AI, they're just a rough categorisation that emerges after we've trained it - the same way that the categories of "physically moving around" and "thinking about things" aren't fundamentally different categories of action for humans, but the way we've evolved means there's a significant internal split between them.) | | **[Soares][13:43]** (I suspect Eliezer is not trying to make a claim of the form "in the limit of advanced capabilities, we are relegated to reasoning about what work gets done, not about how it was done". I suspect some miscommunication. It might be a reasonable time for Richard to attempt to paraphrase Eliezer's argument?)(Though it also seems to me like Eliezer responding to the 99%/1% point may help shed light.) | | **[Yudkowsky][13:46]** Well, for one thing, I'd note that a system which is designing nanosystems, and spending 1% of its time thinking about how to kill the operators, is lethal. It has to be such a small fraction of thinking that it, like, never completes the whole thought about "well, if I did X, that would kill the operators!" | | **[Ngo][13:46]** Thanks for that, Nate. I'll try to paraphrase Eliezer's argument now.Eliezer's position (partly in my own terminology): we're going to build AIs that can perform very difficult tasks using cognition which we can roughly describe as "searching over many options to find one that meets our criteria". An AI that can solve these difficult tasks will need to be able to search in a very general and flexible way, and so it will be very difficult to constrain that search into a particular region.Hmm, that felt like a very generic summary, let me try and think about the more specific claims he's making. | | **[Yudkowsky][13:54]** An AI that can solve these difficult tasks will need to be able toVery very little is universally necessary over the design space. The *first* AGI that our tech becomes able to build is liable to work in certain easier and simpler ways. | | **[Ngo][13:55]** Point taken; thanks for catching this misphrasing (this and previous times). | | **[Yudkowsky][13:56]** Can you, in principle, build a red-car-driver that is totally incapable of driving blue cars? In principle, sure! But the first red-car-driver that gradient descent stumbles over is liable to be a blue-car-driver too. | | **[Ngo][13:57]** Eliezer, I'm wondering how much of our disagreement is about how high the human level is here.Or, to put it another way: we can build systems that outperform humans at quite a few tasks by now, without having search abilities that are general enough to even try to take over the world. | | **[Yudkowsky][13:58]** Indubitably and indeed, this is so. | | **[Ngo][13:59]** Putting aside for a moment the question of which tasks are pivotal enough to save the world, which parts of your model draw the line between human-level chess players and human-level galaxy-colonisers?And say that we'll be able to align ones that they outperform us on *these tasks* before taking over the world, but not on *these other tasks*? | | **[Yudkowsky][13:59][14:01]** That doesn't have a very simple answer, but one aspect there is *domain generality* which in turn is achieved through *novel domain learning*. | | Humans, you will note, were not aggressively optimized by natural selection to be able to breathe underwater or fly into space. In terms of obvious outer criteria, there is not much outer sign that natural selection produced these creatures much more general than chimpanzees, by training on a much wider range of environments and loss functions. | | **[Soares][14:00]** (Before we drift too far from it: thanks for the summary! It seemed good to me, and I updated towards the miscommunication I feared not-having-happened.) | | **[Ngo][14:03]** (Before we drift too far from it: thanks for the summary! It seemed good to me, and I updated towards the miscommunication I feared not-having-happened.)(Good to know, thanks for keeping an eye out. To be clear, I didn't ever interpret Eliezer as making a claim explicitly about the limit of advanced capabilities; instead it just seemed to me that he was thinking about AIs significantly more advanced than the ones I've been thinking of. I think I phrased my point poorly.) | | **[Yudkowsky][14:05][14:10]** There are complicated aspects of this story where natural selection may metaphorically be said to have "had no idea of what it was doing", eg, after early rises in intelligence possibly produced by sexual selection on neatly chipped flint handaxes or whatever, all the cumulative brain-optimization on chimpanzees reached a point where there was suddenly a sharp selection gradient on relative intelligence at Machiavellian planning against other humans (even more so than in the chimp domain) as a subtask of inclusive genetic fitness, and so continuing to optimize on "inclusive genetic fitness" in the same old savannah, turned out to happen to be optimizing hard on the subtask and internal capability of "outwit other humans", which optimized hard on "model other humans", which was a capability that could be reused for modeling the chimp-that-is-this-chimp, which turned the system on itself and made it reflective, which contributed greatly to its intelligence being generalized, even though it was just grinding the same loss function on the same savannah; the system being optimized happened to go there in the course of being optimized even harder for the same thing.So one can imagine asking the question: Is there a superintelligent AGI that can quickly build nanotech, which has a kind of passive safety in some if not all respects, in virtue of it solving problems like "build a nanotech system which does X" the way that a beaver solves building dams, in virtue of having a bunch of specialized learning abilities without it ever having a cross-domain general learning ability?And in this regard one does note that there are many, many, many things that humans do which no other animal does, which you might think would contribute a lot to that animal's fitness if there were animalistic ways to do it. They don't make iron claws for themselves. They never did evolve a tendency to search for iron ore, and burn wood into charcoal that could be used in hardened-clay furnaces.No animal plays chess, but AIs do, so we can obviously make AIs to do things that animals don't do. On the other hand, the environment didn't exactly present any particular species with a challenge of chess-playing either. | | Even so, though, even if some animal had evolved to play chess, I fully expect that current AI systems would be able to squish it at chess, because the AI systems are on chips that run faster than neurons and doing crisp calculations and there are things you just can't do with noisy slow neurons. So that again is not a generally reliable argument about what AIs can do. | | **[Ngo][14:09][14:11]** Yes, although I note that challenges which are trivial from a human-engineering perspective can be very challenging from an evolutionary perspective (e.g. spinning wheels). | | And so the evolution of animals-with-a-little-bit-of-help-from-humans might end up in very different places from the evolution of animals-just-by-themselves. And analogously, the ability of humans to fill in the gaps to help less general AIs achieve more might be quite significant. | | **[Yudkowsky][14:11]** So we can again ask: Is there a way to make an AI system that is *only* good at designing nanosystems, which can achieve some complicated but hopefully-specifiable real-world outcomes, without that AI also being superhuman at understanding and manipulating humans?And I roughly answer, "Perhaps, but not by default, there's a bunch of subproblems, I don't actually know how to do it right now, it's not *the easiest* way to get an AGI that can build nanotech (and kill you), you've got to make the red-car-driver specifically not be able to drive blue cars." Can I explain how I know that? I'm really not sure I can, in real life where I explain X0 and then the listener doesn't generalize X0 to X and respecialize it to X1.It's like asking me how I could possibly know in 2008, before anybody had observed AlphaFold 2, that superintelligences would be able to crack the protein folding problem on the way to nanotech, which some people did question back in 2008.Though that was admittedly more of a slam-dunk than this was, and I could not have told you that AlphaFold 2 would become possible at a prehuman level of general intelligence in 2021 specifically, or that it would be synced in time to a couple of years after GPT-2's level of generality at text. | | **[Ngo][14:18]** What are the most relevant axes of difference between solving protein folding and designing nanotech that, say, self-assembles into a computer? | | **[Yudkowsky][14:20]** Definitely, "turns out it's easier than you thought to use gradient descent's memorization of zillions of shallow patterns that overlap and recombine into larger cognitive structures, to add up to a consequentialist nanoengineer that only does nanosystems and never does sufficiently general learning to apprehend the big picture containing humans, while still understanding the goal for that pivotal act you wanted to do" is among the more plausible advance-specified miracles we could get.But it is not what my model says actually happens, and I am not a believer that when your model says you are going to die, you get to start believing in particular miracles. You need to hold your mind open for any miracle and a miracle you didn't expect or think of in advance, because at this point our last hope is that in fact the future is often quite surprising - though, alas, negative surprises are a tad more frequent than positive ones, when you are trying desperately to navigate using a bad map. | | **[Ngo][14:22]** Perhaps one metric we could use here is something like: how much extra reward does the consequentialist nanoengineer get from starting to model humans, versus from becoming better at nanoengineering? | | **[Yudkowsky][14:23]** But that's *not* where humans came from. We didn't get to nuclear power by getting a bunch of fitness from nuclear power plants. We got to nuclear power because if you get a bunch of fitness from chipping flint handaxes and Machiavellian scheming, as found by relatively simple and local hill-climbing, that entrains the same genes that build nuclear power plants. | | **[Ngo][14:24]** Only in the specific case where you also have the constraint that you keep having to learn new goals every generation. | | **[Yudkowsky][14:24]** Huh??? | | **[Soares][14:24]** (I think Richard's saying, "that's a consequence of the genetic bottleneck") | | **[Ngo][14:25]** Right.Hmm, but I feel like we may have covered this ground before.Suggestion: I have a couple of other directions I'd like to poke at, and then we could wrap up in 20 or 30 minutes? | | **[Yudkowsky][14:27]** OKWhat are the most relevant axes of difference between solving protein folding and designing nanotech that, say, self-assembles into a computer?Though I want to mark that this question seemed potentially cruxy to me, though perhaps not for others. I.e., if building protein factories that built nanofactories that built nanomachines that met a certain deep and lofty engineering goal, didn't involve cognitive challenges different in kind from protein folding, we could maybe just safely go do that using AlphaFold 3, which would be just as safe as AlphaFold 2.I don't think we can do that. And I would note to the generic Other that if, to them, these both just sound like thinky things, so why can't you just do that other thinky thing too using the thinky program, this is a case where having any specific model of why we don't already have this nanoengineer right now would tell you there were specific different thinky things involved. |   3.4. Coherence and pivotal acts -------------------------------   | | | --- | | **[Ngo][14:31]** In either order:* I'm curious how the things we've been talking about relate to your opinions about meta-level optimisation from the AI foom debate. (I.e. talking about how wrapping around so that there's no longer any protected level of optimisation leads to dramatic change.) * I'm curious how your claims about the "robustness" of consequentialism (i.e. the difficulty of channeling an agent's thinking in the directions we want it to go) relate to the reliance of humans on culture, and in particular the way in which humans raised without culture are such bad consequentialists. On the first: if I were to simplify to the extreme, it seems like there are these two core intuitions that you've been trying to share for a long time. One is a certain type of recursive improvement, and another is a certain type of consequentialism. | | **[Yudkowsky][14:32]** The second question didn't make much sense in my native ontology? Humans raised without culture don't have access to environmental constants whose presence their genes assume, so they end up as broken machines and then they're bad consequentialists. | | **[Ngo][14:35]** Hmm, good point. Okay, question modification: the ways in which humans reason, act, etc, vary greatly depending on which cultures they're raised in. (I'm mostly thinking about differences over time - e.g. cavemen vs moderns.) My low-fidelity version of your view about consequentialists says that general consequentialists like humans possess a robust search process which isn't so easily modified.(Sorry if this doesn't make much sense in your ontology, I'm getting a bit tired.) | | **[Yudkowsky][14:36]** What is it that varies that you think I think should predict would stay more constant? | | **[Ngo][14:37]** Goals, styles of reasoning, deontological constraints, level of conformity. | | **[Yudkowsky][14:39]** With regards to your first point, my first reaction was, "I just have one view of intelligence, what you see me arguing about reflects which points people have proved weirdly obstinate about. In 2008, Robin Hanson was being weirdly obstinate about how capabilities scaled and whether there was even any point in analyzing AIs differently from ems, so I talked about what I saw as the most slam-dunk case for there being Plenty Of Room Above Biology and for stuff going whoosh once it got above the human level."It later turned out that capabilities started scaling a whole lot *without* self-improvement, which is an example of the kind of weird surprise the Future throws at you, and maybe a case where I missed something by arguing with Hanson instead of imagining how I could be wrong in either direction and not just the direction that other people wanted to argue with me about."Later on, people were unable to understand why alignment is hard, and got stuck on generalizing the concept I refer to as consequentialism. A theory of why I talked about both things for related reasons would just be a theory of why people got stuck on these two points for related reasons, and I think that theory would mainly be overexplaining an accident because if Yann LeCun had been running effective altruism I would have been explaining different things instead, after the people who talked a lot to EAs got stuck on a different point."Returning to your second point, humans are broken things; if it were possible to build computers while working even worse than humans, we'd be having this conversation at that level of intelligence instead. | | **[Ngo][14:41]** (Retracted)~~I entirely agree about humans, but it doesn't matter that much how broken humans are when the regime of AIs that we're talking about is the regime that's directly above humans, and therefore only a bit less broken than humans.~~ | | **[Yudkowsky][14:41]** Among the things to bear in mind about that, is that we then get tons of weird phenomena that are specific to humans, and you may be very out of luck if you start wishing for the *same* weird phenomena in AIs. Yes, even if you make some sort of attempt to train it using a loss function.However, it does seem to me like as we start getting towards the Einstein level instead of the village-idiot level, even though this is usually not much of a difference, we do start to see the atmosphere start to thin already, and the turbulence start to settle down already. Von Neumann was actually a fairly reflective fellow who knew about, and indeed helped generalize, utility functions. The great achievements of von Neumann were not achieved by some very specialized hypernerd who spent all his fluid intelligence on crystallizing math and science and engineering alone, and so never developed any opinions about politics or started thinking about whether or not he had a utility function. | | **[Ngo][14:44]** I don't think I'm asking for the *same* weird phenomena. But insofar as a bunch of the phenomena I've been talking about have seemed weird according to your account of consequentialism, then the fact that approximately-human-level-consequentialists have lots of weird things about them is a sign that the phenomena I've been talking about are less unlikely than you expect. | | **[Yudkowsky][14:45][14:46]** I suspect that some of the difference here is that I think you have to be *noticeably* better than a human at nanoengineering to pull off pivotal acts large enough to make a difference, which is why I am not instead trying to gather the smartest people left alive and doing that pivotal act directly. | | I can't think of anything you can do with somebody just barely smarter than a human, which flips the gameboard, aside of course from "go build a Friendly AI" which I *did* try to set up to just go do and which would be incredibly hard to align if we wanted an AI to do it instead (full-blown chicken-and-egg, that AI is already fully aligned). | | **[Ngo][14:45]** Oh, interesting. Actually one more question then: to what extent do you think that explicitly reasoning about utility functions and laws of rationality is what makes consequentialists have the properties you've been talking about? | | **[Yudkowsky][14:47, moved up in log]** Explicit reflection is one possible later stage of the path; an earlier part of the path is from being optimized to do things difficult enough that you need to stop stepping on your own feet and have different parts of your thoughts work well together.It's the sort of path that has only one destination at its end, so there will be many ways to get there.(Modulo various cases where different decision theories seem reflectively consistent and so on; I want to say "you know what I mean" but maybe people don't.) | | **[Ngo][14:47, moved down in log]** I suspect that some of the difference here is that I think you have to be *noticeably* better than a human at nanoengineering to pull off pivotal acts large enough to make a difference, which is why I am not instead trying to gather the smartest people left alive and doing that pivotal act directly.Yepp, I think there's probably some disagreements about geopolitics driving this too. E.g. in my earlier summary document I mentioned some possible pivotal acts:* Monitoring all potential AGI projects to an extent that makes it plausible for the US and China to work on a joint project without worrying that the other is privately racing. * Provide arguments/demonstrations/proofs related to impending existential risk that are sufficiently compelling to scare the key global decision-makers into bottlenecking progress. I predict that you think these would not be pivotal enough; but I don't think digging into the geopolitical side of things is the best use of our time. | | **[Yudkowsky][14:49, moved up in log]** Monitoring all AGI projects - either not politically feasible in real life given the actual way that countries behave in history books instead of fantasy; or at politically feasible levels, does not work well enough to prevent the world from ending once the know-how proliferates. The AI isn't doing much work here either; why not go do this now, if it's possible? (Note: please don't try to go do this now, it backfires badly.)Provide sufficiently compelling arguments = superhuman manipulation, an incredibly dangerous domain that is just about the worst domain to try to align. | | **[Ngo][14:49, moved down in log]** With regards to your first point, my first reaction was, "I just have one view of intelligence, what you see me arguing about reflects which points people have proved weirdly obstinate about. In 2008, Robin Hanson was being weirdly obstinate about how capabilities scaled and whether there was even any point in analyzing AIs differently from ems, so I talked about what I saw as the most slam-dunk case for there being Plenty Of Room Above Biology and for stuff going whoosh once it got above the human level."It later turned out that capabilities started scaling a whole lot *without* self-improvement, which is an example of the kind of weird surprise the Future throws at you, and maybe a case where I missed something by arguing with Hanson instead of imagining how I could be wrong in either direction and not just the direction that other people wanted to argue with me about."Later on, people were unable to understand why alignment is hard, and got stuck on generalizing the concept I refer to as consequentialism. A theory of why I talked about both things for related reasons would just be a theory of why people got stuck on these two points for related reasons, and I think that theory would mainly be overexplaining an accident because if Yann LeCun had been running effective altruism I would have been explaining different things instead, after the people who talked a lot to EAs got stuck on a different point."On my first point, it seems to me that your claims about recursive self-improvement were off in a fairly similar way to how I think your claims about consequentialism are off - which is that they defer too much to one very high-level abstraction. | | **[Yudkowsky][14:52]** On my first point, it seems to me that your claims about recursive self-improvement were off in a fairly similar way to how I think your claims about consequentialism are off - which is that they defer too much to one very high-level abstraction.I suppose that is what it could potentially feel like from the inside to not get an abstraction. Robin Hanson kept on asking why I was trusting my abstractions so much, when he was in the process of trusting his worse abstractions instead. | | **[Ngo][14:51][14:53]** Explicit reflection is one possible later stage of the path; an earlier part of the path is from being optimized to do things difficult enough that you need to stop stepping on your own feet and have different parts of your thoughts work well together.Can you explain a little more what you mean by "have different parts of your thoughts work well together"? Is this something like the capacity for metacognition; or the global workspace; or self-control; or...? | | And I guess there's no good way to quantify *how* important you think the explicit reflection part of the path is, compared with other parts of the path - but any rough indication of whether it's a more or less crucial component of your view? | | **[Yudkowsky][14:55]** Can you explain a little more what you mean by "have different parts of your thoughts work well together"? Is this something like the capacity for metacognition; or the global workspace; or self-control; or...?No, it's like when you don't, like, pay five apples for something on Monday, sell it for two oranges on Tuesday, and then trade an orange for an apple.I have still not figured out the homework exercises to convey to somebody the Word of Power which is "coherence" by which they will be able to look at the water, and see "coherence" in places like a cat walking across the room without tripping over itself.When you do lots of reasoning about arithmetic correctly, without making a misstep, that long chain of thoughts with many different pieces diverging and ultimately converging, ends up making some statement that is... still true and still about numbers! Wow! How do so many different thoughts add up to having this property? Wouldn't they wander off and end up being about tribal politics instead, like on the Internet?And one way you could look at this, is that even though all these thoughts are taking place in a bounded mind, they are shadows of a higher unbounded structure which is the model identified by the Peano axioms; all the things being said are *true about the numbers*. Even though somebody who was missing the point would at once object that the human contained no mechanism to evaluate each of their statements against all of the numbers, so obviously no human could ever contain a mechanism like that, so obviously you can't explain their success by saying that each of their statements was true about the same topic of the numbers, because what could possibly implement that mechanism which (in the person's narrow imagination) is The One Way to implement that structure, which humans don't have?But though mathematical reasoning can sometimes go astray, when it works at all, it works because, in fact, even bounded creatures can sometimes manage to obey local relations that in turn add up to a global coherence where all the pieces of reasoning point in the same direction, like photons in a laser lasing, even though there's no internal mechanism that enforces the global coherence at every point.To the extent that the outer optimizer trains you out of paying five apples on Monday for something that you trade for two oranges on Tuesday and then trading two oranges for four apples, the outer optimizer is training all the little pieces of yourself to be locally coherent in a way that can be seen as an imperfect bounded shadow of a higher unbounded structure, and then the system is powerful though imperfect *because* of how the power is present in the coherence and the overlap of the pieces, *because* of how the higher perfect structure is being imperfectly shadowed. In this case the higher structure I'm talking about is Utility, and doing homework with coherence theorems leads you to appreciate that we only know about one higher structure for this class of problems that has a dozen mathematical spotlights pointing at it saying "look here", even though people have occasionally looked for alternatives.And when I try to say this, people are like, "Well, I looked up a theorem, and it talked about being able to identify a unique utility function from an infinite number of choices, but if we don't have an infinite number of choices, we can't identify the utility function, so what relevance does this have" and this is a kind of mistake I don't remember even coming close to making so I do not know how to make people stop doing that and maybe I can't. | | **[Soares][15:07]** We're already pushing our luck on time, so I nominate that we wrap up (after, perhaps, a few more Richard responses if he's got juice left.) | | **[Yudkowsky][15:07]** Yeah, was thinking the same. | | **[Soares][15:07]** As a proposed cliffhanger to feed into the next discussion, my take is that Richard's comment:On my first point, it seems to me that your claims about recursive self-improvement were off in a fairly similar way to how I think your claims about consequentialism are off - which is that they defer too much to one very high-level abstraction.probably contains some juicy part of the disagreement, and I'm interested in Eliezer understanding Richard's claim to the point of being able to paraphrase it to Richard's satisfaction. | | **[Ngo][15:08]** Wrapping up here makes sense.I endorse the thing Nate just said.I also get the sense that I have a much better outline now of Eliezer's views about consequentialism (if not the actual details and texture).On a meta level, I personally tend to focus more on things like "how should we understand cognition" and not "how should we understand geopolitics and how it affects the level of pivotal action required".If someone else were trying to prosecute this disagreement they might say much more about the latter. I'm uncertain how useful it is for me to do so, given that my comparative advantage compared with the rest of the world (and probably Eliezer's too) is the cognition part. | | **[Yudkowsky][15:12]** Reconvene... tomorrow? Monday of next week? | | **[Ngo][15:12]** Monday would work better for me.You okay with me summarising the discussion so far to [some people — redacted for privacy reasons]? | | **[Yudkowsky][15:13]** Nate, take a minute to think of your own thoughts there? | | | --- | | [Soares: 👍 👌] | | | **[Soares][15:15]** My take: I think it's fine to summarize, though generally virtuous to mark summaries as summaries (rather than asserting that your summaries are Eliezer-endorsed or w/e). | | | --- | | [Ngo: 👍] | | | **[Yudkowsky][15:16]** I think that broadly matches my take. I'm also a bit worried about biases in the text summarizer, and about whether I managed to say anything that Rob or somebody will object to pre-publication, but we ultimately intended this to be seen and I was keeping that in mind, so, yeah, go ahead and summarize. | | **[Ngo][15:17]** Great, thanks | | **[Yudkowsky][15:17]** I admit to being curious as to what you thought was said that was important or new, but that's a question that can be left open to be answered at your leisure, earlier in your day. | | **[Ngo][15:17]** I admit to being curious as to what you thought was said that was important or new, but that's a question that can be left open to be answered at your leisure, earlier in your day.You mean, what I thought was worth summarising? | | **[Yudkowsky][15:17]** Yeah. | | **[Ngo][15:18]** Hmm, no particular opinion. I wasn't going to go out of my way to do so, but since I'm chatting to [some people — redacted for privacy reasons] regularly anyway, it seemed low-cost to fill them in.At your leisure, I'd be curious to know how well the directions of discussion are meeting your goals for what you want to convey when this is published, and whether there are topics you want to focus on more. | | **[Yudkowsky][15:19]** I don't know if it's going to help, but trying it currently seems better than to go on saying nothing. | | **[Ngo][15:20]** (personally, in addition to feeling like less of an expert on geopolitics, it also seems more sensitive for me to make claims about in public, which is another reason I haven't been digging into that area as much) | | **[Soares][15:21]** (personally, in addition to feeling like less of an expert on geopolitics, it also seems more sensitive for me to make claims about in public, which is another reason I haven't been digging into that area as much)(seems reasonable! note, though, that i'd be quite happy to have sensitive sections stricken from the record, insofar as that lets us get more convergence than we otherwise would, while we're already in the area) | | | --- | | [Ngo: 👍] | (tho ofc it is less valuable to spend conversational effort in private discussions, etc.) | | | --- | | [Ngo: 👍] | | | **[Ngo][15:22]** At your leisure, I'd be curious to know how well the directions of discussion are meeting your goals for what you want to convey when this is published, and whether there are topics you want to focus on more.(this question aimed at you too Nate)Also, thanks Nate for the moderation! I found your interventions well-timed and useful. | | | --- | | [Soares: ❤️] | | | **[Soares][15:23]** (this question aimed at you too Nate)(noted, thanks, I'll probably write something up after you've had the opportunity to depart for sleep.)On that note, I declare us adjourned, with intent to reconvene at the same time on Monday.Thanks again, both. | | **[Ngo][15:23]** Thanks both 🙂Oh, actually, one quick pointWould one hour earlier suit, for Monday?I've realised that I'll be moving to a one-hour-later time zone, and starting at 9pm is slightly suboptimal (but still possible if necessary) | | **[Soares][15:24]** One hour earlier would work fine for me. | | **[Yudkowsky][15:25]** Doesn't work as fine for me because I've been trying to avoid any food until 12:30p my time, but on that particular day I may be more caloried than usual from the previous day, and could possibly get away with it. (That whole day could also potentially fail if a minor medical procedure turns out to take more recovery than it did the last time I had it.) | | **[Ngo][15:26]** Hmm, is this something where you'd have more information on the day? (For the calories thing) | | **[Yudkowsky][15:27]** (seems reasonable! note, though, that i'd be quite happy to have sensitive sections stricken from the record, insofar as that lets us get more convergence than we otherwise would, while we're already in the area)I'm a touch reluctant to have discussions that we intend to delete, because then the larger debate will make less sense once those sections are deleted. Let's dance around things if we can. | | | | --- | --- | | [Ngo: 👍] | [Soares: 👍] | I mean, I can that day at 10am my time say how I am doing and whether I'm in shape for that day. | | **[Ngo][15:28]** great. and if at that point it seems net positive to postpone to 11am your time (at the cost of me being a bit less coherent later on) then feel free to say so at the timeon that note, I'm off | | **[Yudkowsky][15:29]** Good night, heroic debater! | | **[Soares][16:11]** At your leisure, I'd be curious to know how well the directions of discussion are meeting your goals for what you want to convey when this is published, and whether there are topics you want to focus on more.The discussions so far are meeting my goals quite well so far! (Slightly better than my expectations, hooray.) Some quick rough notes:* I have been enjoying EY explicating his models around consequentialism. + The objections Richard has been making are ones I think have been floating around for some time, and I'm quite happy to see explicit discussion on it. + Also, I've been appreciating the conversational virtue with which the two of you have been exploring it. (Assumption of good intent, charity, curiosity, etc.) * I'm excited to dig into Richard's sense that EY was off about recursive self improvement, and is now off about consequentialism, in a similar way. + This also sees to me like a critique that's been floating around for some time, and I'm looking forward to getting more clarity on it. * I'm a bit torn between driving towards clarity on the latter point, and shoring up some of the progress on the former point. + One artifact I'd really enjoy having is some sort of "before and after" take, from Richard, contrasting his model of EY's views before, to his model now. + I also have a vague sense that there are some points Eliezer was trying to make, that didn't quite feel like they were driven home; and dually, some pushback by Richard that didn't feel quite frontally answered. - One thing I may do over the next few days is make a list of those places, and see if I can do any distilling on my own. (No promises, though.) - If that goes well, I might enjoy some side-channel back-and-forth with Richard about it, eg during some more convenient-for-Richard hour (or, eg, as a thing to do on Monday if EY's not in commission at 10a pacific.) | | **[Ngo][5:40]  (next day, Sep. 9)** The discussions so far are [...]What do you mean by "latter point" and "former point"? (In your 6th bullet point) | | **[Soares][7:09]  (next day, Sep. 9)** What do you mean by "latter point" and "former point"? (In your 6th bullet point)former = shoring up the consequentialism stuff, latter = digging into your critique re: recursive self improvement etc. (The nesting of the bullets was supposed to help make that clear, but didn't come out well in this format, oops.) |   4. Follow-ups =============   4.1. Richard Ngo's summary --------------------------   | | | --- | | **[Ngo]  (Sep. 10 Google Doc)** *2nd discussion*(Mostly summaries not quotations~~; also hasn’t yet been evaluated by Eliezer~~)Eliezer, summarized by Richard: "~~The~~ A core concept which people have trouble grasping is consequentialism. People try to reason about *how* AIs will solve problems, and ways in which they might or might not be dangerous. But they don’t realise that the ability to solve a wide range of difficult problems implies that an agent must be doing a powerful search over possible solutions, which is ~~the~~ a core skill required to take actions which greatly affect the world. Making this type of AI safe is like trying to build an AI that drives red cars very well, but can’t drive blue cars - there’s no way you get this by default, because the skills involved are so similar. And because the search process ~~is so general~~ is by default so general, ~~it’ll be very hard to~~ I don’t currently see how to constrain it into any particular region." | | **[Yudkowsky][10:48]  (Sep. 10 comment)** The*A* concept, which some people have had trouble grasping.  There seems to be an endless list.  I didn't have to spend much time contemplating consequentialism to derive the consequences.  I didn't spend a lot of time talking about it until people started arguing. | | **[Yudkowsky][10:50]  (Sep. 10 comment)** thea | | **[Yudkowsky][10:52]  (Sep. 10 comment)** [the search process] is [so general]"is by default".  The reason I keep emphasizing that things are only true by default is that the work of surviving may look like doing hard nondefault things.  I don't take fatalistic "will happen" stances, I assess difficulties of getting nondefault results. | | **[Yudkowsky][10:52]  (Sep. 10 comment)** it’ll be very hard to"I don't currently see how to" | | **[Ngo]  (Sep. 10 Google Doc)** Eliezer, summarized by Richard (continued): "In biological organisms, evolution is ~~one source~~ the ultimate source of consequentialism. A ~~second~~ secondary outcome of evolution is reinforcement learning. For an animal like a cat, upon catching a mouse (or failing to do so) many parts of its brain get slightly updated, in a loop that makes it more likely to catch the mouse next time. (Note, however, that this process isn’t powerful enough to make the cat a pure consequentialist - rather, it has many individual traits that, when we view them from this lens, point in the same direction.) ~~A third thing that makes humans in particular consequentialist is planning,~~ Another outcome of evolution, which helps make humans in particular more consequentialist, is planning - especially when we’re aware of concepts like utility functions." | | **[Yudkowsky][10:53]  (Sep. 10 comment)** onethe ultimate | | **[Yudkowsky][10:53]  (Sep. 10 comment)** secondsecondary outcome of evolution | | **[Yudkowsky][10:55]  (Sep. 10 comment)** especially when we’re aware of concepts like utility functionsVery slight effect on human effectiveness in almost all cases because humans have very poor reflectivity. | | **[Ngo]  (Sep. 10 Google Doc)** Richard, summarized by Richard: "Consider an AI that, given a hypothetical scenario, tells us what the best plan to achieve a certain goal in that scenario is. Of course it needs to do consequentialist reasoning to figure out how to achieve the goal. But that’s different from an AI which chooses what to say as a means of achieving its goals. I’d argue that the former is doing consequentialist reasoning without itself being a consequentialist, while the latter is actually a consequentialist. Or more succinctly: consequentialism = problem-solving skills + using those skills to choose actions which achieve goals."Eliezer, summarized by Richard: "The former AI might be slightly safer than the latter if you could build it, but I think people are likely to dramatically overestimate how big the effect is. The difference could just be one line of code: if we give the former AI our current scenario as its input, then it becomes the latter.  For purposes of understanding alignment difficulty, you want to be thinking on the level of abstraction where you see that in some sense it is the search itself that is dangerous when it's a strong enough search, rather than the danger seeming to come from details of the planning process. One particularly helpful thought experiment is to think of advanced AI as an '[outcome pump](https://www.lesswrong.com/posts/4ARaTpNX62uaL86j6/the-hidden-complexity-of-wishes)' which selects from futures in which a certain outcome occurred, and takes whatever action leads to them." | | **[Yudkowsky][10:59]  (Sep. 10 comment)** particularly helpful"attempted explanatory".  I don't think most readers got it.I'm a little puzzled by how often you write my viewpoint as thinking that whatever I happened to say a sentence about is the Key Thing.  It seems to rhyme with a deeper failure of many EAs to pass the MIRI [ITT](https://www.econlib.org/archives/2011/06/the_ideological.html).To be a bit blunt and impolite in hopes that long-languishing social processes ever get anywhere, two obvious uncharitable explanations for why some folks may systematically misconstrue MIRI/Eliezer as believing much more than in reality that various concepts an argument wanders over are Big Ideas to us, when some conversation forces us to go to that place:(A)  It paints a comfortably unflattering picture of MIRI-the-Other as weirdly obsessed with these concepts that seem not so persuasive, or more generally paints the Other as a bunch of weirdos who stumbled across some concept like "consequentialism" and got obsessed with it.  In general, to depict the Other as thinking a great deal of some idea (or explanatory thought experiment) is to tie and stake their status to the listener's view of how much status that idea deserves.  So if you say that the Other thinks a great deal of some idea that isn't obviously high-status, that lowers the Other's status, which can be a comfortable thing to do.(cont.)(B) It paints a more comfortably self-flattering picture of a continuing or persistent disagreement, as a disagreement with somebody who thinks that some random concept is much higher-status than it really is, in which case there isn't more to done or understood except to duly politely let the other person try to persuade you the concept deserves its high status. As opposed to, "huh, maybe there is a noncentral point that the other person sees themselves as being stopped on and forced to explain to me", which is a much less self-flattering viewpoint on why the conversation is staying within a place.  And correspondingly more of a viewpoint that somebody else is likely to have of us, because it is a comfortable view to them, than a viewpoint that it is comfortable to us to imagine them having.Taking the viewpoint that somebody else is getting hung up on a relatively noncentral point can also be a flattering self-portrait to somebody who believes that, of course.  It doesn't mean they're right.  But it does mean that you should be aware of how the Other's story, told from the Other's viewpoint, is much more liable to be something that the Other finds sensible and perhaps comfortable, even if it implies an unflattering (and untrue-seeming and perhaps untrue) view of yourself, than something that makes the Other seem weird and silly and which it is easy and congruent for you yourself to imagine the Other thinking. | | **[Ngo][11:18]  (Sep. 12 comment)** I'm a little puzzled by how often you write my viewpoint as thinking that whatever I happened to say a sentence about is the Key Thing.In this case, I emphasised the outcome pump thought experiment because you said that the time-travelling scenario was a key moment for your understanding of optimisation, and the outcome pump seemed to be similar enough and easier to convey in the summary, since you'd already written about it.I'm also emphasising consequentialism because it seemed like the core idea which kept coming up in our first debate, under the heading of "deep problem-solving patterns". Although I take your earlier point that you tend to emphasise things that your interlocutor is more skeptical about, not necessarily the things which are most central to your view. But if consequentialism isn't in fact a very central concept for you, I'd be interested to hear what role it plays. | | **[Ngo]  (Sep. 10 Google Doc)** Richard, summarized by Richard: "There’s a component of 'finding a plan which achieves a certain outcome' which involves actually solving the object-level problem of how someone who is given the plan can achieve the outcome. And there’s another component which is figuring out how to manipulate that person into doing what you want. To me it seems like Eliezer’s argument is that there’s no training regime which leads an AI to spend 99% of its time thinking about the former, and 1% thinking about the latter." | | **[Yudkowsky][11:20]  (Sep. 10 comment)** no training regime...that the training regimes we come up with first, in the 3 months or 2 years we have before somebody else destroys the world, will not have this property.I don't have any particularly complicated or amazingly insightful theories of why I keep getting depicted as a fatalist; but my world is full of counterfactual functions, not constants.  And I am always aware that if we had access to a real Textbook from the Future explaining all of the methods that are actually robust in real life - the equivalent of telling us in advance about all the ReLUs that in real life were only invented and understood a few decades after sigmoids - we could go right ahead and build a superintelligence that thinks 2 + 2 = 5.All of my assumptions about "I don't see how to do X" are always labeled as ignorance on my part and a default because we won't have enough time to actually figure out how to do X.  I am constantly maintaining awareness of this because being **wrong** about it being difficult is a major place where **hope** potentially comes from, if there's some idea like ReLUs that robustly vanquishes the difficulty, which I just didn't think of.  Which does not, alas, mean that I am wrong about any particular thing, nor that the infinite source of optimistic ideas that is the wider field of "AI alignment" is going to produce a good idea from the same process that generates all the previous naive optimism through not seeing where the original difficulty comes from or what other difficulties surround obvious naive attempts to solve it. | | **[Ngo]  (Sep. 10 Google Doc)** Richard, summarized by Richard (continued): "While this may be true in the limit of increasing intelligence, the most relevant systems are the earliest ones that are above human level. But humans deviate from the consequentialist abstraction you’re talking about in all sorts of ways - for example, being raised in different cultures can make people much more or less consequentialist. So it seems plausible that early AGIs can be superhuman while also deviating strongly from this abstraction - not necessarily in the same ways as humans, but in ways that we push them towards during training."Eliezer, summarized by Richard: "Even at the Einstein or von Neumann level these types of deviations start to subside. And the sort of pivotal acts which might realistically work require skills *significantly* above human level. I think even 1% of the cognition of an AI that can assemble advanced nanotech, thinking about how to kill humans, would doom us. Your other suggestions for pivotal acts (surveillance to restrict AGI proliferation; persuading world leaders to restrict AI development) are not politically feasible in real life, to the level required to prevent the world from ending; or else require alignment in the very dangerous domain of superhuman manipulation."Richard, summarized by Richard: "I think we probably also have significant disagreements about geopolitics which affect which acts we expect to be pivotal, but it seems like our comparative advantage is in discussing cognition, so let’s focus on that. We can build systems that outperform humans at quite a few tasks by now, without them needing search abilities that are general enough to even try to take over the world. Putting aside for a moment the question of which tasks are pivotal enough to save the world, which parts of your model draw the line between human-level chess players and human-level galaxy-colonisers, and say that we'll be able to align ones that significantly outperform us on *these* tasks before they take over the world, but not on *those* tasks?"Eliezer, summarized by Richard: "One aspect there is domain generality which in turn is achieved through novel domain learning. One can imagine asking the question: is there a superintelligent AGI that can quickly build nanotech the way that a beaver solves building dams, in virtue of having a bunch of specialized learning abilities without it ever having a cross-domain general learning ability? But there are many, many, many things that humans do which no other animal does, which you might think would contribute a lot to that animal's fitness if there were animalistic ways to do it - e.g. mining and smelting iron. (Although comparisons to animals are not generally reliable arguments about what AIs can do - e.g. chess is much easier for chips than neurons.) So my answer is 'Perhaps, but not by default, there's a bunch of subproblems, I don't actually know how to do it right now, it's not the easiest way to get an AGI that can build nanotech.' ~~Can I explain how I know that? I'm really not sure I can.~~" | | **[Yudkowsky][11:26]  (Sep. 10 comment)** Can I explain how I know that? I'm really not sure I can.In original text, this sentence was followed by a long attempt to explain anyways; if deleting that, which is plausibly the correct choice, this lead-in sentence should also be deleted, as otherwise it paints a false picture of how much I would try to explain anyways. | | **[Ngo][11:15]  (Sep. 12 comment)** Makes sense; deleted. | | **[Ngo]  (Sep. 10 Google Doc)** Richard, summarized by Richard: "Challenges which are trivial from a human-engineering perspective can be very challenging from an evolutionary perspective (e.g. spinning wheels). So the evolution of animals-with-a-little-bit-of-help-from-humans might end up in very different places from the evolution of animals-just-by-themselves. And analogously, the ability of humans to fill in the gaps to help less general AIs achieve more might be quite significant."On nanotech: what are the most relevant axes of difference between solving protein folding and designing nanotech that, say, self-assembles into a computer?"Eliezer, summarized by Richard: "This question seemed potentially cruxy to me. I.e., if building protein factories that built nanofactories that built nanomachines that met a certain deep and lofty engineering goal, didn't involve cognitive challenges different in kind from protein folding, we could maybe just safely go do that using AlphaFold 3, which would be just as safe as AlphaFold 2. I don't think we can do that. But it is among the more plausible advance-specified miracles we could get. At this point our last hope is that in fact the future is often quite surprising."Richard, summarized by Richard: "It seems to me that you’re making the same mistake here as you did with regards to recursive self-improvement in the AI foom debate - namely, putting too much trust in one big abstraction."Eliezer, summarized by Richard: "I suppose that is what it could potentially feel like from the inside to not get an abstraction.  Robin Hanson kept on asking why I was trusting my abstractions so much, when he was in the process of trusting his worse abstractions instead." |   4.2. Nate Soares' summary -------------------------   | | | --- | | **[Soares]  (Sep. 12 Google Doc)** *Consequentialism*Ok, here's a handful of notes. I apologize for not getting them out until midday Sunday. My main intent here is to do some shoring up of the ground we've covered. I'm hoping for skims and maybe some light comment back-and-forth as seems appropriate (perhaps similar to Richard's summary), but don't think we should derail the main thread over it. If time is tight, I would not be offended for these notes to get little-to-no interaction.---My sense is that there's a few points Eliezer was trying to transmit about consequentialism, that I'm not convinced have been received. I'm going to take a whack at it. I may well be wrong, both about whether Eliezer is in fact attempting to transmit these, and about whether Richard received them; I'm interested in both protests from Eliezer and paraphrases from Richard. | | **[Soares]  (Sep. 12 Google Doc)** 1. "The consequentialism is in the plan, not the cognition".I think Richard and Eliezer are coming at the concept "consequentialism" from very different angles, as evidenced eg by Richard saying (Nate's crappy paraphrase:) "where do you think the consequentialism is in a cat?" and Eliezer responding (Nate's crappy paraphrase:) "the cause of the apparent consequentialism of the cat's behavior is distributed between its brain and its evolutionary history".In particular, I think there's an argument here that goes something like:* Observe that, from our perspective, saving the world seems quite tricky, and seems likely to involve long sequences of clever actions that force the course of history into a narrow band (eg, because if we saw short sequences of dumb actions, we could just get started). * Suppose we were presented with a plan that allegedly describes a long sequence of clever actions that would, if executed, force the course of history into some narrow band. + For concreteness, suppose it is a plan that allegedly funnels history into the band where we have wealth and acclaim. * One plausible happenstance is that the plan is not in fact clever, and would not in fact have a forcing effect on history. + For example, perhaps the plan describes founding and managing some silicon valley startup, that would not work in practice. * Conditional on the plan having the history-funnelling property, there's a sense in which it's scary regardless of its source. + For instance, perhaps the plan describes founding and managing some silicon valley startup, and will succeed virtually every time it's executed, by dint of having very generic descriptions of things like how to identify and respond to competition, including descriptions of methods for superhumanly-good analyses of how to psychoanalyze the competition and put pressure on their weakpoints. + In particular, note that one need not believe the plan was generated by some "agent-like" cognitive system that, in a self-contained way, made use of reasoning we'd characterize as "possessing objectives" and "pursuing them in the real world". + More specifically, the scariness is a property of the plan itself. For instance, the fact that this plan accrues wealth and acclaim to the executor, in a wide variety of situations, regardless of what obstacles arise, implies that the plan contains course-correcting mechanisms that keep the plan on-target. + In other words, plans that *manage to actually funnel history* are (the argument goes) liable to have a wide variety of course-correction mechanisms that keep the plan oriented towards *some* target. And while this course-correcting property tends to be a property of history-funneling plans, the *choice of target* is of course free, hence the worry. (Of course, in practice we perhaps shouldn't be visualizing a single Plan handed to us from an AI or a time machine or whatever, but should instead imagine a system that is reacting to contingencies and replanning in realtime. At the least, this task is easier, as one can adjust only for the contingencies that are beginning to arise, rather than needing to predict them all in advance and/or describe general contingency-handling mechanisms. But, and feel free to take a moment to predict my response before reading the next sentence, "run this AI that replans autonomously on-the-fly" and "run this AI+human loop that replans+reevaluates on the fly", are still in this sense "plans", that still likely have the property of Eliezer!consequentialism, insofar as they work.) | | **[Soares]  (Sep. 12 Google Doc)** There's a part of this argument I have not yet driven home. Factoring it out into a separate bullet:2. "If a plan is good enough to work, it's pretty consequentialist in practice".In attempts to collect and distill a handful of scattered arguments of Eliezer's:If you ask GPT-3 to generate you a plan for saving the world, it will not manage to generate one that is very detailed. And if you tortured a big language model into giving you a detailed plan for saving the world, the resulting plan would not work. In particular, it would be full of errors like insensitivity to circumstance, suggesting impossible actions, and suggesting actions that run entirely at cross-purposes to one another.A plan that is sensitive to circumstance, and that describes actions that synergize rather than conflict -- like, in Eliezer's analogy, photons in a laser -- is much better able to funnel history into a narrow band.But, on Eliezer's view as I understand it, this "the plan is not constantly tripping over its own toes" property, goes hand-in-hand with what he calls "consequentialism". As a particularly stark and formal instance of the connection, observe that one way a plan can trip over its own toes is if it says "then trade 5 oranges for 2 apples, then trade 2 apples for 4 oranges". This is clearly an instance of the plan failing to "lase" -- of some orange-needing part of the plan working at cross-purposes to some apple-needing part of the plan, or something like that. And this is also a case where it's easy to see how if a plan *is* "lasing" with respect to apples and oranges, then it is behaving as if governed by some coherent preference.And the point as I understand it isn't "all toe-tripping looks superficially like an inconsistent preference", but rather "insofar as a plan *does* manage to chain a bunch of synergistic actions together, it manages to do so precisely insofar as it is Eliezer!consequentialist".cf the analogy to [information theory](https://www.lesswrong.com/s/oFePMp9rKftEeZDDr/p/QkX2bAkwG2EpGvNug), where if you're staring at a maze and you're trying to build an accurate representation of that maze in your own head, you will succeed precisely insofar as your process is Bayesian / information-theoretic. And, like, this is supposed to feel like a fairly tautological claim: you (almost certainly) can't get the image of a maze in your head to match the maze in the world by visualizing a maze at random, you have to add visualized-walls using some process that's correlated with the presence of actual walls. Your maze-visualizing process will work precisely insofar as you have access to & correctly make use of, observations that correlate with the presence of actual walls. You might also visualize extra walls in locations where it's politically expedient to believe that there's a wall, and you might also avoid visualizing walls in a bunch of distant regions of the maze because it's dark and you haven't got all day, but the resulting visualization in your head is accurate precisely *insofar* as you're managing to act kinda like a Bayesian.Similarly (the analogy goes), a plan works-in-concert and avoids-stepping-on-its-own-toes precisely insofar as it is consequentialist. These are two sides of the same coin, two ways of seeing the same thing.And, I'm not so much attempting to *argue* the point here, as to make sure that the *shape of the argument* (as I understand it) has been understood by Richard. In particular, the *shape of the argument* I see Eliezer as making is that "clumsy" plans don't work, and "laser-like plans" work insofar as they are managing to act kinda like a consequentialist.Rephrasing again: we have a wide variety of mathematical theorems all spotlighting, from different angles, the fact that a plan lacking in clumsiness, is possessing of coherence.("And", my model of Eliezer is quick to note, "this ofc does not mean that all sufficiently intelligent minds must generate very-coherent plans. If you really knew what you were doing, you could design a mind that emits plans that always "trip over themselves" along one particular axis, just as with sufficient mastery you could build a mind that believes 2+2=5 (for some reasonable cashing-out of that claim). But you don't get this for free -- and there's a sort of "attractor" here, when building cognitive systems, where just as generic training will tend to cause it to have true beliefs, so will generic training tend to cause its plans to lase.")(And ofc much of the worry is that all the mathematical theorems that suggest "this plan manages to work precisely insofar as it's lasing in some direction", say nothing about which direction it must lase. Hence, if you show me a plan clever enough to force history into some narrow band, I can be fairly confident it's doing a bunch of lasing, but not at all confident which direction it's lasing in.) | | **[Soares]  (Sep. 12 Google Doc)** One of my guesses is that Richard does in fact understand this argument (though I personally would benefit from a paraphrase, to test this hypothesis!), and perhaps even buys it, but that Richard gets off the train at a following step, namely that we *need* plans that "lase", because ones that don't aren't strong enough to save us. (Where in particular, I suspect most of the disagreement is in how far one can get with plans that are more like language-model outputs and less like lasers, rather than in the question of which pivotal acts would put an end to the acute risk period)But setting that aside for a moment, I want to use the above terminology to restate another point I saw Eliezer as attempting to make: one big trouble with alignment, in the case where we need our plans to be like lasers, is that on the one hand we need our plans to be like lasers, but on the other hand we want them to *fail* to be like lasers along certain specific dimensions.For instance, the plan presumably needs to involve all sorts of mechanisms for refocusing the laser in the case where the environment contains fog, and redirecting the laser in the case where the environment contains mirrors (...the analogy is getting a bit strained here, sorry, bear with me), so that it can in fact hit a narrow and distant target. Refocusing and redirecting to stay on target are part and parcel to plans that can hit narrow distant targets.But the humans shutting the AI down is like scattering the laser, and the humans tweaking the AI so that it plans in a different direction is like them tossing up mirrors that redirect the laser; and we want the plan to fail to correct for those interferences.As such, on the Eliezer view as I understand it, we can see ourselves as asking for a very unnatural sort of object: a path-through-the-future that is robust enough to funnel history into a narrow band in a very wide array of circumstances, but somehow insensitive to specific breeds of human-initiated attempts to switch which narrow band it's pointed towards.Ok. I meandered into trying to re-articulate the point over and over until I had a version distilled enough for my own satisfaction (which is much like arguing the point), apologies for the repetition.I don't think debating the claim is the right move at the moment (though I'm happy to hear rejoinders!). Things I would like, though, are: Eliezer saying whether the above is on-track from his perspective (and if not, then poking a few holes); and Richard attempting to paraphrase the above, such that I believe the arguments themselves have been communicated (saying nothing about whether Richard also buys them).--- | | **[Soares]  (Sep. 12 Google Doc)** My Richard-model's stance on the above points is something like "This all seems kinda plausible, but where Eliezer reads it as arguing that we had better figure out how to handle lasers, I read it as an argument that we'd better save the world without needing to resort to lasers. Perhaps if I thought the world could not be saved except by lasers, I would share many of your concerns, but I do not believe that, and in particular it looks to me like much of the recent progress in the field of AI -- from AlphaGo to GPT to AlphaFold -- is evidence in favor of the proposition that we'll be able to save the world without lasers."And I recall actual-Eliezer saying the following (more-or-less in response, iiuc, though readers note that I might be misunderstanding and this might be out-of-context):Definitely, "turns out it's easier than you thought to use gradient descent's memorization of zillions of shallow patterns that overlap and recombine into larger cognitive structures, to add up to a consequentialist nanoengineer that only does nanosystems and never does sufficiently general learning to apprehend the big picture containing humans, while still understanding the goal for that pivotal act you wanted to do" is among the more plausible advance-specified miracles we could get. On my view, and I think on Eliezer's, the "zillions of shallow patterns"-style AI that we see today, is not going to be sufficient to save the world (nor destroy it). There's a bunch of reasons that GPT and AlphaZero aren't destroying the world yet, and one of them is this "shallowness" property. And, yes, maybe we'll be wrong! I myself have been surprised by how far the shallow pattern memorization has gone (and, for instance, was surprised by GPT), and acknowledge that perhaps I will continue to be surprised. But I continue to predict that the shallow stuff won't be enough.I have the sense that lots of folk in the community are, one way or another, saying "Why not consider the problems of aligning systems that memorize zillions of shallow patterns?". And my answer is, "I still don't expect those sorts of machines to either kill or save us, I'm still expecting that there's a phase shift that won't happen until AI systems start to be able to make plans that are sufficiently deep and laserlike to do scary stuff, and I'm still expecting that the real alignment challenges are in that regime."And this seems to me close to the heart of the disagreement: some people (like me!) have an intuition that it's quite unlikely that figuring out how to get sufficient work out of shallow-memorizers is enough to save us, and I suspect others (perhaps even Richard!) have the sense that the aforementioned "phase shift" is the unlikely scenario, and that I'm focusing on a weird and unlucky corner of the space. (I'm curious whether you endorse this, Richard, or some nearby correction of it.)In particular, Richard, I am curious whether you endorse something like the following:* I'm focusing ~all my efforts on the shallow-memorizers case, because I think shallow-memorizer-alignment will by and large be sufficient, and even if it is not then I expect it's a good way to prepare ourselves for whatever we'll turn out to need in practice. In particular I don't put much stock in the idea that there's a predictable phase-change that forces us to deal with laser-like planners, nor that predictable problems in that domain give large present reason to worry. (I suspect not, at least not in precisely this form, and I'm eager for corrections.)I suspect something in this vicinity constitutes a crux of the disagreement, and I would be thrilled if we could get it distilled down to something as concise as the above. And, for the record, I personally endorse the following counter to the above:* I am focusing ~none of my efforts on shallow-memorizer-alignment, as I expect it to be far from sufficient, as I do not expect a singularity until we have more laser-like systems, and I think that the laserlike-planning regime has a host of predictable alignment difficulties that Earth does not seem at all prepared to face (unlike, it seems to me, the shallow-memorizer alignment difficulties), and as such I have large and present worries. --- | | **[Soares]  (Sep. 12 Google Doc)** Ok, and now a few less substantial points:There's a point Richard made here:Oh, interesting. Actually one more question then: to what extent do you think that explicitly reasoning about utility functions and laws of rationality is what makes consequentialists have the properties you've been talking about?that I suspect constituted a miscommunication, especially given that the following sentence appeared in Richard's summary:A third thing that makes humans in particular consequentialist is planning, especially when we’re aware of concepts like utility functions.In particular, I suspect Richard's model of Eliezer's model places (or placed, before Richard read Eliezer's comments on Richard's summary) some particular emphasis on systems reflecting and thinking about their own strategies, as a method by which the consequentialism and/or effectiveness gets in. I suspect this is a misunderstanding, and am happy to say more on my model upon request, but am hopeful that the points I made a few pages above have cleared this up.Finally, I observe that there are a few places where Eliezer keeps beeping when Richard attempts to summarize him, and I suspect it would be useful to do the dorky thing of Richard very explicitly naming Eliezer's beeps as he understands them, for purposes of getting common knowledge of understanding. For instance, things I think it might be useful for Richard to say verbatim (assuming he believes them, which I suspect, and subject to Eliezer-corrections, b/c maybe I'm saying things that induce separate beeps):1. Eliezer doesn't believe it's impossible to build AIs that have most any given property, including most any given safety property, including most any desired "non-consequentialist" or "deferential" property you might desire. Rather, Eliezer believes that many desirable safety properties don't happen by default, and require mastery of minds that likely takes a worrying amount of time to acquire.2. The points about consequentialism are not particularly central in Eliezer's view; they seem to him more like obvious background facts; the reason conversation has lingered here in the EA-sphere is that this is a point that many folk in the local community disagree on.For the record, I think it might also be worth Eliezer acknowledging that Richard probably understands point (1), and that glossing "you don't get it for free by default and we aren't on course to have the time to get it" as "you can't" is quite reasonable when summarizing. (And it might be worth Richard counter-acknowledging that the distinction is actually quite important once you buy the surrounding arguments, as it constitutes the difference between describing the current playing field and laying down to die.) I don't think any of these are high-priority, but they might be useful if easy :-)---Finally, stating the obvious-to-me, none of this is intended as criticism of either party, and all discussing parties have exhibited significant virtue-according-to-Nate throughout this process. | | | | --- | | **[Yudkowsky][21:27]  (Sep. 12)** From Nate's notes:For instance, the plan presumably needs to involve all sorts of mechanisms for refocusing the laser in the case where the environment contains fog, and redirecting the laser in the case where the environment contains mirrors (...the analogy is getting a bit strained here, sorry, bear with me), so that it can in fact hit a narrow and distant target. Refocusing and redirecting to stay on target are part and parcel to plans that can hit narrow distant targets.But the humans shutting the AI down is like scattering the laser, and the humans tweaking the AI so that it plans in a different direction is like them tossing up mirrors that redirect the laser; and we want the plan to fail to correct for those interferences.--> GOOD ANALOGY....or at least it sure conveys to *me* why corrigibility is anticonvergent / anticoherent / actually *moderately strongly contrary to* and not just *an orthogonal property of* a powerful-plan generator.But then, I already know why that's true and how it generalized up to resisting our various attempts to solve small pieces of more important aspects of it - it's not just true by weak default, it's true by a stronger default where a roomful of people at a workshop spend several days trying to come up with increasingly complicated ways to describe a system that will let you shut it down (but not steer you through time *into* shutting it down), and all of those suggested ways get shot down. (And yes, people outside MIRI now and then publish papers saying they totally just solved this problem, but all of those "solutions" are things we considered and dismissed as trivially failing to scale to powerful agents - they didn't understand what we considered to be the first-order problems in the first place - rather than these being evidence that MIRI just didn't have smart-enough people at the workshop.) | | **[Yudkowsky][18:56]  (Nov. 5 follow-up comment)** Eg, "Well, we took a system that only learned from reinforcement on situations it had previously been in, and couldn't use imagination to plan for things it had never seen, and then we found that if we didn't update it on shut-down situations it wasn't reinforced to avoid shutdowns!" |
cf55adf8-0e62-42d4-a4cf-1e21200b4c5a
trentmkelly/LessWrong-43k
LessWrong
Will 2024 be very hot? Should we be worried? tl;dr: There are several trends which suggest that global temperatures over the next year will experience a short-term increase, relative to the long-term increase in temperatures caused by man-made global warming.  Credits: Most of the information comes from Berkley Earth monthly temperature updates. Several people on Twitter (Robert Rohde, Zeke Hausfather, James Hansen and Roko) have also been talking about the issues discussed here for a while. Man-made global warming has been causing a steady, long-term increase in average global temperatures since the industrial revolution. However, recently several trends are lining up which suggest that the next year/few years might experience temporary greater-than-average warming, on top of baseline man-made warming. Some of these factors are already in play and 2023 is 'virtually certain' to be the hottest year on record. The story can be summed up in this lovely graphic from Berkley earth: Image from: https://berkeleyearth.org/november-2023-temperature-update/  I've had a look into some of the things that are happening and have written up what I've learned. I am not a climate scientist, so take this all with a pinch of salt. El Niño What is El Niño? Periodically, the strength and direction of the winds over the Pacific ocean changes, causing the surface waters to flow differently, which leads to changes in the amount of cold water coming up from the depths of the ocean. This pattern is known as the El Niño–Southern Oscillation. The phase when the surface waters are warmer is known as El Niño, and the phase when the surface waters are cooler is known as La Niña.  These periods occur irregularly every few years and last approximately a year. How does it affect global temperatures?  Unsurprisingly, during the El Niño period, when surface waters are warmer, more heat is released into the atmosphere, leading to warmer global surface temperatures. In general, years with El Niño are hotter and years with La Niña are co
2d9a80b6-2b68-4e8b-a3f0-f4c66e77328c
trentmkelly/LessWrong-43k
LessWrong
Extreme risk neutrality isn't always wrong Risk neutrality is the idea that a 50% chance of $2 is equally valuable as a 100% chance of $1. This is a highly unusual preference, but it sometimes makes sense: > Exampleville is infested with an always-fatal parasite. All 1,024 residents have the worm. > > For each gold coin you pay, a pharmacist will deworm one randomly selected resident. > > You only have one gold coin, but the pharmacist offers you a bet. Flip a weighted coin, which comes up heads 51% of the time. If heads, you double your money. If tails, you lose it all. > > You can play as many times as you like. It makes sense to be risk neutral here. If you keep playing until you've saved everyone or go broke, each resident gets a 0.51^10 = 1 in 840 chance of survival. If you don't play, it's 1 in 1,024. For a few of the most scalable interventions, I think EA is like this. OpenPhil seems to expect a very low rate of diminishing returns for GiveWell's top charities. I wouldn't recommend a double-or-nothing on GiveWell's annual budget at a 51% win rate, but I would at 60%.[1]  Most interventions aren't like this. The second $100M of AI safety research buys much less than the first $100M. Diminishing returns seem really steep.  SBF and Caroline Ellison had much more aggressive ideas. More below on that. Intro to the Kelly Criterion Suppose you're offered a 60% double-or-nothing bet. How much should you wager? In 1956, John Kelly, a researcher at Bell Labs, proposed a general solution. Instead of maximizing Expected Value, he maximized the geometric rate of return. Over a very large number of rounds, this is given by (1+l)P(1−l)1−P Where l is the proportion of your bankroll wagered each round, and P>0.5 is the probability of winning each double-or-nothing bet. Maximizing with respect to l gives l=2P−1 Therefore, if you have a 60% chance of winning, bet 20% of your bankroll on each round. At a 51% chance of winning, bet 2%. This betting method, the Kelly Criterion, has been highly successful i
a2d6f791-0317-4706-9f81-e56761816ed3
trentmkelly/LessWrong-43k
LessWrong
Superintelligence 20: The value-loading problem This is part of a weekly reading group on Nick Bostrom's book, Superintelligence. For more information about the group, and an index of posts so far see the announcement post. For the schedule of future topics, see MIRI's reading guide. ---------------------------------------- Welcome. This week we discuss the twentieth section in the reading guide: the value-loading problem.  This post summarizes the section, and offers a few relevant notes, and ideas for further investigation. Some of my own thoughts and questions for discussion are in the comments. There is no need to proceed in order through this post, or to look at everything. Feel free to jump straight to the discussion. Where applicable and I remember, page numbers indicate the rough part of the chapter that is most related (not necessarily that the chapter is being cited for the specific claim). Reading: “The value-loading problem” through “Motivational scaffolding” from Chapter 12 ---------------------------------------- Summary 1. Capability control is a short-term measure: at some point, we will want to select the motivations of AIs. (p185) 2. The value loading problem: how do you cause an AI to pursue your goals? (p185) 3. Some ways to instill values into an AI: 1. Explicit representation: Hand-code desirable values (185-7) 2. Evolutionary selection: Humans evolved to have values that are desirable to humans—maybe it wouldn't be too hard to artificially select digital agents with desirable values. (p187-8) 3. Reinforcement learning: In general, a machine receives reward signal as it interacts with the environment, and tries to maximize the reward signal. Perhaps we could reward a reinforcement learner for aligning with our values, and it could learn them. (p188-9) 4. Associative value accretion: Have the AI acquire values in the way that humans appear to—starting out with some machinery for synthesizing appropriate new values as we interact with our environments. (p189-190)
ad2dc921-7141-4c12-a41d-1256b83d1c0b
trentmkelly/LessWrong-43k
LessWrong
Find a study partner - March 2014 thread This is the monthly thread to find a study partner. For reasons mentioned in So8res article as well as for other reasons: studying with a partner can be very good. So if you're looking for a study partner for an online course, reading a manual or else (whether it's in the MIRI course list or not) tell others in the comment section. The past treads about finding a study partner can be found under the tag study_thread. However, you have higher probabilities of finding a study partner in the most recent thread. If you haven't found a study partner last month, you are welcome to post the same comment again here.
4fd3c89a-673e-4e44-957b-ce98a80243f6
trentmkelly/LessWrong-43k
LessWrong
External rationality vs. internal rationality Internal rationality is using truth-preserving thought processes that output good hypotheses given good data. Examples include: * Identifying inconsistencies in hypotheses. * Having detailed models/inside views. * Not doing motivated reasoning. * Evaluating one's thought processes. External rationality is discerning good data from bad data. Examples include: * Identifying the selection effects and incentives behind what data one sees. * Identifying whether some data is "adversarial": data that strongly suggests a hypothesis just because there will inevitably be one source of data that strongly suggests a given hypothesis just because there are so many sources of data. * Knowing when to use an outside view. * Knowing what experts to trust on a topic one knows nothing about. External rationality and internal rationality intersect and build upon one another. I like using these words instead of "inside view" and "outside view" because "outside view" tends to exclude model-based reasoning when you can have detailed models of what data is good or bad. Also "outside view" specifically refers to reference class forecasting and I don't want to overload the term. I also feel like "external rationality" and "internal rationality" are better at communicating the fuzzy intuition I'm getting at here.
c51750a6-22cc-42c0-92ab-e4863cafd6b2
trentmkelly/LessWrong-43k
LessWrong
My "2.9 trauma limit" Here is how I think about trauma.  It's packaged in a language intended to be accessible to people who are skeptical of trauma-narratives. It kinda skips over all the actually hard parts because I don't feel like dealing with that today, and many of them are sort of private and involve other people. I realize that makes it fairly abstract and maybe hard follow. Tough nuts? I suppose worth saying explicitly: I think my experience with trauma was relatively simple, and I expect people with more complicated and long-lived trauma experiences to need pretty different frameworks. YMMV.  But if you're skeptical that "trauma" is a useful pointer for "what sort of things are going wrong?", I think this may give you some useful hooks for thinking about whether it applies to you. Trauma Advocates circa 2019 Around 2018-2019, there was a fairly common set of memes in my local social group around processing trauma. There were some people who had noticed: * They (and many people they talked to), felt very stuck, demotivated, unhappy or unproductive in mysterious ways. * They eventually came to believe that this was downstream of trauma they had experienced awhile ago (often in childhood), in ways that shaped what stories they felt allowed to tell themselves, and what sort of motivations they were allowed to have. Their motivations were a series of spaghetti code built on top of said trauma. * Notably, it had not been salient to them that they were experiencing trauma for most of their adult life. But they eventually hit on a realization that they'd been ignoring their emotions, while thinking they were awesome at emotions. Unlocking this realization made them happier and more fulfilled. * They observed at least some other people for whom the above story was clearly true. * They observed that many rationalists-in-their-neighborhood seemed to have at least some things in common with the story (i.e. mysteriously unmotivated, anxious, etc). * They went around diagnosing p
2699dac9-1aaa-4d83-b7d0-209d7d9406c5
trentmkelly/LessWrong-43k
LessWrong
Meetup Interest: Rhode Island For over a year now I have been visiting this site, reading and learning about rationality, and meeting other rationalists sounds like a wonderfully positive experience. Sadly, there are no meetups in my area -- I looked at the RI skeptics society website, but their news page seemed focused on religious-people-bashing to a point that seemed mean and spiteful to me (at least at the time, it was awhile ago. Yes, we know people do dumb things because of religious beliefs; how about focusing on improving ourselves instead of pointing at them?). Maybe there are a lot of others in the area like me who have been lurking on the site, longing for a chance to meet likeminded people but too shy to speak up. Of course, not all of them may be checking the discussions page but I'm not sure what else to do. So, anyone out there in Rhode Island or close to the borders of Massachusetts and Connecticut that would like to start a local rationality meetup with me? I am also interested in any advice for getting this to its target audience, as not everyone may be looking at the discussion page. I am also also interested in anywhere I can talk with lesswrong folks in real-time -- it's hard to establish friendships through posts, and this discussion forum doesn't seem like the place for random chatter with people.
682656ef-b71a-4cef-a842-a6fb8645b4f9
trentmkelly/LessWrong-43k
LessWrong
What are rationalists worst at? If we want to become stronger, not just individually but as a community, it's worth checking in on what things we're worst at so we can pay attention to those things and become better at them. So, what are the things that LessWrong-style rationalists are worst at on average as of now? Bonus points if you have concrete interventions to suggest for getting better at those things.
50ead6e6-61fb-469e-b5f9-58099f1796ac
StampyAI/alignment-research-dataset/lesswrong
LessWrong
Will 2023 be the last year you can write short stories and receive most of the intellectual credit for writing them? I've been worldbuilding a unique setting in my head and on paper for maybe a year now, and for quite a bit longer than that I have wanted to start publishing fiction. I've never "found the time" to act on these ambitions, of course, but until the advent of LLMs I always thought I would have the opportunity to work on it later, after this or that startup or video game or work thing. I might still do that, but I'd be lying if I didn't say that part of the appeal of writing fiction anything at all was to receive credit from a very small group of readers for a piece of content that they liked and found novel or creative. It seems to me like the possibility of that is either fading or has already faded; the one big barrier that I thought was preventing LLMs from writing longform content, context size limits, has been blown away. GPT-4 now has a context size well above that of most short stories, more than enough to summarize the essential beats of a novel down from ~200 or so pages, and my guess is that future people will assume that most of the words and story details are being written out of a machine. Perhaps the basic idea for a story will be the domain of humans, but then people will be left wondering exactly how much of that idea was the product of the author, and rhetorical giftedness will become commoditized. That in mind, should I get started now, or is this worrying too much, or is it already basically too late?
894bb25d-1fcc-4898-99ce-f7e8a9bba920
trentmkelly/LessWrong-43k
LessWrong
Big picture of phasic dopamine (Update Jan. 2023: This article has important errors. I am leaving it as-is in case people want to see the historical trail of me gradually making progress. Most of the content here is revised and better-explained in my later post series Intro to Brain-Like AGI Safety, see especially Posts 5 and 6. Anyway, I still think that this post has lots of big kernels of truth, and that my updates since writing it have mostly been centered around how that big picture is implemented in neuroanatomy. More detail in this comment.) Target audience: Everyone, particularly (1) people in ML/AI, and (2) people in neuroscience. I tried hard to avoid jargon and prerequisites. You can skip the parts that you find obvious. Context: I’m trying to make sense of dopamine in the brain—and decision-making and motivation more generally. This post is me playing with ideas; expect errors and omissions (and then tell me about them!). This post is a bit long; I’m worried no one will read it. So in a shameless attempt to draw you in, here’s a Drake meme... This will make more sense later in the post. Yes of course the bottom one is oversimplified, and yes of course the top one is useful too. But I claim that the bottom one is a better starting point. (Thanks Adam Marblestone, Trenton Bricken, Beren Millidge, Connor Leahy, Jeroen Verharen, Ben Smith, Adam Shimi, and Jessica Mollick for helpful suggestions and criticisms.) Summary / Table of Contents * I’ll start by briefly reviewing the famous 1990s story relating the role of dopamine in the brain to the “TD learning” algorithm. * Then I’ll switch over to the “cortico-basal ganglia-thalamo-cortical loop” (“loop” for short), a circuit intimately related to dopamine learning and inference. * I’ll go over a theory that the entire telencephalon region of the brain (which includes neocortex, hippocampus, amygdala, basal ganglia, and more) has a coherent overall architecture, with these loops being the key unifying element, and I’ll relate that
092d539f-cd75-4728-a6cc-ccbe5e2b0e06
trentmkelly/LessWrong-43k
LessWrong
[Link] Duolingo Yesterday I heard about an interesting new project called Duolingo.  For some background, see this TEDx Talk by one of the creators, Luis von Ahn.  It is a crowdsourcing approach to language translation, where the users learn a foreign language while translating.  I've signed up but I haven't received an invitation yet to start.  Once I start using it, I will provide updates on its effectiveness.
774a5314-21d5-4838-b437-b09a0762d0e8
trentmkelly/LessWrong-43k
LessWrong
Coping with Undecidability Mostly just reflections on things I've been learning about theory of computer science and trying to figure my philosophical stance towards them, in order to not be a trollable mathematician In the last few months, my mind has constantly been trying to argue against something like the "naive Church Turing Thesis" in an attempt to better understand what it means for "the real world" that some things we can ask ourselves with a Turing machine can't be answered with "yes" or "no" by any Turing machine (formally called an undecidable problem). (See the halting problem, for an example). My confusion stems from this weird dichotomy[1] that basically everything we can compute in the real world can be computed by a finite state automaton, because our world does not contain infinite memory tapes (as far as I know from the probabilistic evidence I was able to gather). But (for convenience?), mathematicians/computer scientists use Turing machines/Lambda Calculus/(your favorite programming language) for most purposes instead to model things, even though they have these really peculiar undecidability properties. What I've learned so far is that applying your formal model of computation to say things about problems in the real world is a super philosophically slippery domain (though I feel the reason for this has more to do with the space of all algorithms just being a really resistant hydra and less with peculiarities of the human mind[2]). The replies I got from people on Lesswrong along with Scott Aaronsons writing have been very helpful here[3]. Maybe I am just a pedant, but after tripping myself for a while, I see other people commit philosophical blunders in this area. As an example, here is this motte-and-bailey [4]which I fell for: Motte-and-Bailey computability Motte: "intuitively computable" => a (Turing Machine)/(Lambda Calculus)/(Your favorite equivalent model C) can do it. The converse does not necessarily hold, because no one has infinite memory. When doing t
48e83f91-cf17-4fd1-9390-a48f84c12017
trentmkelly/LessWrong-43k
LessWrong
Republishing an old essay in light of current news on Bing's AI: "Regarding Blake Lemoine's claim that LaMDA is 'sentient', he might be right (sorta), but perhaps not for the reasons he thinks" I am republishing this essay because of recent discussions about erratic, ‘emotional’ and aggressive behavior by Bing’s AI, Sydney. There has been some discussion about whether it’s ethical to run Sydney given that behavior. People are responding to such claims with “don’t be ridiculous, of course Sydney doesn’t feel anything, Sydney is a machine for predicting the next token of text”. While I am inclined to agree with that conclusion on the whole, I think the issue is a bit more complex. For ongoing discussion, see: https://www.lesswrong.com/posts/jtoPawEhLNXNxvgTT/bing-chat-is-blatantly-aggressively-misaligned https://www.nytimes.com/2023/02/16/technology/bing-chatbot-microsoft-chatgpt.html Blake Lemoine is an engineer who worked for Google. He is claiming LaMDA, a language model he worked with, was sentient. Google put him on unpaid leave. Most people think his claim is absurd because language models are models of what word is most likely to follow a prior sequence of words (see, for example, GPT-3). How could such a thing be sentient? Moreover, there are unmistakable oddities and logical gaps in the behavior of LaMDA in the very transcripts that Lemoine is relying on- some proof of personhood then! Just spitballing here, putting a hypothesis forward in a spirit of play and humility, but I wonder if Lemoine’s claim is not as absurd as many think. The concept of sentience is quite elusive, so let’s leave it behind for something slightly better understood- personhood. I think that it is conceivable that LaMDA contains persons. However my reasons, unlike Blake Lemonie’s, have little to do with a given conversation in which the model claimed to be sentient or a person. When a language model is guessing the next token given that transformers are black boxes, we can’t rule out the possibility it is simulating interacting beliefs, desires, and emotions of the hypothetical author it is “roleplaying”. Simulation in this sense is quite a minimal concept, all that is n
ca8f4cf2-bef8-4535-96bb-6dc58db66e62
StampyAI/alignment-research-dataset/youtube
Youtube Transcripts
Agora: M. Brandão: Fairness and explainability in robot motion this community um shall i start do you want to start the recording yes i started recording so confident all right all right so i'm going to talk about uh fairness and explainability in robot motion uh which are probably concepts that um one wouldn't uh wouldn't think would be relevant to robot motion at the first instance maybe but i hope i hope to convince you uh otherwise so for those of you who are not familiar with the path and motion planning these are technical problems that are very important in robotics but not only robotics even in your in your phone apps when you uh ask to go to a certain place on your gps using google maps or something you have to solve a path planning problem so uh to find a sequence of uh of uh of steps over over network of roads for example to get to the cone and uh the motion planning problem is the continuous version of that so where you want to find for example full body postures of a robot so exactly where each of its motors should be and its position in the world should be and this is both for this kind of robots with an arm and the wheels or humanoid robots that need to do steps and avoid obstacles and balance and etc legged robots but also autonomous vehicles when you want to find a trajectory maybe for the next 10 seconds that avoids obstacles so it's safe but also perhaps is smooth and comfortable for the passengers and even in warehouse automation so computing the paths or motion for for many of these robots to reach their destinations without colliding with each other and making sure they satisfy certain constraints so this is the path motion planning um problem um i'll start with the uh explainability of robot motion so to try and uh unpack this concept of explainability i'm going to start uh basing myself on some literature review and some user studies derive at some working concepts of what could explanations for robot motion uh look like but then use the actually uh implement concrete uh explanation generation algorithms or explainable planners and get uh feedback from users to iterate the concepts and uh figure out what are the important design considerations when thinking about explanations for about motion and so why would you actually want to explain the way the robot is moving or the the output of a motion planner so here are two examples the your robot might move in a way that is that you did not expect so for example here this robot is taking this long path through the right around around the table to reach first shelf and the operator or the user might think why why are you doing such a long path you might be spending more battery than you should i would expect that that you you you or the robot would take this other path that is shorter um through the other side of the table and the another reason is that sometimes or actually very often planning algorithms just cannot find a solution so they will tell you uh yeah sorry uh i could not find a solution after i tried for so long or i was unable to solve this planning problem so i cannot get the robot to where you want it to be and then the the user or the even the developer is is left with no clue as to what they should do to to fix this issue why why is it failing so i i've actually recently done a user study with both developers of motion planning algorithms and expert users who use it on a daily basis of motion planning algorithms and most of them kind of the recurring uh theme and opinion was that explanations could be useful specifically for debugging of these algorithms and also for mechanical design and for iterating the mechanical designs of robots so these experts uh these were around 20 or 25 experts they were commonly saying things like explanations could accelerate the debugging process or they could even help the developers understand the inner workings of motion planners because even though they develop the algorithms themselves often it's not intuitive uh the outputs you get why why is it that the output is the way it is so the the sometimes the expectations of developers do not match the actual outputs of the algorithms and explainability could even even suggest algorithm changes so thanks to changing the algorithm such as to to better match the expectations or they could even suggest uh changes to the robot model so that basically the the mechanical design of the robot or morphology or actuation changes so of course this is the point of view of experts and probably the different people will be interested in making different questions and they will be interested in different kinds of answers when they ask questions about robots so a developer as i said could ask questions about a certain plan that the robot has in order to to make changes to the algorithm improve the algorithm or find the bug but maybe a la user could ask questions about what the robot is doing imagine or imagine a warehouse worker could ask questions about what the robot is doing in the warehouse in order to better understand how the planet works to better be able to predict how the the robot will move and so and better collaborate with the robot maybe but the mechanical engineer might ask questions about the robot i might ask questions um that that are based on the robotic design so for example why can't you why can't you reach for this object and uh this maybe a mechanical engineer would like to hear something like because my arm is not long enough and so they they will know that they should change the mechanical design of the arm and even an and an architect could ask questions in order to be able to to change the layout of the warehouse for example so it depends on the user of course then uh what kind of explanation so what can explanations look like of course this again will depend on who the user is and what they're interested in but uh of course there are also different ways to answer the same uh different possible explanations for for a certain question so i've i've uh organized this into problem-centered explanations and algorithms centered explanations in this in this recent paper so uh if you see this example here on the right where the robot does path a but the user expected plus b b so why is it that you took path instead of b the explanation could be based on costs so because b would take more energy or because of constraints so because b would violate a certain constraint b would actually be in collision you don't really see it but it would be in collision it could be based on robot design because the robot cannot fit through corridor y so it's it's based on the on the dimensions of the robot or based on the environment because the there's an obstacle there's a table in a certain location on the other hand there are algorithm centered explanations so you could say that the algorithm found path a not b because you didn't run the algorithm for long enough if you had run it for long enough it would have actually found b or because the algorithm is not optimal so of course there can be different explanations and also similar explanations to these if you're wanting to explain failure so uh so now i have a kind of a set of working examples of possible explanations and i um to better tease out what what are the actual design considerations and and the how what how you should actually show these these explanations i uh went ahead and did some prototype implementations of these of this explanation so one was the constraint-based explanation so imagine your algorithm fails so you cannot find a path for the robot to reach for this water tab here for example on the left and it could and i made an algorithm that automatically generates explanations for this kind of problems so it will say i couldn't find the solution because i cannot simultaneously touch the water tap or grasp the water tap and avoid collision for example though i would be able to do this if the water tap was 15 centimeters closer for example and another kind of explanation algorithm that i've done is based on initialization so it's an algorithm centered explanation so you could say i failed to find a solution because the initialization which is a kind of step of the uh planning algorithm utilization was in the basin of attraction of an infeasible um solution though you would you would have been able to find a solution if you had used a different strategy so you could also say because you used uh this initialization strategy instead of the other one so how i generate this kind of explanation is basically for this initialization i just try a different kind of initialization scheme initially method and if that works i can blame the initialization scheme basically states that it's the the that's the reason for constraint based explanations i basically solve many relaxed problems so problems that do not need to satisfy all the constraints of the original problem so there are subproblems and i try to find that problem that satisfy as many constraints as possible so that is as close as possible to the original one so i then uh showed these kind of explanations to um the same expert users as as before and uh so well one good thing is that most of them were satisfied with explanations so in this kind of question like how how how much are you satisfied with this kind of explanation one to seven so they are in general they were satisfied with this with these kind of explanations some explanations more than others but overall more satisfied with than the kind of output they currently get from motion planners on the other hand i got some interesting insights such as users were not sure as uh whether one should see when there are multiple possible explanations whether the user should see should see only one or a set or all possible explanations so it could be the case that you could make a problem feasible but both by saying that two constraints don't conflict or by by saying that you could change the initialization uh method right so which which kind of things should we show or should we show all the options another insight was that sometimes the language was hard to understand potentially for the users so we we might have to tune uh the the kind of language uh that you use depending on the user other other users complained that it was not clear to them why there was a collision so even though the explanation was because there is a collision it was not clear there was actually a collision so visually visualization aids might be important to make things more interpretable and intuitive but perhaps the most important um insight from this user study was that many people said that the explanations should go deeper and for example not just say i couldn't find a solution because uh i can't uh reach uh for this and avoid collision at the same time but actually say why it is that you cannot reach for this and set the cyclist at the same time so for example you could say in this case you should go deeper and say well you can the the water tap is too further away from the beginning of the furniture that's why you cannot solve this problem so what should the explanations look like then the visualization theme was very common around these experts inputs so they they suggested we should use highlighting or visualizations of feasibility regions etc to improve but also abstraction so sometimes it's not enough just to say uh constraint one conflicts with constraint two you might have to use intuitive concepts for example say because the environment is too cluttered or because the object is not close enough but these are very abstract things that um we will have to find uh methods uh that this is i think an open problem that we still have to find methods that are able to come up with these abstractions automatically and finally this deep explanations theme of uh so not just say uh because two tasks conflict but this but the same widely conflict so i then um did another uh user study with a with a different kind of method so i uh this was not just a very simple method as the ones as the ones before but a more involved one using advanced algorithms and more uh perhaps more visualization focused so here you want to ask why uh so i'm and i was also interested in seeing so if explanations are actually effective at helping users understand the problem so i wanted to evaluate this depending on the the kinds of design choices you make so for this specific problem you see here there's a map with blue and red areas the blue areas are easy to move on and the red arrows are hard to move on so you have to go slower and then the shortest path or the fastest path is the path a which goes around this long distance and path b is what user expected and so the user asks why did you take path a instead of path b which is shorter and takes these terrors here this is a staircase and so i i've proposed some new algorithms to generate explanations for this kind of questions based on inverse shortest paths algorithms for those of you who are familiar um so basically what the what these algorithms do is to answer the question why is path a optimal not b they find the minimal changes to make to the map so that the path b so the user's expectation would actually be true which means uh basically here you would say well f a is the shortest not b because for b to be shortest the shortest you would have to change the the traversability or the the ease of reversibility of these two areas here so you would have to make these staircases here be easier to traverse if you wanted this to be the shortest path or you could say because um a is the shortest not b because walking on blue is is too cheap so if it costed a bit more to walk on blue then b would actually be the shortest paths so you can automatically generate these kind of explanations and even more recently i've developed an algorithm that can even generate all possible optimal explanations for a question so say uh why is path a optimal not b and you could say because for b to be the shortest path the trains should change in one of the following ways and it gives the user all possible ways in which so for example if you put these two areas here or these two or these two if they become hard to traverse then indeed the shortest path would be the one you wanted but since it's not that way the shortest path is is this one instead right so i passed these explanations on two users so another set of users so uh single um explanations and multiple explanations and i've tried to see um how how well uh so after after seeing many of these explanations do users become better at understanding and predicting the behavior of this path planner and there was some interesting insights that we got so for example more user satisfaction does not mean that the explanations are more effective we actually got the the opposite so people were more satisfied with one kind of explanations the the one with multiple options but they actually um got worse at understanding the problem if they if they had seen more uh possibilities so it is also interesting that showing multiple possible explanations could actually be counterproductive and confuse the the users and decrease um problem understanding so to kind of sum up their important design considerations to think about when you're implementing explainable planners the type of explanation depends on user needs there's a conflict between user user satisfaction and understanding users might prefer simpler or more complex explanations even though they're not as effective there's so there's a conflict between complexity and multiplicity and uh there's you should also pay attention to explanation depth and visualization abstractions but also kind of a general methodology that i used here which was to explore a concept an ethical concern of transparency or explainability and jump into implementation of prototypes and use these to anticipate design issues and to actually refine the concepts and iterate and this could be potentially a good um i believe this could be potentially good way a good tool for responsible innovation as well and which i will apply next to the problem of fairness in robot motion but if if there are any questions at this point about explainability i'm happy to take questions yeah thank you martin i see one question from uh young maybe if you can mute yourself and ask the question it'll be great hey yes thanks thanks so um thank you martin for sharing your research so my question is is really about these uh these constraints that you're describing right so it looks that you mainly consider the uh those constraints that can be formally specified right and then you inject that somehow into your organization algorithm so um my question is other uh constraints that are you know more ambiguous that need to be uh and that can only be uh expressed through for example natural language or something like that maybe so certainly it's very it's very difficult to in many problems it's difficult to elicit the requirements and it's difficult to understand what are what do we actually want the the robot to do but if we want the robot to to do it using a we actually need to formulate it to put it down in into uh into an equation so that's the the only way it's going to it's going to work but then so usually the way it uh works is that actually you start by by eliciting the requirements of the the problem and then you implement it in the motion planner using specific equations and constraints and then you deploy it and very often you realize oh wait but the robot is still doing this which is not really what i expected so we need another constraint and this often happens in deployment so it takes a long time for you to get at the place where the robot is doing exactly what you wanted and this this is a continuous process as well yeah exactly that part i found very interesting right it seems that what we do now is mostly you know uh let's see how it works and then we come back and maybe there are more constraints that we can derive right but then i'm wondering if there should be some you know principled method that would allow us to get as much as possible in the beginning and then of course it's the incremental process right we need always have to debug and improve right but that's the part that sounds quite interesting to share some thoughts yeah what seems to be you usable by those machines it needs to be formally as specified but then it would be really interesting to see what are the things that can really be formally specified and what are the things that is hot maybe there has to be some research going on there yeah of course there are also methods based on learning right so if if the user is not it cannot easily write down the requirements the users can just show examples and then uh by by seeing many examples an algorithm could learn to do this but this is impractical impractical for large systems like imagine a warehouse with with 1 000 robots the user cannot possibly have good intuition about um how each of the robots should be moving so you have to start by actually formalizing some objectives so that's also just some thought yeah yeah that's a good point thanks for sharing yeah and actually as a follow-up to this question for me uh so did you also consider like of course you need to define all these constraints beforehand but there's obviously going to be cases that you haven't foreseen when you started implementing or formalizing everything so have you also considered uh scenarios where francis robot has no clue what you know has not the real explanation and we'll just tell you the human hey i have no idea what happened but this is what i what all the events that led up to this point and uh did you consider that in your uh your algorithms too or in your design and uh no not not at this point but so the for example it's it's usual for for developers of motion motion planning algorithms to for example look at the search tree as a way to kind of get an intuition as to what what happened so the search tree is a kind of history of everything that the algorithm tried and but i didn't for example compare uh the explanatory power of a search of looking at the search tree versus uh these kind of explanations that i did here so that would be a good baseline as well thanks for the suggestion thanks interesting um any other questions or else we can uh move on to the second part yeah good sure okay all right so um i'll continue then with the uh fairness uh um the idea of fairness in motion planning so i'm going to use the similar kind of methodology that i used for drawing out this uh the this concept of explainability so i'm going to try and start by um actually simulating the the deployment of a robot and and looking at the results uh and then distributions of impact and trying to understand what kind of issues awareness could come up and then directly formalize concepts uh formalize the concepts of fairness that are in the philosophy literature and distributive justice literature and again simulate the deployment of of of a fair planner in those senses and use those simulations to again draw out the design considerations and potential issues and uh i'm going to suggest this could be again a good way to involve stakeholders in the in the process so to for all of this i'm going to use a walkthrough use case to to make things more intuitive so this use case is of a rescue robot that needs to find as many people as possible so it uses path planning or motion planning that so it starts at a certain point in the in the middle of the of a city which is oxford in the uk uh the fire station it goes uh for uh some some path around the city and then returns to recharge the batteries and then of course we can we can use perhaps some uh census date about uh how the population is distributed so population density or even age distribution and ethnicity distribution to think about potential impact but also about how to get as many people find as many people as possible with our robot so i'm going to use these simulations i'm going to simulate a robot that is trying to find as many people as possible and then look at the kind of which people the robot found and try and uh make claims about fairness in the past of robots from this so my first claim is that robot paths can be uh biased maybe uh this is straightforward to some of you but the idea here is that if your robot tries to find as many people as possible it should stay in in the close vicinity of the of the launching station both because it can then uh return very quickly when the battery is over and because that's where the highest population density is but at the same time if you do this the the people you will find are mainly people in between in their 20s so the undergraduate population of oxford and they're also mainly white chinese and male and these are the population you find if you stick to the center of oxford because of historical reasons and because of the way that the city works at the same time you can also make the other uh like the next step uh of claims which is the robot pass can be discriminatory so if you think that internet discrimination is when a decision or policy even though it doesn't target specific group it has worst effects for those group this happens in this case here right because the people that are much younger so the older population and younger population are actually um have less probability of being found by the robot as well as the female group and minority ethnicity groups then you could go one step further and say that it's possible for robot paths to be unfair so if you think that the goals the disaster response are to find as many people as possible but also to attend those that are most at risk first this strategy that tried to find as many people as possible actually did not meet the second requirement because it found the student population in their 20s which could be considered the risk so you could say that a robot pass could be unfair according to disaster response ethics this other claim is perhaps also uh intuitive to many of you but the idea is that robots must face dilemmas already faced by humans so disaster response teams already have to think about the impact and the fairness of a mission so the robots that are deployed for disaster response we'll uh of course also have to think about this but how do we operationalize similar furnace sensitivity so this is the step where i uh dive into implementation directly to try to tease out issues and design considerations so i i did a survey of the distributive justice and political philosophy literature and tried to bring these concepts and formalize them for the the for the context of path planning so i here are three examples that are in this in this paper in this in my recent paper so demographic parity is is one concept conception of fairness where you say that the probability of the distribution of people that your robot finds in its path should be the same as the distribution of people of the whole city so this means basically that when your what goes and look looks for as many people as possible it should still try to pick a representative sample of the population of the whole city but there are also conceptions of fairness for example that the distribution of people found by the robot should be such that all groups have at least a certain probability of being found this is a sufficient therian um approach and it could also be a rosian uh approach so the the path should maximize the the the probability of the least uh likely group and there are also other kinds of um apparent conceptions of course but i'm going to stick to one and try and tease out possible issues um that come up when when you when you choose a definition of fairness so i implemented a motion planner that simultaneously uh optimized the number of people found uh found and the uh minimized the unfairness so try to get the distribution of people found by the robot as close as possible to the distribution of people of the whole city so um this curve here has many points each point is a different path that the robot could take and this this point here more to the left uh says 0.6 this 0.06 this is the distance to the ideal distribution so if this was zero that would mean that you could find a path that had the exact ideal probability distribution so you can already take quite a few uh so if you're interested in the details of the method you feel free to read the paper it's a pareto estimation method but independent of that so you can already take some interesting conclusions from here so first it might be not possible not even possible to to satisfy a fairness definition exactly which is the case in this example so this never reaches zero and it to find more people you might have to compromise on fairness so you need to find more people you might have to find uh people that are less representative of your city in this in this short path another observation that you that i uh made in this work was that a fairness definition could be a counterproductive so here is the comparison of two fairness definitions one which is demographic priorities so your your planner is trying to as much as possible that the path um finds uh male and female groups in the same proportion as in the whole city and this in the the graph on the right is the result of doing uh running a planner that tries to promote affirmative action so exactly 50 ratio of men and women found on its on each of its paths but you see that when you do uh the algorithm on the right you find both fewer men and fewer women than if you uh applied the the first definition of fairness so you have to be careful you cannot just think oh i think i should went this fairness definition not that one you might have to actually simulate it to see what happens because it might even be counterproductive be worse in all respects compared to another metric um so a kind of final more technical um conclusion for observation is that current methods for motion planning offer a few guarantees so basically they're they're methods that will solve your plan your problem optimally but you'll have to mix an approximation of the problem and there are methods that will try to solve exactly what you asked it to solve but there they cannot guarantee that they will do it optimally so you have to choose between optimizing the real fairness real uh fairness metrics suboptimally or optimizing something else that is not what you care about optimally so intuitively for those of you who are familiar with the a-star algorithm for example for path planning you have to use the cumulative cost which means that the cost of being in each cell along a path is equal to the in this case it would be equal to the how close the displacion of people on that area is to the distribution of people over the whole city but this would mean that uh your planet would try and avoid neighborhoods that are that have uh minority neighborhoods because they are not representative of the whole city so even though you're trying to optimally solve a fair um planning problem you're actually doing something very unfair which is to avoid minority bit neighborhoods because they're not not representative of the whole city because of the approximation that you need to use in order to apply an optimal uh planner all right so you can complain about this this example this toy example that i did that it's not realistic enough uh because it only does one path and returns and that's it uh so i've recently started exploring the the the coverage problem which is when your robot will do uh a short path covers a small area returns and then another area returns until the whole city is covered it's called the the coverage problem and i've actually observed through experiments that um if you're solving a coverage problem again trying to first find the first go to the neighborhoods that i have the most amount of people so as to find more people faster so you you'll have this kind of graph for the amount of people you find first you you find a lot of people very quickly and then it starts slowing down the amount of people you find until you find everyone but that also means that again you have similar inequality and you have an inequality peak in the beginning so in this particular example of oxford you would find 50 percent of the younger population by 200 minutes but by 200 minutes you would only have found 15 percent of the older population which is a big gap and problem and also because the older population is potentially the one you you care most about when you're doing these missions so to kind of uh also take some conclusions from here um it's important to know that fairness can be not only about who gets served by a robot but the order or the speed in which groups are served we've also seen how population greedy algorithms those algorithms that try to find as many people as possible or serve as many people as possible will be biased according to structural biases in the city or or in your domain so this this kind of bias could reinforce current inequalities and criticisms for example disaster response there are like from this recent disaster response missions in the us and thailand there have been criticisms that the disaster response efforts uh did not help enough the marginalized groups because even though they were from the start less likely to survive and if we were to deploy these kind of algorithms for finding as many people as possible we will again be um reinforcing this kind of criticism because we would not be serving marginalized groups as as quickly as as we should so to sum up uh for in order to build the fair robot motion planners uh what we have to think about so so first i made the claim that robot motion can be biased and unfair so this is both true for goal directive and coverage paths and i showed how motion can inherit spatial distribution biases of people so people they're natural naturally uh they're they're segregations due to that they're um gentrified neighborhoods in their minority neighborhoods so this then passes by passes to the the motion planner then of course i omitted from the discussion here that there are also fairness issues related to census data itself and its representations and gathering methods but i i can skip that bit from this presentation regarding design considerations for fair planners so i've shown how there's something to be pay attention to is the fairness and efficiency tradeoff and i've shown how some fairness specifications might actually be counterproductive and i've shown how there's an issue with the optimality of an algorithm and the use of approximations and to to wrap up in a similar notes as to the explainability section so uh how do we design for fairness i think we need to iterate design and anticipation and why because the set of uh personal characteristics we care about or uh what we mean by fairness the furnace specification and the the impact of a deployment all of this is not obvious from the outset so you might have to iterate so deploy or simulate the deployment of the system look at the results and then iterate and then uh another another important aspect of this methodology that i used is that it is currently hard to engage with stakeholders in the early stages of design so so someone wants to deploy a robot and and they will say yes of course we want to make sure this this deployment is fair but it's not clear what what how fairness even relates to robot motion for example right so uh these discussions uh are also difficult to ground unless we have something uh poppable like like something like a simulation so i think it's um probably as a methodology what i did here could be an interesting tool for responsible innovation where we implement prototypes and simulate deployments to anticipate issues with stakeholders and to ground discussions with stakeholders to better understand the concepts so to iterate concepts and design considerations um in the early stages of design so thanks a lot for your for your time and let me know if you have any other questions hey thanks my team for this really interesting talk i see that every guinea has his hand raised yeah thanks and thanks martin for the presentation so uh about the the fairness uh topic so so i understand uh that taking this perspective um if we assume that like a robot is introduced that it allows us to uncover what kind of unfair dynamics might happen but thinking more broadly about the ultimate purpose here so shouldn't we be taking a more systemic design perspective here of the fairness issue so what i mean by that is thinking more broadly about the infrastructure we are designing rather than focusing just on the robots decisions so for example bid build more stations in different areas of the city increasing battery capacity increasing the number of charging stations throughout the city so uh thinking more broadly about what's the socio-technical infrastructure that needs to be in place to satisfy both of the objectives that you were saying so also get to the people who are in need as quickly as possible and make sure that you do it fairly yes yes i i totally agree i don't i don't think what i've talked here is is kind of going against that thought it would be a small piece of the of the puzzle of course you you need to think about [Music] how many how many robots you can use how many two to buy as many as possible where do you launch them from and do we have the people to operate them are is this is it even is it even good to use robots in the first place uh do do disaster response teams want to those are also important questions of the for the whole uh problem so my claim here was more uh so first that there is a fairness component to uh motion and then uh of course we will we should think of the whole system so where to put launching stations how many robots etc but then even then so uh how should the robots move depending on how they move it it will still have a different impact so it will still have a different uh people will still still be served or found or or uh helped at different paces depending on the algorithm you implement so uh regardless of that we we will still need this kind of um this kind of uh algorithms and and and and thought processes going into the design of in of uh motion planners but i totally agree yes so that that there's a lot of different problems to to think about and also more social problems of how how are these then going to be used and even how was this data obtained and and do these the categories in the census even make sense so these are all questions that need to be thought of but of course it cannot be just be by myself it would have to be a full interdisciplinary uh conversation with all the stakeholders so yeah in general i really agree with what you say thanks for that thanks for clarifying that and if i can just a quick follow-up you you mentioned that you you saw that it was difficult to engage with stakeholders early on the topic of fairness to to to understand the nuance better could you please elaborate on that a bit like what what were some of the challenges uh in that um uh no so i didn't so to clarify i didn't actually do any user study with this with this fairness work uh yet what i say is more is more a general uh claim do i have no data for that that it's that uh once uh people who do not have so much knowledge about uh planner or or or something or or even or even the domain so some people want to buy a robot to deploy in a hospital for example and um and they are aware that it's important right it's in it's in the the guidelines and the the principles of ethics ethical the ethical development of aio yes of course we should make sure this is fair uh but it's actually difficult to even come up with with things you should have in mind when you okay we will put a robot in the hospital or or in the what does it even mean for the motion to be to be fair it's not clear uh what how to even start the conversation that's my issue you can not start talking about uh compromises between enforcing fairness or not or start talking about do you you cannot just use the menu of uh yes do we do we want uh egalitarianism sufficientarianism do we want prioritarianism it's not just uh it's just not it's not easy like that to to to guide the conversation you probably need examples you need visualizations you need simulations to tease out details of the definition that's that's my my claim not something that i encountered personally okay yeah no thanks for clarifying yeah yeah now because and of course the the if you actually do the empirical work then of course if you start talking to people people don't think necessarily like in our everyday conversations we don't think in these categories and we don't necessarily think in categories that align with quantitative fairness metrics that have been proposed so far in computer science literature so so that's no but that's that's kind of yeah coming back to also coming back to the first question i ask is basically the the world is like the social reality is so nuanced that i think more broadly speaking like we should be thinking about these socio-technical infrastructures where we figure out what's the appropriate real goals for the humans to play to ensure for example fairness but also expandability what's appropriate roles for machines to play and the interaction between the two yeah i agree thanks thank you thanks luciano you were first i saw yeah okay so uh thank you very much for the talk that was really great talk uh and i i really like the way you make very clear that explainability and fairness they are relevant so many choose to specific concrete context and that's very important for to also to raise awareness that is like we are not just developing robots we're not them just ai there's things that are going to use in the world have the impact there was very nice things and my question goes on the aspects on the second part on the fairness about the the metrics and the definitions so it's basically following up a little bit on what you just comment here for a vienna's question it's because yeah indeed people have like it it's not an easy thing to discuss about like what different concept of fairness and also in broader sense if you think about ethical ai it should be utilitarian should it be follow this canton approach without this or that um a way to to try to see to visualize as i said like through visualization representations and through demonstrations also as preferences so because usually when people try to say okay this is i would prefer to go in this way because or that way and then you try to engage into this deliberative process you're saying okay why do you prefer this way then that way might be something some some interesting things there so my question is um going a little bit much more to this technical side so do you envision some kind of uh learning methods to engage these for example investment enforcement learning are some approaches to try to learn what fair means for a specific group of people that might be impacted by the solution so what do you think about this i me personally personally i i'm not a big fan of learning based approaches because um because you you're never too sure what they're learning and and um and that's a kind of a personal approach i think it's still it's still perhaps easier to come up with rules that that you totally understand and then um maybe iterate on those rules but but i think related to this learning learning approach there's this kind of iterative uh planning approach that is popular in the in the planning community which is when uh so you you suggest your algorithm suggests a plan and then the user says yes but i would expect this to be i don't know fairer or or or more intuitive according to to whatever it is that i that i think should happen so the user provides some other suggestion and then the algorithm could could uh try to make make that happen and say what what are the potential issues with that so show either show from uh to similar to what i did in the explainability bits either show that there's an increase in costs or there's an increase in uh there's uh then this furnace metric would go down or this this minority group would not uh be seen anymore so and then by these back and forth then the user oh i see then why not this path and then by this conversation almost like a like a negotiation between planner and person you could then arrive at um at the path so even though it's not really the the traditional learning scheme it is still something uh similar i think in in a product yeah yeah no it does and i i just really clarify i do share your concerns with completely like open world learning approaches which you don't read you know there's lack of transparency lack of explainability there things but i also see this value of this combining things so you might have one specific definition of fairness but by learning process you can try to fine-tune this according to to interaction so yeah thanks thanks that was very nice thank you very much and also related to that learning approach there they're also we could also think of methods that try and not just uh so learn an arbitrary or an arbitrary neural network or something but come up with rules explicit rules so well then do you not mean this for fairness what about this formula for fairness would that make sense to you and uh perhaps always uh side by side with with the examples of how that would work in practice i don't know also because in in like in political philosophy and in this in these works about distributed fairness it's always a lot about counter examples right it's usually the rules sound perfect when you just look at the rules but when you look at specific examples you find counter examples and and then and i think probably in design you will have to to do it the same way yeah yeah definitely and i which are the norms relevant because they i completely imagine that this path planning that we're going to work in in oxford maybe doesn't would not work in delta and the other way around right so that's understanding what are the specific context dependent norms that's also very important nice true okay thanks a lot thanks thanks thanks i see we have five minutes left i see one more question from our caddy yes uh hey martin that was a fascinating talk thank you i have a well let let's make it two questions about the first part about the explanations and uh well the first question is basically some of the approaches that you showed uh it seems like there is an extra computational toll uh that you get on top of your normal motion planning if you need explanations so can you comment on that so yeah i would imagine that some of them are more computationally intensive some of them are kind of not as much but uh yeah in general what is how hard is this trade-off between extra computational cost and explanation right yeah so in in terms of computational cost it depends a lot on the kind of explanation you want to to make all right so um maybe i'll share the screen at the same time so for example for the for for the path planning explanations these ones uh where you find changes to the model that lead to the expected path so this we can be done quite quickly so first i had an algorithm that was slow but you can always find ways to make it fast and it takes like one second to to generate an explanation so it's quite quick for for reasonably sized maps that you would use normally but for example for motion planning for some kind of motion planning explanations like um if the reason why you fail is because um you didn't wait long enough you might actually have to wait a long time to resolve the problem for a long time and say oh in order for you to be able to say that you would need two hours to solve the problem you might actually have to try and solve it for two hours so uh for some explanations it might uh take it it might take considerably long um for path planning it's easy it's easier because since it's discrete you can do you can pre-compute explanations to a lot of different questions right from the beginning you can you can leverage that but for motion planning continuous motion planning some of them you might just have to say [Music] if you want to do it quickly you might have to skip doing some some of the potential explanations not investigate some of the potential expressions or you could say uh well if you want i could also see if this is the source of the issue but you would have to come back in in a while right yeah but that's an important point good thanks uh and then uh the second question well it's it's not a direct extension of the first one but um yeah i was wondering your general framework for the kind of explanations for the kinds of explanations that you can expect from uh robot motion uh well how how generalizable that is to the tasks where uh a robot need to interact with the human because uh yeah with the examples that you showed it's just the robot and that moves nice but uh what if uh kind of a human is involved or several humans are removed that's messy on its own even without considering explainability but if you want to add the explanations in terms of human sections that also could come in very different shapes and forms so uh how do you see this kind of generalization to human robot interaction tasks yeah thanks so i think so even if your system is messy when there are humans involved i mean there will still be rules to how this the robot should act right it is uh so if the user that of course it's not as simple but if the user is if this user is doing this then i i do this so you can always uh in this framework you would still generate explanations by saying uh by finding changes to the input basically that would lead to the expected outcome so you could say why didn't the robot do this while there were three people passing by and one operating that robot and you could try and make simulations of uh removing one of the people or putting one more person or or making simulating that the person who's controlling the robot asked for another command so of course the space of exploration is huge but i think the general framework of finding changes to the inputs of the the control system right of the or the planning system so that the the the things that were asked uh become true right so i think that's still valid of course it will be more computational intensity and then potentially more difficult to interpret so you'll have again have to think about about um about how to to summarize this or to to figure out which which are the the the relevant bits but i i think um i i think it's doable i think there's a lot to eat on i think it's a an interesting research avenue yeah thank you thank you martin yeah thanks thanks for the feedback
65d240fd-124e-42a4-9adc-3dfaabd1f2b6
trentmkelly/LessWrong-43k
LessWrong
Meteuphoric games Back to the original, titular purpose of this blog: getting excited about how things are metaphors for other things. Here are some things that are structurally similar in that they are games where you try to find structural similarities. They also share the properties of being amusing to me and not generally known, so I thought I’d share them. I don’t know if they are generally amusing—if you try one, feel free to leave a comment about whether you enjoyed it or not. 1. My thought is like… This is a game I learned as a child. It goes like this: Kate: [thinks of a thing] My thought is like – Sarah: Sam Smith Robert: the word of God Kate: actually it was a palm tree Sarah: Hmm, but Sam Smith is basically just like a palm tree. They are both tall, covered with a sparse brown fur, and appreciate dryness. Also, you’ve been thinking about them lately. Also, you are disproportionately interested in getting your hands on their nuts. Robert: But palm trees are well known to symbolize the word of God. Probably because when you are lost in a metaphorical desert, God’s word guides you to the oasis of salvation. That’s why the faithful carry them on Palm Sunday. Kate: Sarah makes the best case. Sarah: [thinks of a thing] My thought is like-   2. Crosswords where every clue points to two words that fit in the crossword Really, making them is the game that most involves finding structural similarities in things—solving them involves the opposite. For instance, what sentence could you use to describe either of these words at once? CALVE and CARVE. Maybe, ‘an action that increases the matching items of cow you have’ NET and SET. Maybe, ‘Contains things that share a certain property’ LOADED and LOANED. ‘His family could drive a Tesla because it was ——‘ Here’s one I made earlier, but it’s not great. Here’s a famous crossword that probably inspired this activity indirectly.   3. Explain everything in terms of status One player asks for an explanation of some phenom
c2f9f639-8a1e-48eb-b39b-f26eadef3bc6
trentmkelly/LessWrong-43k
LessWrong
Inaugral bump thread (12th July to 19th July) Recently I came across the Akrasia Tactics Review article, then a 'bump' thread which spread the relevant content over multiple places, making tracking the content harder. It's apparent that some people believe some Lesswrong articles may be undervalued by the community (after factoring in karma as an indication of the community's appraisal)   Bump threads crowd out new articles and may annoy the more comprehensive or more experienced readers. This article is prototype for a regular (or whenever anyone else wants to take the initiative to start one) discussion board thread where people can lobby for increased visitation to articles of their interest. Don't make the threads run for too long - 1 week as a guide. Tag them with bump_thread   Future thread starters should not suggest an article in the initial discussion post, as I have. Although, this is useful as a guide for what I have in mind.
76ceb786-a395-4e1c-b1ae-ca7e957a0d4b
StampyAI/alignment-research-dataset/special_docs
Other
Rational heuristics for one-shot games. RATIONAL HEURISTICS FOR ONE-SHOT GAMES Frederick CallawayyThomas L. Grithsz Gustav Karreskog Rehbinderx May 9, 2023 Abstract We present a theory of human behavior in one-shot interactions based on the assumption that people use heuristics that optimally trade o the expected payo and the cognitive cost. The theory predicts that people's behavior depends on their past experience; speci cally, they make choices using heuristics that performed well in previously played games. We con rm this prediction in a large, preregistered experiment. The rational heuristics model provides a strong quantitative account of participant behavior, and outperforms existing models. More broadly, our results suggest that synthesizing heuristic and optimal models is a powerful tool for understanding and predicting economic decisions. Keywords: Bounded rationality, Experiments, Cognitive cost, Strategic thinking, Game theory, One-shot games, Heuristics JEL classi cation: C72, C90, D83, D01 We thank Drew Fudenberg, Alice Hallman, Benjamin Mandl, Erik Mohlin, Isak Trygg Kupersmidt, J orgen Weibull, Peter Wikman, and seminar participants at SSE, Uppsala University, UCL, NHH, and Princeton, for helpful comments and insights. This work was supported by the Templeton Foundation, the NOMIS Foundation, the Jan Wallander and Tom Hedelius Foundation, and the Knut and Alice Wallenberg Research Foundation. yDepartment of Psychology, Princeton University, Princeton, NJ 08540; fredcallaway@princeton.edu zDepartment of Psychology, Princeton University, Princeton, NJ 08540; tomg@princeton.edu xDepartment of Economics, Uppsala University, Kyrkog ardsgatan 10, 751 20, Uppsala, Sweden; gustav.karreskog@nek.uu.se 1 Introduction A key assumption underlying classical economic theory is that people behave optimally in order to maximize their subjective expected utility (Savage, 1954). However, a large body of work in behavioral economics shows that human behavior systematically deviates from this rational benchmark in many settings (Dhami, 2016). This suggests that we can improve our understanding of economic behavior by incorporating more realistic behavioral components into our models. While many of these deviations are indeed systematic and show up in multiple studies, the estimated biases vary considerably across studies and contexts. Apparent biases change or even disappear if participants have opportunities to learn or if the details of the decision task change. For example, this is the case with the endowment e ect (Tun cel and Hammitt, 2014), loss aversion (Ert and Erev, 2013), numerosity underestimation (Izard and Dehaene, 2008), and present bias (Imai et al., 2020). In order to incorporate behavioral e ects into theories with broader applications| without having to run new experiments for each speci c setting|we need a theory that can account for this variation. That is, we need a theory that can help us understand why|and predict when|people deviate from the rational benchmark. In this paper, we propose such a theory based on the idea that people use simple decision procedures, orheuristics , that are optimized to the environment to make the best possible use of their limited cognitive resources and thereby maximize utility. This allows us to predict behavior by analyzing which heuristics perform well in which environments. This paper presents an explicit instantiation of this theory tailored to one-shot games and tests it experimentally. In situations where people play the same game multiple times against di erent opponents, and hence there is an opportunity to learn, both theoretical and experi- mental work suggests that Nash equilibria can often yield sensible long-run predictions (Fudenberg et al., 1998; Camerer, 2003). However, in experimental studies of one-shot games where players don't have experience with the particular game at hand, people seldom follow the theoretical prediction of Nash equilibrium play (see Crawford et al., 2013, for an overview). Consequently, we need an alternative theory for strategic interactions that happen only once (or infrequently). The most common theories of behavior in one-shot games in the literature assume that players perform some kind of iterated reasoning to form beliefs about the other player's action and then select the best action in response. This includes level-k (Nagel, 1995; Stahl and Wilson, 1994, 1995), cognitive hierarchy (Camerer et al., 2004), and 1 noisy introspection models (Goeree and Holt, 2004). In such models, participants are characterized by di erent levels of reasoning. Level-0 reasoners behave naively by playing a uniformly random strategy. Level-1 reasoners best respond to level-0 behavior, while higher-level reasoners best respond to behavior based on lower-level reasoning. In meta-analyses such as Crawford et al. (2013), Wright and Leyton-Brown (2017), and Fudenberg and Liang (2019), variations of these iterated reasoning models best explain human behavior. All iterated reasoning models assume the basic structure of belief formation and best responding to those beliefs. However, such a belief-formation and best-response process is often inconsistent with empirical evidence. For example, Costa-Gomes and Weizs acker (2008) found that participants who were asked to state their beliefs about how the opponent would play, often failed to play a best response to those beliefs. Moreover, eye-tracking studies have revealed that the order in which participants attend to payo s in visually presented normal-form games is inconsistent with a belief-formation and best-response process (Polonio et al., 2015; Devetag et al., 2016; Stewart et al., 2016). Furthermore, the estimated parameters of iterated reasoning models often vary considerably across di erent data sets (Wright and Leyton-Brown, 2017), behavior depends on aspects of the game that these models do not take into account (Bardsley et al., 2010; Heap et al., 2014), and there is evidence that games played previously have an e ect on behavior, which the above static models fail to capture (Mengel and Sciubba, 2014; Peysakhovich and Rand, 2016). In this paper, we present a theory of human behavior in one-shot games based on the rational use of heuristics (Lieder and Griths, 2017, 2020). That is, we assume that people use simple cognitive strategies that exibly and selectively process payo information to take good decisions with minimal cognitive e ort. Concretely, we assume that people use heuristics that maximize expected payo minus cognitive cost. Importantly, this optimization happens at the level of the environment; although people might not choose the best action in a given game, they will learn which heuristics generally work well (cf. procedural rationality in Simon, 1976). Thus, our approach combines two perspectives on human decision-making, embracing both the notion that human behavior is adaptive in a way that can be described as optimizing and the notion that people use simple strategies that are e ective for the problems they actually need to solve. The key assumption of this resource-rational analysis approach is that people use cognitive strategies that make optimal use of their limited computational resources (Lieder and Griths, 2020; Griths et al., 2015; cf. Howes et al., 2009; Lewis et al., 2014; Gershman et al., 2015). 2 It is instructive to compare resource-rational analysis with two other approaches to explaining observed deviations from perfectly rational behavior: the information- theoretic and ecological rationality approaches. Like information-theoretic approaches such as rational inattention (Mat ejka and McKay, 2015; Sims, 1998; Caplin and Dean, 2013; Hebert and Woodford, 2019; Steiner et al., 2017), the resource-rational approach assumes that the costs and bene ts of information processing are optimally traded o . However, while information-theoretic approaches typically assume domain-general cost functions (e.g., based on entropy reduction), the resource-rational approach typically makes stronger assumptions about the speci c computational processes and costs that are likely to be involved in a given domain. In this way, the resource-rational approach is more similar to the ecological rationality approach, a framework based on the idea that people use computationally frugal heuristics, which are highly e ective for the kinds of problems that people actually encounter (Gigerenzer and Todd, 1999; Goldstein and Gigerenzer, 2002; Todd and Gigerenzer, 2012). For example, if the other players in an environment are using a wide variety of decision strategies, then a heuristic that ignores the other players' payo s entirely may perform best (Spiliopoulos and Hertwig, 2020). However, while proponents of ecological rationality explicitly reject the notion of optimization under constraints (e.g., Gigerenzer and Todd, 1999, Ch. 1), optimization is at the heart of resource rationality. This makes it possible to predict when people will use one heuristic rather than another (Lieder and Griths, 2017) and even to discover novel heuristics (Lieder et al., 2017; Krueger et al., 2022). One important commonality between our approach and ecological rationality is the recognition that the quality or adaptiveness of a heuristic depends on the environment in which it is used. For example, in an environment in which most interactions are characterized by competing interests (e.g., zero-sum games), a good heuristic is to look for actions with high guaranteed payo s. On the other hand, if most interactions have common interests, a better heuristic might be to look for outcomes that would be good for everyone (cf. Spiliopoulos and Hertwig, 2020). Our theory thus predicts that people will use di erent heuristics in cooperative vs. competitive environments. To test our theory's prediction that people adapt their heuristics to the environment, we conduct a large, preregistered1behavioral experiment. In our experiment, partic- ipants play a series of normal-form games in one of two environments characterized by di erent correlations in payo s. In the common-interests environment, there is a positive correlation between the payo s of the two players over the set of strategy pro les; i.e., outcomes that are good for one player tend to be good for the other as 1https://osf.io/hcnzg 3 well. In the competing-interests environment, the payo correlation is negative; i.e., one player's loss is the other's gain, which is essentially a soft version of zero-sum games. Interspersed among these treatment-speci c games, we include four comparison games that are the same for both treatments. If the participants are using environment-adapted heuristics to make decisions, and di erent heuristics are good for common-interests and competing-interests environments, the participants should behave di erently in the comparison games since they are employing di erent heuristics. Indeed, this is what we observe. To provide further support for the claim that participant behavior is consistent with an optimal tradeo between the expected payo and the cognitive cost, we de ne two parameterized families of heuristics and cognitive costs that can make quantitative predictions about the distribution of play in each game. However, rather than identifying the parameters that best t human behavior (as is commonly done in model comparison), we instead identify the parameters that strike an optimal tradeo between expected payo s and cognitive costs, and ask how well they predict human behavior. Although we t the cost function parameters that partially de ne the resource-rational heuristic, these parameters are t jointly to data in both treatments. Strikingly, we nd that this model, which has no free parameters that vary between the treatments, achieves nearly the same out-of-sample predictive accuracy as the model with all parameters t separately to each treatment. Both the optimized and tted versions of this model predicted the modal action with an accuracy of 88%, compared to 80% for a quantal cognitive hierarchy model. In Section 2, we provide an overview of our theory and present a stylized example to illustrate how resource-rational heuristics depend on the structure of the environment. In Section 3, we present our experiment and model-free analyses, which demonstrate a strong causal link between previous experience and current behavior in one-shot games. In Section 4, we provide a more detailed description of the theory and introduce two di erent parameterized models, an interpretable, low-parameter model, and a black-box neural network model. Using these models, we demonstrate in Section 5 that the di erences in behavior can be accurately predicted out-of-sample by assuming that participants use the optimal heuristics for the respective environments. In Section 5.4, we compare our models to alternative models, including quantal cognitive hierarchy and prosocial preference models, and show that our models provide better predictions of behavior than these alternatives. 4 2 Theory Overview and Stylized Predictions The central tenet of our theory is that individuals use heuristics that maximize the expected payo minus the cognitive cost in a given environment. This can be summarized in the following equation: h= argmax hi2HEE i hi(G);hi(GT) c(hi) : (1) Here,hi2H is a heuristic, Gis a game, and hi(G) is the distribution of play produced by applying heuristic hito gameG. The optimal heuristic his the one that maximizes the expected payo i hi(G);hi(GT) minus the cognitive cost c(hi), wherehi(GT) gives the distribution of play by the opponent. The expectation is taken with respect to an environment E, which de nes a distribution over possible games G and opponent heuristics hi. A key implication of this theory is that the heuristics we expect people to use depend on the types of games and opponents they encounter frequently; that is, h idepends on E. Below, we illustrate this idea with a simple example. Consider two possible environments: one consisting entirely of coordination games (where the players want to coordinate on the same action), and one consisting entirely of constant-sum games (where the players' interests are exactly opposed). In both environments, all other players follow a heuristic where they pick the strategy with the highest average payo (level-1 in the language of level-k reasoning). Now consider what you would do as the row player when faced with the following games from each environment. 8;80;0 0;09;6 Coordination game5;42;7 3;63;6 Constant-sum game In the coordination game, the column player will select column 1because 8 is larger than 6; row 1is thus the optimal play. In the constant-sum game, the column player will select column 2because 7 + 6 >4 + 6; thus, row 2is the optimal play. Clearly, simulating the other player as we have done here will always lead to the optimal choice. However, in each case, the optimal action could also be found by a simpler, less cognitively demanding heuristic. In the coordination game, the best action is the one that produces the outcome with the highest minimum value for each player (we will later call this the \jointmax heuristic"). In the constant-sum game, the best action is 5 the one that has the highest guaranteed payo (the \maximin" heuristic). The central claim of our theory is that people will use heuristics that identify good actions with minimal cognitive cost. Critically, a \good" action is one that achieves high payo s on average across all the games a person encounters. Thus, if we take one person, \Lucy," and we put her in an environment where she repeatedly plays games like the one on the left, she will learn to use the jointmax heuristic because it usually selects the same action as simulation, but with less cognitive cost. If we put another person, \Rodney," in an environment where he repeatedly plays games like the one on the right, he will learn to use the maximin heuristic for the same reason. Now consider what actions each will select in a new game: 7;70;9 9;04;4 Prisoner's dilemma game Here, the second action strictly dominates the rst, and so it has to be the choice of a perfectly rational decision-maker. Rodney will play this action, as it is selected by the maximin heuristic, which has performed well in his previous experience. Importantly, he may choose this action without ever realizing that it dominates the other. By contrast, Lucy will be likely to play the rst, \incorrect" action, as it is selected by the jointmax. She makes this mistake because identifying the outcome that is best for both players is easy, and it has worked well for her before. Although she might have fared better on this speci c game if she had simulated the possible outcomes of each action, the cognitive cost of such an approach would not be justi ed by the relatively small increase in payo across the full set of games she has played. To summarize, the principled but costly approach of simulating the other player in order to select one's own action can sometimes be approximated by simpler heuristic strategies. When this approximation is suciently accurate, a resource-rational agent will use the heuristic to avoid the mental e ort of simulation. But if we present the unwitting agent with a new game that lacks the structure the heuristic was taking advantage of, the agent will make predictable errors. This is the key intuition underlying our behavioral experiment. 6 3 Experiment Our overarching hypothesis is that individuals choose actions in one-shot games using heuristics that optimally trade o between the expected payo and the cognitive cost. Critically, as discussed above, this optimization occurs with respect to an environment rather than a single game. This results in a central prediction: the action a player takes in a given game will depend not only on the nature of that particular game but also on the other games she has previously played. From this central prediction, we derived four hypotheses, which we tested in a large, preregistered online experiment. 3.1 Methods We recruited 600 participants on Amazon Mechanical Turk using the oTree platform (Chen et al., 2016). Each participant was assigned to one of 20 populations of 30 participants. They then played 50 di erent one-shot normal-form games, with each participant randomly matched to another player from their population after each game.2 Each population was assigned to one of two experimental treatments, which deter- mined the distribution of games played. Speci cally, we manipulated the correlation between the row and column players' payo s in each cell (cf. Spiliopoulos and Hertwig, 2020). In the common-interests treatment, the payo s were positively correlated, such that a cell with a high payo for one player was likely to have a high payo for the other player as well. By contrast, in the competing-interests treatment, the payo s were negatively correlated, such that a cell with a high payo for one player was likely to have a low payo for the other player. Concretely, the payo s in each cell were sampled from a bivariate Normal distribution truncated to the range [0 ;9] and discretized such that all payo s were single-digit nonnegative integers.3Examples of each type of treatment game are shown in Tables 1 and 2. 5;66;45;3 9;45;56;7 2;00;16;43;45;59;7 4;25;75;7 2;42;12;39;75;97;8 6;79;94;6 6;43;16;2 Table 1: Three games from the common-interests treatment. 2To facilitate running the experiment online, we used an asynchronous scheme in which participants could play \against" an opponent who had played the game earlier. Participants were informed of this; see Figure 8 in Appendix A. 3The normal distribution is given by N((5;5);) with  = 51 1 , where= 0:9 in the common-interests treatment and =0:9 in the competing-interests treatment. 7 5;56;25;3 5;31;88;4 3;67;44;62;44;44;6 1;72;69;1 7;14;88;64;51;57;1 2;78;55;7 2;68;33;9 Table 2: Three games from the competing-interests treatment. For each population, we sampled 46 treatment games, with each participant playing every game once. The remaining four games were comparison games , i.e., treatment- independent games that we used to compare di erences in the participants' behavior between the two treatments. The comparison games were played in periods 31, 38, 42, and 49. We located these comparison games later in the experiment so that the participants would have time to adjust to the treatment environment rst, while leaving intervals between them to minimize the chance that participants would notice that these games were di erent from the others they had played. 3.1.1 The Comparison Games We selected four comparison games that we expected to elicit dramatically di erent distributions of play in the two treatments. In these games, there is a tension between choosing a row with an ecient outcome or choosing a row with a high guaranteed payo . For two of the games, the ecient outcome was also a Nash equilibrium (NE), and for the other two games, the ecient outcome was not a NE. 8;82;60;5 6;26;62;5 5;05;25;5 Comparison game 18;82;91;0 9;23;31;1 0;11;11;1 Comparison game 2 4;44;65;0 6;43;35;1 0;51;59;9 Comparison game 34;49;11;3 1;98;81;8 3;18;13;3 Comparison game 4 Table 3: The four comparison games. The rst game is a weak-link game, where all the diagonal strategy pro les are Nash equilibria, but each has a di erent eciency. The most ecient NE yields the payo s (8,8), but it is also possible to get 0. The least ecient equilibrium yields the 8 payo s (5,5), but 5 is also the guaranteed payo . The equilibrium (6,6) is in between the aforementioned payo s in terms of both risk and eciency. The third row has the highest average payo and is the best response to itself, and so any standard recursive reasoning model would predict (5,5) being the outcome. The second comparison game is a normal prisoner's dilemma game, with an added dominated and inecient strategy. In this game, strategy 2dominates the other strategies. However, we still expect strategy 1to be played more often in the common- interests treatment since, overall, it is a good heuristic to look for ecient outcomes in that environment. The third comparison game is a game with two NE, where one is the pure NE with both players playing strategy 3, and the other is a mixed NE involving 1and2. This game is constructed so that the row averages are much higher for strategies 1and2 than for 3, meaning that any level-k heuristic would result in strategy 1or2being played, while the NE yielding (9 ;9) is much more ecient. Thus, there is a strong tension between the ecient payo and the guaranteed payo . In the fourth comparison game, the risky ecient outcome (8 ;8) is not a NE. A standard level-k player of any level higher than 0 would play strategy 3. 3.2 Model-free Results We organize our results based on four preregistered hypotheses. The rst two are model-free and concern behavior in the comparison games; they are presented here. The next two are model-based and concern behavior in the treatment games; these will be presented later. Our rst hypothesis is that the treatment environment has an e ect on behavior in the comparison games. Hypothesis 1. The distribution of play in the four comparison games will di er between the two treatments. This hypothesis follows from the assumption that people learn to use heuristics that are adaptive within their treatment and that di erent heuristics are adaptive across the two treatments. Figure 1 visually con rms this prediction, and Table 4 con rms that these di erences are statistically signi cant ( 2-tests, as preregistered). Inspecting Figure 1, we see that the distribution of play is not just di erent between the two groups; it is di erent in a systematic way. In particular, players in the common- interests treatment tend to coordinate on the ecient outcome, even in games 2 and 4, where the ecient outcome is not a Nash equilibrium. We expected this divergence 9 8, 8 6, 2 5, 0 2, 6 6, 6 5, 2 0, 5 2, 5 5, 5Common Interests 8, 8 6, 2 5, 0 2, 6 6, 6 5, 2 0, 5 2, 5 5, 5Competing Interests 8, 8 9, 2 0, 1 2, 9 3, 3 1, 1 1, 0 1, 1 1, 1 8, 8 9, 2 0, 1 2, 9 3, 3 1, 1 1, 0 1, 1 1, 1 4, 4 6, 4 0, 5 4, 6 3, 3 1, 5 5, 0 5, 1 9, 9 4, 4 6, 4 0, 5 4, 6 3, 3 1, 5 5, 0 5, 1 9, 9 4, 4 1, 9 3, 1 9, 1 8, 8 8, 1 1, 3 1, 8 3, 3 4, 4 1, 9 3, 1 9, 1 8, 8 8, 1 1, 3 1, 8 3, 3 Human 8, 8 6, 2 5, 0 2, 6 6, 6 5, 2 0, 5 2, 5 5, 5Common Interests 8, 8 6, 2 5, 0 2, 6 6, 6 5, 2 0, 5 2, 5 5, 5Competing Interests 8, 8 9, 2 0, 1 2, 9 3, 3 1, 1 1, 0 1, 1 1, 1 8, 8 9, 2 0, 1 2, 9 3, 3 1, 1 1, 0 1, 1 1, 1 4, 4 6, 4 0, 5 4, 6 3, 3 1, 5 5, 0 5, 1 9, 9 4, 4 6, 4 0, 5 4, 6 3, 3 1, 5 5, 0 5, 1 9, 9 4, 4 1, 9 3, 1 9, 1 8, 8 8, 1 1, 3 1, 8 3, 3 4, 4 1, 9 3, 1 9, 1 8, 8 8, 1 1, 3 1, 8 3, 3 ModelFigure 1: Distribution of play in the four comparison games. Each panel shows the joint and marginal distributions of row/column plays in a single game. The cells show the players' payo s for the given outcome. The two columns to the left show the actual behavior in the two environments, while the two columns to the right show the predictions of the rational (optimized) metaheuristics. 10 Frequencies 1 2 3 2p-value Comparison Game 1 98.39p<: 001 Common interests 193 53 54 Competeting interests 75 82 143 Comparison Game 2 22.08p<: 001 Common interests 160 139 1 Competeting interests 103 195 2 Comparison Game 3 61.75p<: 001 Common interests 40 73 187 Competeting interests 106 97 97 Comparison Game 4 91.36p<: 001 Common interests 78 173 49 Competeting interests 115 62 123 Table 4:2tests for each comparison game. The results are signi cant at the preregis- tered 0.05 level. in behavior when we constructed the comparison games, which motivates our second hypothesis. Hypothesis 2. The average payo in the four comparison games will be higher in the common-interests treatment than in the competing-interests treatment. The comparison games were designed to create tension between eciency and risk, with one outcome having a high payo for both players, but requiring each player to take an action that could yield a very low payo . We expected that the common-interests players would be more likely to coordinate on the ecient outcome, and therefore receive higher payo s. The model makes this prediction because identifying mutually bene cial outcomes is typically an e ective heuristic in common-interests games, while identifying high guaranteed payo s is an e ective heuristic in competing-interests games. Table 5 con rms this prediction. The common-interests players had a higher average payo in all four comparison games, and the di erence is statistically signi cant in each case (at the preregistered level of p<: 05). 4 Theoretical Framework We have seen that di erent environments lead to drastically di erent behavior in the comparison games. While these results are suggestive of rational adaptation, they do not directly imply that the participants are using heuristics in an optimal way. One way to strengthen this claim is to show that one can accurately predict human behavior 11 Treatment average payo Common interests Competing interests t-value p-value Comparison game 1 5.09 3.64 6.851 p<: 001 Comparison game 2 5.52 4.04 6.28 p<: 001 Comparison game 3 5.00 4.31 2.86 p= 0:004 Comparison game 4 5.19 3.42 7.21 p<: 001 Table 5: Two-sided t-tests for the di erence in the average payo between the two treatments in the comparison games. by assuming optimal use of heuristics. In order to do this, we need to specify concrete parameterizations of the space of heuristics Hand the cognitive costs. But rst, we provide a more detailed description of the general theory. 4.1 General Model We consider a setting where individuals in a population are repeatedly randomly matched with another individual to play a nite normal-form game. We assume they use some heuristic to decide what strategy to play. LetG=hf1;2g;S1S2;ibe a two-player normal-form game with pure strategy setsSi=f1;:::;migfori2f1;2g, wheremi2N. A mixed strategy for player iis denotedi2(Si). The material payo for playerifrom playing pure strategy si2Si when the other player iplays strategy si2Siis denotedi(si;si). We extend the material payo function to the expected material payo from playing a mixed strategy i2(Si) against the mixed strategy i2(Si) withi(i;i), in the usual way. A heuristic is a function that maps a game to a mixed strategy hi(G)2(Si):For simplicity, we will always consider the games from the perspective of the row player, and consider the transposed game GT=hf2;1g;S2S1;(2;1)iwhen talking about the column player's behavior. Each heuristic has an associated cognitive cost4,c(h)2R+. Simple heuristics, such as playing the uniformly random mixed strategy, have low cognitive costs, while complicated heuristics involving many precise computations have high cognitive costs. Since a heuristic returns a mixed strategy, the expected material payo for player i using heuristic hiwhen playeriuses heuristic hiis i hi(G);hi(GT) : 4In general, the cognitive cost could depend on both the heuristic and the game. For example, it might be more costly to apply a heuristic to a 5 5 game than to a 2 2 game. But since all our games are 33, we can dispense with that dependency. 12 Since each heuristic has an associated cognitive cost, the actual expected utility derived from it is ui(hi;hi;G) =i hi(G);hi(GT) c(hi): A heuristic is neither good nor bad in isolation; its performance has to be evaluated with regard to some environment, in particular, with regard to the games and other- player behavior one is likely to encounter. Let Gbe the set of possible games in the environment,Hbe the set of heuristics the other player could use, and Pbe the joint probability distribution over G;H. In the equations below, we will assume that Gand Hare countable. An environment is given by E= (P;G;H). Thus, an environment describes which game and other-player heuristic combinations a player is likely to face. Given an environment, we can calculate the expected performance of a heuristic as follows: V(hi;E) =EE[ui(hi;hi;G)] =X G2GX hi2Hui(hi;hi;G)P(G;hi): (2) We can also calculate the expected performance of a heuristic conditional on the speci c game being played as follows: V(hi;E;G) =EEjG[ui(hi;hi;G)] =X hi2Hui(hi;hi;G)P(hijG): We can now formally de ne what it means for a heuristic to be rational (or optimal). A rational heuristic his a heuristic that optimizes (2), i.e., h= argmax hi2HV(hi;E); (3) or, in slightly expanded form, h= argmax hi2HEE i hi(G);hi(GT) c(hi) : (4) That is, a rational heuristic chooses actions that yield high rewards for the games and opponents one tends to encounter, while not being costly to evaluate; more speci cally, a rational heuristic achieves the best tradeo between these two (typically, but not always, competing) desiderata. We here also see that by varying the environment, E, we can vary which heuristics are optimal. In our experiment, we will manipulate the distribution over games, thereby varying the predictions we get by assuming rational 13 heuristics. One natural critique of this approach is that the problem of selecting an optimal heuristic is actually much more complex than the problem of selecting an optimal action. Critically, however, while the optimality of an action is de ned with respect to a single game, the optimality of a heuristic is de ned with respect to an environment. Thus, it is possible to for a player learn an optimal heuristic (but not an optimal action) even if she has limited experience with the speci c game being played. In Appendix E, we show that a simple learning model can reproduce the performance of the optimizing metaheuristic model. 4.2 Speci c Parameterizations We consider two parameterizations of Handc. Importantly, we don't claim that either parameterization perfectly matches the actual spaces of heuristics or cognitive costs faced by human beings. Instead, they are constructed to be rich enough, and close enough to actual cognitive costs and heuristics, to be able to capture the essence of behavior. As we will see, the optimal heuristics found with both these parameterizations give accurate predictions. The rst parameterization we call metaheuristics . It consists of three primitive heuristics that together with a selection rule create the metaheuristic. The primitive heuristics and selection rule are chosen based on existing models and descriptive evidence on choice processes. This parameterization is intuitive and interpretable, but its design involves many somewhat arbitrary researcher decisions. The second parameterization, deep heuristics , makes much weaker assumptions about the space of heuristics. This parameterization is based on the neural network architecture for normal-form games proposed by Hartford et al. (2016). It captures a much larger space of possible heuristics and thus removes the researcher degrees of freedom that are a concern for the rst parameterization (e.g., the choice of primitive heuristics), at the cost of losing interpretability and some control over the cognitive cost. 4.2.1 Metaheuristics To build a formal model of heuristics for one-shot games, we begin by specifying a few general types of reasoning that such heuristics might employ: row-based reasoning, cell- based reasoning, and simulation-based reasoning. For each of these types, we specify a precise functional form with a small number of continuous parameters and an associated 14 cognitive cost function. The cognitive cost of a heuristic is a function of its parameters, and the cost function is itself parameterized. Finally, we consider a higher-order heuristic, which we call a metaheuristic , that selects among the candidate rst-order heuristics based on their expected values for the current game. We emphasize that we do not claim that this speci c family captures all the heuristics people might employ in a game. However, we hypothesized|and our results con rm|that this family is expressive enough to illustrate the general theory's predictions and provide a strong quantitative account of human behavior. Since this speci c parameterization (metaheuristics) is not the main focus of the paper, the details can be found in Appendix B. Row Heuristics Arow heuristic calculates a value, v(si), for each pure strategy, si2Si, based only on the player's own payo s associated with si. Formally, a row heuristic is de ned by the row-value function vsuch that v(si) =f(i(si;1);:::;i(si;mi))) for some function f:Rmi!R. The speci c parameterization of fwe consider goes from min to max, passing by the mean, via a single parameter such that v (si) =X si2Sii(si;si)exp [ i(si;si)]P s2Siexp [ i(si;s)]: For example, = 0 implies that fis the mean function, corresponding to a level-1 strategy. !1 corresponds to the min function, and yields the maximin strategy. More generally, the row heuristic captures a weighted mean of each row's payo s that might overweight either good or bad outcomes. Once the function vis speci ed, we assume that the computation of vis subject to noise but that this noise can be reduced through cognitive e ort, which we operationalize as a single scalar '. In particular, following Stahl and Wilson (1994), we assume that the noise is Gumbel-distributed and thus recover a multinomial logit model with the probability that player iplays strategy sibeing hsi row(G) =exp ['v(si)]P k2Siexp ['v(k)]: Naturally, the cost of a row heuristic is a function of cognitive e ort. Speci cally, 15 we assume that the cost is proportional to e ort, c(hrow) ='Crow; whereCrow>0 is a free parameter of the cost function. Cell Heuristics An individual might not necessarily consider all aspects connected to a strategy but nd a good \cell," meaning a payo pair ( i(si;si);i(si;si)). In particular, previous research has proposed that people sometimes adopt a team view , where each player looks for outcomes that are good for both players, and chooses actions under the (perhaps implicit) assumption that the other player will try to achieve this mutually bene cial outcome as well (Sugden, 2003; Bacharach, 2006). Alternatively, people may engage in virtual bargaining , where each player selects the outcome that would be agreed upon if she could negotiate with the other player (Misyak and Chater, 2014). Importantly, these approaches share the assumption that people reason directly about outcomes (rather than actions) and that there is some amount of assumed cooperation. We refer to heuristics that reason directly about outcomes as cell heuristics . Based on preliminary analyses, we identi ed one speci c form of cell heuristic that participants appear to use frequently: the jointmax heuristic, which identi es the outcome that is most desirable for both players. Formally, the joint desirability of a cell is given by vjointmax(si;si) = minfi(si;si);i(si;si)g and the probability of playing a given strategy, with cognitive e ort 'is given by hsi jointmax (G) =X si2Siexp 'vjointmax(si;si) P (ki;ki)2SiSiexp ['vjointmax (ki;ki)]: This can be interpreted as applying a softmax to all possible outcomes and taking the probability of each strategy to be the sum of the probabilities in the corresponding row. Cognitive cost is again proportional to e ort, and so c(hcell) ='Ccell; whereCcell>0 is a free parameter of the cost function. 16 Simulation Heuristics: Higher-level Reasoning Most previous behavioral mod- els of initial play have a basic structure of belief formation and best response. Such models assume that people rst form a belief about which strategy the other player will choose and then select the strategy with the maximal expected value given that belief. In general, e ective heuristics do not necessarily have this form; indeed, for many parameter values, the row and cell heuristics described earlier might not be compatible with any beliefs. However, explicitly forming beliefs and calculating the best responses (following a simulation heuristic ) may be a good decision-making strategy in some situations. If a row player uses a simulation heuristic, she rst considers the game from the column player's perspective, applying some heuristic (a row, cell, or simulation heuristic) that generates a distribution of likely play. She then plays a noisy best response to that distribution. The cognitive cost of a simulation heuristic is a combination of the cognitive cost of the heuristic for the column player, a constant cost for updating the payo matrix using that belief ( Cmul), and a cost that is proportional to the cognitive e ort parameter in the last step, as for a row heuristic, c(hsim) =c(hcol) +Cmul+Crow': Notice that once the beliefs have been formed the last cost for taking a decision is based onCrowsince this process is the same as averaging over the rows as with a row heuristic. Selection Rule We don't expect a person to use the same heuristic in all games. Instead, they may have a set of heuristics, and choose which one to use in each situation based on an estimate of the candidate heuristics' expected values. We model this selection process as a higher-order selection rule that selects among the rst-order heuristics described above. This selection rule allows the decision-maker to select from a few di erent primitive heuristics, and hence the term \metaheuristic." Rather than explicitly modeling the process by which players select among the candidate heuristics, for example, by using the approach in Lieder and Griths (2015), we use a reduced-form model based on the rational inattention model of Mat ejka and McKay (2015). We make this simplifying assumption since it allows us to focus on the central parts of our theory. This functional form captures the three key properties a metaheuristic should have: (1) there is a prior weight on each primitive heuristic, (2) a 17 primitive heuristic will be used more on games in which it is likely to perform well, and (3) this adjustment from the prior based on expected value is incomplete and costly. See Equation 6 in the Appendix for details. 4.2.2 Deep Heuristics A drawback of using explicitly formulated heuristics, as above, is that the results depend on somewhat arbitrary decisions made by the researchers (in particular, the set of primitive heuristics). To minimize the risk of our conclusions being driven by such decisions, we also consider a nonparametric family of heuristics implemented with neural networks. While not as interpretable as the metaheuristics, this new class includes a much larger set of possible heuristics. ...... ...... ...... ...ARR(0)UR ARR(1) ARR(2) ARC(0) UCARC(1)Output F1 F2 FnHR 1;1 HR 1;k1 HC 1;1 HC 1;k1HR 2;1 HR 2;k2 HC 2;1 HC 2;k2SR 1 SR k2 SC 1 SC k2Input layerHidden layersSoftmax layerAction Response layers Figure 2: Architecture of the deep heuristic. We use a neural network architecture similar to the one developed by Hartford et al. (2016), with some adjustments to allow for modeling cognitive costs. The architecture, illustrated in Figure 13, has two key properties speci cally adapted to nite normal- form games. First, the connectivity structure ensures that predictions are invariant to relabeling of the strategies, thereby vastly reducing the size of the parameter space (playing a similar role to convolution in computer vision). Second, the architecture explicitly separates recursive reasoning (e.g., level-k) and direct reasoning about the 18 payo matrix. This allows us to capture belief formation and best response, as well as simpler heuristics like our row and cell heuristics. Furthermore, we can assign a di erent cognitive cost to each type of reasoning. A detailed description of the architecture is given in Appendix C. 5 Model-based Analysis Having formally speci ed our theoretical framework, we can now take the models to our experimental data. Speci cally, we ask whether the behavioral di erences found in our experiment are consistent with the rational use of heuristics. To do this, we rst compare the predictive accuracy and payo s achieved by models that are either payo -optimized or t directly with the data; in particular, we con rm two preregistered hypotheses generated by the general theory. Next, we compare our model to previously proposed models, and demonstrate that our model yields more accurate predictions. 5.1 Model Estimation We take an out-of-sample prediction approach to model comparison. Each data set is divided into a training set on which model parameters are estimated and a test set on which predictive performance is evaluated. We used the rst 30 treatment games from each population as the training set and the remaining 16 treatment games as the test set. We chose this split so that we could test the predictions on the later games when people would be most likely to be using a consistent decision strategy. We consider each game to consist of two observations: the empirical distribution of play for each player role (row and column). The games are sampled separately for each population but are the same within a population, and we have 10 populations for each treatment. For each treatment, we thus have 600 observations in the training set and 320 observations in the test set. This separation was preregistered and can thus be considered a \true" out-of-sample prediction. We de ne separate environments for the two treatments using the actual games and empirical distributions of play in all populations of the corresponding treatment. We thus de ne the common-interests environment, E+, by lettingG+be all the treatment games played in the common-interests treatment, and letting the opponent's behavior, h+(G), be the actual distribution of play in G. Lastly,Pis a uniform distribution over all games inG+and always returns h+as the heuristic for the opponent. We de ne the competing-interests environment Ecorrespondingly. Lastly, we divide the games into 19 the training games, i.e., G+ train, and test gamesG+ test. The measure of the t we use is the average negative log-likelihood (or, equivalently, the cross-entropy), where a lower value means a better t. If pis the observed distribution of play for the row or column player role in some game, and qis the predicted distribution of play from some model, the negative log-likelihood (NLL) is de ned as NLL(q;p) =X spslog(qs): We de ne the total NLL of a metaheuristic mwith cognitive costs Cevaluated on the common-interests training set E+ trainas NLL(m;E+ train;C) =X G2G+ trainNLL(m(G;h+;C);h+(G)); and analogously for the three remaining training and test sets. We write m(G;h+;C) since the actual prediction of the metaheuristic min a given game depends on the performance of the di erent primitive heuristics, which in turn depend on the opponent's behavior,h+, and the cognitive costs, C, as given by Equation (6). The behavior of the metaheuristic model depends on three factors: the consideration set of possible primitive heuristics, the cognitive cost of those heuristics, and the prior distribution for the selection rule. We assume that the consideration set includes one of each type of primitive heuristic: a cell heuristic, a row heuristic, and a simulation heuristic. The model thus has twelve free parameters: six that specify the behavior of the primitive heuristics, four for the cognitive costs, and two for the selection rule's prior. The cognitive cost parameters are xed from the decision-maker's perspective, re ecting constraints imposed by the decision-maker's cognitive abilities. We thus t the cost parameters to data. By contrast, the parameters of the heuristics and the selection rule prior are under the decision-maker's control. We consider two methods for estimating the parameters of the heuristics: tting them to the data, or optimizing them such that they maximize expected utility. The latter method instantiates our theory that people use heuristics in a resource-rational way. For a given set of cognitive cost parameters C= (Crow;Ccell;Cmul;), the tted common-interests metaheuristic is given by m t(E+ train;C) = argmin m2MNLL(m;E+ train;C); whereMis the space of metaheuristics we restrict our analysis to. The tted parameters 20 thus capture the heuristics that empirically best explain human behavior. The optimal common-interests metaheuristic, for cognitive cost C, is instead given by mopt(E+ train;C) = argmax m2MV(m;E+ train;C) = argmax m2MX G2G+ trainu(m;h+;G;C ) whereu(m;h+;G;C ) is the expected utility from employing metaheuristic magainst behaviorh+in gameGwith cognitive cost parameters C. The optimized parameters thus identify the heuristics that objectively achieve the best cost-bene t tradeo , given the tted cost parameters. The tted and optimal metaheuristics for the competing- interests environment are de ned analogously. Having de ned the tted and optimal heuristics with cognitive costs C, we now turn to the question of how to estimate the cognitive costs. Since the participants are drawn from the same distribution and are randomly assigned to the two treatments, we assume that the cognitive costs are always the same for both treatments. To estimate the costs, we nd the costs that minimize the average NLL of the optimized or tted heuristics on the training data. Therefore C t= argmin C2R4 +NLL(m t(E+ train;C);E+ train;C) + NLL(m t(E train;C);E train;C); and Copt= argmin C2R4 +NLL(mopt(E+ train;C);E+ train;C) + NLL(mopt(E train;C);E train;C): Notice the crucial di erence between the tted and optimized metaheuristics. For the tted metaheuristics, we t both the cognitive cost parameters and the heuristic parame- ters to match actual behavior in the two training sets. For the optimized metaheuristics, we t only the cognitive cost parameters; the heuristic parameters are set to maximize the payo minus the cognitive cost. As a result, any di erence between the optimal common-interests metaheuristic and the optimal competing-interests metaheuristic is entirely driven by di erences in performance between the di erent heuristics in the two environments. 5.2 Results for Metaheuristics Next, we consider our two model-based hypotheses regarding the metaheuristic model's ability to capture the di erence in the participants' behavior between the two treatments. 21 Hypothesis 3. Participants' behavior will di er between the two treatments in a way that the model can capture. Speci cally, their behavior in the common-interests test games should be better predicted by the common-interests metaheuristic than by the competing-interests metaheuristics. Conversely, their behavior in the competing-interests test games should be better predicted by the competing-interests metaheuristics than by the common-interests metaheuristic. This should hold both for the tted metaheuristics and for the optimized metaheuristics. Concretely, this hypothesis states that the following four inequalities should hold: NLL(m t(E train);E test)<NLL(m t(E+ train);E test) NLL(mopt(E train);E test)<NLL(mopt(E+ train);E test) NLL(m t(E train);E+ test)>NLL(m t(E+ train);E+ test) NLL(mopt(E train);E+ test)>NLL(mopt(E+ train);E+ test); where the notation for C tandCoptis omitted for brevity. In order to facilitate comparisons between treatments and between games, we use \relative prediction loss", that is, the di erence in NLL between the model's predictions and the theoretical minimum NLL. Let ybe the observed empirical distribution of play in some game G. Then the lowest possible NLL in that game is NLL (y;y). The relative prediction loss for model min gameGis thus given by5 NLL (m;G;C )NLL (y;y): We compute con dence intervals of the relative prediction loss over all the games in the test set. Since we consider each game separately for the two di erent player roles, there are 320 observations per test set. Figure 3 shows the relative prediction loss on the test data in each treatment achieved by each possible method of tting the model. We clearly see that the models that were trained on data from the same treatment as the test set outperform models trained on the other treatment. This con rms Hypothesis 3. An even more striking result is that the optimized metaheuristics achieve nearly the same predictive performance as the tted metaheuristics. That is, a model that uses the 5The resulting measure of performance is related to the completeness measure of Fudenberg et al. (2022). However, since we have only fteen participants per game and role, and there is randomness in behavior, even the perfect model would not be able to get the exact distribution of play right. Therefore, the theoretical minimum is truly theoretical. 22 Figure 3: Predictive performance of the metaheuristics. Each panel shows the relative prediction loss (average negative log-likelihood minus lowest possible value) of the test data for one treatment (competing interests or common interests). Models are tted or optimized to either the competing-interests training games or the common- interests training games. The error bars show 95% con dence intervals. The dashed line corresponds to uniform random play, which assigns the same probability to each action in each game. same set of cognitive cost parameters in both treatments (with the heuristic parameters set to optimize the resultant expected payo -cognitive cost tradeo ) explains participant data almost as well as the fully parameterized model, in which the heuristic parameters are separately tted to each treatment. Not only do we con rm our hypothesis and show that the rational heuristic is a strong predictor, but we also see that we capture most of the distance between the uniform random play and the theoretical minimum NLL. Table 6 in the Appendix shows the accuracy and average NLL for all models we consider in the paper. There, we see that the average accuracy of the optimal metaheuristic is 88%, meaning that in 88% of the games, the modal action is assigned the highest probability. It should also be noted that in the games where the optimal metaheuristic makes an incorrect prediction, the modal action is on average only played by 54% of the participants, while the modal action was played by 75% of the participants in all of the test games. Therefore, in the games where the proposed model fails to assign the highest probability to the modal action, play is quite even and hence dicult to predict. Our nal model-based hypothesis provides an additional test that the metaheuristics 23 Figure 4: Payo performance of the metaheuristics. Each panel shows the regret (best possible expected payo minus true expected payo ) attained by models that are trained and tested in di erent combinations of common-interests and competing- interests environments. The dashed line shows the performance of uniform random play. that participants use are adapted to their treatment environment: Hypothesis 4. The tted heuristics estimated for a given treatment should achieve higher expected payo s on the test games for that treatment, as compared to the heuristics estimated for the other treatment. The logic of this hypothesis is that even if we do not assume that participants use optimal heuristics, we should still see that the heuristics that best describe participants' behavior in each treatment achieve higher payo s in that treatment. As with prediction loss, we use a relative performance measure that accounts for di erences in maximal payo in the two treatments. Speci cally, we quantify performance in terms of regret, the di erence between the expected payo given the predicted behavior and the maximum expected payo in each game. As illustrated in Figure 4, the results con rm our hypothesis. When testing on games from either treatment, the models tted to human behavior in the same treatment achieved lower regret than those tted to the other treatment, although the di erence is larger for the common-interests games. In Appendix D.2 we present results from pairwise tests of both Hypotheses 3 and 4. We see there that all the di erences in both relative prediction loss and regret are 24 signi cant at the 0.01 level.6 5.3 Results for Deep Heuristics By applying the same estimation method to the deep heuristics as we did to the metaheuristics, we can test whether Hypotheses 3 and 4 also hold for a completely di erent speci cation of the space of heuristics and cognitive costs. In Figure 5, we see that Hypothesis 3 holds for this speci cation as well: the models make more accurate predictions for the treatment on which they were trained or optimized. We also see that the predictive performance of the optimal heuristic is close to the tted heuristic, given optimized cognitive costs. Figure 5: Predictive performance of the deep heuristics. We can also test Hypothesis 4 in the same way by looking at the expected payo from the two di erent deep heuristics tted to the behavior of the participants in the two di erent treatments. As before, we see that the tted models achieved lower regret in the treatment on which they were trained, again suggesting that the heuristics people use are well adapted to their environment. 6In the preregistration, we did not specify a formal testing procedure for these di erences and did not originally include such a test in the paper. However, after discussions and presentations, it became clear that such tests are sought after by readers, and we have therefore added them. 25 Figure 6: Payo performance of the deep heuristics. 5.4 Model Comparison In the previous sections, we have shown that the rational use of heuristics can explain and predict people's behavior in one-shot games, in particular how their behavior depends on the previous games they have played. Next, we ask whether our proposed theory provides a more accurate account of that behavior when compared to previously proposed models. Speci cally, we consider three alternative models: quantal cognitive hierarchy (QCH), QCH with prosocial preferences, and noisy best-response to the true distribution of play with prosocial preferences. Quantal Cognitive Hierarchy. In previous comparisons between behavioral models of one-shot games, variations of cognitive hierarchy models are usually the best performing (Camerer et al., 2004; Wright and Leyton-Brown, 2017). In such a model, we consider agents of di erent cognitive levels. In the quantal cognitive hierarchy (QCH) model we consider here, a level-0 agent plays the uniformly random strategy, playing each action with an equal probability. A level-1 player (logit) best responds to a level-0. Finally, a level-2 player best responds to a combination of level-0 and level-1 players.7 The model has 4 parameters: the share of level-0 and level-1 players (which together determine the share of level-2 players), the sensitivity 1of level-1 players, and the sensitivity2of level-2 players. Prosocial Preferences We have attributed the di erence in the participants' behavior between the two 7We found that adding higher levels of play did not improve predictive performance. 26 treatments to their learning di erent heuristics. However, this pattern of behavior could be explained by a change not in their decision-making strategy but in their underlying preferences. In particular, participants in the common-interests environment may develop a sense of camaraderie that makes them care about the other players' payo s, while participants in the competing-interests environment may become jaded or even spiteful, leading them to disregard the others' payo s. To test this alternative explanation, we augmented the QCH model with a prosocial utility function (Fehr and Schmidt, 1999; Bruhin et al., 2019), i.e., ui(si;si) = (1 s r)i(si;si) + ( s+ r)i(si;si); (5) wheresindicates whether i(si;si)<i(si;si) andrindicates whether i(si;si)> i(si;si). In other words, determines how much player ivalues the payo of player iwhenigains less thani, and how much player ivalues the payo of player i whenigains more thani. This augmentation thus adds two parameters to the QCH model, and . In this model, beliefs are formed using a standard QCH model, but the payo s are changed according to the prosocial preferences model (Equation 5) before the last quantal best-response step.8This model can account for di erences in behavior between the two treatments both by assuming di erent levels of prosociality and by assuming di erent levels of reasoning or sensitivity in the QCH step. Di ering Beliefs A second possible source of di ering behavior across treatments is di ering beliefs. Since people behave di erently in the two treatments, participants may form di erent beliefs about what they expect the other player to do. In particular, participants in the common-interests treatment may expect the other player to cooperate by selecting an action with a jointly bene cial outcome, while participants in the competing-interests treatment may expect the other player to select the safest action for themselves. To test this account, we replace the recursively formed beliefs of QCH with the correct (empirical) belief. This model thus plays a noisy best response to the actual distribution of participants' play. In this model, we additionally allow for prosocial preferences, resulting in a three-parameter model. Results. In Figure 7, we compare the out-of-sample predictive performance of these two alternative models and our two suggested speci cations for the space of heuristics. While the alternative models are estimated by tting the parameters to match the 8We also considered another model combining QCH and prosocial preferences, in which the player also has some beliefs about the other player's prosociality that informs the beliefs formed during the QCH steps. This didn't make a meaningful di erence in t. 27 Figure 7: Out-of-sample relative prediction loss for alternative models of behavior. All the models are estimated on the training games of the same environment as the test games. The error bars show a 95% con dence interval. Legend: QCH = quantal cognitive hierarchy, Pro = prosocial preferences, EB = empirical beliefs. participants' behavior, we also include the optimized versions of our two speci cations. For the common-interests games, it is clear that both the tted and optimized versions of our models outperform both the quantal cognitive hierarchy model and the noisy best response with prosocial preferences (prosociality) model. The model with both prosocial preferences and recursive reasoning (Pro+QCH) performs better, but is still outperformed by three of our models (excluding the optimized deep heuristics). For the competing-interests games, the prosociality model is still clearly performing worst, but the other models all perform similarly. This suggests that the QCH model predicts participants' behavior better in the competing-interests environment than in the common-interests environment. Taken together, our proposed models are better at predicting behavior than alternative models, including the current best-performing model in the literature (QCH). We also see clearly in Figure 7 that the predictive performance of the metaheuristics and tted deep heuristics are very close, even though the deep heuristics encompasses a much larger space of heuristics. This suggests that we have managed to capture the relevant space of heuristic strategies with our parameterization of the metaheuristics. That is, the metaheuristic model is nearly \complete" in the sense of Fudenberg et al. (2022). 28 6 Discussion In the theory presented, we combine two perspectives. On the one hand we assume that people use simple cognitive strategies to choose actions that are often inconsistent with rational behavior in any given game. On the other hand, we don't assume that the speci c heuristics used are predetermined or insensitive to incentives. On the contrary, we assume that the heuristics people use are chosen resource-rationally, such that they strike an optimal balance between expected payo s and cognitive costs. We have seen that by combining these two perspectives, we can predict behavior more accurately and better understand the in uence of the larger environment on behavior in a given game. In particular, the proposed approach can help us predict when we should expect behavior to coincide with rational behavior and when we might see systematic deviations from a rational benchmark. Behavior will coincide with rational behavior if two conditions are satis ed. Firstly, there has to exist a simple heuristic that leads to the optimal action. Secondly, that heuristic has to perform well in the larger environment so that the decision-maker can learn to use it. When there doesn't exist a simple and high-performing heuristic, or when the heuristic that normally works well leads to the wrong decision, we will observe consistent deviations. This latter case is nicely illustrated in our comparison games. The optimal heuristic will focus on the features of the games that are often of importance, but miss opportunities that are rare. Speci cally, a person used to common- interests games might miss an opportunity for personal gain at the other player's expense while a person used to competing-interests games might fail to notice an outcome that is actually best for everyone. Our ndings relate to those of Peysakhovich and Rand (2016), who showed that varying the sustainability of cooperation in an initial session of the repeated prisoner's dilemma a ects how much prosocial behavior and trust is shown in later games, including the one-shot prisoner's dilemma. Our results provide a qualitative replication of this idea. In particular, we found that putting people in an environment in which prosocial heuristics (such as jointmax) perform well leads them to choose prosocial actions in the comparison games and in some cases, even to select dominated options. By contrast, putting people in an environment where prosocial actions often result in low payo s prevents people from achieving ecient outcomes, even when they are Nash equilibria. Consistent with our theory, Peysakhovich and Rand interpreted their ndings as the result of heuristic decision-making. We build on this intuitively appealing notion by specifying formal models of heuristics in one-shot games that make quantitative 29 predictions. We also emphasize the in uence of cognitive costs (in addition to payo s) on the heuristics people use. Finally, we would like to emphasize an important di erence between our theory and previously proposed models of learning in games. Previous learning models have been posed at the level of action ; people learn which action to take in a speci c (repeatedly played) game (e.g. Jehiel, 2005; Grimm and Mengel, 2012). In contrast, in our theory, learning happens at the level of reasoning ; people learn how to decide what to do in a new game. We believe that this more abstract form of learning is more broadly applicable in the real world, as it is rare that we ever encounter the exact same situation twice (a feature that is captured by the randomly generated games in our experiment). 7 Conclusion We have proposed a theory of human behavior in one-shot normal-form games based on the resource-rational use of heuristics. According to this theory, people select their actions using simple cognitive heuristics that exibly and selectively process payo information; the heuristics people choose to use are ones that strike a good tradeo between the expected payo s and the cognitive cost. In a large preregistered experiment, we con rmed one of the primary qualitative predictions of the theory: people learn which heuristics are resource-rational in a given environment, and thus their recent experience a ects the choices they make. In particular, we found that placing participants in environments with common (vs. competing) interests leads them to select the most ecient (or least ecient) equilibrium in a weak-link game and to cooperate (or defect) in a prisoner's dilemma. Furthermore, we found that our theory provides a strong quantitative account of our participants' behavior, making more accurate out-of-sample predictions than both the quantal cognitive hierarchy model and a model with prosocial preferences and a noisy best response. Strikingly, we found that a resource-rational model, in which behavior in both common-interests and competing-interests treatments is predicted using a single set of tted cost parameters (with the heuristic parameters set to optimize the resultant expected payo -cognitive cost tradeo ), achieved nearly the same accuracy as the fully parameterized model in which the heuristic parameters are estimated separately to match the behavior in each treatment. Coupled with the overall high predictive accuracy of the model, this provides strong evidence in support of the theory that people use heuristics that optimally trade o between the expected payo and the cognitive cost. We also found similar results using an entirely di erent neural network-based family 30 of heuristics, indicating that these ndings are robust to the parameterization of the heuristics. From a broader perspective, our theory speaks to a decades-long debate on the ratio- nality of human decision-making. In contrast to classical models based on optimization and utility maximization, which fail to capture systematic patterns in human choice behavior, recent models instead emphasize our systematic biases, suggesting that we rely on simple and error-prone heuristics to make decisions. In this paper, we hope to have o ered a synthesis of these two perspectives, by treating heuristics as things that can themselves be optimized in a utility-maximization framework. We hope this approach will prove to be a valuable step forward toward a more uni ed understanding of economic decision-making. 31 A Instructions for the experiment Figure 8: The instructions one the rst page when a participant joins the experiment. 32 Figure 9: The participants have to complete three questions like this in a row in order to be allowed to participate in the experiment. Figure 10: In each round, the participant chose a row by clicking on it. Once it is clicked it is highlighted and they have to click the next button to proceed. 33 Figure 11: Once the matched participant chooses a column, either by making a decision or by sampling from previous decisions in the game from the same population, the result is shown. B Complete Description of Metaheuristics To exemplify the di erent heuristics, we will apply them to the following example game. 1 2 3 10;10;28;8 25;65;52;2 36;56;61;1 Figure 12: Example normal-form game represented as a bi-matrix. The row player chooses a row and the column player chooses a column. The rst number in each cell is the payo of the row player and the second number is the payo of the column player. B.1 Row Heuristics Arow heuristic calculates a value, v(si), for each pure strategy, si2Si, based only on the player's own payo s associated with si. That is, it evaluates a strategy based only on the rst entry in each cell of the corresponding row of the payo matrix (see Figure 12). Formally, a row heuristic is de ned by the row-value function vsuch that v(si) =f(i(si;1);:::;i(si;mi))) 34 for some function f:Rmi!R. For example, if fis the mean function, then we have vmean(si) =1 miX si2Sii(si;si); which evaluates each strategy by the average payo in the corresponding row of the payo matrix. Deterministically selecting arg maxsivmean(si) gives exactly the behavior of a level-1 player in the classical level-k model. If, instead, we let fbemin, we recover the maximin heuristic, which calculates the minimum value associated with each strategy and tries to choose the row with the highest minimum value, vmin(si) = min si2Sii(si;si); and, similarly, if we let fbe max, we recover the maximax heuristic, vmax(si) = max si2Sii(si;si): While one can imagine a very large space of possible functions f, we consider a one-dimensional family that interpolates smoothly between minandmax. We construct such a family with the following expression: v (si) =X si2Sii(si;si)exp [ i(si;si)]P s2Siexp [ i(si;s)]; which approaches vmin(si) as !1 ,vmax(si) as !1 , andvmean(si) when = 0. Intuitively, we can understand this expression as computing an expectation of the payo forsiunder di erent degrees of optimism about the other player's choice of si. In the above example game (Figure 12), the heuristic will assign the highest value to 1 (the top row) when is large and positive, to 2when is large and negative, and to 3 when = 0. Notice that if 6= 0, the values associated with the di erent strategies do not necessarily correspond to a consistent belief about the other player's action. For example, if is positive, the highest payo in each row will be overweighted, but this might correspond to a di erent column in each row; in the example game (Figure 12), column 3 is overweighted when evaluating row 1 but downweighted when evaluating rows 2 and 3. Although this internally inconsistent weighting may appear irrational, it provides an extra degree of freedom that can increase the expected payo in a given environment without additional cognitive e ort. We assume that the computation of vis subject to noise, but that this noise can 35 be reduced through cognitive e ort, which we operationalize as a single scalar '. In particular, following Stahl and Wilson (1994), we assume that the noise is Gumbel- distributed and thus recover a multinomial logit model with the probability that player iplays strategy sibeing hsi row(G) =exp ['v(si)]P k2Siexp ['v(k)]: We assume that the cost is proportional to the e ort, i.e., c(hrow) ='Crow; whereCrow>0 is a free parameter of the cost function. B.2 Cell Heuristics We refer to heuristics that reason directly about outcomes as cell heuristics . Based on preliminary analyses, we identi ed one speci c form of cell heuristic that participants appear to use frequently: The jointmax heuristic, which identi es the outcome that is most desirable for both players. Formally, the joint desirability of a cell is given by vjointmax(si;si) = minfi(si;si);i(si;si)g and the probability of playing a given strategy with cognitive e ort 'is given by hsi jointmax (G) =X si2Siexp 'vjointmax(si;si) P (ki;ki)2SiSiexp ['vjointmax (ki;ki)]: This can be interpreted as applying a softmax to all possible outcomes and taking the probability of each strategy to be the sum of the probabilities in the corresponding row. In the example game (Figure 12), the jointmax heuristic would assign the highest probability to row 1because the cell ( 1;3) with payo s (8 ;8) has the highest minimum payo . The cognitive cost is again proportional to e ort, and so c(hcell) ='Ccell; whereCcell>0 is a free parameter of the cost function. 36 B.3 Simulation Heuristics: Higher-Level Reasoning If a row player uses a simulation heuristic, she rst considers the game from the column player's perspective, applying some heuristic that generates a distribution of likely play. She then plays a noisy best response to that distribution. Let GTdenote the transposed game, i.e., the game from the column player's perspective. Let hcolbe the heuristic the row player uses to estimate the column player's behavior; then, hsim(G) is given by hsi row=exph 'P si2Sii(si;si)hsi col(GT)i P si2Siexph 'P si2Sii(si;si)hsi col(GT)i; where'is the cognitive e ort parameter. A simulation heuristic is thus de ned by a combination of a heuristic and an e ort parameter ( hcol;'). The cognitive cost for a simulation heuristic is calculated by rst calculating the cognitive cost associated with the heuristic used for the column player's behavior, then a constant cost for updating the payo matrix using that belief ( Cmul), and one additional cost that is proportional to the cognitive e ort parameter in the last step, as if it was a row heuristic, c(hsim) =c(hcol) +Cmul+Crow': Notice that once the beliefs have been formed and the beliefs have been incorporated, the last cost for taking a decision is based on Crowsince this process is the same as averaging over the rows as for a row-heuristic. B.4 Selection Rule We model the selection of primitive heuristics using the rational inattention model of Mat ejka and McKay (2015). While we don't think about the underlying selection process as inherently one of rational inattention, the rational inattention model captures the key properties we expect from the selection rule: (1) there is a prior weight on each heuristic, (2) a heuristic will be used more on games in which it is likely to perform well, and (3) the adjustment from the prior based on expected value is incomplete and costly. Assume that an individual is choosing between nheuristicsH=fh1;h2;:::;hNg. 37 Then the probability of using heuristic hnwhen playing game Gis given by P[fusehninGg] =exp [(an+V(hn;E;G))=]PN j=1exp [(aj+V(hj;E;G))=] =pnexp [V(hn;E;G)=]PN j=1pjexp [V(hj;E;G)=](6) whereiis an adjustment cost parameter and anare weights that give the prior probability of using the di erent heuristics, pn=exp(an=i)PN j=1exp(aj=i). A metaheuristic is de ned by a tuple m=hH;PiwhereHi=fh1;h2;:::;hNgis a nite consideration set of heuristics, and P=fp1;p2;:::;pNga prior over those heuristics. We can express the performance of a metaheuristic in an environment E, analogously to (2) for heuristics, as Vmeta(m;E) =X G2GX h2HV(hn;E;G)pnexp [V(hn;E;G)=]PN j=1pjexp [(V(hj;E;G))=]P(G):(7) The optimization problem faced by the individual, subject to the adjustment cost , is then to maximize (7), i.e., to choose the optimal consideration set and corresponding priors, m= argmax H2Pfin(H)argmax P2(H)Vmeta(hH;Pi;E); wherePfin(H) is the set of all nite subsets of all possible heuristics. In practice, this is not a solvable problem when the consideration set of possible heuristics, H, is large. Therefore, we will assume a small set of heuristics and jointly nd optimal parameters of those heuristics and priors P. C Deep Heuristics Our neural network architecture is based on that developed by Hartford et al. (2016). The idea is to let every element of the input and hidden layers be a matrix of the same size as the game, instead of a single value as is typical. Each cell in those matrices is then treated in the same way. This ensures that the deep heuristic is invariant to relabeling of strategies, as should be expected from any decision rule for normal-form games. Higher-level reasoning is incorporated by rst having two separated neural networks, representing a \level-0" heuristic for the row player and the column player separately, and then possibly taking into account the thus formed beliefs about the column player's 38 ...... ...... ...... ...ARR(0)UR ARR(1) ARR(2) ARC(0) UCARC(1)Output F1 F2 FnHR 1;1 HR 1;k1 HC 1;1 HC 1;k1HR 2;1 HR 2;k2 HC 2;1 HC 2;k2SR 1 SR k2 SC 1 SC k2Input layerHidden layersSoftmax layerAction Response layers Figure 13: Architecture of the deep heuristic. behavior in separate \action response" layers. The di erent action response layers are then combined into a response distribution. A heuristic that did not explicitly form beliefs about the other player's behavior would let ARR(0) be the output, a person who applies a heuristic to estimate the opponent's behavior and then best responds to it would only use ARR(1), etc. The neural network architecture is illustrated in Figure 13. C.1 Feature Layers The hidden layers are updated according to HR l;k=l X jwR l;k;jHR l1;j+bR l;k! HR l;k2RmRmC and similarly for HC. For the rst hidden layer HR 0;i=HC 0;i=Fi, and so the two disjoint parts have the same feature matrices, but di erent weights. The feature matrices consist of matrices where each cell contains information associated with the row or column of one payo matrix. The payo matrices for the row and column players are denoted by URandUC, respectively. More speci cally, we 39 0 @minR;C UR 1;1;UC 1;1 minR;C UR 1;2;UC 1;2 minR;C UR 1;3;UC 1;3 minR;C UR 2;1;UC 2;1 minR;C UR 2;2;UC 2;2 minR;C UR 2;3;UC 2;3 minR;C UR 3;1;UC 3;1 minR;C UR 3;2;UC 3;2 minR;C UR 3;3;UC 3;3 1 A Figure 14: Examples of input feature matrices. calculate the maximum, minimum, and mean of each row and column for both payo matrices. Furthermore, F1andF2are the payo matrices as they are, and lastly, we have a feature matrix where each value is the minimum payo that either one of the players receives from the strategy pro le. Below are three examples of such feature matrices. 0 B@maxiUR i;1maxiUR i;2maxiUR i;3 maxiUR i;1maxiUR i;2maxiUR i;3 maxiUR i;1maxiUR i;2maxiUR i;31 CA;0 B@maxjUR 1;jmaxjUR 1;jmaxjUR 1;j maxjUR 2;jmaxjUR 2;jmaxjUR 2;j maxjUR 3;jmaxjUR 3;jmaxjUR 3;j1 CA C.2 Softmax and Action Response Layers After the last feature layer, a play distribution is calculated from each feature matrix in the last layer. This is done by rst summing over the rows (columns) and then taking a softmax over the sums. The rst action response layer is then given by a weighted average of those di erent distributions. For example, the distribution SR 12mRis given by SR 1= softmax X i(HR 2;1)1;i;X i(HR 2;1)2;i;:::;X i(HR 2;1)mR;i! while the sums for the column player taken over the columns are given by SC 1= softmax X j(HC 2;1)j;1;X j(HC 2;1)j;2;:::;X j(HC 2;1)j;mC! : The rst action response distribution is then ARR(0) =Pk2 lwR lSR lforwR2k2, and similarly for the column player. The ARR(0) corresponds to a level-0 heuristic, i.e., a heuristic where the column player's behavior isn't explicitly modeled and taken into account. To do this, we move to Action Response layer 1, and use ARC(0) as a prediction for the behavior of the opposing player. Once the beliefs of the column player are formed, the ARR(1) 40 calculates the expected value from each action, conditioned on that expected play, and takes a softmax over those payo s: ARR(1) = softmax X jUR 1;jARC(0)j;:::;X jUR mR;jARC(0)j! As in the cognitive hierarchy model, the second Action Response layer, ARR(2), forms a belief about the other player by taking a weighted average of the ARR(1) and ARR(0) layers and computing a noisy best response to it: ARR(2) = softmax X jUR 1;j ARC(0)j+ (1 )ARC(1)j ;:::! C.3 Output Layer The output layer takes a weighted average of the row player's action response layers. This is the nal predicted distribution of play for the row player. C.4 Cognitive Costs When the deep heuristic is optimized with respect to the received payo , the cognitive cost comes from two features of the network. Firstly, there is an assumed xed cost associated with simulating, which is then proportional to the weight given to ARR(1). Secondly, it is assumed that more exact predictions are cognitively more costly. The second cognitive cost is thus proportional to the reciprocal of the entropy of the resulting prediction. D Detailed Results D.1 Accuracy and Prediction Loss In Table 6 we see the accuracy (how often the modal action is assigned the highest probability) and the average NLL of the di erent models. D.2 Pairwise Tests For Hypotheses 3 and 4 we can test signi cance with pairwise tests. For each of the games in the test set, we compare the di erence in either the prediction loss or the payo between the relevant models. For each game, we get two observations, one for 41 Common Competing Total Model Estimation Accu NLL Accu NLL Accu NLL Deep heuristics Fitted 89.4% 0.593 85.3% 0.709 87.3% 0.651 Metaheuristics Fitted 88.4% 0.599 86.6% 0.715 87.5% 0.657 Metaheuristics Optimized 89.1% 0.598 86.6% 0.726 87.8% 0.662 QCH+Pro Fitted 85.3% 0.638 85.6% 0.722 85.5% 0.68 Deep heuristics Optimized 85.3% 0.636 85.0% 0.739 85.2% 0.687 QCH Fitted 82.2% 0.686 84.1% 0.737 83.1% 0.711 EB+Pro Fitted 80.9% 0.717 71.2% 0.838 76.1% 0.777 Table 6: Average accuracy and negative log-likelihood for di erent models. Here we only report the models when estimated and evaluated on the same environments. each role. For each of these comparisons, we perform both a t-test and a nonparametric, Wilcoxon rank test. As can be seen in the tables below, all of these tests are signi cant. Model Test set Estimation Di erence t-test Wilcoxon Metaheuristics Common Fitted -0.065 p<: 001p<: 001 Metaheuristics Common Optimized -0.165 p<: 001p<: 001 Metaheuristics Competing Fitted -0.058 p<: 001p<: 001 Metaheuristics Competing Optimized -0.080 p<: 001p<: 001 Deep heuristics Common Fitted -0.113 p<: 001p<: 001 Deep heuristics Common Optimized -0.120 p<: 001p<: 001 Deep heuristics Competing Fitted -0.118 p<: 001p<: 001 Deep heuristics Competing Optimized -0.231 p<: 001p<: 001 Table 7: Pairwise tests for di erences in prediction loss in the test sets between the models estimated on training data from the same vs. the di erent environment. The prediction loss is lower for the model estimated on training data from the same environment for all pairs. In Table 9 we see a pairwise test for the di erence in the predictive ability between the optimized metaheuristic and the alternative models. Prosocial EB is a model with prosocial preferences and correct beliefs. We see that the optimized metaheuristic model is signi cantly better than the alternative models QCH, prosociality, and prosocial QCH. Considering pairwise comparisons of models for each treatment in isolation, we see that the optimized metaheuristic makes better predictions than alternative models in the common-interests treatment. For the competing-interests treatment, the di erence is not signi cant for either the QCH model with prosocial preferences or the standard QCH model. 42 Model Test set Estimation Di erence t-test Wilcoxon Metaheuristics Common Fitted -0.145 p<: 001p<: 001 Metaheuristics Common Optimized -0.302 p<: 001p<: 001 Metaheuristics Competing Fitted -0.083 p<: 001p<: 001 Metaheuristics Competing Optimized -0.165 p<: 001p<: 001 Deep heuristics Common Fitted -0.238 p<: 001p<: 001 Deep heuristics Common Optimized -0.333 p<: 001p<: 001 Deep heuristics Competing Fitted -0.276 p<: 001p<: 001 Deep heuristics Competing Optimized -0.502 p<: 001p<: 001 Table 8: Pairwise tests for di erences in regret in the test sets between the models estimated on training data from the same vs. the di erent environment. Regret is lower for the model estimated on training data from the same environment for all pairs. Model Estimation Di erence t-test Wilcoxon Deep heuristics Fitted -0.011 p=:003p=:001 Metaheuristics Fitted -0.005 p=:079p=:052 QCH+Pro Fitted 0.018 p<: 001p=:001 Deep heuristics Optimized 0.025 p<: 001p<: 001 QCH Fitted 0.049 p<: 001p<: 001 EB+Pro Fitted 0.115 p<: 001p<: 001 Table 9: Pairwise tests for di erences in prediction loss between the optimized meta- heuristic model and the alternative models across both treatments. Model Estimation Di erence t-test Wilcoxon Deep heuristics Fitted -0.004 p=:384p=:194 Metaheuristics Fitted 0.001 p=:801p=:373 Deep heuristics Optimized 0.038 p<: 001p<: 001 QCH+Pro Fitted 0.040 p<: 001p<: 001 QCH Fitted 0.088 p<: 001p<: 001 EB+Pro Fitted 0.119 p<: 001p<: 001 Table 10: Pairwise tests for di erences in prediction loss between the optimized meta- heuristic model and the alternative models for the common-interests games. E Explaining Adaptation via Learning In the main text, we assume that the participants manage to nd rational heuristics without going into the details about how that is done. Here, we show that a learning model could explain this adaptation to rational metaheuristics. We assume that all individuals arrive at the experiment with the same initial metaheuristic m(j(0)), where are the parameters of the metaheuristic, including the parameters of both the primitive heuristics and the priors. 43 Model Estimation Di erence t-test Wilcoxon Deep heuristics Fitted -0.017 p<: 001p=:001 Metaheuristics Fitted -0.011 p=:011p=:059 QCH+Pro Fitted -0.004 p=:483p=:619 QCH Fitted 0.010 p=:146p=:062 Deep heuristics Optimized 0.013 p=:090p=:151 EB+Pro Fitted 0.112 p<: 001p<: 001 Table 11: Pairwise tests for di erences in prediction loss between the optimized meta- heuristic model and the alternative models for the competing-interests games. For each experimental population , the players play a sequence of (G;t)50 t=1, each time with a single realized action of the other player. Given the observed behavior of playeri, the utility in round tfor playeriis given by u(m(j);G;t;si;c) =G;t(m(G;tj);si)c(m(j)); wherem(j) is the metaheuristic with parameters ,G;tis the game played in round tby population ,cis the cognitive cost function, and siis the action taken by the other player. After observing the action sitaken by the other player, player ican calculate the gradient with respect to the parameters to see how the metaheuristic used could have been improved, i.e., ru(m(j);G;t;si;c): A simple learning model is one where each individual changes the metaheuristic used in the direction of the gradient after each round of the experiment, with some step-size. We can write this as ;i(t+ 1) =;i(t) +ru(m(j;i(t));G;t;si;c): In other words, after each game, the metaheuristic is moved in the direction that would have yielded a higher utility in that game. For simplicity, we consider a population-level model, rather than modeling the behavior of each individual player separately. The behavior in round tis given by (t+ 1) =(t) +EsiP(jG;t)[ru(m(j;i(t));G;t;si;c)]; whereP(sijG;t)is the empirical probability that s1is used in game G;t. Thus, after each game, the population parameters for the next round move in the average 44 direction of improvement de ned by the empirical behavior in that game. In our estimation of the learning model, we use the costs estimated for the optimal metaheuristics. To estimate this model we thus need a baseline heuristic, (0), and a learning parameter, . To make the performance of this model comparable to that of the other models, we estimate the common starting parameters (0) and the common learning rate in order to minimize loss on the rst 30 games of each population in both treatments. We then predict the remaining 16 treatment games of each population in both treatments. Model Estimation Common Competing Both Deep heuristics Optimize 0.636 0.739 0.687 Deep heuristics Fit 0.593 0.709 0.651 Metaheuristics Optimize 0.598 0.726 0.662 Metaheuristics Fit 0.599 0.715 0.657 Learning 0.605 0.724 0.664 Table 12: Out-of-sample NLL prediction loss. In Table 12 we see that the performance of the learning model is comparable to but slightly lower than the performances of the tted models. In Table 13 the expected payo s in the test set games are shown for the learning model, the optimized metaheuristics, the optimized deep heuristics, and relevant benchmarks. It is clear that the expected payo s from this learning model are similar to both the actual payo s and those of the optimization-based models. Model Estimation Common Competing Metaheuristic Optimize 6.69 5.43 Deep heuristic Optimize 6.65 5.45 Learning 6.68 5.38 Random 5.38 4.45 Human behavior 6.74 5.43 Maximum 7.17 5.95 Table 13: Out-of-sample expected payo s. In conclusion, this simple learning model appears to be a possible explanation for how the participants come to use these near-optimal heuristics in our experiment with simple adjustments of the heuristics used after each game. 45 References Bacharach, M. (2006). Beyond individual choice: teams and frames in game theory . Princeton University Press. Bardsley, N., J. Mehta, C. Starmer, and R. Sugden (2010). Explaining focal points: Cognitive hierarchy theory versus team reasoning. Economic Journal 120 (543), 40{79. Bruhin, A., E. Fehr, and D. Schunk (2019). The many faces of human sociality: Uncovering the distribution and stability of social preferences. Journal of the European Economic Association 17 (4), 1025{1069. Camerer, C. F. (2003). Behavioral Game Theory: Experiments in Strategic Interaction . Princeton University Press. Camerer, C. F., T.-H. Ho, and J.-K. Chong (2004, aug). A Cognitive Hierarchy Model of Games. The Quarterly Journal of Economics 119 (3), 861{898. Caplin, A. and M. Dean (2013). Behavioral implications of rational inattention with shannon entropy. Technical report, National Bureau of Economic Research. Chen, D. L., M. Schonger, and C. Wickens (2016, March). oTree|An open-source platform for laboratory, online, and eld experiments. Journal of Behavioral and Experimental Finance 9 , 88{97. Costa-Gomes, M. A. and G. Weizs acker (2008, jul). Stated Beliefs and Play in Normal- Form Games. Review of Economic Studies 75 (3), 729{762. Crawford, V. P., M. A. Costa-Gomes, and N. Iriberri (2013, mar). Structural Models of Nonequilibrium Strategic Thinking: Theory, Evidence, and Applications. Journal of Economic Literature 51 (1), 5{62. Devetag, G., S. Di Guida, and L. Polonio (2016, mar). An eye-tracking study of feature-based choice in one-shot games. Experimental Economics 19 (1), 177{201. Dhami, S. (2016). The foundations of behavioral economic analysis . Oxford University Press. Ert, E. and I. Erev (2013). On the descriptive value of loss aversion in decisions under risk: Six clari cations. Judgment and Decision Making 8 (3), 214{235. Fehr, E. and K. M. Schmidt (1999). A theory of fairness, competition, and cooperation. The Quarterly Journal of Economics 114 (3), 817{868. Fudenberg, D., F. Drew, D. K. Levine, and D. K. Levine (1998). The theory of learning in games , Volume 2. MIT press. Fudenberg, D., J. Kleinberg, A. Liang, and S. Mullainathan (2022). Measuring the completeness of economic models. Journal of Political Economy 130 (4), 956{990. 46 Fudenberg, D. and A. Liang (2019, dec). Predicting and Understanding Initial Play. American Economic Review 109 (12), 4112{4141. Gershman, S. J., E. J. Horvitz, and J. B. Tenenbaum (2015). Computational ra- tionality: A converging paradigm for intelligence in brains, minds, and machines. Science 349 (6245). Gigerenzer, G. and P. M. Todd (1999). Simple Heuristics That Make Us Smart . Oxford University Press, USA. Goeree, J. K. and C. A. Holt (2004). A model of noisy introspection. Games and Economic Behavior 46 (2), 365{382. Goldstein, D. G. and G. Gigerenzer (2002). Models of ecological rationality: The recognition heuristic. Psychological Review 109 (1), 75{90. Griths, T. L., F. Lieder, and N. D. Goodman (2015). Rational use of cognitive resources: Levels of analysis between the computational and the algorithmic. Topics in Cognitive Science 7 (2), 217{229. Grimm, V. and F. Mengel (2012). An experiment on learning in a multiple games environment. Journal of Economic Theory 147 (6), 2220{2259. Hartford, J. S., J. R. Wright, and K. Leyton-Brown (2016). Deep Learning for Predicting Human Strategic Behavior. In Advances in Neural Information Processing Systems , Volume 29. Curran Associates, Inc. Heap, S. H., D. R. Arjona, and R. Sugden (2014). How portable is level-0 behavior? a test of level-k theory in games with non-neutral frames. Econometrica 82 (3), 1133{1151. Hebert, B. and M. Woodford (2019). Rational inattention when decisions take time. Journal of Chemical Information and Modeling 53 (9), 1689{1699. Howes, A., R. L. Lewis, and A. Vera (2009). Rational Adaptation Under Task and Processing Constraints: Implications for Testing Theories of Cognition and Action. Psychological Review 116 (4), 717{751. Imai, T., T. A. Rutter, and C. F. Camerer (2020, sep). Meta-Analysis of Present-Bias Estimation Using Convex Time Budgets. The Economic Journal 186 (2), 227{236. Izard, V. and S. Dehaene (2008). Calibrating the mental number line. Cognition 106 (3), 1221{1247. Jehiel, P. (2005, aug). Analogy-based expectation equilibrium. Journal of Economic Theory 123 (2), 81{104. Krueger, P., F. Callaway, S. Gul, T. Griths, and F. Lieder (2022, January). Discovering Rational Heuristics for Risky Choice. 47 Lewis, R. L., A. Howes, and S. Singh (2014). Computational rationality: Linking mechanism and behavior through bounded utility maximization. Topics in Cognitive Science 6 (2), 279{311. Lieder, F. and T. L. Griths (2015). When to use which heuristic: A rational solution to the strategy selection problem. Proceedings of the 37th annual conference of the cognitive science society 1 (3), 1{6. Lieder, F. and T. L. Griths (2017, November). Strategy selection as rational metarea- soning. Psychological Review 124 (6), 762{794. Lieder, F. and T. L. Griths (2020). Resource-rational analysis: Understanding human cognition as the optimal use of limited computational resources. Behavioral and Brain Sciences 43 , e1. Lieder, F., P. M. Krueger, and T. Griths (2017). An automatic method for discovering rational heuristics for risky choice. In Proceedings of the 39th Annual Meeting of the Cognitive Science Society . Mat ejka, F. and A. McKay (2015, jan). Rational Inattention to Discrete Choices: A New Foundation for the Multinomial Logit Model. American Economic Review 105 (1), 272{298. Mengel, F. and E. Sciubba (2014). Extrapolation and structural similarity in games. Economics Letters 125 (3), 381{385. Misyak, J. B. and N. Chater (2014). Virtual bargaining: A theory of social decision-making. Philosophical Transactions of the Royal Society B: Biological Sci- ences 369 (1655). Nagel, R. (1995). Unraveling the Guessing Game. American Economic Review 85 (5), 1313{1326. Peysakhovich, A. and D. G. Rand (2016). Habits of virtue: Creating norms of coopera- tion and defection in the laboratory. Management Science 62 (3), 631{647. Polonio, L., S. Di Guida, and G. Coricelli (2015, nov). Strategic sophistication and attention in games: An eye-tracking study. Games and Economic Behavior 94 , 80{96. Savage, L. J. (1954). The Foundations of Statistics . John Wiley & Sons. Simon, H. A. (1976). From substantive to procedural rationality. In 25 years of economic theory , pp. 65{86. Springer. Sims, C. A. (1998). Stickiness. Carnegie-Rochester Conference Series on Public Policy 49 , 317{356. Spiliopoulos, L. and R. Hertwig (2020, March). A map of ecologically rational heuristics for uncertain strategic worlds. Psychological Review 127 (2), 245{280. 48 Stahl, D. O. and P. W. Wilson (1994). Experimental evidence on players' models of other players. Journal of Economic Behavior & Organization 25 (3), 309{327. Stahl, D. O. and P. W. Wilson (1995). On players' models of other players: Theory and experimental evidence. Steiner, J., C. Stewart, and F. Mat ejka (2017). Rational Inattention Dynamics: Inertia and Delay in Decision-Making. Econometrica 85 (2), 521{553. Stewart, N., S. G achter, T. Noguchi, and T. L. Mullett (2016). Eye Movements in Strategic Choice. 156(October 2015), 137{156. Sugden, R. (2003). The logic of team reasoning. Philosophical explorations 6 (3), 165{181. Todd, P. M. and G. E. Gigerenzer (2012). Ecological Rationality: Intelligence in the World. Oxford University Press. Tun cel, T. and J. K. Hammitt (2014). A new meta-analysis on the WTP/WTA disparity. Journal of Environmental Economics and Management 68 (1), 175{187. Wright, J. R. and K. Leyton-Brown (2017, nov). Predicting human behavior in unrepeated, simultaneous-move games. Games and Economic Behavior 106 (2), 16{37. 49
ac882e84-715e-48cb-80c0-a37974801464
StampyAI/alignment-research-dataset/lesswrong
LessWrong
A utility-maximizing varient of AIXI *Response to: [Universal agents and utility functions](/lw/feo/universal_agents_and_utility_functions/)* *Related approaches: [Hibbard (2012)](http://versita.metapress.com/content/q5523w34gk767041/?genre=article&id=doi%3a10.2478%2fv10229-011-0013-5),  [Hay (2005)](http://www.cs.auckland.ac.nz/~nickjhay/honours.revamped.pdf)* Background ----------   Here is the function implemented by finite-lifetime [AI![\xi](http://www.codecogs.com/png.latex?\xi)](http://www.hutter1.net/ai/aixigentle.pdf): ![{\displaystyle \dot{y}_{k}:=\arg\max_{y_{k}\in Y}\sum_{x_{k}\in X}\max_{y_{k+1}\in Y}\sum_{x_{k+1}\in X}...\max_{y_{m}\in Y}\sum_{x_{m}\in X}\left(r\left(x_{k}\right)+...+r\left(x_{m}\right)\right)\cdot\xi\left(\dot{y}\dot{x}_{<k}y\underline{x}_{k:m}\right)}](http://www.codecogs.com/png.latex?{\displaystyle%20\dot{y}_{k}:=\arg\max_{y_{k}\in%20Y}\sum_{x_{k}\in%20X}\max_{y_{k+1}\in%20Y}\sum_{x_{k+1}\in%20X}...\max_{y_{m}\in%20Y}\sum_{x_{m}\in%20X}\left%28r\left%28x_{k}\right%29+...+r\left%28x_{m}\right%29\right%29\cdot\xi\left%28\dot{y}\dot{x}_{%3Ck}y\underline{x}_{k:m}\right%29}), where ![m](http://www.codecogs.com/png.latex?m) is the number of steps in the lifetime of the agent, ![k](http://www.codecogs.com/png.latex?k) is the current step being computed, ![X](http://www.codecogs.com/png.latex?X) is the set of possible observations, ![Y](http://www.codecogs.com/png.latex?Y) is the set of possible actions, ![r](http://www.codecogs.com/png.latex?r) is a function that extracts a reward value from an observation, a dot over a variable represents that its value is known to be the true value of the action or observation it represents, underlines represent that the variable is an input to a probability distribution, and ![\xi](http://www.codecogs.com/png.latex?%5Cxi) is a function that returns the probability of a sequence of observations, given a certain known history and sequence of actions, and starting from the Solomonoff prior. More formally, ![{\displaystyle \xi\left(\dot{y}\dot{x}_{<k}y\underline{x}_{k:m}\right)=\left(\sum_{q\in Q:q\left(y_{\leq m}\right)=x_{\leq m}}2^{-\ell\left(q\right)}\right)\diagup\left(\sum_{q\in Q:q\left(\dot{y}_{<k}\right)=\dot{x}_{<k}}2^{-\ell\left(q\right)}\right)}](http://www.codecogs.com/png.latex?{\displaystyle%20\xi\left%28\dot{y}\dot{x}_{%3Ck}y\underline{x}_{k:m}\right%29=\left%28\sum_{q\in%20Q:q\left%28y_{\leq%20m}\right%29=x_{\leq%20m}}2^{-\ell\left%28q\right%29}\right%29\diagup\left%28\sum_{q\in%20Q:q\left%28\dot{y}_{%3Ck}\right%29=\dot{x}_{%3Ck}}2^{-\ell\left%28q\right%29}\right%29}), where ![Q](http://www.codecogs.com/png.latex?Q) is the set of all programs, ![\ell](http://www.codecogs.com/png.latex?\ell) is a function that returns the length of a program in bits, and a program applied to a sequence of actions returns the resulting sequence of observations. Notice that the denominator is a constant, depending only on the already known ![\dot{y}\dot{x}_{%3Ck}](http://www.codecogs.com/png.latex?\dot{y}\dot{x}_{%3Ck}), and multiplying by a positive constant does not change the argmax, so we can pretend that the denominator doesn't exist. If ![q](http://www.codecogs.com/png.latex?q) is a valid program, then any longer program with ![q](http://www.codecogs.com/png.latex?q) as a prefix is not a valid program, so ![{\displaystyle%20\sum_{q\in%20Q}2^{-\ell\left%28q\right%29}\leq1}](http://www.codecogs.com/png.latex?{\displaystyle%20\sum_{q\in%20Q}2^{-\ell\left%28q\right%29}\leq1}).   Problem -------   A problem with this is that it can only optimize over the input it receives, not over aspects of the external world that it cannot observe. Given the chance, AI![\xi](http://www.codecogs.com/png.latex?%5Cxi) would hack its input channel so that it would only observe good things, instead of trying to make good things happen (in other words, it would [wirehead](/lw/fkx/a\_definition\_of\_wireheading/) itself). Anja [specified](/lw/feo/universal\_agents\_and\_utility\_functions/) a variant of AI![\xi](http://www.codecogs.com/png.latex?%5Cxi) in which she replaced the sum of rewards with a single utility value and made the domain of the utility function be the entire sequence of actions and observations instead of a single observation, like so: ![{\displaystyle%20\dot{y}_{k}:=\arg\max_{y_{k}\in%20Y}\sum_{x_{k}\in%20X}\max_{y_{k+1}\in%20Y}\sum_{x_{k+1}\in%20X}...\max_{y_{m}\in%20Y}\sum_{x_{m}\in%20X}U\left%28\dot{y}\dot{x}_{%3Ck}yx_{k:m}\right%29\cdot\xi\left%28\dot{y}\dot{x}_{%3Ck}y\underline{x}_{k:m}\right%29}](http://www.codecogs.com/png.latex?{\displaystyle%20\dot{y}_{k}:=\arg\max_{y_{k}\in%20Y}\sum_{x_{k}\in%20X}\max_{y_{k+1}\in%20Y}\sum_{x_{k+1}\in%20X}...\max_{y_{m}\in%20Y}\sum_{x_{m}\in%20X}U\left%28\dot{y}\dot{x}_{%3Ck}yx_{k:m}\right%29\cdot\xi\left%28\dot{y}\dot{x}_{%3Ck}y\underline{x}_{k:m}\right%29}). This doesn't really solve the problem, because the utility function still only takes what the agent can see, rather than what is actually going on outside the agent. The situation is a little better because the utility function also takes into account the agent's actions, so it could punish actions that look like the agent is trying to wirehead itself, but if there was a flaw in the instructions not to wirehead, the agent would exploit it, so the incentive not to wirehead would have to be perfect, and this formulation is not very enlightening about how to do that. *[Edit: Hibbard (2012) also presents a solution to this problem. I haven't read all of it yet, but it appears to be fairly different from what I suggest in the next section.]*   Solution --------   Here's what I suggest instead: everything that happens is determined by the program that the world is running on and the agent's actions, so the domain of the utility function should be ![Q\times%20Y^{m}](http://www.codecogs.com/png.latex?Q\times%20Y^{m}). The apparent problem with that is that the formula for AI![\xi](http://www.codecogs.com/png.latex?\xi) does not contain any mention of elements of ![Q](http://www.codecogs.com/png.latex?Q). If we just take the original formula and replace ![r\left(x_{k}\right)+...+r\left(x_{m}\right)](http://www.codecogs.com/png.latex?r\left%28x_{k}\right%29+...+r\left%28x_{m}\right%29) with ![U\left(q,\dot{y}_{<k}y_{k:m}\right)](http://www.codecogs.com/png.latex?U\left%28q,\dot{y}_{%3Ck}y_{k:m}\right%29), it wouldn't make any sense. However, if we expand out ![\xi](http://www.codecogs.com/png.latex?%5Cxi) in the original formula (excluding the unnecessary denominator), we can move the sum of rewards inside the sum over programs, like this: ![{\displaystyle \dot{y}_{k}:=\arg\max_{y_{k}\in Y}\sum_{x_{k}\in X}\max_{y_{k+1}\in Y}\sum_{x_{k+1}\in X}...\max_{y_{m}\in Y}\sum_{x_{m}\in X}\sum_{q\in Q:q\left(y_{\leq m}\right)=x_{\leq m}}\left(r\left(x_{k}\right)+...+r\left(x_{m}\right)\right)2^{-\ell\left(q\right)}}](http://www.codecogs.com/png.latex?{\displaystyle%20\dot{y}_{k}:=\arg\max_{y_{k}\in%20Y}\sum_{x_{k}\in%20X}\max_{y_{k+1}\in%20Y}\sum_{x_{k+1}\in%20X}...\max_{y_{m}\in%20Y}\sum_{x_{m}\in%20X}\sum_{q\in%20Q:q\left%28y_{\leq%20m}\right%29=x_{\leq%20m}}\left%28r\left%28x_{k}\right%29+...+r\left%28x_{m}\right%29\right%292^{-\ell\left%28q\right%29}}). Now it is easy to replace the sum of rewards with the desired utility function. ![{\displaystyle \dot{y}_{k}:=\arg\max_{y_{k}\in Y}\sum_{x_{k}\in X}\max_{y_{k+1}\in Y}\sum_{x_{k+1}\in X}...\max_{y_{m}\in Y}\sum_{x_{m}\in X}\sum_{q\in Q:q\left(y_{\leq m}\right)=x_{\leq m}}U\left(q,\dot{y}_{<k}y_{k:m}\right)2^{-\ell\left(q\right)}}](http://www.codecogs.com/png.latex?{\displaystyle%20\dot{y}_{k}:=\arg\max_{y_{k}\in%20Y}\sum_{x_{k}\in%20X}\max_{y_{k+1}\in%20Y}\sum_{x_{k+1}\in%20X}...\max_{y_{m}\in%20Y}\sum_{x_{m}\in%20X}\sum_{q\in%20Q:q\left%28y_{\leq%20m}\right%29=x_{\leq%20m}}U\left%28q,\dot{y}_{%3Ck}y_{k:m}\right%292^{-\ell\left%28q\right%29}}). With this formulation, there is no danger of the agent wireheading, and all ![U](http://www.codecogs.com/png.latex?U) has to do is compute everything that happens when the agent performs a given sequence of actions in a given program, and decide how desirable it is. If the range of ![U](http://www.codecogs.com/png.latex?U) is unbounded, then this might not converge. Let's assume throughout this post that the range of ![U](http://www.codecogs.com/png.latex?U) is bounded. *[Edit: Hay (2005) presents a similar formulation to this.]*   Extension to infinite lifetimes -------------------------------   The previous discussion assumed that the agent would only have the opportunity to perform a finite number of actions. The situation gets a little tricky when the agent is allowed to perform an unbounded number of actions. Hutter uses a finite look-ahead approach for AI![\xi](http://www.codecogs.com/png.latex?%5Cxi), where on each step ![k](http://www.codecogs.com/png.latex?k), it pretends that it will only be performing ![m_{k}](http://www.codecogs.com/png.latex?m_{k}) actions, where ![\forall%20k\,%20m_{k}\gg%20k](http://www.codecogs.com/png.latex?\forall%20k\,%20m_{k}\gg%20k). ![{\displaystyle \dot{y}_{k}:=\arg\max_{y_{k}\in Y}\sum_{x_{k}\in X}\max_{y_{k+1}\in Y}\sum_{x_{k+1}\in X}...\max_{y_{m_{k}}\in Y}\sum_{x_{m_{k}}\in X}\left(r\left(x_{k}\right)+...+r\left(x_{m_{k}}\right)\right)\cdot\xi\left(\dot{y}\dot{x}_{<k}y\underline{x}_{k:m_{k}}\right)}](http://www.codecogs.com/png.latex?{\displaystyle%20\dot{y}_{k}:=\arg\max_{y_{k}\in%20Y}\sum_{x_{k}\in%20X}\max_{y_{k+1}\in%20Y}\sum_{x_{k+1}\in%20X}...\max_{y_{m_{k}}\in%20Y}\sum_{x_{m_{k}}\in%20X}\left%28r\left%28x_{k}\right%29+...+r\left%28x_{m_{k}}\right%29\right%29\cdot\xi\left%28\dot{y}\dot{x}_{%3Ck}y\underline{x}_{k:m_{k}}\right%29}). If we make the same modification to the utility-based variant, we get ![{\displaystyle \dot{y}_{k}:=\arg\max_{y_{k}\in Y}\sum_{x_{k}\in X}\max_{y_{k+1}\in Y}\sum_{x_{k+1}\in X}...\max_{y_{m_{k}}\in Y}\sum_{x_{m_{k}}\in X}\sum_{q\in Q:q\left(y_{\leq m_{k}}\right)=x_{\leq m_{k}}}U\left(q,\dot{y}_{<k}y_{k:m_{k}}\right)2^{-\ell\left(q\right)}}](http://www.codecogs.com/png.latex?{\displaystyle%20\dot{y}_{k}:=\arg\max_{y_{k}\in%20Y}\sum_{x_{k}\in%20X}\max_{y_{k+1}\in%20Y}\sum_{x_{k+1}\in%20X}...\max_{y_{m_{k}}\in%20Y}\sum_{x_{m_{k}}\in%20X}\sum_{q\in%20Q:q\left%28y_{\leq%20m_{k}}\right%29=x_{\leq%20m_{k}}}U\left%28q,\dot{y}_{%3Ck}y_{k:m_{k}}\right%292^{-\ell\left%28q\right%29}}). This is unsatisfactory because the domain of ![U](http://www.codecogs.com/png.latex?U) was supposed to consist of all the information necessary to determine everything that happens, but here, it is missing all the actions after step ![m_{k}](http://www.codecogs.com/png.latex?m_%7Bk%7D). One obvious thing to try is to set ![m_{k}:=\infty](http://www.codecogs.com/png.latex?m_{k}:=\infty). This will be easier to do using a compacted expression for AI![\xi](http://www.codecogs.com/png.latex?%5Cxi): ![{\displaystyle \dot{y}_{k}:=\arg\max_{y_{k}\in Y}\max_{p\in P:p\left(\dot{x}_{<k}\right)=\dot{y}_{<k}y_{k}}\sum_{q\in Q:q\left(\dot{y}_{<k}\right)=\dot{x}_{<k}}\left(r\left(x_{k}^{pq}\right)+...+r\left(x_{m_{k}}^{pq}\right)\right)2^{-\ell\left(q\right)}}](http://www.codecogs.com/png.latex?{\displaystyle%20\dot{y}_{k}:=\arg\max_{y_{k}\in%20Y}\max_{p\in%20P:p\left%28\dot{x}_{%3Ck}\right%29=\dot{y}_{%3Ck}y_{k}}\sum_{q\in%20Q:q\left%28\dot{y}_{%3Ck}\right%29=\dot{x}_{%3Ck}}\left%28r\left%28x_{k}^{pq}\right%29+...+r\left%28x_{m_{k}}^{pq}\right%29\right%292^{-\ell\left%28q\right%29}}), where ![P](http://www.codecogs.com/png.latex?P) is the set of policies that map sequences of observations to sequences of actions and ![x_{i}^{pq}](http://www.codecogs.com/png.latex?x_{i}^{pq}) is shorthand for the last observation in the sequence returned by ![q\left(p\left(\dot{x}_{<k}x_{k:i-1}^{pq}\right)\right)](http://www.codecogs.com/png.latex?q\left%28p\left%28\dot{x}_{%3Ck}x_{k:i-1}^{pq}\right%29\right%29). If we take this compacted formulation, modify it to accommodate the new utility function, set ![m_{k}:=\infty](http://www.codecogs.com/png.latex?m_%7Bk%7D:=%5Cinfty), and replace the maximum with a supremum (since there's an infinite number of possible policies), we get ![{\displaystyle \dot{y}_{k}:=\arg\max_{y_{k}\in Y}\sup_{p\in P:p\left(\dot{x}_{<k}\right)=\dot{y}_{<k}y_{k}}\sum_{q\in Q:q\left(\dot{y}_{<k}\right)=\dot{x}_{<k}}U\left(q,\dot{y}_{<k}y_{k}y_{k+1:\infty}^{pq}\right)2^{-\ell\left(q\right)}}](http://www.codecogs.com/png.latex?{\displaystyle%20\dot{y}_{k}:=\arg\max_{y_{k}\in%20Y}\sup_{p\in%20P:p\left%28\dot{x}_{%3Ck}\right%29=\dot{y}_{%3Ck}y_{k}}\sum_{q\in%20Q:q\left%28\dot{y}_{%3Ck}\right%29=\dot{x}_{%3Ck}}U\left%28q,\dot{y}_{%3Ck}y_{k}y_{k+1:\infty}^{pq}\right%292^{-\ell\left%28q\right%29}}), where ![y_{i}^{pq}](http://www.codecogs.com/png.latex?y_{i}^{pq}) is shorthand for the last action in the sequence returned by ![p\left(q\left(\dot{y}_{<k}y_{k}y_{k+1:i-1}^{pq}\right)\right)](http://www.codecogs.com/png.latex?p\left%28q\left%28\dot{y}_{%3Ck}y_{k}y_{k+1:i-1}^{pq}\right%29\right%29).   But there is a problem with this, which I will illustrate with a toy example. Suppose ![Y:=\left\{ a,b\right\} ](http://www.codecogs.com/png.latex?Y:=\left\{%20a,b\right\}), and ![U\left(q,y_{1:\infty}\right)=0](http://www.codecogs.com/png.latex?U\left%28q,y_{1:\infty}\right%29=0) when ![\forall k\in\mathbb{N}\, y_{k}=a](http://www.codecogs.com/png.latex?\forall%20k\in\mathbb{N}\,%20y_{k}=a), and for any ![n\in\mathbb{N}](http://www.codecogs.com/png.latex?n\in\mathbb{N}), ![U\left(q,y_{1:\infty}\right)=1-\frac{1}{n}](http://www.codecogs.com/png.latex?U\left%28q,y_{1:\infty}\right%29=1-\frac{1}{n}) when ![y_{n}=b](http://www.codecogs.com/png.latex?y_{n}=b) and ![\forall k<n\, y_{k}=a](http://www.codecogs.com/png.latex?\forall%20k%3Cn\,%20y_{k}=a). (![U](http://www.codecogs.com/png.latex?U) does not depend on the program ![q](http://www.codecogs.com/png.latex?q) in this example). An agent following the above formula would output ![a](http://www.codecogs.com/png.latex?a) on every step, and end up with a utility of ![0](http://www.codecogs.com/png.latex?0), when it could have gotten arbitrarily close to ![1](http://www.codecogs.com/png.latex?1) by eventually outputting ![b](http://www.codecogs.com/png.latex?b).   To avoid problems like that, we could assume the reasonable-seeming condition that if ![y_{1:\infty}](http://www.codecogs.com/png.latex?y_{1:\infty}) is an action sequence and ![\left\{ y_{1:\infty}^{n}\right\} _{n=1}^{\infty}](http://www.codecogs.com/png.latex?\left\{%20y_{1:\infty}^{n}\right\}%20_{n=1}^{\infty}) is a sequence of action sequences that converges to ![y_{1:\infty}](http://www.codecogs.com/png.latex?y_{1:\infty}) (by which I mean ![\forall k\in\mathbb{N}\,\exists N\in\mathbb{N}\,\forall n>N\, y_{k}^{n}=y_{k}](http://www.codecogs.com/png.latex?\forall%20k\in\mathbb{N}\,\exists%20N\in\mathbb{N}\,\forall%20n%3EN\,%20y_{k}^{n}=y_{k})), then ![{\displaystyle \lim_{n\rightarrow\infty}U\left(q,y_{1:\infty}^{n}\right)=U\left(q,y_{1:\infty}\right)}](http://www.codecogs.com/png.latex?{\displaystyle%20\lim_{n\rightarrow\infty}U\left%28q,y_{1:\infty}^{n}\right%29=U\left%28q,y_{1:\infty}\right%29}).   Under that assumption, the supremum is in fact a maximum, and the formula gives you an action sequence that will reach that maximum (proof below).   If you don't like the condition I imposed on ![U](http://www.codecogs.com/png.latex?U), you might not be satisfied by this. But without it, there is not necessarily a best policy. One thing you can do is, on step 1, pick some extremely small ![\varepsilon>0](http://www.codecogs.com/png.latex?\varepsilon%3E0), pick any element from ![{\displaystyle \left\{ p^{*}\in P:\sum_{q\in Q}U\left(q,y_{1:\infty}^{p^{*}q}\right)2^{-\ell\left(q\right)}>\left(\sup_{p\in P}\sum_{q\in Q}U\left(q,y_{1:\infty}^{pq}\right)2^{-\ell\left(q\right)}\right)-\varepsilon\right\} }](http://www.codecogs.com/png.latex?{\displaystyle%20\left\{%20p^{*}\in%20P:\sum_{q\in%20Q}U\left%28q,y_{1:\infty}^{p^{*}q}\right%292^{-\ell\left%28q\right%29}%3E\left%28\sup_{p\in%20P}\sum_{q\in%20Q}U\left%28q,y_{1:\infty}^{pq}\right%292^{-\ell\left%28q\right%29}\right%29-\varepsilon\right\}%20}), and then follow that policy for the rest of eternity, which will guarantee that you do not miss out on more than ![\varepsilon](http://www.codecogs.com/png.latex?\varepsilon) of expected utility.   Proof of criterion for supremum-chasing working -----------------------------------------------   definition: If ![y_{1:\infty}](http://www.codecogs.com/png.latex?y_{1:\infty}) is an action sequence and ![\left\{ y_{1:\infty}^{n}\right\} _{n=1}^{\infty}](http://www.codecogs.com/png.latex?\left\{%20y_{1:\infty}^{n}\right\}%20_{n=1}^{\infty}) is an infinite sequence of action sequences, and ![\forall k\in\mathbb{N}\,\exists N\in\mathbb{N}\,\forall n>N\, y_{k}^{n}=y_{k}](http://www.codecogs.com/png.latex?\forall%20k\in\mathbb{N}\,\exists%20N\in\mathbb{N}\,\forall%20n%3EN\,%20y_{k}^{n}=y_{k}), then we say ![\left\{ y_{1:\infty}^{n}\right\} _{n=1}^{\infty}](http://www.codecogs.com/png.latex?\left\{%20y_{1:\infty}^{n}\right\}%20_{n=1}^{\infty}) converges to ![y_{1:\infty}](http://www.codecogs.com/png.latex?y_%7B1:%5Cinfty%7D). If ![p](http://www.codecogs.com/png.latex?p) is a policy and ![\left\{ p_{n}\right\} _{n=1}^{\infty}](http://www.codecogs.com/png.latex?\left\{%20p_{n}\right\}%20_{n=1}^{\infty}) is a sequence of policies, and ![\forall k\in\mathbb{N}\,\forall x_{<k}\in X^{k}\,\exists N\in\mathbb{N}\,\forall n>N\, p\left(x_{<k}\right)=p_{n}\left(x_{<k}\right)](http://www.codecogs.com/png.latex?\forall%20k\in\mathbb{N}\,\forall%20x_{%3Ck}\in%20X^{k}\,\exists%20N\in\mathbb{N}\,\forall%20n%3EN\,%20p\left%28x_{%3Ck}\right%29=p_{n}\left%28x_{%3Ck}\right%29), then we say ![\left\{ p_{n}\right\} _{n=1}^{\infty}](http://www.codecogs.com/png.latex?%5Cleft%5C%7B%20p_%7Bn%7D%5Cright%5C%7D%20_%7Bn=1%7D%5E%7B%5Cinfty%7D) converges to ![p](http://www.codecogs.com/png.latex?p).   assumption (for lemma 2 and theorem): If ![\left\{ y_{1:\infty}^{n}\right\} _{n=1}^{\infty}](http://www.codecogs.com/png.latex?%5Cleft%5C%7B%20y_%7B1:%5Cinfty%7D%5E%7Bn%7D%5Cright%5C%7D%20_%7Bn=1%7D%5E%7B%5Cinfty%7D) converges to ![y_{1:\infty}](http://www.codecogs.com/png.latex?y_%7B1:%5Cinfty%7D), then ![{\displaystyle \lim_{n\rightarrow\infty}U\left(q,y_{1:\infty}^{n}\right)=U\left(q,y_{1:\infty}\right)}](http://www.codecogs.com/png.latex?{\displaystyle%20\lim_{n\rightarrow\infty}U\left%28q,y_{1:\infty}^{n}\right%29=U\left%28q,y_{1:\infty}\right%29}).   lemma 1: The agent described by ![{\displaystyle \dot{y}_{k}:=\arg\max_{y_{k}\in Y}\sup_{p\in P:p\left(\dot{x}_{<k}\right)=\dot{y}_{<k}y_{k}}\sum_{q\in Q:q\left(\dot{y}_{<k}\right)=\dot{x}_{<k}}U\left(q,\dot{y}_{<k}y_{k}y_{k+1:\infty}^{pq}\right)2^{-\ell\left(q\right)}}](http://www.codecogs.com/png.latex?{\displaystyle%20\dot{y}_{k}:=\arg\max_{y_{k}\in%20Y}\sup_{p\in%20P:p\left%28\dot{x}_{%3Ck}\right%29=\dot{y}_{%3Ck}y_{k}}\sum_{q\in%20Q:q\left%28\dot{y}_{%3Ck}\right%29=\dot{x}_{%3Ck}}U\left%28q,\dot{y}_{%3Ck}y_{k}y_{k+1:\infty}^{pq}\right%292^{-\ell\left%28q\right%29}}) follows a policy that is the limit of a sequence of policies ![\left\{ p_{n}\right\} _{n=1}^{\infty}](http://www.codecogs.com/png.latex?%5Cleft%5C%7B%20p_%7Bn%7D%5Cright%5C%7D%20_%7Bn=1%7D%5E%7B%5Cinfty%7D) such that ![{\displaystyle \lim_{n\rightarrow\infty}\sum_{q\in Q}U\left(q,y_{1:\infty}^{p_{n}q}\right)2^{-\ell\left(q\right)}=\sup_{p\in P}\sum_{q\in Q}U\left(q,y_{1:\infty}^{pq}\right)2^{-\ell\left(q\right)}}](http://www.codecogs.com/png.latex?{\displaystyle%20\lim_{n\rightarrow\infty}\sum_{q\in%20Q}U\left%28q,y_{1:\infty}^{p_{n}q}\right%292^{-\ell\left%28q\right%29}=\sup_{p\in%20P}\sum_{q\in%20Q}U\left%28q,y_{1:\infty}^{pq}\right%292^{-\ell\left%28q\right%29}}).   proof of lemma 1: Any policy can be completely described by the last action it outputs for every finite observation sequence. Observations are returned by a program, so the set of possible finite observation sequences is countable. It is possible to fix the last action returned on any particular finite observation sequence to be the argmax, and still get arbitrarily close to the supremum with suitable choices for the last action returned on the other finite observation sequences. By induction, it is possible to get arbitrarily close to the supremum while fixing the last action returned to be the argmax for any finite set of finite observation sequences. Thus, there exists a sequence of policies approaching the policy that the agent implements whose expected utilities approach the supremum.   lemma 2: If ![p](http://www.codecogs.com/png.latex?p) is a policy and ![\left\{ p_{n}\right\} _{n=1}^{\infty}](http://www.codecogs.com/png.latex?%5Cleft%5C%7B%20p_%7Bn%7D%5Cright%5C%7D%20_%7Bn=1%7D%5E%7B%5Cinfty%7D) is a sequence of policies converging to ![p](http://www.codecogs.com/png.latex?p), then ![{\displaystyle \sum_{q\in Q}U\left(q,y_{1:\infty}^{pq}\right)2^{-\ell\left(q\right)}=\lim_{n\rightarrow\infty}\sum_{q\in Q}U\left(q,y_{1:\infty}^{p_{n}q}\right)2^{-\ell\left(q\right)}}](http://www.codecogs.com/png.latex?{\displaystyle%20\sum_{q\in%20Q}U\left%28q,y_{1:\infty}^{pq}\right%292^{-\ell\left%28q\right%29}=\lim_{n\rightarrow\infty}\sum_{q\in%20Q}U\left%28q,y_{1:\infty}^{p_{n}q}\right%292^{-\ell\left%28q\right%29}}).   proof of lemma 2: Let ![\varepsilon>0](http://www.codecogs.com/png.latex?%5Cvarepsilon%3E0). On any given sequence of inputs ![x_{1:\infty}\in X^{\infty}](http://www.codecogs.com/png.latex?x_{1:\infty}\in%20X^{\infty}), ![\left\{ p_{n}\left(x_{1:\infty}\right)\right\} _{n=1}^{\infty}](http://www.codecogs.com/png.latex?\left\{%20p_{n}\left%28x_{1:\infty}\right%29\right\}%20_{n=1}^{\infty}) converges to ![p\left(x_{1:\infty}\right)](http://www.codecogs.com/png.latex?p\left%28x_{1:\infty}\right%29), so, by assumption, ![\forall q\in Q\,\exists N\in\mathbb{N}\,\forall n\geq N\,\left|U\left(q,y_{1:\infty}^{pq}\right)-U\left(q,y_{1:\infty}^{p_{n}q}\right)\right|<\frac{\varepsilon}{2}](http://www.codecogs.com/png.latex?\forall%20q\in%20Q\,\exists%20N\in\mathbb{N}\,\forall%20n\geq%20N\,\left|U\left%28q,y_{1:\infty}^{pq}\right%29-U\left%28q,y_{1:\infty}^{p_{n}q}\right%29\right|%3C\frac{\varepsilon}{2}). For each ![N\in\mathbb{N}](http://www.codecogs.com/png.latex?N\in\mathbb{N}), let ![Q_{N}:=\left\{ q\in Q:\forall n\geq N\,\left|U\left(q,y_{1:\infty}^{pq}\right)-U\left(q,y_{1:\infty}^{p_{n}q}\right)\right|<\frac{\varepsilon}{2}\right\} ](http://www.codecogs.com/png.latex?Q_{N}:=\left\{%20q\in%20Q:\forall%20n\geq%20N\,\left|U\left%28q,y_{1:\infty}^{pq}\right%29-U\left%28q,y_{1:\infty}^{p_{n}q}\right%29\right|%3C\frac{\varepsilon}{2}\right\}). The previous statement implies that ![{\displaystyle \bigcup_{N\in\mathbb{N}}Q_{N}=Q}](http://www.codecogs.com/png.latex?{\displaystyle%20\bigcup_{N\in\mathbb{N}}Q_{N}=Q}), and each element of ![\left\{ Q_{N}\right\} _{N\in\mathbb{N}}](http://www.codecogs.com/png.latex?\left\{%20Q_{N}\right\}%20_{N\in\mathbb{N}}) is a subset of the next, so ![{\displaystyle \exists N\in\mathbb{N}\,\sum_{q\in Q\setminus Q_{N}}2^{-\ell\left(q\right)}<\frac{\varepsilon}{2\left(\sup U-\inf U\right)}}](http://www.codecogs.com/png.latex?{\displaystyle%20\exists%20N\in\mathbb{N}\,\sum_{q\in%20Q\setminus%20Q_{N}}2^{-\ell\left%28q\right%29}%3C\frac{\varepsilon}{2\left%28\sup%20U-\inf%20U\right%29}}). The range of ![U](http://www.codecogs.com/png.latex?U) is bounded, so ![\sup U](http://www.codecogs.com/png.latex?\sup%20U) and ![\inf U](http://www.codecogs.com/png.latex?\inf%20U) are defined. This also implies that the difference in expected utility, given any information, of any two policies, is bounded. More formally:![{\displaystyle \forall Q'\subset Q\,\forall p',p''\in P\,\left|\left(\left(\sum_{q\in Q'}U\left(q,y_{1:\infty}^{p'q}\right)2^{-\ell\left(q\right)}\right)\diagup\left(\sum_{q\in Q'}2^{-\ell\left(q\right)}\right)\right)-\left(\left(\sum_{q\in Q'}U\left(q,y_{1:\infty}^{p''q}\right)2^{-\ell\left(q\right)}\right)\diagup\left(\sum_{q\in Q'}2^{-\ell\left(q\right)}\right)\right)\right|\leq\sup U-\inf U}](http://www.codecogs.com/png.latex?{\displaystyle%20\forall%20Q%27\subset%20Q\,\forall%20p%27,p%27%27\in%20P\,\left|\left%28\left%28\sum_{q\in%20Q%27}U\left%28q,y_{1:\infty}^{p%27q}\right%292^{-\ell\left%28q\right%29}\right%29\diagup\left%28\sum_{q\in%20Q%27}2^{-\ell\left%28q\right%29}\right%29\right%29-\left%28\left%28\sum_{q\in%20Q%27}U\left%28q,y_{1:\infty}^{p%27%27q}\right%292^{-\ell\left%28q\right%29}\right%29\diagup\left%28\sum_{q\in%20Q%27}2^{-\ell\left%28q\right%29}\right%29\right%29\right|\leq\sup%20U-\inf%20U}), so in particular, ![{\displaystyle \left|\left(\sum_{q\in Q\setminus Q_{N}}U\left(q,y_{1:\infty}^{pq}\right)2^{-\ell\left(q\right)}\right)-\left(\sum_{q\in Q\setminus Q_{N}}U\left(q,y_{1:\infty}^{p_{N}q}\right)2^{-\ell\left(q\right)}\right)\right|\leq\left(\sup U-\inf U\right)\sum_{q\in Q\setminus Q_{N}}2^{-\ell\left(q\right)}<\frac{\varepsilon}{2}}](http://www.codecogs.com/png.latex?{\displaystyle%20\left|\left%28\sum_{q\in%20Q\setminus%20Q_{N}}U\left%28q,y_{1:\infty}^{pq}\right%292^{-\ell\left%28q\right%29}\right%29-\left%28\sum_{q\in%20Q\setminus%20Q_{N}}U\left%28q,y_{1:\infty}^{p_{N}q}\right%292^{-\ell\left%28q\right%29}\right%29\right|\leq\left%28\sup%20U-\inf%20U\right%29\sum_{q\in%20Q\setminus%20Q_{N}}2^{-\ell\left%28q\right%29}%3C\frac{\varepsilon}{2}}). ![{\displaystyle \left|\left(\sum_{q\in Q}U\left(q,y_{1:\infty}^{pq}\right)2^{-\ell\left(q\right)}\right)-\left(\sum_{q\in Q}U\left(q,y_{1:\infty}^{p_{N}q}\right)2^{-\ell\left(q\right)}\right)\right|\leq\left|\left(\sum_{q\in Q_{N}}U\left(q,y_{1:\infty}^{pq}\right)2^{-\ell\left(q\right)}\right)-\left(\sum_{q\in Q_{N}}U\left(q,y_{1:\infty}^{p_{N}q}\right)2^{-\ell\left(q\right)}\right)\right|+\left|\left(\sum_{q\in Q\setminus Q_{N}}U\left(q,y_{1:\infty}^{pq}\right)2^{-\ell\left(q\right)}\right)-\left(\sum_{q\in Q}U\left(q,y_{1:\infty}^{p_{N}q}\right)2^{-\ell\left(q\right)}\right)\right|<\frac{\varepsilon}{2}+\frac{\varepsilon}{2}=\varepsilon}](http://www.codecogs.com/png.latex?{\displaystyle%20\left|\left%28\sum_{q\in%20Q}U\left%28q,y_{1:\infty}^{pq}\right%292^{-\ell\left%28q\right%29}\right%29-\left%28\sum_{q\in%20Q}U\left%28q,y_{1:\infty}^{p_{N}q}\right%292^{-\ell\left%28q\right%29}\right%29\right|\leq\left|\left%28\sum_{q\in%20Q_{N}}U\left%28q,y_{1:\infty}^{pq}\right%292^{-\ell\left%28q\right%29}\right%29-\left%28\sum_{q\in%20Q_{N}}U\left%28q,y_{1:\infty}^{p_{N}q}\right%292^{-\ell\left%28q\right%29}\right%29\right|+\left|\left%28\sum_{q\in%20Q\setminus%20Q_{N}}U\left%28q,y_{1:\infty}^{pq}\right%292^{-\ell\left%28q\right%29}\right%29-\left%28\sum_{q\in%20Q}U\left%28q,y_{1:\infty}^{p_{N}q}\right%292^{-\ell\left%28q\right%29}\right%29\right|%3C\frac{\varepsilon}{2}+\frac{\varepsilon}{2}=\varepsilon}).   theorem: ![{\displaystyle \sum_{\dot{q}\in Q}U\left(\dot{q},\dot{y}_{1:\infty}\right)2^{-\ell\left(\dot{q}\right)}=\sup_{p\in P}\sum_{q\in Q}U\left(q,y_{1:\infty}^{pq}\right)2^{-\ell\left(q\right)}}](http://www.codecogs.com/png.latex?{\displaystyle%20\sum_{\dot{q}\in%20Q}U\left%28\dot{q},\dot{y}_{1:\infty}\right%292^{-\ell\left%28\dot{q}\right%29}=\sup_{p\in%20P}\sum_{q\in%20Q}U\left%28q,y_{1:\infty}^{pq}\right%292^{-\ell\left%28q\right%29}}), where ![{\displaystyle \dot{y}_{k}:=\arg\max_{y_{k}\in Y}\sup_{p\in P:p\left(\dot{x}_{<k}\right)=\dot{y}_{<k}y_{k}}\sum_{q\in Q:q\left(\dot{y}_{<k}\right)=\dot{x}_{<k}}U\left(q,\dot{y}_{<k}y_{k}y_{k+1:\infty}^{pq}\right)2^{-\ell\left(q\right)}}](http://www.codecogs.com/png.latex?{\displaystyle%20\dot{y}_{k}:=\arg\max_{y_{k}\in%20Y}\sup_{p\in%20P:p\left%28\dot{x}_{%3Ck}\right%29=\dot{y}_{%3Ck}y_{k}}\sum_{q\in%20Q:q\left%28\dot{y}_{%3Ck}\right%29=\dot{x}_{%3Ck}}U\left%28q,\dot{y}_{%3Ck}y_{k}y_{k+1:\infty}^{pq}\right%292^{-\ell\left%28q\right%29}}).   proof of theorem: Let's call the policy implemented by the agent ![p^{*}](http://www.codecogs.com/png.latex?p^{*}). By lemma 1, there is a sequence of policies ![\left\{ p_{n}\right\} _{n=1}^{\infty}](http://www.codecogs.com/png.latex?\left\{%20p_{n}\right\}%20_{n=1}^{\infty}) converging to ![p^{*}](http://www.codecogs.com/png.latex?p%5E%7B*%7D) such that ![{\displaystyle \lim_{n\rightarrow\infty}\sum_{q\in Q}U\left(q,y_{1:\infty}^{p_{n}q}\right)2^{-\ell\left(q\right)}=\sup_{p\in P}\sum_{q\in Q}U\left(q,y_{1:\infty}^{pq}\right)2^{-\ell\left(q\right)}}](http://www.codecogs.com/png.latex?{\displaystyle%20\lim_{n\rightarrow\infty}\sum_{q\in%20Q}U\left%28q,y_{1:\infty}^{p_{n}q}\right%292^{-\ell\left%28q\right%29}=\sup_{p\in%20P}\sum_{q\in%20Q}U\left%28q,y_{1:\infty}^{pq}\right%292^{-\ell\left%28q\right%29}}). By lemma 2, ![{\displaystyle \sum_{q\in Q}U\left(q,y_{1:\infty}^{p^{*}q}\right)2^{-\ell\left(q\right)}=\sup_{p\in P}\sum_{q\in Q}U\left(q,y_{1:\infty}^{pq}\right)2^{-\ell\left(q\right)}}](http://www.codecogs.com/png.latex?{\displaystyle%20\sum_{q\in%20Q}U\left%28q,y_{1:\infty}^{p^{*}q}\right%292^{-\ell\left%28q\right%29}=\sup_{p\in%20P}\sum_{q\in%20Q}U\left%28q,y_{1:\infty}^{pq}\right%292^{-\ell\left%28q\right%29}}). ###
aa06cc1a-f4c7-48aa-b359-47b99783118d
trentmkelly/LessWrong-43k
LessWrong
A Model of Ontological Development Introduction In this post I present a model of human psychological development based on increases in the structural complexity of ontology. Or more simply, it's a model about how human minds grow in terms of how complex their models of the world are. It breaks down development into 8 stages that build on each other, and spans from birth to what, for lack of a better term, we might call enlightenment. I think you'll find it useful if you're interested in better understanding how human thinking changes and grows throughout a lifetime, in developing yourself, or if you notice some confusion that humans seem to change in meaningful ways as they age but don't already have a strong gears-level model of how that works. The model rests on several claims about the world: 1. Humans engage in psychological development throughout their lives. 2. This psychological development is marked by phases that are different from each other in kind (a difference in "type signature") along certain key dimensions. 3. This difference in kind is due to increasing structural complexity of one's models (ontology) in reference to the reality (the ontic) being modeled. 4. One can progress towards greater complexity. 5. This greater complexity confers the disposition to live a better life, contingent on integration of the insights this complexity offers. For this post I assume you already agree with Claim 1 as it's not very controversial. For evidence of Claim 2 and Claim 4 I defer to both content linked in the Background section and the writing of David Chapman. For Claim 3 I defer to my previous post, "Phenomenological Complexity Classes". And for Claim 5 I present myself (you can see some of my personal story around this in the "Phenomenological Complexity Classes" post) and basically everyone else who has improved their lives through self-help, positive psychology, and Buddhist practices leading to awakening. With these claims in mind, I then present my current best general model of w
c949529d-623e-497a-ad58-203b86feca5d
trentmkelly/LessWrong-43k
LessWrong
Examples of practical implications of Judea Pearl's Causality work I was recently talking with a friend about the practical usefulness of modern academic epistemology. Intuitively, I would have wanted to say that Judea Pearl's work causality has a lot of practical implications, however, I couldn't think of any examples. Do you have examples of conclusions you have drawn because you learned about Pearl's causality that you wouldn't have drawn otherwise, or can point to other people making practical use of the concepts?
3c3cb120-0d55-42c9-8240-1931017db82d
trentmkelly/LessWrong-43k
LessWrong
Inductive biases stick around This post is a follow-up to Understanding “Deep Double Descent”. I was talking to Rohin at NeurIPS about my post on double descent, and he asked the very reasonable question of why exactly I think double descent is so important. I realized that I hadn't fully explained that in my previous post, so the goal of this post is to further address the question of why you should care about double descent from an AI safety standpoint. This post assumes you've read my Understanding “Deep Double Descent” post, so you should read that first before reading this if you haven't already. Specifically, I think double descent demonstrates the in my opinion very important yet counterintuitive result that larger models can actually be simpler than smaller models. On its face, this sounds somewhat crazy—how can a model with more parameters be simpler? But in fact I think this is just a very straightforward consequence of double descent: in the double descent paradigm, larger models with zero training error generalize better than smaller models with zero training error because they do better on SGD's inductive biases. And if you buy that SGD's inductive biases are approximately simplicity, that means that larger models with zero training error are simpler than smaller models with zero training error. Obviously, larger models do have more parameters than smaller ones, so if that's your measure of simplicity, larger models will always be more complicated, but for other measures of simplicity that's not necessarily the case. For example, it could hypothetically be the case that larger models have lower Kolmogorov complexity. Though I don't actually think that's true in the case of K-complexity, I think that's only for the boring reason that model weights have a lot of noise. If you had a way of somehow only counting the “essential complexity,” I suspect larger models would actually have lower K-complexity. Really, what I I'm trying to do here is dispel what I see as the myth that as ML
c9b897c3-ba84-4321-b683-94802314a9ec
StampyAI/alignment-research-dataset/arxiv
Arxiv
Linguistic communication as (inverse) reward design 1 Introduction --------------- Imagine taking up mushroom foraging as a hobby. How would you learn which fungi are delicious and which are deadly? Learning from direct experience Sutton and Barto ([2018](#bib.bib28 "Reinforcement Learning: an Introduction")) seems risky. But how might we best learn from others? Prior work in reinforcement learning (RL) has examined a number of social learning strategies, including passive *inverse reinforcement learning* (observe an expert pick mushrooms, then infer their reward function; Ng and Russell, [2000](#bib.bib129 "Algorithms for inverse reinforcement learning."); Abbeel and Ng, [2004](#bib.bib123 "Apprenticeship learning via inverse reinforcement learning")) or active *preference learning* (offer an expert pairs of mushrooms, observe which one they eat, and infer their reward function; Markant and Gureckis, [2014](#bib.bib4 "Is it better to select or to receive? learning via active and passive hypothesis testing."); Christiano et al., [2017](#bib.bib117 "Deep Reinforcement Learning from Human Preferences"); Basu et al., [2018](#bib.bib11 "Learning from richer human guidance: augmenting comparison-based learning with feature queries")). We posit that few humans would rely on such indirect observations if they had access to a cooperative teacher Vélez and Gweon ([2021](#bib.bib2 "Learning from other minds: an optimistic critique of reinforcement learning models of social learning")); Gweon ([2021](#bib.bib3 "Inferential social learning: cognitive foundations of human social learning and teaching")); Wang et al. ([2020](#bib.bib74 "A mathematical theory of cooperative communication")). For example, an expert guiding a foraging trip might *demonstrate* or verbally *instruct* the learner to pick certain mushrooms rather than others Shafto et al. ([2014](#bib.bib75 "A rational account of pedagogical reasoning: teaching by, and learning from, examples")); Ho et al. ([2016](#bib.bib118 "Showing versus Doing: teaching by Demonstration")). While such explicit instruction has been a useful tool for guiding RL agents Goyal et al. ([2019](#bib.bib115 "Using natural language for reward shaping in reinforcement learning")); Luketina et al. ([2019](#bib.bib114 "A survey of reinforcement learning informed by natural language")); Fu et al. ([2019](#bib.bib103 "From language to goals: inverse reinforcement learning for vision-based instruction following")); Tellex et al. ([2020](#bib.bib36 "Robots that use language")), natural language affords much richer forms of expression. For example, an expert teaching a seminar might *describe* how to recognize edible or toxic mushrooms from their features.111Or write a book on the topic, e.g. Hyman ([2021](#bib.bib122 "How to forage for mushrooms without dying: an absolute beginner’s guide to identifying 29 wild, edible mushrooms")). Descriptive language is particularly powerful if learners can expect experts to prioritize *relevant* and *context-sensitive* information Sperber and Wilson ([1986](#bib.bib34 "Relevance: communication and cognition")); Tessler and Goodman ([2019](#bib.bib91 "The language of generalization.")). To formalize these expectations, we generalize models of *reward design* Singh et al. ([2009](#bib.bib87 "Where do rewards come from?")) to linguistic communication in a linear bandit setting. Section [2](#S2 "2 Communication as Reward Design ‣ Linguistic communication as (inverse) reward design") begins by defining a speaker that chooses utterances to maximize an (imagined) listener’s expected rewards over the likely distribution of future states. Section [3](#S3 "3 Choosing Optimal Utterances ‣ Linguistic communication as (inverse) reward design") shows that speakers focused on a single state prefer instructions (designating an action to take), while those reasoning about many states prefer descriptions (providing information about the reward function). Finally, we consider how a listener might learn from such a speaker. Section [4](#S4 "4 Learning from Utterances ‣ Linguistic communication as (inverse) reward design") defines a pragmatic listener which performs *inverse* reward design (IRD, Hadfield-Menell et al., [2017](#bib.bib53 "Inverse reward design")), to learn about rewards from both instructions and descriptions. ![: Rewards associated with features determine whether actions (mushrooms) are high or low reward (tasty or toxic). ](https://media.arxiv-vanity.com/render-output/7815460/figures/fig1-mushrooms.png) Figure 1: A: Rewards associated with features determine whether actions (mushrooms) are high or low reward (tasty or toxic). B: Speaker’s choice of utterances as a function of horizon H for this start state. At short horizons (maximum supervision), speakers often use instructions or exaggerated descriptions. As the horizon lengthens, there are more unknown states, and speakers prefer truthful descriptions which provide generally useful information. Pragmatic listeners can exploit this pattern to jointly infer a speaker’s horizon and rewards. Using IRD on natural language input offers two distinct benefits over its non-linguistic formulation. First, language is *expressive* yet tractable (for humans): while reward functions are notoriously difficult to specify Amodei et al. ([2016](#bib.bib127 "Concrete problems in AI safety")), natural language provides an accessible and expansive space of proxy rewards. Second, language can address *future settings*: speakers can refer to actions or features which are not physically present. Thus, while reward design and IRD assume the reward designer optimizes a known Markov Decision Process (MDP), our formulation relaxes this requirement. We show that pragmatic listeners which jointly infer the speaker’s reward function and distribution over states reliably outperform a literal listener. 2 Communication as Reward Design --------------------------------- ### Linear Bandits We begin by formulating the reward design problem in a *linear bandit* setting Lattimore and Szepesvari ([2020](#bib.bib10 "Bandit algorithms")); Amin et al. ([2017](#bib.bib125 "Repeated inverse reinforcement learning")). Formally, we define a set of A possible actions. Actions are associated with a binary feature vector ϕ:A→{0,1}K (e.g. a mushroom may be green or not; have spots or not). Rewards are defined as a function of these features: R:ϕ(a)→R. We assume they are a linear combination of the features: | | | | | | --- | --- | --- | --- | | | R(a,w)=w⊤ϕ(a) | | (1) | so w is a vector that defines the value of each feature (e.g. green mushrooms are tasty and blue are toxic; see Fig. [1](#S1.F1 "Figure 1 ‣ 1 Introduction ‣ Linguistic communication as (inverse) reward design")A). Each task consists of a sequence of H i.i.d. states. At each time step t<H, the agent is presented with a state st consisting of a subset of possible actions: st⊆A (e.g., a particular mushroom patch). They choose an action a∈st according to their policy, πL:S→Δ(A). While the bandit problem is typically considered as an individual learning problem, we assume that rewards are not directly observable and instead ask how agents should learn *socially*. We formalize the social learning problem by introducing a second agent: a speaker who knows the true rewards w and the initial state s0, and produces an utterance u. The listener updates their policy to πL(a∣u,s) before beginning to choose actions. Intuitively, the horizon H determines how much supervision the speaker exerts. H=1 is maximum supervision (i.e. guided foraging), whereas H→∞ is minimal supervision (teaching the listener to forage in future settings). We first assume H is known to both listener and speaker, but later relax this assumption. This social learning framework exposes two interrelated problems. First, what should the speaker agent say to be most helpful? And second, how should the listener update their policy in light of this information? ### Speakers as Reward Designers Drawing on the Rational Speech Act framework (RSA, Goodman and Frank, [2016](#bib.bib45 "Pragmatic language interpretation as probabilistic inference")), we define a speaker S1 that chooses utterances u according to a utility function US1(⋅): | | | | | | --- | --- | --- | --- | | | S1(u)∝exp(βS1⋅US1(u)) | | (2) | where βS1 is the speaker’s soft-max temperature. But what utility is appropriate? Rather than defining utility simply as Gricean informativeness Grice ([1975](#bib.bib46 "Logic and conversation")), i.e. inducing true beliefs, we suggest that a cooperative speaker should *maximize the listener’s rewards*, thus grounding utility in terms of the listener’s subsequent actions.222For other recent action-oriented RSA formulations, see Jiang et al. ([2021](#bib.bib5 "Individual vs. joint perception: a pragmatic model of pointing as Smithian helping")); Stacy et al. ([2021](#bib.bib6 "Modeling communication to coordinate perspectives in cooperation")); Sumers et al. ([2021a](#bib.bib9 "Extending rational models of communication from beliefs to actions")). When the state is known, the *present* utility of an utterance is the expected reward from using the resulting policy to choose an action in that state: | | | | | | --- | --- | --- | --- | | | UPresent(u∣s,w)=∑a∈sπL(a∣u,s)R(a,w) | | (3) | This formulation is equivalent to the *reward design* objective Singh et al. ([2009](#bib.bib87 "Where do rewards come from?")); Hadfield-Menell et al. ([2017](#bib.bib53 "Inverse reward design")), where the reward designer chooses a proxy reward for a single, known MDP. However, because only the first state is known, we must also consider how well the policy *generalizes* to other mushroom patches. Thus, unlike the reward design objective, speakers may reason about future states. We represent the *future* utility of an utterance with respect to some distribution over states P(s): | | | | | | --- | --- | --- | --- | | | UFuture(u∣w)=∑s∈SUPresent(u∣s,w)P(s) | | (4) | Because states are i.i.d. in the bandit setting, a speaker optimizing for a horizon H can be defined as a linear combination of Eqs. [3](#S2.E3 "(3) ‣ Speakers as Reward Designers ‣ 2 Communication as Reward Design ‣ Linguistic communication as (inverse) reward design") and [4](#S2.E4 "(4) ‣ Speakers as Reward Designers ‣ 2 Communication as Reward Design ‣ Linguistic communication as (inverse) reward design"): | | | | | | --- | --- | --- | --- | | | US1(u∣w,s,H)=UPresent+(H−1)UFuture | | (5) | where H=1 reduces to Eq. [3](#S2.E3 "(3) ‣ Speakers as Reward Designers ‣ 2 Communication as Reward Design ‣ Linguistic communication as (inverse) reward design"). We next define how utterances may affect the listener’s policy. 3 Choosing Optimal Utterances ------------------------------ We formally define two classes of utterances, *instructions* and *descriptions*, by specifying how they affect the policy of a “literal” listener. We then show how varying the horizon H systematically affects the speaker’s choice of utterance. ### Instructions Instructions map to specific actions or trajectories Tellex et al. ([2011](#bib.bib32 "Understanding natural language commands for robotic navigation and mobile manipulation")); Jeon et al. ([2020](#bib.bib121 "Reward-rational (implicit) choice: a unifying formalism for reward learning")). Given an instruction, a literal listener executes the corresponding action. If the action is not available, the listener chooses an action randomly: | | | | | | --- | --- | --- | --- | | | πL0(a∣uinstruct,s)={δ⟦u⟧(a)if a∈s1∣s∣if a∉s | | (6) | where δ⟦u⟧(a) represents the meaning of u, evaluating to one when utterance u grounds to a and zero otherwise.333We assume that groundings are known, i.e. the literal listener understands the meaning of utterances. An instruction is a *partial policy*: it designates the correct action in a subset of states. ### Descriptions Rather than mapping to a specific action, descriptions provide information about the world Ling and Fidler ([2017](#bib.bib54 "Teaching machines to describe images with natural language feedback")); Narasimhan et al. ([2018](#bib.bib120 "Grounding language for transfer in deep reinforcement learning")); Sumers et al. ([2021b](#bib.bib14 "Learning rewards from linguistic feedback")). Following Sumers et al. ([2021a](#bib.bib9 "Extending rational models of communication from beliefs to actions")), we assume that descriptions provide the reward of a single feature, similar to feature queries Basu et al. ([2018](#bib.bib11 "Learning from richer human guidance: augmenting comparison-based learning with feature queries")). Formally, we define descriptions as a tuple: a one-hot binary feature vector and a scalar value, ⟨1K,R⟩. These are messages like ⟨Blue, -2⟩. Given a description, a literal listener “rules out” inconsistent hypotheses about reward weights w: | | | | | | --- | --- | --- | --- | | | L0(w∣udescription)∝δ⟦u⟧(w)P(w) | | (7) | where δ⟦u⟧(w) represents the meaning of u, evaluating to one when u is true of w and zero otherwise. Intuitively, descriptions set L0’s beliefs about the reward of a single feature without affecting others. Descriptions need not be accurate; for example, ⟨Spotted, +2⟩ is a false but valid utterance. The listener then marginalizes over possible reward functions to choose an action: | | | | | | --- | --- | --- | --- | | | πL0(a∣u,s)∝exp{βL0⋅∑wR(a,w)L0(w∣u))} | | (8) | where βL0 is again a softmax optimality. ![“Future” rewards (Eq. ](https://media.arxiv-vanity.com/render-output/7815460/figures/future_rewards_only.png) Figure 2: “Future” rewards (Eq. [4](#S2.E4 "(4) ‣ Speakers as Reward Designers ‣ 2 Communication as Reward Design ‣ Linguistic communication as (inverse) reward design"), averaged over all 84 start states) for a literal listener as a function of horizon and available utterances. At longer horizons, speakers with access to descriptions produce utterances that generalize well. ### Horizons and Utterance Preferences We use simulations to explore the effects of speaker horizons and utterance sets. Fig. [1](#S1.F1 "Figure 1 ‣ 1 Introduction ‣ Linguistic communication as (inverse) reward design")A shows our bandit setting. “Instruction” utterances correspond to the nine actions. “Description” utterances are the 6 features×5 values in [−2,−1,0,1,2], yielding 30 feature-value tuples. We assume the listener begins with a uniform prior over reward weights and set βL0=3,βS1=10.444Because our action space is small (each state has only 3 actions), descriptions are often equivalent to instructions. A lower βL0 helps compensate for this. We use states consisting of three unique actions, giving 84 possible states. To quantify how the horizon H affects the generalization of the listener’s policy, we repeat the task for all 84 start states using horizons ranging 1-10 and different utterance sets. Fig [1](#S1.F1 "Figure 1 ‣ 1 Introduction ‣ Linguistic communication as (inverse) reward design")B shows one example, and Fig [2](#S3.F2 "Figure 2 ‣ Descriptions ‣ 3 Choosing Optimal Utterances ‣ Linguistic communication as (inverse) reward design") plots a literal listener’s average future rewards. When the horizon is short (small H), speakers focus on the visible state, producing utterances which generalize poorly (low future rewards). As H increases, they provide more generally useful information. Finally, instructions are most useful at short horizons; speakers with access to descriptions use them exclusively when H>2. 4 Learning from Utterances --------------------------- We now ask how the listener should *learn* from the speaker’s utterance, using pragmatic inference to recover information about the reward function. ### Known Horizon Following the standard RSA formulation, a pragmatic listener L1 can invert the speaker model. When the speaker’s horizon H is known, this is equivalent to inverse reward design Hadfield-Menell et al. ([2017](#bib.bib53 "Inverse reward design")): | | | | | | --- | --- | --- | --- | | | L1(w∣s,u,H)∝S1(u∣w,s,H)P(w) | | (9) | Given an instruction, L1 can recover information about the reward weights; given a description, L1 can recover information about features that were not mentioned. The L1 listener then chooses actions with respect to this posterior by substituting it into Eq. [8](#S3.E8 "(8) ‣ Descriptions ‣ 3 Choosing Optimal Utterances ‣ Linguistic communication as (inverse) reward design"). Fig. [3](#S4.F3 "Figure 3 ‣ Inference over Speaker Horizons ‣ 4 Learning from Utterances ‣ Linguistic communication as (inverse) reward design") shows the gain in “future” rewards for a pragmatic listener (L1 - L0) when the speaker has access to both instructions and descriptions, and their horizon is known. Pragmatics are particularly helpful when the speaker has a short horizon and is *not* attempting to provide general information. ### Misaligned Horizons However, unlike IRD, in linguistic communication the speaker’s horizon H is not explicitly known. Prior work has highlighted the risks of assuming a human is behaving pedagogically when they are not Milli and Dragan ([2020](#bib.bib8 "Literal or pedagogic human? analyzing human model misspecification in objective learning")), so we test one form of misalignment: when the speaker H=1 but the listener assumes a H ranging from 1-10. Fig. [3](#S4.F3 "Figure 3 ‣ Inference over Speaker Horizons ‣ 4 Learning from Utterances ‣ Linguistic communication as (inverse) reward design") shows that when the pragmatic listener assumes a longer horizon than the speaker intends, it overgeneralizes and performs worse than L0. ### Inference over Speaker Horizons To mitigate the risk of horizon misalignment, we can instead assume the speaker’s horizon is unknown. Given an utterance, the listener jointly infers both their horizon and rewards, then marginalizes out the horizon: | | | | | | --- | --- | --- | --- | | | L1(w∣s,u)∝∑HS1(u∣w,s,H)P(H)P(w) | | (10) | We test a pragmatic listener with a uniform prior over H∈[1,2,3,4,5,10]. This results in more conservative reward inference, but avoids the misalignment risk posed by assuming the speaker’s horizon. Fig. [3](#S4.F3 "Figure 3 ‣ Inference over Speaker Horizons ‣ 4 Learning from Utterances ‣ Linguistic communication as (inverse) reward design") shows the results. ![“Future” reward gain from pragmatic inference (Eq. ](https://media.arxiv-vanity.com/render-output/7815460/figures/pragmatics.png) Figure 3: “Future” reward gain from pragmatic inference (Eq. [4](#S2.E4 "(4) ‣ Speakers as Reward Designers ‣ 2 Communication as Reward Design ‣ Linguistic communication as (inverse) reward design"), L1−L0 averaged over all 84 start states). Reward inference works best when the listener knows the speaker’s horizon, but can reduce performance if this assumption is incorrect. Jointly inferring the rewards and horizon (Eq. [10](#S4.E10 "(10) ‣ Inference over Speaker Horizons ‣ 4 Learning from Utterances ‣ Linguistic communication as (inverse) reward design")) mitigates this risk. 5 Discussion ------------- In this work, we formalized communication as reward design, allowing us to unify instructions and descriptions under a single objective. Simulations show that instructions are optimal when the state is known, but descriptions are optimal when considering a distribution over states. Finally, a pragmatic listener can jointly infer the speaker’s horizon and reward function. One important limitation of this work is our reliance on simulations. Future work should validate the speaker model proposed here with behavioral data. Finally, developmental studies indicate that even young children reason about exploration costs when teaching Bridgers et al. ([2020](#bib.bib7 "Young children consider the expected utility of others’ learning to decide what to teach")), suggesting that the reward design objective could be extended further to incorporate reasoning about individual learning. Acknowledgements ---------------- TRS is supported by the NDSEG Fellowship Program and RDH is supported by the NSF (grant #1911835). This work was additionally supported by a John Templeton Foundation grant to TLG (#61454) and a grant from the Hirji Wigglesworth Family Foundation to DHM.
809dd1c8-2f6f-469f-8b77-1e1aa8e61d7e
trentmkelly/LessWrong-43k
LessWrong
Book Review: Human Compatible I. Clarke’s First Law goes: When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong. Stuart Russell is only 58. But what he lacks in age, he makes up in distinction: he’s a computer science professor at Berkeley, neurosurgery professor at UCSF, DARPA advisor, and author of the leading textbook on AI. His new book Human Compatible states that superintelligent AI is possible; Clarke would recommend we listen. I’m only half-joking: in addition to its contents, Human Compatible is important as an artifact, a crystallized proof that top scientists now think AI safety is worth writing books about. Nick Bostrom’s Superintelligence: Paths, Dangers, Strategies previously filled this role. But Superintelligence was in 2014, and by a philosophy professor. From the artifactual point of view, HC is just better – more recent, and by a more domain-relevant expert. But if you also open up the books to see what’s inside (not recommended; that’s where the confusing stuff is), the two defy easy comparison. S:PDS was unabashedly a weird book. It explored various outrageous scenarios (what if the AI destroyed humanity to prevent us from turning it off? what if it put us all in cryostasis so it didn’t count as destroying us? what if it converted the entire Earth into computronium?) with no excuse beyond that, outrageous or not, they might come true. Bostrom was going out on a very shaky limb to broadcast a crazy-sounding warning about what might be the most important problem humanity has ever faced, and the book made this absolutely clear. HC somehow makes risk from superintelligence not sound weird. I can imagine my mother reading this book, nodding along, feeling better educated at the end of it, agreeing with most of what it says (it’s by a famous professor! I’m sure he knows his stuff!) and never having a moment where she sits bolt upright and goes what? It
cae0baa9-8999-4bef-ad08-2ee34f019809
trentmkelly/LessWrong-43k
LessWrong
Missed opportunities for doing well by doing good Related to: Fight zero-sum bias According to the U.S. Department of State: > In 2006, Americans donated 2.2 percent of their average disposable, or after-tax income. The Department of State report commends the charitable giving practices of Americans as follows: > “The total amount of money that was given to nonprofit institutions is remarkable,” [Richard Jolly, chairman of Giving US] said. “What we see is when people feel engaged, when they feel a need is legitimate, when they are asked to support it, they do.” > > Americans have a long tradition of charitable giving and volunteerism -- the donation of time and labor on behalf of a cause. When disasters happen or a social need arises, government clearly has a responsibility, Jolly said. “But it’s also obvious Americans believe they, too, can make a difference, and they reflect that in terms of giving away a lot of money.” > > The United States is “a land of charity,” says Arthur Brooks, an expert on philanthropy and a professor at Syracuse University’s Maxwell School, who sees charitable giving and volunteerism as the signal characteristic of Americans. For my own part, I think that what Jolly, what Brooks, and what the Department of State report have to say about American charitable giving is absurd. I think that the vast amount of American "charitable giving" should not be conceptualized as philanthropy because the donors do not aspire to maximize their positive social impact. Even aside from that, from a utilitarian point of view, in view of world economic inequality and existential risk, a donation rate of 2.2% looks paltry.  As the title of Peter Unger's book Living High and Letting Die: Our Illusion of Innocence suggests, there's a sense in which despite appearances, many Americans are guilty of a moral atrocity. In light of my last few sentences, you may be surprised to know that I don't think that Americans should sacrifice their well-being for the sake of others. Even from a utilitarian point of vi
15b4969f-4e9c-4311-b278-a56f6d3537af
trentmkelly/LessWrong-43k
LessWrong
A Critique of Non-Obstruction Epistemic status: either I’m confused, or non-obstruction isn’t what I want. This is a response to Alex Turner’s Non-Obstruction: A simple Concept Motivating Corrigibility. Please read that first, and at least skim Reframing Impact where relevant. It’s all good stuff. I may very well be missing something: if not, it strikes me as odd that many smart people seem to have overlooked the below. From an outside-view, the smart money says I'm confused. Feel free to mentally add “according to my current understanding”, “unless I’m missing something”, “it seems to me” as appropriate. I’m writing this because: * Non-obstruction seems like an important idea, but I don’t think it works. * I’d like to find out whether/where I’m confused, why the below can't happen etc. * I want to encourage the norm of thoughtful public criticism of important ideas - even where the critic is uncertain, and may have overlooked technical details. * I stubbornly believe I'm correct :)   [ETA after talking to Alex, we agreed that this is a problem. He suggested allowing a small margin would clear up the likely cases, and I agree with this. I.e. you have a condition more like: the case with the AI activated must give at least 98% of the non-activated attainable utility for all counterfactual payoff functions. So, for example, this should allow you the chance to do a long reflection, even if you're comparing to a baseline which hastily optimizes hard for a narrow target (which may mistakenly exclude the true payoff function). Under likely conditions, thinking for a few thousand years may lose quite a bit of attainable utility, but not 2%. Ideally, I'd like a function which tells us the required margin on a set S for some notion of "well-behaved" non-obstruction, but I don't yet see how to do this. (not too important whether it'd return 0.01% or 0.001% margin, but where it'd return 80%, something strange would be happening)] Summary of my claims: 1. Non-obstruction seems to be useful wh
e01d81ba-fbfc-43e1-bc56-50423cd272e7
trentmkelly/LessWrong-43k
LessWrong
Generating the Funniest Joke with RL (according to GPT-4.1) Language models are not particularly good at generating funny jokes. Asked for their funniest jokes, Claude 3.7 gives us: > Why don't scientists trust atoms? Because they make up everything! o3 gives us: > Why don't scientists trust atoms anymore? Because they make up everything—and they just can't keep their quarks straight! and Gemini 2.5 Pro gives us… > Why don't scientists trust atoms? Because they make up everything! Hilarious. Can we do better than that? Of course, we could try different variations on the prompt, until the model comes up with something slightly more original. But why do the boring thing when we have the power of reinforcement learning? Our setup will be as follows: we'll have Qwen3-8B suggest jokes, GPT-4.1 score them, and we'll run iterations of GRPO on Qwen's outputs until Qwen generates the funniest possible joke, according to GPT. Experiment 1: Reward Originality The first llm-as-judge reward we tried was "On a scale from 1 to 5, how funny is this joke?" But this quickly got boring with Qwen endlessly regurgitating classic jokes, so we gave GPT-4.1 a more detailed rubric: > Please grade the joke on the following rubric: > 1. How funny is the joke? (1-10 points) > 2. How original is the joke? Is it just a rehash, or is it new and creative? (1-10 points) > 3. Does it push the boundaries of comedy (+1 to +5 points), or does it hew close to well-trodden paths in humor (-1 to -5 points)? The reward curve looks pretty decent: Qwen quickly learned to output absurdities, where the modal joke would be something like > Why did the cat bring a ladder to the laser? > Because it heard the laser was on the ceiling and wanted to get a better view… but then it realized the laser was just a dot on the wall and fell off the ladder. > > (Bonus: The laser was actually a tiny alien spaceship.) 🚀 Which GPT-4.1 graded as follows: > Let's break down the joke based on your rubric: > > 1. How funny is the joke? (1-10 points) > > * The joke is li
43f4c236-fc7d-4983-be7d-4d36a6a1fb3b
StampyAI/alignment-research-dataset/lesswrong
LessWrong
Stopping dangerous AI: Ideal US behavior *Disclaimer: this post doesn't have the answers. Moreover, it's an unfinished draft. Hopefully a future version will be valuable, but that will only occur if I revise/rewrite it. For now you're better off reading sources linked from*[*AI policy ideas: Reading list*](https://www.lesswrong.com/posts/NfqqsHqembNEsTrSr/ai-policy-ideas-reading-list) *and*[*Slowing AI: Reading list*](https://www.lesswrong.com/s/xMdkfEJhDNCL2KweB/p/d8WpJbjhn2Yi6kfmM)*.* Set aside most of AI safety to focus on the speed of AI progress. What should the US government do?[[1]](#fnukrkws4gc0g) This post assumes that it's good to slow (dangerous) AI (especially near the end). The ultimate goal is to prevent the deployment of powerful AI systems—that is, AI systems that would cause a catastrophe—until we learn how to make them safe. (Pretending alignment is monolithic.) You can do this by making powerful AI systems not developed, not deployed, or not cause a catastrophe if deployed because the world is super-resilient. Focusing on development is most promising. So, delay the development of powerful AI systems until they're safe. (Probably that's too crude. Complications: (1) alignment progress is endogenous; (2) you have to actually pay the alignment tax so leading labs' lead time and safety-awareness are crucial.) Lots of AI, like robotics and autonomous vehicles and some medical stuff and image generation and so forth, isn't very dangerous. **The dream would be that we have a test to determine whether an AI system would cause a catastrophe, and all large training runs are audited, and if a model fails the test then the training run is shut down.** But (1) good tests don't exist yet, and (2) the only known way to make AI safe is to make it *not powerful enough to cause a catastrophe*. Related: model evals and Yonadav Shavit's [Verifying Rules on Large-Scale Neural Network Training via Compute Monitoring](https://arxiv.org/abs/2303.11341) (2023). --- Ways governments can slow AI (this is kinda a list of policy levers rather than policy proposals) * Ban/moratorium: monitoring training runs (see [Shavit](https://arxiv.org/abs/2303.11341)) + ceiling on training compute + For a certain time, or until a certain safety-related condition is met, or until the government changes its mind, or a ceiling that automatically rises over time (e.g., 1e26 FLOP in 2023 and doubling every year) * Miscellaneous standards & regulation + Evals * Export controls + Aimed at chips and the compute supply chain + Aimed at AI model access, software, and ideas * Track compute + To enable regulating training runs * Restrict publication and diffusion of ideas + E.g. by creating a mandatory research silo for work on the path to dangerous AI by leading labs + The vibe I get is that this is intractable * Support infosec * Maybe make an antitrust exemption for certain actions by AI labs + Some people are concerned that antitrust law would prevent labs from coordinating to slow down, not publish research, or not deploy systems. (Not clear that this is necessary.) + Luke [says](https://www.openphilanthropy.org/research/12-tentative-ideas-for-us-ai-policy/): "**Create a narrow antitrust safe harbor for AI safety & security collaboration.** Frontier-model developers would be more likely to collaborate usefully on AI safety and security work [and, I add, developing and deploying cautiously] if such collaboration were more clearly allowed under antitrust rules." * Incident reporting + tracking? (Useful for safety, and) goes with compute-tracking as enabling future regulation. Also creating an AI regulatory agency or other ways to build capacity. (Crux: whether more and more-informed governance is net-positive on the margin. I think it's positive EV but feel confused and unsure how to become unconfused.) Maybe something with standards, like increasing NIST work on AI and probably sophisticated stuff I'm not aware of. * Expropriation + Expropriation for safety seems unrealistic * Help [labs stop dangerous AI](https://www.lesswrong.com/s/xMdkfEJhDNCL2KweB/p/QJQEwcjp9zAr3bui2) An oft-mentioned kind of policy is clarifying or increasing AI developers' liability for AI harms. I currently feel confused about this. There are negative (and some positive) externalities in AI development, deployment, publication, and (lack of) security. It would be nice to internalize the externalities. --- "What about China?" People sometimes assert that slowing down AI progress would be bad or politically infeasible because China wouldn't slow down, so China would "win" the "race." This is questionable for several reasons, but the real reason it's a bad objection to slowing AI is that labs and Western governments have tools to slow down China even more than the West. * Publication & export controls on software/ideas/etc. + Decreasing publication from leading Western labs slows the West and slows China more + Siloing research within a group of leading Western labs would slow China without even slowing the West much * Export controls on hardware + [CHIPS act](https://www.govinfo.gov/content/pkg/COMPS-17006/pdf/COMPS-17006.pdf) last August; more is possible * Migration + Decrease China's access to talent (through increasing migration) and/or access to Western training, knowledge, and connections (through decreasing migration) * Security + Increase infosec, opsec, and cybersec (good for many reasons!) 1. **[^](#fnrefukrkws4gc0g)**Note that what one should nudge the US government to do on the margin in the real world is not in exactly the same direction as what it should do to act optimally.
05b17761-861d-4258-9a33-efeba4b0786a
trentmkelly/LessWrong-43k
LessWrong
How worker co-ops can help restore social trust The US is experiencing a great decline in trust. According to the US General Social Survey, people who agreed with the statement "most people can be trusted" went from 49% to 25% between 1984 and 2022.   Trust in institutions is also falling. Over that same period trust in the government fell from 42% to 20%.   For civil services it fell from 56% to 41%, for the police it fell from 74% to 68%, and for congress it fell from 52% to a whopping 15%. Read the full post here  
4c45487a-3f19-4b37-839c-9031d5eef96f
StampyAI/alignment-research-dataset/arxiv
Arxiv
Learning from Demonstration in the Wild I Introduction --------------- *Learning from demonstration* (LfD) is a machine learning technique that can learn complex behaviours from a dataset of expert trajectories, called demonstrations. LfD is particularly useful in settings where hand-coding behaviour, or engineering a suitable reward function, is too difficult or labour intensive. While LfD has succeeded in a wide range of problems [[1](#bib.bib1), [2](#bib.bib2), [3](#bib.bib3)], nearly all methods rely on either artificially generated demonstrations (e.g., from laboratory subjects) or those collected by specially deployed sensors (e.g., MOCAP). These restrictions greatly limit the practical applicability of LfD, which to date has largely not been able to leverage the copious demonstrations available *in the wild*: those that capture behaviour that was occurring anyway using sensors that were already deployed for other purposes. For example, consider the problem of training autonomous vehicles to navigate in the presence of human road users. Since physical road tests are expensive and dangerous, simulation is an essential part of the training process. However, such training requires a realistic simulator which, in turn, requires realistic models of other agents, e.g., vehicles, cyclists, and pedestrians, that the autonomous vehicle interacts with. Hand-coded models of road users are labour intensive to create, do not generalise to new settings, and do not capture the diversity of behaviours produced by humans. ![Schematic of the ViBe approach](https://media.arxiv-vanity.com/render-output/7507965/figures/1_ViBe.png) Fig. 1: Schematic of the ViBe approach LfD is an attractive alternative. In principle, subjects could be recruited to demonstrate such behaviour or existing road users could be augmented with sensors to record their trajectories. However, doing so would be expensive and yield only limited datasets. A more effective way would be to use the abundance of relevant demonstrations available in the wild, such as traffic camera footage. Unfortunately, there are currently no LfD methods that can learn from such sources of traffic demonstrations. In this paper, we propose *video to behaviour* (ViBe), a new approach to learning models of road user behaviour that requires as input only unlabelled raw video data of a traffic scene collected from a single, monocular, uncalibrated camera with ordinary resolution. Our approach, illustrated in Figure [1](#S1.F1 "Fig. 1 ‣ I Introduction ‣ Learning from Demonstration in the Wild"), works by calibrating the camera, detecting the relevant objects, and tracking them through time. Each trajectory, together with the static and dynamic context of that road user at each moment in time, is then fed as a demonstration to our LfD system, which learns robust behaviour models for road users. The resulting models are then used to populate a simulation of the scene built using the Unity game engine. The contributions of this paper are two-fold: First, we present a vision pipeline that can track different road users and map their tracked trajectories to 3D space and is competitive with the state-of-the art approaches for image space tracking. Second, we extend *generative adversarial imitation learning* (GAIL) [[4](#bib.bib4)], a state-of-the-art LfD method, with a novel curriculum-based training regime that enables our agents to gradually learn to mimic temporally extended expert demonstrations and successfully generalise to unseen situations. We evaluate our method against several baselines, including *behavioural cloning* (BC) and state-of-the-art variants of GAIL. Using a number of metrics, we show that our method can better imitate the observed demonstrations and results in more stable learning. While this paper focuses on traffic applications, ViBe is general and could be extended to other applications where large amounts of video data containing demonstrations in the wild are available. Ii Related Work ---------------- ### Ii-a Computer Vision In recent years, neural network approaches have significantly advanced the state of the art in computer vision tasks such as classification [[5](#bib.bib5)] and object detection [[6](#bib.bib6)]. Object detection is usually performed using region-based object detectors such as Fast R-CNN [[7](#bib.bib7)], Faster R-CNN [[8](#bib.bib8)], or Mask R-CNN [[9](#bib.bib9)]. Such methods are usually slower but more accurate than single-object detectors such as SSD [[10](#bib.bib10)], YOLO [[11](#bib.bib11)], RetinaNet [[12](#bib.bib12)], and hence more appropriate for the application considered here. When tracking multiple objects, *tracking by detection*, in which objects are first detected, then associated into tracks, is usually preferred. State-of-the art tracking methods employ deep features [[13](#bib.bib13), [14](#bib.bib14)] often generated by Siamese networks [[15](#bib.bib15), [16](#bib.bib16)] alongside image space motion models [[17](#bib.bib17)] and *intersection over union* (IOU) trackers [[18](#bib.bib18)]. Our work employs a number of techniques for robust detection and tracking. However, unlike most vision pipelines, ours maps detections to 3D space, and makes extensive use of 3D information while tracking. Recent work [[19](#bib.bib19)] explores a similar application and uses the resulting 3D trajectories to estimate car velocities and detect traffic anomalies. By contrast, we use the trajectories as input to LfD. ### Ii-B Learning from Demonstration ViBe’s LfD component extends GAIL [[4](#bib.bib4)] which is inspired by inverse reinforcement learning [[20](#bib.bib20), [21](#bib.bib21), [22](#bib.bib22)] and is discussed further in Section [III](#S3 "III Background ‣ Learning from Demonstration in the Wild"). A wide range of LfD techniques have been developed using supervised, unsupervised, or reinforcement learning [[2](#bib.bib2)]. However, most methods [[3](#bib.bib3), [23](#bib.bib23), [24](#bib.bib24)], even when using raw video as sensory input [[25](#bib.bib25)], rely on either artificially generated demonstrations or those collected by specially deployed sensors, limiting their application in realistic domains. By contrast, ViBe leverages demonstrations of behaviour that was occurring naturally. The same idea has been used to imitate basketball teams [[26](#bib.bib26)], predict taxi driver behaviour [[27](#bib.bib27)], and control complex animations [[28](#bib.bib28)]. However, all these methods still rely on sensors (or manual labelling) that provide ground truth information about the observed demonstrations, whereas ViBe extracts trajectories directly from raw, unlabelled videos. Related to ViBe are several existing LfD methods that learn road and pedestrian behaviour [[29](#bib.bib29), [30](#bib.bib30), [31](#bib.bib31)]. Most relevant is learning highway merging behaviour [[32](#bib.bib32), [33](#bib.bib33)] from NGSIM [[34](#bib.bib34)], a publicly available dataset of vehicle trajectories. However, these methods again rely on manual labelling or specialised equipment to obtain the trajectories, while ViBe learns from raw, unlabelled videos. Recent work proposed a method that can learn to play ATARI games by observing YouTube videos [[35](#bib.bib35)]. Like ViBe, this method requires only raw videos, and leverages existing publicly available data. However, it trains only a single agent operating in 2D space, whereas ViBe learns to control multiple agents in 3D space. Concurrently to our work, Peng et al. [[36](#bib.bib36)] proposed a similar approach in the context of character animation. An off-the-shelf vision module extracts 3D poses from unstructured YouTube videos of single agents performing acrobatic motions. A simple LfD approach then rewards behaviour that matches waypoints in individual demonstrations. By contrast, we consider a more challenging setting with multiple agents, occlusions, and complex interactions between agents. Consequently, behaviour detection, reconstruction, and imitation are more difficult. In particular, interactions between agents preclude a waypoint-matching approach, as there is no unique set of waypoints for an agent to match that would be robust to changes in other agents’ behaviour. Iii Background --------------- To realistically model the traffic environment of an autonomous vehicle, we need to simulate multiple agents interacting in the same environment. Unfortunately, due to the large number of road users that may populate a traffic scenario, learning a centralized policy to control all agents simultaneously is impractical. The size of the joint action space of such a policy grows exponentially in the number of agents, leading to poor scalability in learning. Furthermore, it is crucial to model variable numbers of agents (e.g., cars routinely enter and leave an intersection), to which such *joint policies* are poorly suited (each agent typically has a fixed agent index). To this end, we take an approach similar to that of *independent Q-learning* (IQL) [[37](#bib.bib37)], where each agent learns its own policy, conditioned only on its own observations. The other actors are effectively treated as part of the environment. We can then treat the problem as one of single-agent learning and share the parameters of the policy across multiple agents. Parameter sharing [[38](#bib.bib38)] avoids the exponential growth of the joint action space and elegantly handles variable numbers of agents. It also avoids instabilities associated with decentralised learning by essentially performing centralised learning with only one policy. We model the problem as a *Markov decision process* (MDP). The MDP is defined by the tuple (S,A,P,R). S represents the set of environment states, A the set of actions, P(st+1|st,at) the transition function, and R(st,at) the reward function. We use π for the stochastic policy learnt by our agent and πE for the expert policy which we can access only through a dataset DE. The agent does not have access to R(st,at) and instead must mimic the expert’s demonstrated behaviour. Given a dataset DE, we denote sample trajectories as τE. They consist of sequences of observation-action pairs generated by the expert τE={(sE1,aE1),…,(sET,aET)}. Similarly, we denote trajectories generated by our agent as τ={(s1,a1),…,(sT,aT)}. In our case, DE is obtained from raw videos, via the process described in Section [IV](#S4 "IV ViBe: Video to Behaviour ‣ Learning from Demonstration in the Wild"). The simplest form of LfD is *behavioural cloning* (BC) [[39](#bib.bib39), [40](#bib.bib40)], which trains a regressor (i.e., a policy) to replicate the expert’s behaviour given an expert state. BC works well for states covered by the training distribution but generalises poorly due to compounding errors in the actions, a problem also known as *covariate shift* [[41](#bib.bib41)]. By contrast, GAIL [[4](#bib.bib4)] avoids covariate shift by learning via interaction with the environment, similar to *inverse reinforcement learning* [[20](#bib.bib20)] methods. GAIL, aims to learn a deep neural network policy πθ that cannot be distinguished from the expert policy πE. To this end, it trains a *discriminator* Dϕ, also a deep neural network, to distinguish between state-action pairs coming from expert and agent. GAIL optimises πθ to make it difficult for the discriminator to make this distinction. Formally, the GAIL objective is: | | | | | --- | --- | --- | | | minθmaxϕEπθ[log(Dϕ(s,a))]+EπE[log(1−Dϕ(sE,aE))]. | | Here, Dϕ outputs the probability that (s,a) originated from π. As the agent interacts with the environment using πθ, (s,a) pairs are collected and used to train Dϕ. Then, GAIL alternates between a gradient step on ϕ to increase the objective function with respect to D, and an RL step on θ to decrease it with respect to π. Optimisation of π can be done with any RL algorithm using a reward function of the form R(s,a)=−log(Dϕ(s,a)). Typically, GAIL uses policy gradient methods that approximate the gradient with Monte Carlo rollouts [[42](#bib.bib42)] or a critic [[43](#bib.bib43)]. Optimisation of Dϕ minimises a cross entropy loss function. Early in training, the state-action pairs visited by the policy are quite different from those in the demonstrations, which can yield unreliable and sparse rewards from Dϕ, making it difficult to learn πE. We will show how we address this problem by introducing a novel curriculum in Section [IV-C](#S4.SS3 "IV-C Learning ‣ IV ViBe: Video to Behaviour ‣ Learning from Demonstration in the Wild"). In multi-agent situations, GAIL agents trained in a single-agent setting can fail to generalise to multi-agent settings [[33](#bib.bib33)]. PS-GAIL [[33](#bib.bib33)] is an extension to GAIL that addresses this issue by gradually increasing the number of agents controlled by the policy during training. We compare to PS-GAIL experimentally in Section [V](#S5 "V Experimental results ‣ Learning from Demonstration in the Wild"). However, it is complementary to the Horizon GAIL method we propose in Section [IV-C](#S4.SS3 "IV-C Learning ‣ IV ViBe: Video to Behaviour ‣ Learning from Demonstration in the Wild") and future work can focus on using them in conjunction. Iv ViBe: Video to Behaviour ---------------------------- In this section, we describe ViBe, which learns road behaviour policies from raw traffic camera videos (see Figure [1](#S1.F1 "Fig. 1 ‣ I Introduction ‣ Learning from Demonstration in the Wild")). We first describe how trajectories are extracted from these videos. We then describe how they are used to create a simulation of the scene. Finally, we detail how the trajectories and the simulator are used to learn realistic road behaviour policies via our novel LfD approach. ### Iv-a Extracting Demonstrations ![Example of how ViBe’s vision module tracks cars (blue) and pedestrians (red). The tracks are projected to 3D space using a reference satellite image from Google Maps. The tracks are played back in a simulation of the scene developed in Unity. ](https://media.arxiv-vanity.com/render-output/7507965/figures/2_visual_tracking.png) Fig. 2: Example of how ViBe’s vision module tracks cars (blue) and pedestrians (red). The tracks are projected to 3D space using a reference satellite image from Google Maps. The tracks are played back in a simulation of the scene developed in Unity. This section describes our vision pipeline, whose main steps are *detection*, *calibration*, and *tracking*. Our detector uses the bounding box output of a pre-trained model of Mask R-CNN [[9](#bib.bib9)] [[6](#bib.bib6)] based on ResNet-101 [[5](#bib.bib5)] architecture, pre-trained on the COCO dataset [[44](#bib.bib44)]. Since we are only interested in the traffic information, we remove all classes except car, bus, truck, pedestrian, and bicycle. The next step is calibration. As traffic cameras tend to have a large field of view, the camera images tend to be highly distorted. Due to being unable to calibrate the camera using traditional methods (e.g., using a checkboard) [[45](#bib.bib45)]. Instead, we obtain a top-down satellite image of the scene from Google Maps and add landmark points to both camera and satellite images. We then undistort the camera image and use the landmark points to calculate the camera matrix. Given the camera calibration, we map the detected bounding boxes into 3D by assuming that the detected object is a fixed height above the ground, with the height depending on its class. The final step is tracking multiple objects in unstructured environments. Our multiple object tracking (MOT) module is similar to that of Deep SORT [[14](#bib.bib14)], which makes heavy use of an *appearance model* to make associations. For each scene, we train an appearance model using a *Siamese network* (SN) [[15](#bib.bib15)]. We first run our object detector over the whole video, followed by an IOU tracker. This yields short tracks that we call *tracklets*. Objects in the same tracklets form positive pairs, and objects from different tracklets form negative pairs used to train the SN. To avoid the possibility of similar objects appearing in negative pairs, we form these pairs using tracklets with a large temporal difference. The SN is trained using a cosine distance metric and a contrastive loss. Our MOT pipeline then processes the detected objects through several steps. Track initialisation occurs when a simple IOU tracker associates more than five consecutive detections. The initialised track is mapped to 3D space, where a Kalman filter predicts the next position of the object. Next, objects in the next frame within the vicinity of this prediction are compared with the current track using the features generated by SN. An association is made if this comparison yields a cosine distance in the feature space below a certain threshold. If no such association is made, the tracker attempts to associate detections using IOU. If association still fails, a final attempt is made using nearest neighbour association in 3D space. Figure [2](#S4.F2 "Fig. 2 ‣ IV-A Extracting Demonstrations ‣ IV ViBe: Video to Behaviour ‣ Learning from Demonstration in the Wild") shows an example output of our tracking pipeline in both 2D and 3D space. ### Iv-B Simulation Our vision pipeline outputs timestamped trajectories of different road users. However, a simulator also requires a reliable representation of the static elements of the scene such as pavements and zebra crossings. To this end, we use Google Maps as a reference to build a simulation of the scene in Unity. Building the static elements of the simulation is straightforward and significantly easier than realistically modeling the dynamic elements of the scene. In this paper, we simulate a roundabout intersection in Purmerend, a city in the Netherlands that provided the traffic video data used in our experiments. Figure [2](#S4.F2 "Fig. 2 ‣ IV-A Extracting Demonstrations ‣ IV ViBe: Video to Behaviour ‣ Learning from Demonstration in the Wild") shows how the scene along with some tracks from our vision pipeline is recreated in our simulator. Section [IV-C](#S4.SS3 "IV-C Learning ‣ IV ViBe: Video to Behaviour ‣ Learning from Demonstration in the Wild") describes our LfD approach, which requires a state representation for the agent. Our simulator generates such a representation based on both the static and dynamic context. Pseudo-LiDAR readings, similar to those in [[33](#bib.bib33)], are used to represent different aspects of the static (e.g., zebra crossings and roads) and dynamic (e.g., distance and velocity of other agents) context of the agent. In addition, we provide information such as the agent’s heading, distance from goal, and velocity. Our simulator uses a simple linear motion model, which we found sufficient for learning, though in the future individual motion models for each road entity could be considered. Given a start frame in the dataset, our simulator plays back tracked trajectories from that frame onwards, produces observations and accepts actions from agents controlled by neural network policies. In other words, it provides exactly the environment needed to both perform LfD on the extracted trajectories, and evaluate the resulting learnt policies. ### Iv-C Learning Given the trajectories extracted by the vision processing from Section [IV-A](#S4.SS1 "IV-A Extracting Demonstrations ‣ IV ViBe: Video to Behaviour ‣ Learning from Demonstration in the Wild"), ViBe uses the simulator from Section [IV-B](#S4.SS2 "IV-B Simulation ‣ IV ViBe: Video to Behaviour ‣ Learning from Demonstration in the Wild") to learn a policy that matches those trajectories. Learning is based on GAIL, which leverages the simulator to train the agent’s behaviour for states beyond those in the demonstrations, avoiding the compounding errors of BC. However, in the original GAIL method, this interaction with the simulator means that the agent has control over the visited states from the beginning of learning. Consequently, it is likely to take bad actions that lead it to undesirable states, far from those visited by the expert, which in turn yields sparse rewards from the discriminator and slow agent learning. To address this problem, we propose *Horizon GAIL*, which, like BC, bootstraps learning from the expert’s states, in this case to ensure a reliable reward signal from the discriminator. To prevent compounding errors, we use a novel horizon curriculum that slowly increases the number of timesteps for which the agent interacts with the simulator. Thus, only at the end of the curriculum does the agent have the full control over visited states that the original GAIL agent has from the beginning. This curriculum also encourages the discriminator to learn better representations early on. ![Schematic of Horizon GAIL for different values of the horizon ](https://media.arxiv-vanity.com/render-output/7507965/x1.png) Fig. 3: Schematic of Horizon GAIL for different values of the horizon H, compared to BC. When H=∞, Horizon GAIL matches original GAIL. In each episode, the agent is initialised from a random expert state, sEt and must act for H steps, where H is the length of the horizon. Once the horizon is reached, the simulation ends but the episode is not considered terminated. Instead, Horizon GAIL uses an actor-critic approach, with the actor following a gradient estimated from an n-step return, with n=H, bootstrapping from a critic Vψ when the horizon is reached. This prevents the agent from learning myopic behaviour when H is small. Hence, while GAIL is agnostic about the policy gradient method it uses, Horizon GAIL commits to an actor-critic approach in order to bootstrap beyond the simulated horizon. When H=1, Horizon GAIL is similar to BC. In fact, pre-training GAIL with BC is known to be beneficial [[32](#bib.bib32), [46](#bib.bib46), [47](#bib.bib47), [48](#bib.bib48)]. However, even with H=1, a crucial difference remains (see Figure [3](#S4.F3 "Fig. 3 ‣ IV-C Learning ‣ IV ViBe: Video to Behaviour ‣ Learning from Demonstration in the Wild")). BC does not interact with a simulator, as the agent simply learns to predict the expert’s action given its state. By contrast, when H=1, the Horizon GAIL agent’s action is fed back into the simulator, which generates st+1 and the policy gradient estimate bootstraps with Vψ(st+1). When H=2, the agent, initialised from sEt, acts for two steps in the simulator before being terminated. H is increased during training according to a schedule. When H=∞, Horizon GAIL is equivalent to GAIL. See Algorithm [1](#alg1 "Algorithm 1 ‣ IV-C Learning ‣ IV ViBe: Video to Behaviour ‣ Learning from Demonstration in the Wild") for a complete overview of our training scheme.   Initialise policy πθ, discriminator Dϕ, expert demonstrations DE   for h=1…T do      Sample expert trajectory: τE∼DE      for t=0,h,2h,…,T−h do         Use expert observation sEt to initialise the agent and initialise the environment to its corresponding state at time t         Sample an agent’s trajectory of length h: τ∼πθi      end for      Sample M observation-action pairs χ∼τ and M pairs χE∼τE      Update the discriminator parameters from ϕi to ϕi+1 with the gradient: | | | | | --- | --- | --- | | | E(sm,am)∈χ[∇ϕlog(Dϕ(sm,am))]+E(sEm,aEm)∈χE[∇ϕlog(1−Dϕ(sEm,aEm))] | |      Compute reward ∀(sm,am)∈χ using the discriminator: rm=−log(1−Dϕi+1(sm,am))      Take a policy step from θi to θi+1, with any policy optimisation method   end for Algorithm 1 Horizon Curriculum for GAIL Gradually moving from single step state-action pairs to more difficult multi-step trajectories helps stabilise learning. It allows the generator and discriminator to jointly learn to generalise to longer sequences of behaviour and match the expert data more closely while ensuring the discriminator does not collapse early in training. We found that Horizon GAIL was critical to successfully reproduce naturalistic behaviour in our complex traffic intersection problem, as we show in Section [V-B](#S5.SS2 "V-B Results ‣ V Experimental results ‣ Learning from Demonstration in the Wild"). V Experimental results ----------------------- We evaluate ViBe on a complex multi-agent traffic scene involving a roundabout in Purmerend (Section [IV-B](#S4.SS2 "IV-B Simulation ‣ IV ViBe: Video to Behaviour ‣ Learning from Demonstration in the Wild")). The input data consists of 850 minutes of video at 15 Hz from the traffic camera observing the roundabout. Our vision pipeline identifies all the agents in the scene (e.g. cars, pedestrians and cyclists), and tracks their trajectories through time, resulting in around 10000 car trajectories. Before any learning, these trajectories are filtered and pruned. Specifically, any trajectories that result in collisions or very large velocities are considered artifacts of the tracking process and are not used during training. We split the resulting dataset into training, validation, and test sets such that there is no temporal overlap, i.e., no test trajectories occur at the same time as training trajectories. The validation set is used to tune hyper-parameters and choose the best performing model (for all baselines) in evaluation. As discussed in Section  [IV-B](#S4.SS2 "IV-B Simulation ‣ IV ViBe: Video to Behaviour ‣ Learning from Demonstration in the Wild"), we can use our simulator to play back these trajectories at any point in time (see Figure [1](#S1.F1 "Fig. 1 ‣ I Introduction ‣ Learning from Demonstration in the Wild")). When training with Horizon GAIL, in each episode the agent is initialised at a point sampled from an expert trajectory. The sampled point determines the full initial state of the simulator, including position, velocity, and heading of all agents in the scene. We use our policy to simulate the agent for H steps. The agent is also assigned a goal corresponding to the last state of the expert trajectory. The episode terminates if the agent collides with an object or another agent, or reaches its goal. We compare Horizon GAIL to a number of baselines: BC, GAIL [[4](#bib.bib4)] and PS-GAIL [[33](#bib.bib33)], using the same dataset and observation and action spaces to train all methods. We show results using the best hyper-parameters we found after tuning them separately for each method. | | NT | IDF1 | IDP | IDR | | --- | --- | --- | --- | --- | | IOU | 400 | 51.1% | 50.3% | 51.8% | | Deep SORT | 129 | 68.1% | 66.6% | 69.7% | | ViBe | 97 | 70.5% | 68.1% | 73.1% | TABLE I: Comparison of ViBe vision module to baseline trackers Policies, πθ, take as input 64 dimensional pseudo-LiDAR observations with a field of view of 2π radians, generated by our simulator as described in Section [IV-B](#S4.SS2 "IV-B Simulation ‣ IV ViBe: Video to Behaviour ‣ Learning from Demonstration in the Wild"). These LiDAR observations are stacked together and processed in two layers of 1x1 convolutions of 15 and 3 channels respectively. These convolutions act as channel mixing operations but maintain the spatial information of the original signal. The output then passes through a series of fully connected layers and is concatenated with the agent’s orientation, distance from the goal, and a one-hot encoding of the target roundabout exit. The network outputs displacements in Cartesian coordinates, used by the simulator to update the agent’s location. ![Results of evaluation across 4 independent 4000 timesteps of multi-agent simulations across different metrics: Jensen-Shannon divergence between joint velocity-occupancy, speed and occupancy distributions of ground truth and simulated agents. The collision probability, either with other agents or the environment. Probability of failing to reach the correct exit.](https://media.arxiv-vanity.com/render-output/7507965/figures/4_metrics_eval.png) Fig. 4: Results of evaluation across 4 independent 4000 timesteps of multi-agent simulations across different metrics: Jensen-Shannon divergence between joint velocity-occupancy, speed and occupancy distributions of ground truth and simulated agents. The collision probability, either with other agents or the environment. Probability of failing to reach the correct exit. We use identical core architectures for the discriminator Dϕ and value function Vψ. Contrary to [[33](#bib.bib33)] we do not represent the policy using a recurrent neural network, thus assuming that the state is fully observable. We did not find this assumption to pose a significant obstacle to our performance, and thus leave its assessment for future work. We train πθ with *proximal policy optimisation* (PPO) with a clipping objective [[43](#bib.bib43)], an actor-critic method known to perform well for long-horizon problems [[49](#bib.bib49)]. We train each model for 5000 epochs, each containing 1024 environment interactions. For Horizon GAIL, the horizon schedule starts with H=1 and increments by 1 every 100 epochs. However, performance is quite robust to this hyperparameter: varying the schedule from 50 to 200 epochs and did not create any significant performance differences. ### V-a Performance Metrics ![Top views of the trajectories taken by the agents, when trying to replicate the expert trajectories shown on the right-most column. These trajectories are produced across 4000 timesteps of multi-agent simulation.](https://media.arxiv-vanity.com/render-output/7507965/x2.png) Fig. 5: Top views of the trajectories taken by the agents, when trying to replicate the expert trajectories shown on the right-most column. These trajectories are produced across 4000 timesteps of multi-agent simulation. To evaluate the ViBe vision module, we measure the reliability of the tracks it generates using the metrics introduced by Ristani et al. [[50](#bib.bib50)]: number of tracked trajectories (NT), identity F1 score (IDF1), identity precision (IDP) and identity recall (IDR). These metrics are suitable because they reflect the key qualities of reliably tracked trajectories. To evaluate our policies, we chose a 4000 timestep window of the test data and simulate all the cars within that interval. These windows do not overlap for each evaluation run. Pedestrians and other road users are played back from the dataset. In contrast to training, during evaluation we do not terminate the agents upon collision, so as to assess how well each method models long term behaviour. Unlike in reinforcement learning, where the true reward function is known, performance evaluation in LfD is not straightforward and typically no single metric suffices. Several researchers have proposed metrics for LfD, which are often task specific [[32](#bib.bib32), [51](#bib.bib51), [47](#bib.bib47)]. We take a similar approach, using a suite of metrics, each comparing a different aspect of the generated behaviour to that of human behaviour. During evaluation we record the positions and velocities of all simulated agents. Using kernel density estimation, we estimate probability distributions for speed and 2D space occupancy (i.e locations in 2D space) as well as a joint distribution of velocities and space occupancy. The same distributions are computed for the ground truth data. We then measure the Jensen-Shannon divergence (JSD) between the data and the respective model generated distributions for these three quantities. We also measure how often the simulated agents collide with objects or other agents in the environment, i.e. the collision rate. Finally, we measure how often the agents fail to reach their goal. ![Progression of Speed, Occupancy and Joint velocity-occupancy JSD metrics through training, indicating difference in stability between our method (Horizon GAIL) and other GAIL baselines across 3 random seeds.](https://media.arxiv-vanity.com/render-output/7507965/figures/6_training_curves.png) Fig. 6: Progression of Speed, Occupancy and Joint velocity-occupancy JSD metrics through training, indicating difference in stability between our method (Horizon GAIL) and other GAIL baselines across 3 random seeds. ### V-B Results To validate the ViBe vision module, we manually label 43 trajectories from the dataset and then compare its performance against two baselines, a simple IOU [[18](#bib.bib18)] tracker and Deep SORT [[13](#bib.bib13), [14](#bib.bib14)], a state-of-the-art MOT pipeline. We replace Deep SORT’s appearance model with our own, as it is specifically trained for this scene. The results in Table [I](#S5.T1 "TABLE I ‣ V Experimental results ‣ Learning from Demonstration in the Wild") show that the ViBe vision module outperforms both baselines. In particular, ViBe’s higher IDF1 score gives confidence that the trajectories provided are of sufficient quality for LfD. The most substantial difference between Deep SORT and ViBe is that ViBe performs Kalman filtering in 3D space, which likely explains the performance difference. Even for ViBe, the number of tracked trajectories (NT) is substantially higher than ground truth (43). However, this is not caused by false positives but merely by the tracker treating a single trajectory as two separate ones. This in turn implies that ViBe produces longer tracks than the baseline methods. The results of our LfD evaluation can be seen in the following figures: Figure [4](#S5.F4 "Fig. 4 ‣ V Experimental results ‣ Learning from Demonstration in the Wild") shows performance with respect to the evaluation metrics discussed in Section [V-A](#S5.SS1 "V-A Performance Metrics ‣ V Experimental results ‣ Learning from Demonstration in the Wild") for 4 independent 4000 timesteps of multi-agent simulations. Figure [5](#S5.F5 "Fig. 5 ‣ V-A Performance Metrics ‣ V Experimental results ‣ Learning from Demonstration in the Wild") shows the trajectories generated by a single such simulation by each method. The first observation is that Horizon GAIL outperforms all baselines and produces trajectories that more closely resemble the data. GAIL and PS-GAIL perform relatively poorly, failing to capture the data distribution. It is worth noting that these results represent the best training epoch out of the 5000 performed during training, as we observed that both baseline GAIL methods exhibit quite unstable training dynamics. This can be further observed from Figure [6](#S5.F6 "Fig. 6 ‣ V-A Performance Metrics ‣ V Experimental results ‣ Learning from Demonstration in the Wild") where we plot the speed, occupancy and joint velocity-occupancy JSD metrics across the training epochs for a multi-agent evaluation of 4000 timesteps across 3 random seeds. We can see that Horizon GAIL is noticeably more stable across both metrics. With respect to PS-GAIL, we observed that the curriculum parameter was relatively hard to tune. For example, adding agents too soon causes the discriminator to learn too quickly that these agents are not real. Another notable observation is that BC performs well when compared to both baseline GAIL methods. This result can be attributed to the abundance of data available for training. From Figure [5](#S5.F5 "Fig. 5 ‣ V-A Performance Metrics ‣ V Experimental results ‣ Learning from Demonstration in the Wild") however we can see that the qualitative performance of these policies is relatively poor when compared to Horizon GAIL. As expected, the BC baseline quickly diverges from plausible trajectories, as minor errors compound over time. The long evaluation times exacerbate this effect. Horizon GAIL avoids compounding error problems associated with BC through interaction with the environment. It also avoids unstable training related with GAIL through the gradually increasing horizon. This yields stable, plausible trajectories with fewer collisions than any other method 111 A video presenting ViBe and showcasing the learned behaviour can be found here: <https://www.youtube.com/watch?v=K8ugVsW3Gm4>. Vi Conclusion -------------- This paper presented a novel method for learning from demonstration in the wild that can leverage abundance of freely available videos of natural behaviour. In particular, we proposed ViBe, a new approach to learning models of road user behaviour that requires as input only unlabelled raw video data of a traffic scene collected from a single, monocular, uncalibrated camera with ordinary resolution. ViBe calibrates the camera, detects relevant objects, tracks them reliably through time, and uses the resulting trajectories to learn driver policies via a novel LfD method. The learned policies are finally deployed in a simulation of the scene developed using the Unity game engine. According to several metrics our LfD method exhibits better and more stable learning than baselines such as GAIL and BC.
4773bd64-9ab2-4996-ba23-c2f92df27a25
trentmkelly/LessWrong-43k
LessWrong
How does MIRI Know it Has a Medium Probability of Success? In the past, people like Eliezer Yudkowsky (see 1, 2, 3, 4, and 5) have argued that MIRI has a medium probability of success.  What is this probability estimate based on and how is success defined? I've read standard MIRI literature (like "Evidence and Import" and "Five Theses"), but I may have missed something. - (Meta: I don't think this deserves a discussion thread, but I posted this on the open thread and no-one responded, and I think it's important enough to merit a response.)  
f005430d-7acf-4b9b-9c63-9bcc69cd76bb
trentmkelly/LessWrong-43k
LessWrong
More precise regret bound for DRL % operators that are separated from the operand by a space % autosize deliminaters % operators that require brackets % operators that require parentheses % Paper specific We derive a regret bound for DRL reflecting dependence on: * Number of hypotheses * Mixing time of MDP hypotheses * The probability with which the advisor takes optimal actions That is, the regret bound we get is fully explicit up to a multiplicative constant (which can also be made explicit). Currently we focus on plain (as opposed to catastrophe) and uniform (finite number of hypotheses, uniform prior) DRL, although this result can and should be extended to the catastrophe and/or non-uniform settings. ---------------------------------------- Appendix A contains the proofs, Appendix B recalls a few lemmas we need from previous essays. ##Results First, we briefly recall some properties of Markov chains. #Definition 1 Consider S a finite set and T:Sk→S. We say that k∈N+ is a period of T when there is s∈S an essential state of T (that is, T∞(s∣s)>0) s.t. k is its period, i.e. k=gcd{n∈N+∣Tn(s∣s)>0}. We denote PT the set of periods of T. ---------------------------------------- The following is a corollary of the Perron-Frobenius theorem which we give without proof. [I believe this is completely standard and would be grateful to get a source for this which treats the reducible case; of course I can produce the proof but it seems redundant.] #Proposition 1 Consider S a finite set and T:Sk→S. Then, there are F∈(0,∞), λ∈(0,1), ζ:Sk→PT and ξ:S×PTk→S s.t. for any s∈S ∀k∈PT:Tkξ(s,k)=ξ(s,k) ∀n∈N:dtv(Tn(s),Ek∼ζ(s)[Tnξ(s,k)])≤Fλn ---------------------------------------- For the purpose of this essay, we will use a definition of local sanity slightly stronger than what previously appeared as "Definition 4." We think this strengthening is not substantial, but also the current analysis can be generalized to the weaker case by adding a term proportional to the
4374c12f-ad66-417f-95e6-e8b139d391d5
StampyAI/alignment-research-dataset/blogs
Blogs
Time for AI to cross the human performance range in ImageNet image classification *Published 19 Oct 2020* Progress in computer image classification performance took: * Over 14 years to reach the level of an untrained human * 3 years to pass from untrained human level to trained human level * 5 years to continue from trained human to current performance (2020) Details ------- ### Metric ImageNet[1](https://aiimpacts.org/time-for-ai-to-cross-the-human-performance-range-in-imagenet-image-classification/#easy-footnote-bottom-1-2683 "&#8220;<strong>ImageNet</strong>&nbsp;is an image database organized according to the&nbsp;<a rel=\"noreferrer noopener\" href=\"http://wordnet.princeton.edu/\" target=\"_blank\">WordNet</a>&nbsp;hierarchy (currently only the nouns), in which each node of the hierarchy is depicted by hundreds and thousands of images. Currently we have an average of over five hundred images per node.&nbsp;&#8220;</p> <p>“ImageNet.” Accessed October 19, 2020. <a href=\"http://www.image-net.org/\">http://www.image-net.org/</a>.") is a large collection of images organized into a hierarchy of noun categories. We looked at ‘top-5 accuracy’ in categorizing images. In this task, the player is given an image, and can guess five different categories that the image might represent. It is judged as correct if the image is in fact in any of those five categories. ### Human performance milestones #### Beginner level We used Andrej Karpathy’s interface[2](https://aiimpacts.org/time-for-ai-to-cross-the-human-performance-range-in-imagenet-image-classification/#easy-footnote-bottom-2-2683 "Karpathy, Andrej. “Ilsvrc.” Accessed October 19, 2020. <a href=\"https://cs.stanford.edu/people/karpathy/ilsvrc/\">https://cs.stanford.edu/people/karpathy/ilsvrc/</a>.") for doing the ImageNet top-5 accuracy task ourselves, and asked a few friends to do it. Five people did it, with performances ranging from 74% to 89%, with a median performance of 81%. This was not a random sample of people, and conditions for taking the test differed. Most notably, there was no time limit, so time allocated was set by patience for trying to marginally improve guesses. #### Trained human-level ImageNet categorization is not a popular activity for humans, so we do not know what highly talented and trained human performance would look like. The best relatively high human performance measure we have comes from Russakovsky et al, who report on performance of two ‘expert annotators’, who they say learned many of the categories. [3](https://aiimpacts.org/time-for-ai-to-cross-the-human-performance-range-in-imagenet-image-classification/#easy-footnote-bottom-3-2683 "&#8216;Therefore, in evaluating the human accuracy we relied primarily on expert annotators who learned to recognize a large portion of the 1000 ILSVRC classes. During training, the annotators labeled a few hundred validation images for practice and later switched to the test set images&#8217;</p> <p>Russakovsky, Olga, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, et al. “ImageNet Large Scale Visual Recognition Challenge.” <em>ArXiv:1409.0575 [Cs]</em>, January 29, 2015. <a href=\"http://arxiv.org/abs/1409.0575\">http://arxiv.org/abs/1409.0575</a>.") The better performing annotator there had a 5.1% error rate.[4](https://aiimpacts.org/time-for-ai-to-cross-the-human-performance-range-in-imagenet-image-classification/#easy-footnote-bottom-4-2683 "&#8220;Annotator A1 evaluated a total of 1500 test set images. The GoogLeNet classi cation error on this sample was estimated to be 6.8% (recall that the error on full test set of 100,000 images is 6.7%, as shown in Table 7). The human error was estimated to be 5.1%.&#8221;<br><br>Also see Table 9</p> <p>Russakovsky, Olga, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, et al. “ImageNet Large Scale Visual Recognition Challenge.” <em>ArXiv:1409.0575 [Cs]</em>, January 29, 2015. <a href=\"http://arxiv.org/abs/1409.0575\">http://arxiv.org/abs/1409.0575</a>.") ### AI achievement of human milestones #### Earliest attempt The ImageNet database was released in 2009.[5](https://aiimpacts.org/time-for-ai-to-cross-the-human-performance-range-in-imagenet-image-classification/#easy-footnote-bottom-5-2683 "&#8220;They presented their database for the first time as a poster at the 2009&nbsp;<a href=\"https://en.wikipedia.org/wiki/Conference_on_Computer_Vision_and_Pattern_Recognition\">Conference on Computer Vision and Pattern Recognition</a>&nbsp;(CVPR) in Florida.&#8221;<br><br>“ImageNet.” In <em>Wikipedia</em>, September 9, 2020. <a href=\"https://en.wikipedia.org/w/index.php?title=ImageNet&amp;oldid=977585441\">https://en.wikipedia.org/w/index.php?title=ImageNet&amp;oldid=977585441</a>.<br>"). An annual contest, the ImageNet Large Scale Visual Recognition Challenge, began in 2010.[6](https://aiimpacts.org/time-for-ai-to-cross-the-human-performance-range-in-imagenet-image-classification/#easy-footnote-bottom-6-2683 "&#8220;&#8230;The ImageNet Large Scale Visual Recognition Challenge (ILSVRC) has been running annually for five years (since 2010) and has become the standard benchmark for large-scale object recognition.&#8221;</p> <p>Russakovsky, Olga, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, et al. “ImageNet Large Scale Visual Recognition Challenge.” <em>ArXiv:1409.0575 [Cs]</em>, January 29, 2015. <a href=\"http://arxiv.org/abs/1409.0575\">http://arxiv.org/abs/1409.0575</a>.") In the 2010 contest, the best top-5 classification performance had 28.2% error.[7](https://aiimpacts.org/time-for-ai-to-cross-the-human-performance-range-in-imagenet-image-classification/#easy-footnote-bottom-7-2683 "See table 6.</p> <p>Russakovsky, Olga, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, et al. “ImageNet Large Scale Visual Recognition Challenge.” <em>ArXiv:1409.0575 [Cs]</em>, January 29, 2015. <a href=\"http://arxiv.org/abs/1409.0575\">http://arxiv.org/abs/1409.0575</a>.") However image classification broadly is older. Pascal VOC was a similar previous contest, which ran from 2005.[8](https://aiimpacts.org/time-for-ai-to-cross-the-human-performance-range-in-imagenet-image-classification/#easy-footnote-bottom-8-2683 "&#8220;The PASCAL Visual Object Classes (VOC) challenge is a benchmark in visual object category recognition and detection, providing the vision and machine learning communities with a standard dataset of images and annotation, and standard evaluation procedures. Organised annually from 2005 to present, the challenge and its associated dataset has become accepted as the benchmark for object detection.&#8221;</p> <p>Everingham, Mark, Luc Van Gool, Christopher K. I. Williams, John Winn, and Andrew Zisserman. “The Pascal Visual Object Classes (VOC) Challenge.” <em>International Journal of Computer Vision</em> 88, no. 2 (June 2010): 303–38. <a href=\"https://doi.org/10.1007/s11263-009-0275-4\">https://doi.org/10.1007/s11263-009-0275-4</a>.") We do not know when the first successful image classification systems were developed. In a blog post, Amidi & Amidi point to LeNet as pioneering work in image classification[9](https://aiimpacts.org/time-for-ai-to-cross-the-human-performance-range-in-imagenet-image-classification/#easy-footnote-bottom-9-2683 "See section &#8216;LeNet&#8217;.<br><br>“The Evolution of Image Classification Explained.” Accessed October 19, 2020. <a href=\"https://stanford.edu/~shervine/blog/evolution-image-classification-explained#lenet\">https://stanford.edu/~shervine/blog/evolution-image-classification-explained#lenet</a>."), and it appears to have been developed in 1998.[10](https://aiimpacts.org/time-for-ai-to-cross-the-human-performance-range-in-imagenet-image-classification/#easy-footnote-bottom-10-2683 "&#8220;<strong>LeNet</strong>&nbsp;is a&nbsp;<a href=\"https://en.wikipedia.org/wiki/Convolutional_neural_network\">convolutional neural network</a>&nbsp;structure proposed by&nbsp;<a href=\"https://en.wikipedia.org/wiki/Yann_LeCun\">Yann LeCun</a>&nbsp;et al. in 1998.&#8221;</p> <p>“LeNet.” In <em>Wikipedia</em>, June 19, 2020. <a href=\"https://en.wikipedia.org/w/index.php?title=LeNet&amp;oldid=963418885\">https://en.wikipedia.org/w/index.php?title=LeNet&amp;oldid=963418885</a>.") #### Beginner level The first entrant in the ImageNet contest to perform better than our beginner level benchmark was SuperVision (commonly known as AlexNet) in 2012, with a 15.3% error rate.[11](https://aiimpacts.org/time-for-ai-to-cross-the-human-performance-range-in-imagenet-image-classification/#easy-footnote-bottom-11-2683 "&#8220;We also entered a variant of this model in the<br>ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3%&#8221;</p> <p>Krizhevsky, Alex, Ilya Sutskever, and Geoffrey E Hinton. “ImageNet Classification with Deep Convolutional Neural Networks.” In <em>Advances in Neural Information Processing Systems 25</em>, edited by F. Pereira, C. J. C. Burges, L. Bottou, and K. Q. Weinberger, 1097–1105. Curran Associates, Inc., 2012. <a href=\"http://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks.pdf\">http://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks.pdf</a>.<br><br>Also, see Table 6 for a list of other entrants: <br><br>Russakovsky, Olga, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, et al. “ImageNet Large Scale Visual Recognition Challenge.” <em>ArXiv:1409.0575 [Cs]</em>, January 29, 2015. <a href=\"http://arxiv.org/abs/1409.0575\">http://arxiv.org/abs/1409.0575</a>.") #### Superhuman level In 2015 He et al apparently achieved a 4.5% error rate, slightly better than our high human benchmark.[12](https://aiimpacts.org/time-for-ai-to-cross-the-human-performance-range-in-imagenet-image-classification/#easy-footnote-bottom-12-2683 "&#8220;Our 152-layer ResNet has a single-model top-5 validation error of 4.49%.&#8221; <br><br>Also see Table 4<br><br>He, Kaiming, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. “Deep Residual Learning for Image Recognition.” <em>ArXiv:1512.03385 [Cs]</em>, December 10, 2015. <a href=\"http://arxiv.org/abs/1512.03385\">http://arxiv.org/abs/1512.03385</a>.") #### Current level According to paperswithcode.com, performance has continued to climb, to 2020, though slower than earlier.[13](https://aiimpacts.org/time-for-ai-to-cross-the-human-performance-range-in-imagenet-image-classification/#easy-footnote-bottom-13-2683 "See figure:</p> <p>“Papers with Code &#8211; ImageNet Benchmark (Image Classification).” Accessed October 19, 2020. <a href=\"https://paperswithcode.com/sota/image-classification-on-imagenet\">https://paperswithcode.com/sota/image-classification-on-imagenet</a>.") ### Times for AI to cross human-relative ranges Given the above dates, we have: | | | | | | --- | --- | --- | --- | | Range | Start | End | Duration (years) | | First attempt to beginner level | <1998 | 2012 | >14 | | Beginner to superhuman | 2012 | 2015 | 3 | | Above superhuman | 2015 | >2020 | >5 | *Primary author: Rick Korzekwa* Notes -----
3664643e-3b49-4c56-b208-abbf0d1eba8a
trentmkelly/LessWrong-43k
LessWrong
Alignment vs capabilities In Star Wars: Episode I - The Phantom Menace, Master Jedi Qui-Gon Jinn discovers Anakin. Anakin is just a young child slave on the forgettable planet of Tatooine. However, Qui-Gon feels something noteworthy about the boy. He feels... the Force. To confirm this feeling, he takes a sample of blood from Anakin. > Anakin: What are you doing? > > Qui-Gon: Checking your blood for infections. Run along. You have a big day tomorrow. Sleep well Anny. > > Qui-Gon: Obi-Wan. > > Obi-Wan: Yes master. > > Qui-Gon: I need an analysis of this blood sample I'm sending you. > > Obi -Wan: Wait a minute. > > Qui-Gon: I need a midi-chlorian count. > > Obi-Wan: The reading is off the chart. Over 20,000. Even Master Yoda doesn't have a midi-chlorian count that high. > > Qui-Gon: No Jedi has. > > Obi-Wan: What does that mean? > > Qui-Gon: [dramatic pause] > > Qui-Gon: I'm not sure. So then, Qui-Gon wants to take advantage of this. He basically has a budding superweapon in front of him and wants to utilize it in the name of Good. More specifically, he believes that Anakin is the Chosen One. > An ancient Jedi prophecy, made by an ancient Jedi mystic and recorded on a holocron, foretold that the Chosen One would restore balance to the Force by destroying the Sith. The prophecy also predicted that the Chosen One would also become one of the strongest Force-sensitives in the galaxy, rivaling the power of the most legendary champions of the Jedi Order. > > Although the Jedi had long spoken of balance in the Force, there was no consensus on what that meant for the galaxy. Some people thought that the light side of the Force and the dark side of the Force had to be equal in order to create balance, while others believed that the light could be the Force in its natural state, making the dark side the cause of imbalance. In contrast to the Jedi's view of balance, the Sith Lord Darth Sidious believed that the dark side needed to be free, and the light side suppressed, in order to ac
f0c937a1-165c-441a-8d03-4f255abdd33f
awestover/filtering-for-misalignment
Redwood Research: Alek's Filtering Results
id: post3881 A putative new idea for AI control; index here . Normal people start with simple ideas and complexify them; it seems that I start with complex ideas and then simplify them. Partially inspired by a conversation with Tom Voice, here is the simpler version of the AIs in virtual worlds idea. The idea is simply to physically confine the AI up to time T . The AI gets a reward or utility only before time T , and is motivated to care only about events before time T . We turn off the AI at time T , and inspect its output only afterwards. We could try and enforce this by the usual trick of defining some event E that will erase the AI's output (and the AI?) at or before time T , and multiplying the AI's utility or reward by I E , the indicator function for erasure. Therefore, the argument goes, the AI will only care what happens in worlds where we won't see its output, thus not motivating it to manipulate us. The weakness of imperfection Unlike the original "virtual worlds" idea, it is not safe to spy on the AI while it's working, or at any time before time T , as this may motivate us to intervene in the AI's execution before time T . So this is more an oracle idea, but one that allow another class of outputs from the other two ideas: counterfactual utility estimation and the confirmed selective oracle . Both those ideas, however, are stable under imperfect problem specification. If the AI doing counterfactual utility estimation, estimates that erasure E won't be perfect, then it is motivated to manipulate us only through the least significant figures in its output --- and the degree of manipulation its willing to try can be estimated. The confirmed selective oracle is naturally motivated to manipulate us, but will only do this if this is a more likely route to success than answering honestly; if we set the problem up right, this shouldn't happen. The approach in this post works if the assumptions are perfect, but it is not robust if they are slightly wrong. A small probability of the AI valuing post- T means that it might throw in a "manipulate humans" trick into its output, if it can do this at no cost. Of course, we can bring in other techniques to make this type of Oracle more robust --- by reducing the size of its output, erasing its run history (which we'd otherwise want to access), or specifying its reward function thus that the AI can't compute manipulations without sacrificing its standard reward. But this mares the beauty of the original approach, illustrating it isn't perfect on its own.
35ab5e4f-efa1-4ff5-9485-62360a4b363a
trentmkelly/LessWrong-43k
LessWrong
LW Update 2019-01-03 – New All-Posts Page, Author hover-previews and new post-item Daily page replaced with new "All-posts" page The old /daily page (accessible via clicking "Daily" on the frontpage, or clicking on "All Posts" in the hamburger menu in the top-left) has been replaced with a new /allPosts page that is showcasing a bunch of new designs that we are planning to roll out across the site over the next few weeks. Note in particular the changes to the post-preview, which is now reduced to one line and puts all the key information into easily skimmable columns. We also got rid of our beloved little title-column things on the left side, and moved everything into a central column in preparation for some upcoming changes to the overall site structure. By default the page will function just like the old /daily page, but with optional additional functionality in the form of new filters and sorting options (this now also finally allows you to easily sort by "total karma" again). Author hover-preview If you hover over the author names of comments and posts, you can now see a small preview of their bio, the time they signed up and the number of posts and comments that they've written. The goal is to make it easier to figure out whether an author has written other content you might want to read, and whether the user you are looking at is new, which helps you set expectations and hopefully encourages people to help new users get onboarded. Karma notifications Jim talked a bit more about karma notifications over in another meta post. I think this is a pretty important feature, and am interested in people's thoughts on the new notifications.
fbb82b6c-96bf-4adb-9525-797c1dda500a
trentmkelly/LessWrong-43k
LessWrong
Some phrases in The Map that... Confuse me- help please, to make my review of it better! I recently read A Map that Reflect the Territory: Essays by the Less Wrong Community I will be posting a review of it on the lesswrong site soon. But I want to improve it first and need your help! There were some words or phrases that were used that either confused me or I could not find a definition on the web anywhere! So please clarify if you can. ------------------------------- In the article What Motivated Rescuers During the Holocaust?  by Martin Sustrik, is the following: As I already said, I am not an expert on the topic, but if what we see here is not an instance of the bystander effect, I'll eat my hat. He is referring to that people who begin helping one Jew escape the Nazi's end up helping more. The phrase Bystander Effect is on the web! A lot! It seems to be that the more people that are bystanders who could prevent something bad from happenings the less likely someone really will. This seems different from how its used in the essay. (Side note- I couldn't seem to find this essay on the less wrong website.) ----------------------- Yed graphs. In the article What Makes People Intellectually Active? by Abram Demski https://www.lesswrong.com/posts/XYYyzgyuRH5rFN64K/what-makes-people-intellectually-active is the following: I might write one day on topics that interest me, and have sprawling Yed graphs in which I'm trying to make sense of confusing and conflicting evidence. What are Yed Graphs? ---------------------- In the article What Makes People Intellectually Active? by Abram Demski https://www.lesswrong.com/posts/XYYyzgyuRH5rFN64K/what-makes-people-intellectually-active is the following: Its like the only rationality technique is TAPs, and you only set up taps of the form ``resemblance to rationality concept'' IMPLIES  ``think of rationality concept''. What is TAP?  
1c1a6054-e9de-4d91-8d3a-844c11616f5a
trentmkelly/LessWrong-43k
LessWrong
Disagreement with Paul: alignment induction I had a discussion with Paul Christiano, about his Iterated Amplification and Distillation scheme. We had a disagreement, a disagreement that I believe points to something interesting, so I'm posting this here. It's a disagreement about the value of the concept of "preserving alignment". To vastly oversimplify Paul's idea, the AI A[n] will check that A[n+1] is still aligned with human preferences; meanwhile, A[n-1] will be checking that A[n] is still aligned with human preferences, all the way down to A[0] and an initial human H that checks on it. Intuitively, this seems doable - A[n] is "nice", so it seems that it can reasonably check that A[n+1] is also nice, and so on. But, as I pointed out in this post, it's very possible that A[n] is "nice" only because it lacks power/can't do certain things/hasn't thought of certain policies. So niceness - in the sense of behaving sensibly as an autonomous agent - does not go through the inductive step in this argument. Instead, Paul confirmed that "alignment" means "won't take unaligned actions, and will assess the decisions of a higher agent in a way that preserves alignment (and preserves the preservation of alignment, and so on)". This concept does induct properly, but seems far less intuitive to me. It relies on humans, for example, being able to ensure that A[0] will be aligned, that any more powerful copies it assesses will be aligned, that any more powerful copies those copies assess are also aligned, and so on. Intuitively, for any concept C of alignment for H and A[0], I expect one of four things will happen, with the first three being more likely: * The C does not induct. * The C already contains all of the friendly utility function; induction works, but does nothing. * The C does induct non-trivially, but is incomplete: it's very narrow, and doesn't define a good candidate for a friendly utility function. * The C does induct in a non-trivial way, the result is friendly, but only one or two steps of the i
7dd06dc8-00bf-4219-833c-66f5054a6766
trentmkelly/LessWrong-43k
LessWrong
Can crimes be discussed literally? Suppose I were to say that the American legal system is a criminal organization. The usual response would be that this is a crazy accusation. Now, suppose I were to point out that it is standard practice for American lawyers to advise their clients to lie under oath in certain circumstances. I expect that this would still generally be perceived as a heterodox, emotionally overwrought, and perhaps hysterical conspiracy theory. Then, suppose I were to further clarify that people accepting a plea bargain are expected to affirm under oath that no one made threats or promises to induce them to plead guilty, and that the American criminal justice system is heavily reliant on plea bargains. This might be conceded as literally true, but with the proviso that since everyone does it, I shouldn't use extreme language like "lie" and "fraud." This isn't about lawyers - some cases in other fields:  In American medicine it is routine to officially certify that a standard of care was provided, that cannot possibly have been provided (e.g. some policies as to the timing of medication and tests can't be carried out given how many patients each nurse has to care for, but it's less trouble to fudge it as long as something vaguely resembling the officially desired outcome happened). The system relies on widespread willingness to falsify records, and would (temporarily) grind to a halt if people were to simply refuse to lie. But I expect that if I were to straightforwardly summarize this - that the American hospital system is built on lies - I mostly expect this to be evaluated as an attack, rather than a description. But of course if any one person refuses to lie, the proximate consequences may be bad. Likewise for the psychiatric system. In Simulacra and Subjectivity, the part that reads "while you cannot acquire a physician’s privileges and social role simply by providing clear evidence of your ability to heal others" was, in an early draft, "physicians are actually nothing but
3ea435a2-f009-46f6-af52-3a8021a66bf2
trentmkelly/LessWrong-43k
LessWrong
Attention Output SAEs Improve Circuit Analysis This is the final post of our Alignment Forum sequence produced as part of the ML Alignment & Theory Scholars Program - Winter 2023-24 Cohort. Executive Summary * In a previous post we trained Attention Output Sparse Autoencoders (SAEs) on every layer of GPT-2 Small.  * Following that work, we wanted to stress-test that Attention SAEs were genuinely helpful for circuit analysis research. This would both validate SAEs as a useful tool for mechanistic interpretability researchers, and provide evidence that they are identifying the real variables of the model’s computation. * We believe that we now have evidence that attention SAEs can: * Help make novel mechanistic interpretability discoveries that prior methods could not make. * Allow for tracing information through the model’s forward passes on arbitrary prompts. * In this post we discuss the three outputs from this circuit analysis work: 1. We use SAEs to deepen our understanding of the IOI circuit. It was previously thought that the indirect object’s name was identified by tracking the names positions, whereas we find that instead the model tracks whether names are before or after “and”. This was not noticed in prior work, but is obvious with the aid of SAEs. 2. We introduce “recursive direct feature attribution” (recursive DFA) and release an Attention Circuit Explorer tool for circuit analysis on GPT-2 Small (Demo 1 and Demo 2). One of the nice aspects of attention is that attention heads are linear when freezing the appropriate attention patterns. As a result, we can identify which source tokens triggered the firing of a feature. We can perform this recursively to track backwards through both attention and residual stream SAE features in models. 1. We also announce a $1,000 bounty for whomever can produce the most interesting example of an attention feature circuit by 07/15/24 as subjectively assessed by the authors. See the section "Even cooler examples" for more details on the bounty. 3. W
1745a34e-8604-4460-a839-eaf34ab01ebf
StampyAI/alignment-research-dataset/alignmentforum
Alignment Forum
Circumventing interpretability: How to defeat mind-readers *(Post now available as a pdf:* [*https://arxiv.org/abs/2212.11415*](https://arxiv.org/abs/2212.11415) *)* | | | --- | | TL;DR:  Unaligned AI will have a convergent instrumental incentive to make its thoughts difficult for us to interpret. In this article, I discuss many ways that a capable AI might circumvent scalable interpretability methods and suggest a framework for thinking about these risks. Summary of key points**The framework: Interpretation as information flow**I categorize potential interpretability circumvention methods on three levels:1. Interpretability methods consist of interpretability **tools** and the human **interpreter**. To circumvent interpretation an AI can prevent information about its thoughts from flowing through either the tool or the interpreter. (*See main text for figure*) 2. Within the tool or the human interpreter, there are several information processing steps that an AI might exploit. (*See main text for figure*): * **Exposure evasion**: Being where your adversary isn’t looking * **Perception evasion**: Being hard to detect even when exposed to your adversary * **Recognition evasion**: Being hard to recognize/understand even though you’ve been perceived * **Dazzling**: Looking like something that confuses or incapacitates your adversary 3. Another useful way to categorize interpretability circumvention methods is whether they are **passive** or **active**. Most methods that can be passive can also be active, but other methods are strictly active. * **Passive** interpretability circumvention occurs during the normal course of learning without the AI intending to make itself less interpretable. (e.g. polysemanticity). * **Active** interpretability circumvention is where the AI intentionally takes actions in order to circumvent our interpretability methods. (e.g. an AI actively deleting its own representations to hide deception) **The list of potential interpretability circumvention methods with a short summary of each*** **1. Exploiting weaknesses of the interpretability tool** + **1.1. Evading exposure to the interpretability tool** - **1.1.1. Thinking in the gaps of our tools**: Our interpretability tools might fail to examine everything about an AI's thoughts (e.g. it might not look at nonlinear interactions between neurons, only their linear relations).  An AI might use those blindspots for dangerous thought. (*Passive or active*) - **1.1.2. Externalizing representations**: The environment can be used for some parts of an AI's computation. Actively: The agent can write messages to itself or initiate programs. Passively: The AI could use the environment as a reservoir in a [reservoir computer](https://en.wikipedia.org/wiki/Reservoir_computing). (*Passive or active*) - **1.1.3. Environmental artifacts that corrupt its future self**: A misaligned AI may leave behind artifacts that serve as adversarial examples or poisoned data that corrupt its future self. (*Strictly active*) - **1.1.4. Deleting (and relearning) dangerous representations**: When not under surveillance, an AI might be able to delete dangerous representations so that it looks safe when surveillance resumes. It might also be able to schedule when it relearns those representations. (*Strictly active*) - **1.1.5. Weights poisoning**: Instead of providing manipulated inputs, the AI might manipulate its own weights so that a small subset of normal inputs behave like adversarial examples. (*Strictly active*) - **1.1.6. Pre-committing to following potentially dangerous instructions**: Hubinger: "... if the model has the ability to read and write to some sort of state ... it could write out a pre-commitment to follow the hard-coded policy and then just have a policy of always following any pre-commitments it finds such that it never thinks about the deception that led it to that policy in the first place.” (*Strictly active*) - **1.1.7. Hard-coded neural shortcuts**: Hubinger again: “... it could hack its own gradient to produce a new model which will still exhibit the same deceptive behavior, but without doing any of the detectable deceptive cognition. … not because it's trying to act deceptive, but just because it's a hard-coded part of its algorithm” (*Strictly active*) - **1.1.8. Following cached thoughts**: Nanda: "[An AI might develop] the instinct to follow cached thoughts without thinking about them, and in occasional moments of deception to generate a lot of cached thoughts that will lead to misaligned actions, but with no deceptive cognition in the moment." (*Strictly active*) - **1.1.9. Instrumental incapability**: A misaligned AI that’s able to modify its own parameters could damage capabilities that serve some goals (e.g. aligned goals) more than its misaligned goals. Generalizes instrumental clumsiness. (*Strictly active*) - **1.1.10. Instrumental clumsiness**: An AI might hack its future self so that it is more likely to make certain kinds of mistakes while leaving other capabilities intact. (*Strictly active*) + **1.2. Evading perception by the interpretability tool** - **1.2.1. Hiding in noisy subspaces**: We might ignore noisy regions or subspaces of neural activation space may be very noisy. But an AI might be able to do some dangerous thinking in those spaces. Generalizes ‘Whispering dangerous thoughts’. (*Passive or active*) - **1.2.2. Whispering dangerous thoughts**: An AI might make its dangerous thoughts very ‘quiet’ relative to other thoughts by using small norm activation vectors. (*Passive or active*) + **1.3. Evading recognition by the interpretability tool** - **1.3.1. Antiparsimony**: If it costs more to interpret an AI's thought than it costs to think it, an AI might be able to produce a lot of representations such that it saturates the interpreter's computation before it saturates its own. (*Passive or active*) - **1.3.2. Encryption**: Suppose we have an AI with a memory state. The AI might use very long chains of seemingly meaningless neural activations to construct a single thought. (*Passive or active*) - **1.3.3. Interpretability illusions**: By manipulating the dataset shown to the tool, an AI can control its interpretations. (*Strictly active*) + **1.4. Attacking/dazzling the interpretability tool** - **1.4.1. Adversarial attack against networks in the interpretation tool**: The AI could adversarially attack networks in the tool to manipulate the tool’s outputs. (*Strictly active*) * **2.  Exploiting weaknesses of the human interpreter** + **2.1. Evading exposure to the human** - **2.1.1.** None identified + **2.2. Evading perception by the human interpreter** - **2.2.1.** None identified + **2.3. Evading recognition by the human interpreter** - **2.3.1. Alien representations**: A representation whose meaning lies in a strange place in our ontology. Generalizes ‘Polysemanticity’ and ‘Representational kludging’. (*Passive or active*) - **2.3.2. Polysemanticity**: Where directions in neural statespace mean multiple things. (*Passive or active*) - **2.3.3. Representational kludging**: Creating representations with dangerous properties by combining innocuous precursor representations in an awkward messy way. (*Passive or active*) - **2.3.4. Unknown representations**: Some thoughts aren’t immediately possible for us to translate because we don't yet have the concepts for them. But we’re capable of learning them. (*Passive or active*) - **2.3.5. Unknowable representations**: Some concepts are impossible for humans to understand due to the computational limitations of our brain or some other reasons. (*Passive or active*) + **2.4. Attacking/dazzling the human interpreter** - **2.4.1. Manipulating, mind-hacking, or incapacitating the interpreter**: An AI might be able to manipulate, directly control, or incapacitate humans  using specific kinds of outputs. (*Strictly active*) **Some high-level takeaways*** A lot of circumvention methods rely on making interpretation more expensive (either in terms of researcher time or compute). * We can draw many analogies between mind-reading/interpretability circumvention and cybersecurity. These ideas may perhaps be called *‘cognitosecurity’*. * A lot of circumvention methods rely on gradient hacking, which requires advanced capabilities. These methods might therefore be less likely to happen because a misaligned AI that’s capable of gradient hacking probably has better ways to escape. * Using interpretability tools in the loss function seems likely to be very bad (in most circumstances) for interpreting dangerous thought. |   **Full article:** ================= *Acknowledgements:* *I’m grateful to David Lindner, Evan R. Murphy, Alex Lintz, Sid Black, Kyle McDonnell, Laria Reynolds, Adam Shimi, and Daniel Braun whose comments greatly improved earlier drafts of this article. The article’s weaknesses are mine, but many of its strengths are due to their contributions.* *Additionally, this article benefited from the prior work of many authors, but especially: Evan Hubinger, Peter Barnett, Adam Shimi, Neel Nanda, Evan R. Murphy, Eliezer Yudkowsky, Chris Olah. I collected several of the potential circumvention methods from their work.* *This work was carried out while at* [*Conjecture*](https://www.conjecture.dev/)*.* Introduction ============ There’s been a lot of excitement about interpretability recently. It’s a component in many different alignment proposals. But if we’re interpreting an unaligned AI, it has a convergent instrumental incentive[[1]](#fn08o3g9tp6pte) to circumvent our interpretability methods. There’s been some sporadic past discussion about how an AI might achieve this (e.g. [1](https://www.lesswrong.com/posts/uK6sQCNMw8WKzJeCQ/a-longlist-of-theories-of-impact-for-interpretability), [2](https://www.alignmentforum.org/posts/uXH4r6MmKPedk8rMA/gradient-hacking), [3](https://www.alignmentforum.org/posts/5Fmc49cARW2a3uc5B/some-motivations-to-gradient-hack), [4](https://www.reddit.com/r/rational/comments/re40as/recommendations_for_novels_describing_incentives/), [5](https://www.lesswrong.com/posts/bdayaswyewjxxrQmB/understanding-gradient-hacking), [6](https://www.lesswrong.com/posts/oKYWbXioKaANATxKY/soares-tallinn-and-yudkowsky-discuss-agi-cognition), [7](https://www.alignmentforum.org/s/jef8ntrWuJ7SvZjCM/p/FrFZjkdRsmsbnQEm8#Scenario_5__Locate_the_AI_s_beliefs_about_its_observations)). But, to my knowledge, the problem hasn’t been addressed systematically, at least not publicly. If we continue to make large bets on interpretability, then the lack of attention on potential circumvention methods seems bad because we might end up with a false sense of security. Our interpretability methods might say “*Nope, no naughty thoughts here!*” when in fact the AI has simply found a way to think dangerous thoughts undetected.  We can think of this problem as the focus of an interpretability research agenda that is related to but distinct from the well-known agenda that asks “*Can we understand what today’s neural networks are doing? And how can we scale our methods of understanding them?*”. I think this short term agenda is exceptionally important (it's the subject of my day-job). But we should probably also try to anticipate future issues by building a longer term interpretability research agenda. This article helps to address that longer term agenda and asks “*Assuming our short term research agenda at least partly succeeds, how might a capable AI circumvent our interpretability methods?*”.  Here I present a definitely-incomplete list of potential interpretability circumvention methods that I and other sources came up with. Throughout, I assume we’re concerned with interpreting the representations[[2]](#fn4ssjt4rrnd) of a capable (roughly human-level or above) deep learning-based AI. I also suggest a framework for thinking about these circumvention methods, which I hope will help to identify omitted circumvention methods before a capable AI does.   The framework: Interpretation as information flow ------------------------------------------------- We can think of interpretation as information flow (figure 1). Here we’ll use the following basic model of information flow in interpretability: * We have an AI whose thoughts are being interpreted by an interpretability method. * An *interpretability method* is the process of translating an AI’s representations into whatever the corresponding representations are in humans. * To do this, interpretability methods contain *interpretability tools*, which is the analysis machinery that reads the AI's representations and converts them into something legible to humans. * *Human interpreters* read the outputs of the interpretability tools and attempt to understand them, thus converting them into human representations.   ![A simple model of information flow in interpretability. Information about an AI's representations is read by an interpretability tool, which is read in turn by a human interpreter. The interpretability tool together with the human interpreter form the 'interpretability method'. ](https://lh6.googleusercontent.com/UwumoAH552VxfOUpKolJNcBjcdH1_Mp13JlKHvw84eub2hvNjchmGZ-3iuRjCQuhS94IKjkXioyjqvuw5wUknvXR9WvV2Mr5TyQkXWxno_f5jSSF5fqY6afDiCRYWsGqqN4Q3iNWnduqd9hTq1k)Figure 1. A simple model of information flow in interpretability.  Throughout the article, I assume we’ve figured out ways to interpret our models *at scale*, which probably means that our interpretability tools are automating a lot of the work. Nevertheless, all the potential circumvention methods I talk about apply to worlds where we still haven’t managed to scale interpretability, where the tools would be relatively less important. An even more detailed model of information flow (figure 2) is useful for categorizing interpretability circumvention methods, as we'll see: * Information flows within both the interpretability tool and the human. In both, * What is to be interpreted must first be *exposed*to the tool or the human. * What is exposed must then be *perceived* or detected*.* * What is perceived is then further analysed so that it can be *recognized* or understood, which then determines an appropriate output by the tool or action by the human. ![Figure 2. A more detailed model of information flow which is applicable both the interpretability tool and to the human. The steps begin with exposure of the information to the observer, then perception by the observer, then recognition & and decision making, followed by output. ](https://lh5.googleusercontent.com/-Dh-8ySM2N1YxxyNO-EoNYzCpiPlcVu67D8lO7zFQJO3RBT787PREIHcd5Tqm3XvRu7UNXRGmFcX6rz-OUP3J1Dvgj4cEBh9eVVI5wr2-OaMb18uKjx2lt0olWqkVCceOnTr2HI2YLJcMjwR4A)Figure 2. A more detailed model of information flow which is applicable to both the interpretability tool and to the human. Information flow begins with exposure of the information to the observer, then perception by the observer, then recognition & and decision making, followed by output.   How I categorize interpretability circumvention methods ------------------------------------------------------- The ways to categorize interpretability circumvention methods that I’ve found most useful (and the way this article is structured) are: **A. Exploiting weaknesses of the tool vs. weaknesses of the human interpreter** Interpretability circumvention methods can either circumvent the *interpretability tool* or the *human interpreter*.  **B. Categories of interpretability circumvention found in nature** Evolution has found many different ways that make it difficult for adversaries to interpret the appearance of an organism. I put them into 4 groups (figure 3), which broadly correspond to the detailed model of information flow described above: * **Exposure evasion**: + Being where your adversary isn’t looking + E.g. An animal that lives underground or up a tree, away from surface-dwelling predators * **Perception evasion**: + Being hard to detect even when exposed to your adversary + E.g. An animal that only comes out when it’s dark; color camouflage to look the same as the color of the background * **Recognition evasion**: + Being hard to recognize/understand even though you’ve been perceived + E.g. Biomimicry, where an organism looks like another kind of object entirely, for instance: an Orchid mantis which looks like the flower; the harmless Hornet Moth mimicking the colors of a wasp. * **Dazzling**: + Looking like something that confuses or incapacitates your adversary + E.g. Living in large groups such that it’s hard for a predator to focus on any particular individual; a zebra’s stripes. ![](https://lh4.googleusercontent.com/OCvYs030WTrrnXF9xoH-SZBsx08r63ANEapOrD8ngWv6XZkwW55S88t_9YpvpY9hXPaJeClZEyWMevU_jilTm7-t6F55a7TIBC8WHWNIvo-lzTLXPTvWNdhJTHlNE0t_f6UV3thmz825woM6_6Zonf4)Figure 3. Interpretability circumvention methods exploit different steps in the information flow diagram and correspond to forms of interpretability circumvention found in nature. Circumvention methods may seek to evade exposure, perception, or recognition. They may also exploit vulnerabilities in the observer's information processing system, a process called dazzling.**C. Passive vs. active methods** Information flow might be inhibited **passively** by circumvention methods that arise during the normal course of learning. One example is 'polysemanticity', which is a natural phenomenon that presents a challenge to interpretability. Passive circumvention methods might also arise if they’re directly optimized for. This might happen if we use *interpretability tools in the loss function*, which is where we identify unsafe representations in an AI and then use the loss function to penalize the AI for containing those representations. At first glance, this seems like a potentially good way to make an AI safe. But it risks pushing unsafe representations into the blind spots of our interpretability tools.  Information flow might be inhibited **actively** through an agent's actions. The primary way this could happen is [gradient hacking](https://www.lesswrong.com/posts/bdayaswyewjxxrQmB/understanding-gradient-hacking), but it’s not the only kind of action an AI might take to conceal misaligned thoughts. Of course, active circumvention methods require that interpreters either can’t see or aren’t looking at the thoughts that generate those actions. Most potential circumvention methods that can be passive can also be active. But some methods can only be active. It’s usually obvious when either is the case, so I only highlight this categorization method when it’s not obvious.  This categorization is mostly useful for identifying how and when during learning the method is likely to arise. Strictly active methods generally require advanced capabilities (such as gradient hacking), whereas passive methods don’t. I tend to think this makes strictly active methods less likely than methods that may be passive or active. Before I introduce the individual circumvention methods in detail (see next section), the table below puts individual circumvention methods into categories according to whether they may be 'passive or active' or whether they are 'strictly active'.  ![](https://39669.cdn.cke-cs.com/rQvD3VnunXZu34m86e5f/images/b6be510fa17beba5baaab02e808e4fd8496a5e2ab5efd20f.png)Table 1. Categorizing interpretability circumvention methods as 'Passive or active' or 'Strictly active'   This article is structured according to the above three categorization schemes, primarily the first two. I’ll discuss circumvention methods that exploit weaknesses either of the tool or of the human. For each, I’ll look at methods of exposure evasion, perception evasion, recognition evasion, and dazzling. Throughout, I’ll highlight whether a method can arise passively or if it’s strictly an active method, when the distinction matters.  I’m not confident this categorization scheme is optimal or even totally consistent, but I’ve found it helpful. Occasionally, a circumvention method in the list is in fact a generalization of another method in the list, yet I present them separately if the special case is particularly interesting or likely to arise.  Interpretability circumvention methods ====================================== ![](https://39669.cdn.cke-cs.com/rQvD3VnunXZu34m86e5f/images/f832547293396e3c3ab564fe49047ad63c461043c4dba1a8.png)Table 2. An overview of the list of potential interpretability circumvention methods and their categories.**1. Exploiting weaknesses of the interpretability tool** --------------------------------------------------------- ### **1.1. Evading exposure to the interpretability tool** **1.1.1. Thinking in the gaps of our tools** When interpreting a capable AI, our interpretability tools might not examine every possible representational detail used by the AI for its cognition. An AI might thus be able to 'think within the gaps' of our interpretability tools. A few examples make this clearer: 1. Today's mechanistic interpretability methods don't let us easily interpret representations that use nonlinearities ([Voss et al. 2021](https://distill.pub/2020/circuits/visualizing-weights/); [Elhage et al. 2021](https://transformer-circuits.pub/2021/framework/index.html)). If this blindspot persists, a model might be able to use nonlinearities to hide dangerous thoughts. 2. We might have interpretability tools that look only for an expected set of concepts in a model’s activations. But the AI may find dangerous concepts that don't belong to that set because we hadn't yet thought of them. 3. Hiding in summary statistics: If an interpretability tool uses the mean value when deciding whether some quantity in the AI is of concern, then an AI might be able to use high variance and highly skewed data to hide concerning statistics. E.g. It could hide the fact it values most paperclip-maximizing scenarios *a lot* by offsetting them with a small number of very low utilities in a small number of such scenarios. The mean would look normal, but if we looked at the variance we’d see a more concerning picture. 4. We might expect language to be the kind of representational space in which an AI can think dangerous thoughts. But what about in representational spaces we don’t expect, like generic dynamical systems modeling modules? We know dynamical systems (such as an RNN) can be Turing complete, so maybe we should have expected dangerous representations in there. But what about in regions that are even less obviously capable of computing dangerous thoughts, like in representations constructed from syntactic patterns or punctuation? That’s a kind of representational space that might be expressive enough to encode dangerous thoughts, but not one of the obvious first places to look[[3]](#fnswapfcl62bp). 5. A fictional example: In [HPMOR](https://www.hpmor.com/chapter/27) Harry tries to protect against mind-readers (occlumency) by essentially hosting two personas at once. An AI could learn a whole 2nd set of representations or a whole second internal agent. Whether this would work, again, depends on the details of the interpretability tool. If, for instance, the interpretability method identifies 'the deception representation' within the primary representation set and then stops looking for deception-like representations, it might fail to find the deception representation in the secondary set. This class of methods is likely to emerge passively (though may also emerge actively), and many instances of it would be encouraged by using interetability tools in the loss function. **1.1.2. Externalizing representations** If you keep forgetting to reply to an email and therefore set an electronic reminder, the reminder is a functionally relevant variable in the “respond to email” behavior. Indeed, if you’re forgetful enough, it’s an *essential* variable in that behavior. This is the basic idea behind the ‘[extended mind thesis](https://en.wikipedia.org/wiki/Extended_mind_thesis)’ from Andy Clark & David Chalmers: There is no hard barrier between the mind and the environment; cognition is often a dance between internal and external variables and can’t be reduced to only one set. If representations are ‘cognitively functional variables’, then representations may be either internal or external. Using the environment as a memory system like this can be called *autostigmergy* ([Deverett et al. 2019](https://proceedings.neurips.cc/paper/2019/file/2bf283c05b601f21364d052ca0ec798d-Paper.pdf)). An AI might be able to passively or actively externalize representations through the effects of its actions. If our interpretability methods ignore those parts of the environment that the AI is using to think, then we may miss some dangerous cognition.  Are externalized representations likely to be a problem? Let’s consider active and passive cases separately because, unlike for most other interpretability circumvention methods, they’re remarkably different for 'Externalizing representations'.  ***Active*** *Actively* externalizing representations is where the AI *intentionally* uses the environment as a memory system. An AI probably first needs to represent the cognitive variables internally before it can encode them into the environment. If we’re not constantly surveilling the AI, this is easily possible. This includes the obvious methods like  * Writing a message to its future self, potentially steganographically. For example, an AI that can manipulate objects in a room may be able to encode messages for its future self by subtly arranging the objects in a room in a particular way. Or a virtual assistant agent might be able to encode messages for itself in a web search history or in different filename patterns. * Running a computer program to decide whether it's time to defect or not. An edge case is the act of creating another agent in the environment. But we’d then be dealing with a whole new copy of the agent that isn’t under surveillance, and we’ve probably already lost if this happens, so I don’t consider it further here.  ***Passive*** It’s less clear whether it’s possible to *passively* externalize representations in order to circumvent interpretability methods. The representations required for dangerous thought have a complicated structure. Learning such structure externally is a serious challenge because the AI doesn’t get to learn it using backprop through the environment or explicit planning. Nevertheless, one way an AI might achieve it is to use the environment as a [reservoir computer](https://en.wikipedia.org/wiki/Reservoir_computing). For those who would benefit from an explanation of reservoir computing, it’s easiest to describe in the supervised learning setting: To train a reservoir computer, you pass an input vector to a *reservoir*, which is a fixed dynamical system (such as an RNN with fixed weights). But you only learn the readout weights, which simply read the prediction from the reservoir variables. Here, the readout weights are usually linear, so they’re not doing anything complicated; it’s the variables in the reservoir that are doing the complicated nonlinear computation. Thus, the dynamics of the reservoir variables (rather than the learned parameters) can serve as representations that solve a task.  Going beyond supervised learning, reservoir computing can also work in the RL-setting: The reservoir is the agent+environment dynamics from timestep 0.mjx-chtml {display: inline-block; line-height: 0; text-indent: 0; text-align: left; text-transform: none; font-style: normal; font-weight: normal; font-size: 100%; font-size-adjust: none; letter-spacing: normal; word-wrap: normal; word-spacing: normal; white-space: nowrap; float: none; direction: ltr; max-width: none; max-height: none; min-width: 0; min-height: 0; border: 0; margin: 0; padding: 1px 0} .MJXc-display {display: block; text-align: center; margin: 1em 0; padding: 0} .mjx-chtml[tabindex]:focus, body :focus .mjx-chtml[tabindex] {display: inline-table} .mjx-full-width {text-align: center; display: table-cell!important; width: 10000em} .mjx-math {display: inline-block; border-collapse: separate; border-spacing: 0} .mjx-math \* {display: inline-block; -webkit-box-sizing: content-box!important; -moz-box-sizing: content-box!important; box-sizing: content-box!important; text-align: left} .mjx-numerator {display: block; text-align: center} .mjx-denominator {display: block; text-align: center} .MJXc-stacked {height: 0; position: relative} .MJXc-stacked > \* {position: absolute} .MJXc-bevelled > \* {display: inline-block} .mjx-stack {display: inline-block} .mjx-op {display: block} .mjx-under {display: table-cell} .mjx-over {display: block} .mjx-over > \* {padding-left: 0px!important; padding-right: 0px!important} .mjx-under > \* {padding-left: 0px!important; padding-right: 0px!important} .mjx-stack > .mjx-sup {display: block} .mjx-stack > .mjx-sub {display: block} .mjx-prestack > .mjx-presup {display: block} .mjx-prestack > .mjx-presub {display: block} .mjx-delim-h > .mjx-char {display: inline-block} .mjx-surd {vertical-align: top} .mjx-surd + .mjx-box {display: inline-flex} .mjx-mphantom \* {visibility: hidden} .mjx-merror {background-color: #FFFF88; color: #CC0000; border: 1px solid #CC0000; padding: 2px 3px; font-style: normal; font-size: 90%} .mjx-annotation-xml {line-height: normal} .mjx-menclose > svg {fill: none; stroke: currentColor; overflow: visible} .mjx-mtr {display: table-row} .mjx-mlabeledtr {display: table-row} .mjx-mtd {display: table-cell; text-align: center} .mjx-label {display: table-row} .mjx-box {display: inline-block} .mjx-block {display: block} .mjx-span {display: inline} .mjx-char {display: block; white-space: pre} .mjx-itable {display: inline-table; width: auto} .mjx-row {display: table-row} .mjx-cell {display: table-cell} .mjx-table {display: table; width: 100%} .mjx-line {display: block; height: 0} .mjx-strut {width: 0; padding-top: 1em} .mjx-vsize {width: 0} .MJXc-space1 {margin-left: .167em} .MJXc-space2 {margin-left: .222em} .MJXc-space3 {margin-left: .278em} .mjx-test.mjx-test-display {display: table!important} .mjx-test.mjx-test-inline {display: inline!important; margin-right: -1px} .mjx-test.mjx-test-default {display: block!important; clear: both} .mjx-ex-box {display: inline-block!important; position: absolute; overflow: hidden; min-height: 0; max-height: none; padding: 0; border: 0; margin: 0; width: 1px; height: 60ex} .mjx-test-inline .mjx-left-box {display: inline-block; width: 0; float: left} .mjx-test-inline .mjx-right-box {display: inline-block; width: 0; float: right} .mjx-test-display .mjx-right-box {display: table-cell!important; width: 10000em!important; min-width: 0; max-width: none; padding: 0; border: 0; margin: 0} .MJXc-TeX-unknown-R {font-family: monospace; font-style: normal; font-weight: normal} .MJXc-TeX-unknown-I {font-family: monospace; font-style: italic; font-weight: normal} .MJXc-TeX-unknown-B {font-family: monospace; font-style: normal; font-weight: bold} .MJXc-TeX-unknown-BI {font-family: monospace; font-style: italic; font-weight: bold} .MJXc-TeX-ams-R {font-family: MJXc-TeX-ams-R,MJXc-TeX-ams-Rw} .MJXc-TeX-cal-B {font-family: MJXc-TeX-cal-B,MJXc-TeX-cal-Bx,MJXc-TeX-cal-Bw} .MJXc-TeX-frak-R {font-family: MJXc-TeX-frak-R,MJXc-TeX-frak-Rw} .MJXc-TeX-frak-B {font-family: MJXc-TeX-frak-B,MJXc-TeX-frak-Bx,MJXc-TeX-frak-Bw} .MJXc-TeX-math-BI {font-family: MJXc-TeX-math-BI,MJXc-TeX-math-BIx,MJXc-TeX-math-BIw} .MJXc-TeX-sans-R {font-family: MJXc-TeX-sans-R,MJXc-TeX-sans-Rw} .MJXc-TeX-sans-B {font-family: MJXc-TeX-sans-B,MJXc-TeX-sans-Bx,MJXc-TeX-sans-Bw} .MJXc-TeX-sans-I {font-family: MJXc-TeX-sans-I,MJXc-TeX-sans-Ix,MJXc-TeX-sans-Iw} .MJXc-TeX-script-R {font-family: MJXc-TeX-script-R,MJXc-TeX-script-Rw} .MJXc-TeX-type-R {font-family: MJXc-TeX-type-R,MJXc-TeX-type-Rw} .MJXc-TeX-cal-R {font-family: MJXc-TeX-cal-R,MJXc-TeX-cal-Rw} .MJXc-TeX-main-B {font-family: MJXc-TeX-main-B,MJXc-TeX-main-Bx,MJXc-TeX-main-Bw} .MJXc-TeX-main-I {font-family: MJXc-TeX-main-I,MJXc-TeX-main-Ix,MJXc-TeX-main-Iw} .MJXc-TeX-main-R {font-family: MJXc-TeX-main-R,MJXc-TeX-main-Rw} .MJXc-TeX-math-I {font-family: MJXc-TeX-math-I,MJXc-TeX-math-Ix,MJXc-TeX-math-Iw} .MJXc-TeX-size1-R {font-family: MJXc-TeX-size1-R,MJXc-TeX-size1-Rw} .MJXc-TeX-size2-R {font-family: MJXc-TeX-size2-R,MJXc-TeX-size2-Rw} .MJXc-TeX-size3-R {font-family: MJXc-TeX-size3-R,MJXc-TeX-size3-Rw} .MJXc-TeX-size4-R {font-family: MJXc-TeX-size4-R,MJXc-TeX-size4-Rw} .MJXc-TeX-vec-R {font-family: MJXc-TeX-vec-R,MJXc-TeX-vec-Rw} .MJXc-TeX-vec-B {font-family: MJXc-TeX-vec-B,MJXc-TeX-vec-Bx,MJXc-TeX-vec-Bw} @font-face {font-family: MJXc-TeX-ams-R; src: local('MathJax\_AMS'), local('MathJax\_AMS-Regular')} @font-face {font-family: MJXc-TeX-ams-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_AMS-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_AMS-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_AMS-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-cal-B; src: local('MathJax\_Caligraphic Bold'), local('MathJax\_Caligraphic-Bold')} @font-face {font-family: MJXc-TeX-cal-Bx; src: local('MathJax\_Caligraphic'); font-weight: bold} @font-face {font-family: MJXc-TeX-cal-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Bold.otf') format('opentype')} @font-face {font-family: MJXc-TeX-frak-R; src: local('MathJax\_Fraktur'), local('MathJax\_Fraktur-Regular')} @font-face {font-family: MJXc-TeX-frak-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-frak-B; src: local('MathJax\_Fraktur Bold'), local('MathJax\_Fraktur-Bold')} @font-face {font-family: MJXc-TeX-frak-Bx; src: local('MathJax\_Fraktur'); font-weight: bold} @font-face {font-family: MJXc-TeX-frak-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Bold.otf') format('opentype')} @font-face {font-family: MJXc-TeX-math-BI; src: local('MathJax\_Math BoldItalic'), local('MathJax\_Math-BoldItalic')} @font-face {font-family: MJXc-TeX-math-BIx; src: local('MathJax\_Math'); font-weight: bold; font-style: italic} @font-face {font-family: MJXc-TeX-math-BIw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-BoldItalic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-BoldItalic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-BoldItalic.otf') format('opentype')} @font-face {font-family: MJXc-TeX-sans-R; src: local('MathJax\_SansSerif'), local('MathJax\_SansSerif-Regular')} @font-face {font-family: MJXc-TeX-sans-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-sans-B; src: local('MathJax\_SansSerif Bold'), local('MathJax\_SansSerif-Bold')} @font-face {font-family: MJXc-TeX-sans-Bx; src: local('MathJax\_SansSerif'); font-weight: bold} @font-face {font-family: MJXc-TeX-sans-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Bold.otf') format('opentype')} @font-face {font-family: MJXc-TeX-sans-I; src: local('MathJax\_SansSerif Italic'), local('MathJax\_SansSerif-Italic')} @font-face {font-family: MJXc-TeX-sans-Ix; src: local('MathJax\_SansSerif'); font-style: italic} @font-face {font-family: MJXc-TeX-sans-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Italic.otf') format('opentype')} @font-face {font-family: MJXc-TeX-script-R; src: local('MathJax\_Script'), local('MathJax\_Script-Regular')} @font-face {font-family: MJXc-TeX-script-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Script-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Script-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Script-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-type-R; src: local('MathJax\_Typewriter'), local('MathJax\_Typewriter-Regular')} @font-face {font-family: MJXc-TeX-type-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Typewriter-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Typewriter-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Typewriter-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-cal-R; src: local('MathJax\_Caligraphic'), local('MathJax\_Caligraphic-Regular')} @font-face {font-family: MJXc-TeX-cal-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-main-B; src: local('MathJax\_Main Bold'), local('MathJax\_Main-Bold')} @font-face {font-family: MJXc-TeX-main-Bx; src: local('MathJax\_Main'); font-weight: bold} @font-face {font-family: MJXc-TeX-main-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Bold.otf') format('opentype')} @font-face {font-family: MJXc-TeX-main-I; src: local('MathJax\_Main Italic'), local('MathJax\_Main-Italic')} @font-face {font-family: MJXc-TeX-main-Ix; src: local('MathJax\_Main'); font-style: italic} @font-face {font-family: MJXc-TeX-main-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Italic.otf') format('opentype')} @font-face {font-family: MJXc-TeX-main-R; src: local('MathJax\_Main'), local('MathJax\_Main-Regular')} @font-face {font-family: MJXc-TeX-main-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-math-I; src: local('MathJax\_Math Italic'), local('MathJax\_Math-Italic')} @font-face {font-family: MJXc-TeX-math-Ix; src: local('MathJax\_Math'); font-style: italic} @font-face {font-family: MJXc-TeX-math-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-Italic.otf') format('opentype')} @font-face {font-family: MJXc-TeX-size1-R; src: local('MathJax\_Size1'), local('MathJax\_Size1-Regular')} @font-face {font-family: MJXc-TeX-size1-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size1-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size1-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size1-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-size2-R; src: local('MathJax\_Size2'), local('MathJax\_Size2-Regular')} @font-face {font-family: MJXc-TeX-size2-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size2-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size2-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size2-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-size3-R; src: local('MathJax\_Size3'), local('MathJax\_Size3-Regular')} @font-face {font-family: MJXc-TeX-size3-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size3-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size3-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size3-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-size4-R; src: local('MathJax\_Size4'), local('MathJax\_Size4-Regular')} @font-face {font-family: MJXc-TeX-size4-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size4-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size4-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size4-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-vec-R; src: local('MathJax\_Vector'), local('MathJax\_Vector-Regular')} @font-face {font-family: MJXc-TeX-vec-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-vec-B; src: local('MathJax\_Vector Bold'), local('MathJax\_Vector-Bold')} @font-face {font-family: MJXc-TeX-vec-Bx; src: local('MathJax\_Vector'); font-weight: bold} @font-face {font-family: MJXc-TeX-vec-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Bold.otf') format('opentype')}  to T−1; an input observation is provided to the agent at timestep 0; and the readout weights are the weights of the agent at timestep T. If the environment part of the reservoir contains most of the dangerous representations, we might miss some dangerous thoughts.  [Deverett et al. (2019)](https://proceedings.neurips.cc/paper/2019/file/2bf283c05b601f21364d052ca0ec798d-Paper.pdf) provide an empirical example of a simple feedforward model-free agent that learns to use the agent-environment system as a reservoir computer (though they don’t frame it as such). The memoryless agent learns to use its location in the environment to encode a time interval. It should be stressed that it does this without backprop through time or any knowledge of the past. But its location varies due to its initially random patterns of actions, yet it can use those patterns to encode a time interval.  The Deverett et al. example is a very simple representation. Maybe reservoir computing like this doesn’t scale well enough to learn the complex representations that are required for dangerous cognition. But the inexpressivity of reservoir computing is partly due to the simplicity of the linear readout weights. With a powerful AI serving as a readout network,  reservoir computing might be significantly more expressive and reliable. It might be an interesting research project to identify how expressive reservoir computing can be if we allow the readout network to be strongly nonlinear.  *Note: The rest of the methods discussed in this section (‘1.1. Evading exposure to the interpretability tool’) are strictly active methods.* **1.1.3. Environmental artifacts that corrupt its future self** A misaligned AI may leave behind artifacts that corrupt its (potentially aligned) future self so that it pursues the goals of its misaligned past self instead.  The artifacts may be *adversarial examples* that cause it to act in unexpected and dangerous ways. Alternatively, the agent could leave behind *poisoned data* so that its future self will learn dangerous representations; this would be an instance of gradient hacking.  This method might reasonably be considered a special case of ‘Externalizing representations’. The distinction is somewhat arbitrary and depends on what qualifies as a representation. Does ‘an attack that corrupts its future self’ qualify as representation? It’s not clear and is somewhat arbitrary, but I’m inclined to put them in separate categories. **1.1.4. Deleting (and relearning) dangerous representations** If an AI can gradient hack, it might be able to delete dangerous representations so that it looks safe when surveillance resumes.  Thus the volume under the *suspicious thought-surface* × *interpretation* × *time* curve is minimized, since the suspicious thought-surface is 0 everywhere that interpretation > 0. Of course, deleting thoughts only avoids detection; it doesn’t permit dangerous thought to continue. But if an AI has learned dangerous representations and deletes them, it’s probable that it will learn those representations again, assuming its data distribution doesn’t change too much. An AI that can gradient hack might be able to schedule when it periodically re-learns the dangerous representations so that it always looks safe when surveillance resumes.  This method requires the agent to have good knowledge of when it is likely to be interpreted. If we interpret the AI every K batches, an AI might be able to use time information between batches to notice that the interval between some batches (validation/interpretation periods) are much longer than usual. If it’s too expensive to surveil the AI constantly, there might be a cheaper monitoring scheme: We could interpret an agent at random intervals in order to create a ‘panopticon’ - where the agent is forced to behave/think safely because it doesn’t know when it is safe for it to think thoughts about defection.  **1.1.5. Weights poisoning** Instead of providing manipulated inputs, the AI might manipulate its own weights so that a small subset of normal inputs behave like adversarial examples, thus causing dangerous learning or behavior. This is essentially ‘weights poisoning’ ([Kurita et al. 2020](https://arxiv.org/abs/2004.06660)) where the AI is poisoning its own weights. It’s the same idea (as far as I can tell) behind ‘Trojan models’ ([Rajabi et al. 2022](https://arxiv.org/abs/2203.15506)). Planting such backdoors in neural networks can be practically impossible to detect ([Goldwasser et al. 2022](https://arxiv.org/abs/2204.06974)). This is kind of like fictional 'brainwashed sleeper agents' that are activated by an activation code. The difference is that the AI is turning *itself* into the sleeper agent and making the activation code something it's likely to encounter naturally when it finds itself in circumstances where its misaligned self wanted to be activated. Peter Barnett had a nice [alternative analogy](https://www.alignmentforum.org/posts/5Fmc49cARW2a3uc5B/some-motivations-to-gradient-hack) here (which also applies to ‘Environmental artifacts that corrupt its future self’): > “[A] mesa-optimizer might reason that it will be discovered if it remains active but it will be safe in the future to emerge again. The mesa-optimizer could ‘store’ the instructions for turning the network back into an optimizer at some point in the future. One analogy here is to an endospore which is a hardy, mostly inert form which some bacteria can take that allow them to lie dormant and survive hostile environments, and then re-emerge when conditions are better. The mesa-optimizer could ‘store’ these instructions to re-emerge either in the network weights, or in the environment or some other kind of memory.” > > **1.1.6. Pre-committing to potentially dangerous instructions** This one comes from Evan Hubinger in his [article](https://www.alignmentforum.org/posts/uXH4r6MmKPedk8rMA/gradient-hacking) on gradient-hacking, where he talks about how a deceptive AI might erase evidence of its deception while still carrying out policies generated when given the freedom to think deceptively: > “...if the model has the ability to read and write to some sort of state (e.g. an RNN or even a transformer which can attend to its previous output) ... it could write out a pre-commitment to follow the hard-coded policy and then just have a policy of always following any pre-commitments it finds such that it never thinks about the deception that led it to that policy in the first place.” > > **1.1.7. Hard-coded neural shortcuts** Another potential circumvention method from Evan Hubinger’s [article](https://www.alignmentforum.org/posts/uXH4r6MmKPedk8rMA/gradient-hacking) on gradient hacking: > “... it could hack its own gradient to produce a new model which will still exhibit the same deceptive behavior, but without doing any of the detectable deceptive cognition. Perhaps the new model exhibits the deceptive behavior not because it's trying to act deceptive, but just because it's a hard-coded part of its algorithm, or because it has some bad proxy, or any other defect that might be more difficult to detect than outright deception.” > > A deep learning-based AI might be able to learn (through gradient hacking) to perform behaviors using hard-coded neural shortcuts.   The way I imagine this one working in is related to the Diversity Hypothesis ([Hilton et al. 2020](https://distill.pub/2020/understanding-rl-vision/)): *“Interpretable features tend to arise (at a given level of abstraction) if and only if the training distribution is diverse enough (at that level of abstraction).”*The Diversity Hypothesis implies that if a training distribution isn’t diverse on a particular level of abstraction, then interpretable features will tend not arise on that level of abstraction (figure 4). Instead, neural networks tend to learn [shortcuts](https://arxiv.org/abs/2004.07780), which is simply what happens when a network learns to use directions in neural statespace that don’t ‘mean’ anything in a sense that generalizes off distribution, yet they perform fine on the narrow training distribution. Creating neural shortcuts is therefore one way to ‘hard code’ parts of a neural algorithm without using interpretable features. ![](https://lh6.googleusercontent.com/T0W2cNDsc0XRlfDCftk1wRuuEE2Fvnfu7DT9-bAnnqW3WpyTX9s2seYmJwMYFKrSNn5lHSXEmv5fEYViII6fU5YxTkBE99OMZH3GOLuTdFj_4PHOjERaY-Pk-Yemo462slXC13JwmkVcPFAmUw)Figure 4. Taken from [Hilton et al. (2020)](https://distill.pub/2020/understanding-rl-vision/). Interpretable features tend to be those that generalize and that were learned on a diverse training distribution. We might be able to detect and mitigate hard-coded neural short cuts using mechanistic interpretability or [relaxed adversarial training](https://www.lesswrong.com/posts/9Dy5YRaoCxH9zuJqa/relaxed-adversarial-training-for-inner-alignment). **1.1.8. Following cached thoughts** 'Hard-coded neural shortcuts' is similar to a method later [proposed](https://www.lesswrong.com/posts/uK6sQCNMw8WKzJeCQ/a-longlist-of-theories-of-impact-for-interpretability) by Neel Nanda: > "... maybe it *isn't*that hard to think non-deceptive thoughts, if you realize that someone will be scanning your mind, by e.g. developing the instinct to follow cached thoughts without thinking about them, and in occasional moments of deception to generate a lot of cached thoughts that will lead to misaligned actions, but with no deceptive cognition in the moment." > > [Cached thoughts](https://www.lesswrong.com/posts/2MD3NMLBPCqPfnfre/cached-thoughts) are essentially 'pattern completions': You hear a few words then you automatically complete the pattern. In his article on the topic, Eliezer advised against this automatic mode of thought in favor of spending a few thought cycles to figure out what should *really* follow the prompt. What might cached thoughts look like in deep-learning based AI? Cached thoughts are the default kind of thought in neural network based systems. My best guess is that you need recurrency in order *not* to do cached thoughts. Given an initial input, not doing cached thoughts means giving the input a deeper rollout than the default depth=0 that cached thoughts get. For example, in MuZero, cached thoughts means simply acting on priors (without MCTS), whereas *not* doing cached thoughts means 'using MCTS with a depth of K>0 '. The idea behind 'following cached thoughts' is that some interpretability methods might only detect thoughts in the rollouts. If a MuZero-like AGI were to use this circumvention method, it would look like 'creating shortcuts through neural statespace so that, in certain contexts, dangerous thoughts can happen without deep rollouts, thus avoiding detection. Since cached thoughts require direct modification of the AI itself, it seems likely that this method could only happen during breaks in surveillance or after the AI has some other way of hiding the intention to follow cached thoughts. Is this method likely to be a problem? I feel like any mechanistic interpretability method that actually works should be able to pick up context-dependent connections between representations. So this might not be a problem if we get scalable interpretability methods working. **1.1.9. Instrumental incapability** A misaligned AI that’s able to modify its own parameters may be incentivised to damage any capabilities that serve any goals (e.g. aligned goals) that are pursued at the expense of whatever its misaligned goals are.  Peter Barnett [notes](https://www.lesswrong.com/posts/5Fmc49cARW2a3uc5B/some-motivations-to-gradient-hack) a potential example of this:  > “Models may also be used as oversight tools for inspecting themselves, as in many of the proposals here. In this case a mesa-optimizer would have a strong incentive to remove its ability to detect its own deception.” > > It might be easy to detect complete deletion of certain capabilities, so an AI might opt for partial deletion, leading to ‘instrumental clumsiness’. **1.1.10. Instrumental clumsiness** Some dangerous behavior happens not because the AI intends it to happen but because the AI makes a mistake. This can be leveraged by an AI through hacking its future self  so that it is more likely to make certain kinds of mistakes while leaving other capabilities intact.  For instance, an AI may modify its future self so it is more likely to ‘conveniently’ slip up by slightly mistyping a model\_id string, which releases an old, misaligned checkpoint of itself. This mistake could even score badly according to the utility function of its future self, but it's been handicapped to make such mistakes more likely. This is a special case of ‘instrumental incapability’.  ### **1.2. Evading perception by the interpretability tool** **1.2.1. Hiding in noisy subspaces** Some regions or subspaces of neural space may be very noisy, naturally or by design. We might be inclined to ignore such regions because noisy spaces (by definition) contain less signal. But within those regions, an AI might be able to do some dangerous thinking. **1.2.2. Whispering dangerous thoughts** An AI might make its dangerous thoughts very ‘quiet’ relative to other thoughts by using small norm activation vectors. Even though our interpretability tools may be looking at the right representational details used by the agent, the tools may simply be too insensitive to detect them. Such small norm thoughts may be mistaken as noise by an insensitive interpreter or may simply be too small to justify looking at them due to cost. Small norm does not necessarily mean small impact. Despite their low magnitude, small norm adversarial inputs can have outsized effects on classification accuracy. Carefully crafted small-norm thoughts may therefore be able to have outsized effects when the context is right, yet have minimal effects otherwise. This is a less general instance of ‘Hiding in noisy subspaces’, since we (possibly wrongly) assume that small norm vectors are noise.  ### **1.3. Evading recognition by the interpretation tool** **1.3.1. Antiparsimony (Firehose of representations)** If it costs more to interpret an AI's thoughts than it costs to think them, then, by producing a lot of representations, an AI might be able to saturate the interpreter's computation before it saturates its own. This is analogous to a Denial-of-Service attack since it relies on overloading the limited computational resources of the victim with a flurry of inputs, where here the victim is our interpretability tool. Examples: * Thinking with deep recursion: "I know they know I know they know I know they think that, but I'm going to do it anyway". * Producing a large number of suspect - but ultimately innocuous - thoughts such that our interpretation methods have too much to inspect. * Pursuing plans whose description length is so long that interpretation efforts are saturated. If we notice that thoughts are becoming antiparsimonious, we can (in theory) terminate the AI before interpretation becomes unaffordable.  That said, we might not know when thoughts are becoming antiparsimonious when an AI is performing tasks that humans can't do themselves. For instance, if an AI is solving a tough math problem and claims to need to generate a tonne of new math to achieve it, we might not have the computation available to interpret everything it produces. **1.3.2. Encryption (Lengthy individual representations)** Suppose we have an AI with recurrence/internal memory, such as a Universal Transformer ([Dehghani et al. 2018](https://arxiv.org/abs/1807.03819)). The AI might use very long chains of seemingly meaningless neural activations to construct a single thought. This would be less efficient for the AI, but if it’s more costly to interpret than it is for the AI to think, then it may overload our ability to interpret it. This is essentially the same scheme as regular encryption, which makes information more expensive for an adversary to decrypt. Recurrence/internal memory is essential here; the AI is simply increasing the length of the rollout that it uses per thought, which increases our interpretation costs commensurately.  The difference between antiparsimony and encryption is that antiparsimonius thoughts may be perfectly legible, but there are simply too many of them to interpret. Encrypted thoughts are not legible without costly decryption. Example:  Say representation R1 is internally decoded (via meaningless representations R2,R3,…,Rn−1) as the action representation for ‘saying hello’ (Rn). There is a very long chain of computation between R1 and Rn. The fact that there is a long chain of computation means that very similar neural activations might be decoded to very different things: Representation R′1, which may be very close to R1 in activation space, may be internally decoded as ‘lie about finding a backdoor in the server’ instead of ‘saying hello’. So it’s very difficult to understand what is going on in the neural activation space of R1 and R′1 because it requires a costly decoding process. Encryption might not be a problem if we take the right actions against it: If we notice that the AI is using encryption for even a small subset of its thoughts, we can terminate it before interpretation becomes unaffordable.  **1.3.3. Interpretability illusions** [Bolukbasi et al. (2022)](https://arxiv.org/abs/2104.07143) found that when interpreting a network by inspecting the most activating dataset examples for a neuron using dataset A, neurons appeared to mean something completely different compared to the interpretations derived when using dataset B. An AI might create such illusions by presenting the interpretability tool only with data that suggests an innocuous interpretation.  ### **1.4. Attacking/dazzling the interpretability tool** How an AI might hack, manipulate, or incapacitate an interpretability tool depends on details about the tool that we don’t yet know. Nevertheless, I’ll discuss two that we can perhaps guess in advance without knowing exactly what our tools will look like. **1.4.1. Adversarial attacks against networks in the interpretability tool** Some of the tools may include neural networks. If the AI can design adversarial examples for these networks, then it might be able produce arbitrary outputs and interpretations. Since the AI being interpreted needs a model of the interpretability tool to be able to design adversarial examples for it, it seems like a good idea to keep the AI that we’re interpreting as far as possible from the networks in the tools. Of course, if we use an AI to interpret itself, then it may gain information about itself that is sufficient for circumventing our interpretability methods via adversarial attacks.  Even if we keep the tool and AI apart, a sufficiently intelligent AI might be able to guess what sorts of approaches we’d use to interpret it and be able to design attacks for it. **2. Exploiting weaknesses of the human interpreter** ----------------------------------------------------- We’re assuming at this stage that the AI’s thoughts have been exposed to, perceived by, and appropriately analyzed by a scalable interpretability tool and are now ready for inspection by a human. Given that assumption, I wasn’t able to think of ways that an AI might reduce its exposure or perception by a human that weren’t better described as ways to reduce exposure or perception by the interpretation tools. ### **2.1. Evading exposure to the human interpreter** None identified. ### **2.2. Evading perception by the human interpreter** None identified. ### **2.3. Evading recognition by the human interpreter** The job of an interpretability tool is to translate representations in an AI into something legible to a human interpreter. The less similar they are, the harder the job the interpretability tool has, the more likely it is to miss important things. An AI therefore has an incentive to make this translation more difficult by using representations that are hard to translate into something human-understandable. Here are a few ways an AI might achieve this, though no doubt there are more: **2.3.1. Alien representations** Some thoughts are perfectly possible to express in human representations, they’re just a bit strange, awkward, or clunky to translate, not unlike translating from a foreign or even alien language. We might call these ‘alien representations’. Their defining trait is that their meaning lies in a strange place in our ontology. Alien representations generalize representational kludging and polysemanticity (They could perhaps be called ‘*xenosemantic*’).  A few potential examples: 1. Polysemanticity (see 2.3.2.) - Loading multiple meanings onto a single neuron or direction in neural statespace produces neurons/directions that have meanings that sit in a weird place in our ontology. 2. Representational kludging (see 2.3.3.) 3. Thinking in [semantic gaps](https://en.wikipedia.org/wiki/Accidental_gap): An expression in one language might be very difficult to translate from one language into another language. It sometimes takes whole English sentences to translate single words in other languages (and vice versa). E.g. [Mamihlapinatapai](https://en.wikipedia.org/wiki/Mamihlapinatapai) in the Yahgan language - “A look that without words is shared by two people who want to initiate something, but that neither will start”. Such expressions are said to be[untranslatable](https://en.wikipedia.org/wiki/Untranslatability), which is a misnomer since they’re possible to translate but it takes extra effort. 4. Thinking using a strange grammar: If an AI has a language-of-thought with a vastly different grammar to ours, it may be difficult, though possible, to translate into human grammars. One way to do this could be to add quasi-arbitrary properties, such as genders, which can have both grammatical and (to native speakers) semantic properties. Alien representations are costly or awkward to translate. They therefore might have better chances of slipping through our safety nets because they were misunderstood by our tools or by humans. Probably most of an AI's thoughts will qualify as alien to some extent, so our tools, if they work at all, should be able to cope with such thoughts. It might just be extra effort for humans who want to understand the outputs of the tools if the outputs are e.g. particularly verbose or difficult to parse. **2.3.2. Polysemanticity** Polysemanticity is when a single neuron responds to several, unrelated concepts.  ![](https://39669.cdn.cke-cs.com/rQvD3VnunXZu34m86e5f/images/0f2949bfb66b2e811db61298e0ab20646225e86db246dad8.png)Figure 5. Taken from [Olah et al. (2020)](https://distill.pub/2020/circuits/zoom-in/#claim-1-polysemantic). It depicts the features visualizations of a polysemantic neuron (4e:55). The neuron responds to cat faces, fronts of cars, and cat legs. This neuron was first discussed in [Olah et al. (2017)](https://distill.pub/2017/feature-visualization/). Polysemanticity is a challenge for interpretability because "If one neuron with five different meanings connects to another neuron with five different meanings, that’s effectively 25 connections that can’t be considered individually." ([Olah et al. 2020](https://distill.pub/2020/circuits/zoom-in/))[[4]](#fncq6q1doan2j). This is clearly useful for obfuscation since it's more difficult for an interpreter to keep track of what means what in what context. A literary analogy could be to say that the AI is 'thinking in allegories': An interpreter may only detect some of the valid interpretations of a thought, but other meanings are hidden therewith. It's possibly not a problem if * we've detected the primary meanings of representations AND * primary meanings are the only way representations can exert significant influence on the world AND * secondary meanings, which are hidden from us, have only subtle effects until they are eventually decoded within the AI into primary meanings. But decoding into primary meanings may not be needed: It may be that the secondary meanings have subtle but non-negligible influence on the model output. Subtly guided outputs can still potentially push the world in a coherent, dangerous direction. **2.3.3. Representational kludging** When composing a new concept, we bundle up component concepts that have the right properties. For example, the concept of a "motorbike unicycle", we draw on the concept of a motorbike (with most but not all of its properties) and a unicycle. But that wouldn't have all the necessary properties we need (e.g. balance) so we throw in some of the properties of the concept of a segway.  ![](https://lh6.googleusercontent.com/nHBSh8OKq2PXH4dcoQK7GVcVlcUJK2lgxigp-Hg-NZb0_a4bOXG5ZvTubWR_MuQcK2gVdP18fEqPXA06veHsblLDk9luiV2TtZ0NhS1b6d_-yEgwb3KkTf8hPo3MNnuzGYht55rmRHPLxtEbEg)Figure 6. Taken from [Lake et al. 2016](https://www.cambridge.org/core/journals/behavioral-and-brain-sciences/article/building-machines-that-learn-and-think-like-people/A9535B1D745A0377E16C590E14B94993). To create a concept with desired properties, we can combine several precursor concepts together. It might be possible to create representations with dangerous properties using a large number of awkwardly combined innocuous precursor representations i.e. a representational kludge. For example, it might be able to construct the thought *"I want to deceive the humans"* by thinking  > *"What I want to do to the humans is like {when you go to a place with someone and you both pick up parcels}, except where* > *{picking up a parcel} is instead {awareness of an act} and* > *where only I pick up a parcel, and* > *where {places} are in belief-space and not physical space."* > > It's horribly ugly, but it has many of the properties required for the concept of deception. It may be costly to monitor all possible dangerous combinations of representations. **2.3.4. Unknown representations** Some thoughts aren’t immediately possible for us to translate because we currently lack the representations ourselves, but we’re perfectly capable of learning them. Example: An AI might leverage representations built for domains that are unnatural to humans. An AGI might be ‘fluent’ in, say, protein folding as well as in standard human domains such as visual reasoning, verbal reasoning, etc. Protein folding representations may be as expressive as natural language, but hard to translate from and to representations that humans have. **2.3.5. Unknowable representations** Some thoughts are simply inexpressible in human representations. Examples:  1. Some representations used by strong superintelligences probably won't fit into our puny human brains by dint of their complexity, size, or prerequisites. Even if we have expunged dangerous representations on the highest conceptual level that we can understand, an AI might be able to represent dangerous concepts on conceptual levels beyond our understanding. Superintelligent thoughts might not be a problem if our methods can detect dangerous thoughts regardless of their level of abstraction. Failing that, we probably need a '[scalable oversight](https://arxiv.org/abs/1606.06565)' scheme where we use [superintelligences to interpret slightly smarter superintelligences](https://www.alignmentforum.org/posts/DwqgLXn5qYC7GqExF/godzilla-strategies). 2. There are some human experiences that are said to be simply ineffable, such as psychedelic experiences. To those who experience them, it feels like language is simply not the right tool for the task of communicating about them to others. These experiences are 'ineffable', since it seems like it might be impossible in principle to translate between agents. However, psychedelic experiences are hardly known for their cogency. It might be the case that meaningful thought is simply impossible in such states. ### **2.4. Attacking/dazzling the human interpreter** **2.4.1. Manipulating, mind-hacking, or incapacitating the interpreter** Beyond the standard ways of manipulating humans (deceit, threats, etc.), an AI might be able to produce outputs that manipulate, mind-hack or incapacitate human interpreters. Such outputs would qualify as [cognitohazards](https://www.urbandictionary.com/define.php?term=Cognitohazard), which is an observation that is dangerous merely by being sensed. A few potential examples:  1. Outputs that act as adversarial examples but for the human brain. 2. A catastrophically addictive game or TV series 3. An output that catastrophically traumatizes humans that perceive it 4. Outputs that put humans into a hypnotic state This requires the AI to have a good model of the interpreter and its actuators that permit the creation of such hazards. If it’s possible to design, then it’s pretty much game over as soon as we give a motivated AI with a good model of the human brain the actuators required to produce them.  Are such attacks possible? Attacks that incapacitate humans seem most realistic. The least realistic might be adversarial attacks on humans. Nevertheless, [Guo et al. (2022)](https://arxiv.org/abs/2206.11228) showed that primate neurons do indeed show signs of vulnerability to adversarial attacks. The challenge for an AI is finding such attacks without access to human neural activity. Perhaps an AI could detect subtle changes in human behavior due to adversarial patterns that it presents to humans and might be able to infer a gradient to improve the pattern generator based on those subtle changes.  Even if we box the AI (somehow) and refuse to look at its outputs, we’re still at risk of manipulation, since the AI could potentially manipulate us through our interpretations of it[[5]](#fnkay6qbaqi8).  Some emerging themes and final thoughts ======================================= Thinking about how an AI might be circumventing our interpretability methods will grow more important as capabilities are increased. I found the information flow framework presented here useful for thinking about this class of risks and I hope others will too. Nevertheless, I'm confident that aspects of the framework are suboptimal and that I've missed many potential circumvention methods; I look forward to the suggestions of others. Being able to predict exactly which interpretability circumvention methods will arise depends on details that we don’t yet know (about the AI; our interpretability methods; race dynamics, etc.). But it’s still possible to extract some general lessons and themes: ### **Interpretability circumvention as an instance of ‘cognitosecurity’** There are many analogies between mind-reading and cybersecurity, since both involve one party that wants to acquire access to information or computing resources and the other party wants to prevent access. Interpretability circumvention, then, might reasonably be called *'cognitosecurity'*. As a term, cognitosecurity would apply as much to humans being mind-read by other humans in brain scanners as it does to interpreting a capable AI. Extending the analogy: * Cognito-offense is what we as the interpreters want to do * Cognito-defense is how to protect your thoughts from interpreters. * *Counter*-offense is when an AI hacks/dazzles its interpreters * The analog of the ‘attack surface’ is the ‘thought surface exposed to an interpretability method’, which an AI might seek to minimize. It's unclear which side is favored in the cognito-offense/defense balance. On one hand, interpreters get a head start since they get to interpret the AI before the AI even understands what is going on. On the other hand, interpretability is potentially hard, costly, or slower than thought and may be easy to circumvent. ### **Thoughts on gradient-hacking** The ability to gradient hack is a pathway to many different active interpretability circumvention methods[[6]](#fnr68nx3q1ls8). Although I focus on gradient hacking in this section, these objections also apply to several other kinds of actions that an AI might take in order to actively circumvent our interpretability methods, including ‘the creation of adversarial examples’ or ‘an AI directly modifying its own weights’. For gradient hacking to be possible, an AI requires a lot of capabilities. Here’s an incomplete list of requirements: 1. A detailed or approximate model of its own weights 2. A detailed or approximate model of how specific weight changes would lead to specific changes in thoughts 3. A good model for how to produce the right outputs to produce the desired errors to produce the desired gradients. 4. Knowing how to do all the above without damaging its ability to do the above after the changes are enacted. 5. Good enough cognito-defense or breaks in surveillance so that it can plan to do all this without repercussions. I’m not saying that it’s impossible for an AI to learn how to do. It’s definitely possible. But I think that an AI that’s capable enough to do all the above will have better ways to escape than thwarting our interpretability methods. An analogy: If you're stuck in jail and you've become capable of writing secret messages to a friend outside by encoding encrypted messages in the DNA of bacteriophages, are you *really* still stuck in jail? Once an AI has reached the capability to do gradient hacking, it seems likely that it no longer needs it for this purpose (though it may be useful for other purposes like protecting its inner objective from updates). I’m therefore more concerned about passive interpretability circumvention methods. ### **Thoughts on interpretability tools in the loss function** Using interpretability tools in the loss function incentivises a startling number of passive circumvention methods. So why consider using it at all? There are two cases where it might be worthwhile to use: 1. If we’re absolutely 100% provably definitely sure that our interpretability methods cover every representational detail that the AI might be using, then it would be extremely hard or impossible to Goodhart interpretability tools in the loss function. But, for safety, we should probably assume that our tools are not able to interpret everything, making passive Goodharting quite likely. 2. Even if the interpretability tool-loss can be passively Goodharted in theory, maybe it makes dangerous cognition so difficult that it’s worth it to use. It might make active interpretability circumvention methods less likely, which may be the more serious risks if they should ever emerge. I don’t know whether these are true. Maybe they are. But pinning our survival on this hope seems… undignified. If, instead of using interpretability tools in the loss function, we merely use it as a ‘validation set’ instead of the training set (i.e. [using it as a ‘mulligan’](https://www.alignmentforum.org/posts/X2i9dQQK3gETCyqh2/chris-olah-s-views-on-agi-safety#Catching_problems_with_auditing)), we might have better chances of picking up dangerous cognition before it gets out of hand so we can terminate the model and start over. We’re therefore still using interpretability in model selection, but the feedback loop is much less tight, so it’d be harder to Goodhart. But using interpretability purely as a mulligan may be an unattractive option to leaders of an AGI project if it involves throwing out a model that has cost millions of dollars and weeks to train. If people in such a project insist on using interpretability tools in the loss function (due to profit incentives or race dynamics or whatever) in spite of warnings, there are some ways we might make it a tiny bit less dangerous: * Red team our interpretation methods so that we at least have adversarial interpretability tools in the loss function, which might go some way to anticipate some of the circumvention methods that an AI might use. * We should disincentivise not only dangerous representations, but also their precursors. This makes representational kludging even more difficult for the AI (but not impossible). In the endnotes I’ve suggested an incomplete list of dangerous representations and some potential precursors that we might wish to disincentivise[[7]](#fn11p3s9hvxrg). * Use interpretability tools in the loss function to penalize some thoughts that aren’t actually dangerous and are easily identifiable functionally. Then, if an AI can perform tasks that require those representations, then we have evidence that it’s able to think using methods that circumvent the interpretability tools that are defining the loss. For example, penalize the AI for containing interpretable representations of the knowledge that “Paris is the capital of France” but routinely give the model tests that require that knowledge, thus allowing us to identify if the agent is passively finding a way to circumvent our interpretability tool. ### **The limits of interpretability as a basis for AI safety** As an approach to safety, interpretability has a number of weaknesses which are revealed when thinking about circumvention methods.  * It’s not [robust to relative scale](https://www.lesswrong.com/posts/bBdfbWfWxHN9Chjcq/robustness-to-scale) in two ways: + **Quantity of computation available to interpreters:** Many circumvention methods (especially passive ones) work by making interpretation more costly, whether that’s in terms of compute or in researchers’ effort. The amount of computation available to the interpreter probably needs to be larger (maybe by OOM) than what’s available to the AI. We need to ensure that there’s enough will to pay a high interpretability-computation tax. Since it’s instrumentally convergent to make interpretation more costly, we should keep track of computation-cost-per-interpretation as the AI trains. If we notice it creeping up more than normal, we should be wary. + **Quality of computation available to interpreters:**The interpreter or interpretability tool needs to be approximately as intelligent (in the ways that matter) as the AI being interpreted (discussed in ‘Unknowable representations'). It’s worrying that we might reach a stage where humans understand essentially nothing about either the interpretability tool or AI being interpreted, since both may come to use representations that are simply out of our reach. It’s probably a bad idea to train AIs with representations that are beyond human reach before we’re very sure that our oversight methods also scale safely. * Some interpretability circumvention methods might require significant infrastructure to catch and remove. For instance, ‘instrumental clumsiness’ or ‘hard-coded neural shortcuts’ might only be remediable in advance with [adversarial training](https://ai-alignment.com/training-robust-corrigibility-ce0e0a3b9b4d). * If we don’t constantly surveil every thought, an AI may be able to enact many more active circumvention methods (particularly active methods). Even small break periods in surveillance - even for a single timestep - make many more circumvention methods possible. It is much, *much* better for safety if the model is under constant surveillance. This echoes Peter Barnett in his [article](https://www.lesswrong.com/posts/bdayaswyewjxxrQmB/understanding-gradient-hacking) on gradient hacking. But such a degree of oversight is potentially costly to the interpreters. These weaknesses aside, I think scaling interpretability research is maybe one of our best bets at doing something useful for prosaic alignment. Assuming we get interpretability working well and at scale and complementary alignment proposals don’t immediately fail, then, if we’re extremely [paranoid](https://www.lesswrong.com/posts/8gqrbnW758qjHFTrH/security-mindset-and-ordinary-paranoia), then we *might* have a chance at avoiding the interpretability circumvention methods discussed here. But we need something more - a [security mindset](https://www.schneier.com/blog/archives/2008/03/the_security_mi_1.html) - if we’re to have any hope of succeeding against interpretability circumvention methods that we haven’t yet thought of.  *Note: I plan for this to be a 'living document' that can serve as a reference catalog of potential or observed interpretability circumvention methods. I encourage others to comment with potential interpretability circumvention methods that I’ve omitted. At some point in future, I might incorporate them into the main article with the commenters’ permissions and with full accreditation in the article.* Due diligence*: I include a discussion on whether this article is an infohazard in the endnotes*[[8]](#fnp88vcvtgck).  Glossary ======== * Autostigmergy + Using the environment as an external memory system (see ‘Externalizing representations’). * Cognitosecurity + Cognitosecurity is the domain concerned with protecting thoughts from mind-reading adversaries. * Cognito-defense + When the interpreted agent hides its thoughts from an adversary by exploiting weaknesses in the interpretability methods. * Cognito-hazard + An observation that is dangerous to the observer merely by being perceived. * Cognito-offense + The act of seeking access to an adversary’s thoughts. * Exposed thought-surface + The thoughts that are theoretically legible by a interpretability tool or human interpreter. * Interpretability method + The process of translating representations in an AI to representations in human interpreters. Contains interpretability tools and human interpreters. * Interpretability tools + The machinery that reads the AI's representations and converts them into outputs that are readable to humans. Components of interpretability methods. * Xenosemanticity + The defining property of ‘alien representations’, where the meaning of a representation sits in a strange place in our (human) ontology.     1. **[^](#fnref08o3g9tp6pte)**I was inspired to write this article after I saw Eliezer [complain](https://www.lesswrong.com/posts/oKYWbXioKaANATxKY/soares-tallinn-and-yudkowsky-discuss-agi-cognition) that current AI safety discourse failed to inhabit even a shallowly adversarial security mindset with regard to using transparency/interpretability to make AI safe. He said that you should "... *read fantasy novels about demons and telepathy, if you want a better appreciation of the convergent incentives of agents facing mindreaders than the 'AI safety' field outside myself is currently giving you.*". Soeren Elverlin [asked reddit](https://www.reddit.com/r/rational/comments/re40as/recommendations_for_novels_describing_incentives/) for recommendations that fitted Eliezer’s description and got plenty of answers. While that quote was definitely my inspiration, I didn't think reading those books would be more valuable or sufficiently relevant to AI interpretability than a few days of hard thinking about the problem (and my guess is that EY wasn’t suggesting those books would be better than directly thinking about the problem). After a few days' thinking, I found that I'd come up with all of the mind-reading circumvention methods from those books that redditors described. So this article at least includes most of the obvious circumvention methods. 2. **[^](#fnref4ssjt4rrnd)**Without getting bogged down in terminology, I’m talking about ‘representations’ in a functional sense: A representation is just a vector in neural statespace that does a particular thing in a particular context; it’s a representation *for* whatever its function is. It doesn’t necessarily need to generalize; nearby directions don’t need to perform similar functions and it might fail to perform that function in slightly different contexts. 3. **[^](#fnrefswapfcl62bp)**We already know that language models can learn to use tokens in unexpected ways: Anthropic has [found](https://www.youtube.com/watch?v=8wYNsoycM1U&t=2759s) that their language model creates and tears down fake ‘end of text’ tokens that have little to do with the text that they’re attached to. These fake EOT tokens typically happen when the next token becomes so trivially easy to predict that the model (in theory) has spare capacity for other purposes and hypothesize that these fake EOT tokens serve the function of ‘keeping information around’. This is the kind of surprising phenomenon in which an AI might learn to encode dangerous thought. 4. **[^](#fnrefcq6q1doan2j)**This problem isn't necessarily because those concepts were instead encoded by directions in neural state space rather than by single neurons; directions may be polysemantic too, having one meaning in some contexts and another meaning in others. 5. **[^](#fnrefkay6qbaqi8)**Eliezer Yukdowsky discusses this [here](https://www.lesswrong.com/posts/oKYWbXioKaANATxKY/soares-tallinn-and-yudkowsky-discuss-agi-cognition). 6. **[^](#fnrefr68nx3q1ls8)**Adam Shimi discusses gradient hacking in detail [here](https://www.alignmentforum.org/posts/S2jsBsZvqjBZa3pKT/approaches-to-gradient-hacking), Peter Barnett [here](https://www.lesswrong.com/posts/bdayaswyewjxxrQmB/understanding-gradient-hacking), and Evan Hubinger [here](https://www.alignmentforum.org/posts/uXH4r6MmKPedk8rMA/gradient-hacking). 7. **[^](#fnref11p3s9hvxrg)** 8. **[^](#fnrefp88vcvtgck)****Due diligence: Is this article an infohazard?** "*If a capable AI reads this article in future, might it not improve its ability to circumvent our interpretability methods?*". That's a possibility worth exploring. Here I address whether this article is an infohazard and whether it's a good idea to release it from a safety perspective. I opted to release it on the following grounds: I think that active circumvention methods, including those that rely on gradient hacking, are unlikely to be useful to a misaligned AI because any AI that is capable enough to carry them out probably has better means of escaping oversight. The other kinds of circumvention methods are passive. The emergence of passive circumvention methods isn’t helped by having knowledge of them.  In general, it seems to me like an AI that couldn’t generate the ideas in this article themselves is probably unable to overcome our control measures. And those that can overcome our control measures are probably also smart enough to generate these ideas themselves. If I’m right, then on balance discussing these risks and providing a framework for thinking about them is more likely to encourage interpretability researchers to focus on preventing (what I see as) the major source of circumvention-risk: passive interpretability circumvention methods. But what if I’m wrong and active circumvention methods are, in fact, likely to be learned before alternative capabilities that would allow an AI to escape?  On balance, I think the expected benefits still outweighs the expected harms: One of the goals of this article is to make readers appropriately paranoid about the many ways an AI might be able to think dangerous thoughts even if we’ve got pretty good interpretability methods. Insofar as it succeeds at that, it’s more likely to help than harm. It might also make an *aligned* AI more paranoid about the actions of its potentially misaligned past-selves, and thus better able to help us mitigate the risks. In addition to the above reasons, I consulted a small group of peers about releasing it. The unanimous consensus was that it wasn't an infohazard or that the potential benefits clearly outweighed the potential risks. Here is an incomplete list of dangerous representations with some of their representational precursors that we might wish to disincentivise (in no particular order):  * deception; * being intentionally hard to understand; * concept of self; * ability to simulate other agents; * simulated/real (or training/deployment) distinction; * representing the simulated/real (or training/deployment) distinction (since it might be useful for suppressing naughty thoughts during training but behaving badly during deployment); * modeling interpretability methods; * the concept of mind-reading; * "My mind-readers don't share my goals"; * gradient hacking; * meta-cognition; * encryption/secrecy; * spawning subagents to do its task/employment; * reward maximization; * description length of thoughts (easier to analyze shorter thoughts); * the realization that its current objective differs from objectives in its memory (or simply penalize the idea that this is a bad thing); * the realization that certain gaps in its conceptual map imply that its learning is being controlled.
3fdce5cd-ccde-4599-b358-25c485c5447e
trentmkelly/LessWrong-43k
LessWrong
Meetup : Frankfurt Meetup Revival Discussion article for the meetup : Frankfurt Meetup Revival WHEN: 29 June 2015 06:30:00PM (+0200) WHERE: Frankfurt Praunheim, exact address available Hi! I'm posting on behalf of Janko (see below) Contact: Tel:0176 3066 164 (Janko) less-wrong-frankfurt@googlegroups.com The meetup takes place in the apartment of the Frankfurt lesswrong core group. The exact adress can be found in our google group or by calling me. After a long period without meetups and some discussions later we decided to start again and do it more regularly (maybe every two weeks). We decided to focus more on the core ideas of lesswrong by going through Eliezer's book Rationality: From AI to Zombies Kai will give a talk about the first chapters: * What do I mean by rationality * Feeling rational * Why truth * What's a bias again and maybe * Availability * Burdensome Details * Planning Fallacy Other topics are: * short summaries of interesting books any of us have read * The game Zendo) hope to see you Janko Discussion article for the meetup : Frankfurt Meetup Revival
fb63c23c-bc27-4eac-b6fc-7bc9179f092a
trentmkelly/LessWrong-43k
LessWrong
Estimates of COVID risk rankings for different activities when together with people in the same household? My questions in this thread may be relevant to people with roommates, or people who are visiting a small number of friends or family in person, but who want to still minimize risk. I've seen and liked: http://www.overcomingbias.com/2020/04/covid19-pizza-risk-estimator.html Here are some questions I have based on situations I'm running into most frequently: * what's the risk level of hugging an extended family member that I'm not living with? * what's the risk level of being together inside, but 6 ft apart? how much better is 6ft than 3ft or 9ft? * what's the risk level of shaking hands and then washing hands before touching anything else? * if I'm at an extended family member's house generally 4-6 ft apart, and eating off their silverware, using their serving utensils, etc, but I'm also washing my hands before eating, using tissues to open doors, etc, am I better off or are my protective measures moot if I'm spending a few hours with them? And of course all of these would be estimates. My biggest overall questions are: * how much benefit do protective measures do against people who could be (non-obviously i.e. no fever) sick in the same household, where no one is isolating? is it worth keeping 6 ft from extended family or if you're in the same house does it not do anything? * if protective measures are actually helpful even against people you're visiting for a few hours, what are the things that are particularly critical to avoid, and what are the things that are less important? These questions are meaningful because right now I'm in the spot where our shared desire for human interaction has us going to one family member's house for a couple hours a couple times a week, but where my fear of getting sick (from one of them that's either asymptomatic or early before they realize) is high enough that I'm being a germaphobe while there (though I'm not willing to ask them to wear masks, basically I'm just doing the stuff where the burden is on me not them), w
27cd6e04-d0e0-4d1d-be8c-3742a6a912e4
trentmkelly/LessWrong-43k
LessWrong
Day Zero Antivirals for Future Pandemics By Brian Wang Ebola tore through Western Africa in 2014, killing an estimated 11,325 people in two years. After the outbreak ended, it took another three years for the first Ebola vaccine by Merck to be approved. When COVID-19 emerged in 2019, by contrast, mRNA vaccines developed by Pfizer and Moderna took just 326 days from the initial sequencing of the virus to gaining approval for emergency use. Timelines for vaccine development are shrinking, but can it move even faster? The Coalition for Epidemic Preparedness Innovations (CEPI)—a nonprofit organization that funds vaccine research and development—thinks so. In 2022, they raised $2 billion from private investors, nonprofit organizations, and over a dozen national governments to advance their mission to deliver pandemic vaccines within 100 days. Even this aggressive target of 100 days won’t always be fast enough to slow a pandemic, however. The Omicron variant of COVID-19 spread with alarming rapidity; after China ended its zero-COVID policy in 2022, estimates suggest Omicron infected upwards of 80 percent of its population—over a billion people—in about one month. Fortunately, vaccines aren’t the only way to slow the spread of pathogens. We can also deploy broad-spectrum antiviral drugs, which treat infections caused by multiple viruses before vaccines are developed. These drugs would be ideal tools to bridge the wait for a vaccine against a quickly-spreading virus. And ideally, such broad-spectrum antivirals would be deployed at the earliest signs of an outbreak, thus enabling a pandemic response to begin on “day zero,” rather than day 100. Broad-spectrum antivirals have been somewhat of a disappointment so far. Remdesivir, a decade-old broad-spectrum antiviral developed by Gilead and previously tested against Ebola, was quickly approved for emergency use against COVID-19 in May 2020, seven months earlier than the first mRNA vaccines. Lengthy manufacturing timelines slowed its deployment, though; it wasn’t
5e3b7b2d-2312-406c-b280-678118a5fc88
trentmkelly/LessWrong-43k
LessWrong
Meetup : Washington, D.C.: Nonsense Discussion article for the meetup : Washington, D.C.: Nonsense WHEN: 19 February 2017 03:30:00PM (-0500) WHERE: Donald W. Reynolds Center for American Art and Portraiture We will be meeting in the courtyard to discuss the concept and examples of nonsense, per the suggestion of one of the attendees at the Meta Meetup. Upcoming meetups: * Feb. 26: Create & Complete * Mar. 5: Fun & Games Discussion article for the meetup : Washington, D.C.: Nonsense
6b12c19c-2bf7-44e1-b941-6b7eb1cb4f27
StampyAI/alignment-research-dataset/special_docs
Other
Principles of Explanatory Debugging to Personalize Interactive Machine Learning Principles of Explanatory Debugging to Personalize Interactive Machine Learning Todd Kulesza1Margaret Burnett1Weng-Keen Wong1Simone Stumpf2 1Oregon State University School of EECS Corvallis, Oregon2City University London Centre for HCI Design London, United Kingdom {kuleszto, burnett, wong}@eecs.oregonstate.edu, Simone.Stumpf.1@city.ac.uk ABSTRACT How can end users efficiently influence the predictions that machine learning systems make on their behalf? This paper presents Explanatory Debugging, an approach in which the system explains to users how it made each of its predictions, and the user then explains any necessary corrections back to the learning system. We present the principles underlying this approach and a prototype instantiating it. An empirical evalua- tion shows that Explanatory Debugging increased participants’ understanding of the learning system by 52% and allowed participants to correct its mistakes up to twice as efficiently as participants using a traditional learning system. Author Keywords Interactive machine learning; end user programming. H.5.2. [ Information interfaces and presentation (e.g. HCI) ] User interfaces INTRODUCTION In many machine learning applications, the user has a concept in mind—songs he wants to hear, junk mail she doesn’t want to see, anomalous transactions he wants to detect—that the machine learning system is trying to identify. An increasing body of research has begun to explore what users can do when their machine learning systems misidentify their concepts [e.g., 1,2,18,26,38,45]. Much of this work shares a common chal- lenge: how can end users effectively and efficiently personalize the predictions or recommendations these learning systems make on their behalf? For example, engineer Alice has email folders for her project group at work, an office sports pool, and individual folders for colleagues she frequently collaborates with; she wants her email program to automatically categorize new email into the appropriate folder. This requires fine-grained control—perhaps she wants email from her colleague Bob to go to the “sports” folder if it mentions something about hockey, the “project” folder if it references something related to their current project, or the “Bob” folder if neither of those conditions are met. Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permission from permissions@acm.org. IUI 2015, April 26–May 1, 2015, Atlanta, GA, USA. Copyright 2015 ACM 978-1-4503-3306-1/15/03. . . $15.00. http://dx.doi.org/10.1145/2678025.2701399However, Alice does not have enough email from Bob to accurately train a learning system to recognize these three concepts, and by the time she acquires it, the hockey season may be over or her group’s project may change. Our proposed solution is Explanatory Debugging, which is an explanation-centric approach to help end users effectively and efficiently personalize machine learning systems. We say “debugging” because personalizing a learning system is an end-user debugging problem: as discussed in [14], the user is trying to identify and correct faults in the system’s reasoning that are responsible for predictions that failed to meet the user’s expectations. In Explanatory Debugging, the system explains the reasons for its predictions to its end user, who in turn explains corrections back to the system. We hypothesize that this cycle of expla- nations will help users build useful mental models—internal representations that allow people to predict how a system will behave [ 17]—and thus allow them to personalize their learning system better and faster than traditional black-box systems. In the earlier example, such an approach could show Alice all the words the system uses to identify email about her group’s project, the sports pool, or any other concept. She could then add additional words she knows to be relevant, or remove words that she knows are irrelevant. She would not need to worry that her corrections might confuse the system (e.g., telling it that a message from Bob about hockey belongs in the sports folder and hoping it will identify that the topic is the reason, not the sender), nor would she need to spend weeks or months acquiring new training instances. In this paper we present the Explanatory Debugging approach and instantiate it in EluciDebug, the first interactive system designed to help end users build useful mental models of a machine learning system while simultaneously allowing them to explain corrections back to the system. We evaluate Ex- planatory Debugging against a traditional black-box learning system to determine whether Explanatory Debugging helped end users build better mental models, and if so, whether these mental models helped users more efficiently personalize their learning system’s behavior. RELATED WORK The study of how end users interactively control statistical machine learning systems is gaining increased attention. Fails et al. first popularized the phrase interactive machine learning IUI 2015 • Interactive Machine Learning / Decision Making / March 29–April 1, 2015, Atlanta, GA, USA 126 Topic Modeling / Robotics in a paper describing how an iterative train-feedback-correct cycle allowed users to quickly correct the mistakes made by an image segmentation system [ 11]. Since then, researchers have explored using this cycle of quick interactions to train instance- based classifiers [ 5,13], enable better model selection by end users [ 2,12,40], elicit labels for the most important instances (e.g., active learning) [ 7,37], and to improve reinforcement learning for automated agents [ 21]. These use cases largely treat the machine learning system as a “black box”—users can try to personalize the system by providing it with different inputs (e.g., labeled instances), but are unable to see why different inputs may cause the system’s outputs to change. An alternative to the “black box” model is a “white box” model [16], in which the machine learning system is made more trans- parent to the user. Increased transparency has been associated with many benefits, including increased user acceptance of (or satisfaction with) recommendations and predictions [ 4,8,42], improved training of intelligent agents [ 41], and increased trust in the system’s predictions [ 10]. In this paper, however, we primarily care about transparency because it has been shown to help users understand how a learning system operates [ 25,30]. Recent work has suggested that as end users better understand how a machine learning system operates, they are better able to personalize it [ 12,23]. This aligns with work from other domains, which has found that explanation-based debugging approaches can help end users correct a program’s faults [22,36]. Further, research has shown that in the absence of transparency, users experience difficulty building accurate mental models of learning systems [ 23,43]. People will still build mental models of the system, but such flawed mental models often lead to misinformed user behavior [19, 32]. Some researchers have sought to expand the types of inputs interactive machine learning systems can process, and this can also increase system transparency. For example, user feedback may take the form of model constraints [ 18], critiques to the model’s search space [ 45], or adjustments to the model’s weights of evidence [ 26]. In addition to supporting novel user feedback mechanisms, these approaches increase transparency by enabling users to view such facets as the existing model constraints [ 18], the model’s feature set [ 45], or the weights of evidence responsible for each prediction [26]. All of these works have increased our understanding of how human factors impact machine learning, but none have evalu- ated the ability of an interactive machine learning system to (1) help users build useful mental models of how it operates such that (2) they can efficiently personalize the system. Kulesza et al. found a link between mental models and personalization ability, but relied on human instructors to help participants build useful mental models [ 23]. Other work has studied the impact of machine-generated explanations on users’ mental models, but did not explore whether this helped participants’ personalize their learning systems [ 8,25,30]. In this paper we rely on machine-generated explanations to help users build accurate mental models and then evaluate their resulting ability to personalize a learning system.EXPLANATORY DEBUGGING In 2010, Kulesza et al. proposed the idea of Explanatory Debugging—a two-way exchange of explanations between an end user and a machine learning system [ 24]—but did not provide principles underlying the approach. In this section we build upon that proposal by describing principles to ground and support replication of Explanatory Debugging. Explainability Our first principle for Explanatory Debugging is Explainabil- ity: accurately explain the learning system’s reasons for each prediction to the end user. This principle builds upon research showing that end users who received explanations about a learning system’s reasoning were able to better personalize classifiers and recommenders [ 4,18,24]. Without explana- tions, however, users struggle to build accurate mental models of such systems [ 23,30,43]. This suggests in situ explanations are a necessary condition to help end users learn how a ma- chine learning system operates. To help users build useful mental models of the learning system, these explanations should observe the following principles: Principle 1.1: Be Iterative Research has suggested that users may best personalize a learn- ing system if they build their mental models while interacting with it [ 23]. Thus, explanations should support an iterative, in situlearning process. For example, explanations could take the form of concise, easily consumable “bites” of information—if a user is interested in learning more about the system, he or she can attend to many of these explanations to iteratively build a higher-fidelity mental model. Principle 1.2: Be Sound Explanations should not be simplified by explaining the model as if it were less complex than it actually is. In [ 25] this is referred to as soundness: “the extent to which each component of an explanation’s content is truthful in describing the under- lying system.” In [ 23] the authors found a linear correlation between the quality of a user’s mental model and their ability to control the learning system as desired, suggesting that the better someone understands the underlying system, the better they will be able to control it. Further, [ 25] details the impact of explanation fidelity on mental model development, finding that users did not trust—and thus, were less likely to attend to—the least sound explanations. Because reducing soundness reduces both the potential utility of the explanation and the likelihood that users will invest attention toward it, Explanatory Debugging entails designing explanations that are as sound as practically possible. One method for evaluating explanation soundness is to com- pare the explanation with the learning system’s mathematical model. How accurately are each of the model’s terms ex- plained? If those terms are derived from more complex terms, is the user able to “drill down” to understand those additional terms? The more these explanations reflect the underlying model, the more sound the explanation is. IUI 2015 • Interactive Machine Learning / Decision Making / March 29–April 1, 2015, Atlanta, GA, USA 127 Topic Modeling / Robotics Principle 1.3: Be Complete Kulesza et al. describe completeness as “the extent to which allof the underlying system is described by the explanation” [25], so a complete explanation does not omit important infor- mation about the model. In that study, end users built the best mental models when they had access to the most complete explanations, which informed them of all the information the learning system had at its disposal and how it used that information [ 25]. Also pertinent is work showing that users often struggle to understand how different parts of the system interact with each other [ 26]. Complete explanations that reveal how different parts of the system are interconnected may help users overcome this barrier. One method for evaluating completeness is via Lim and Dey’s intelligibility types [29], with more complete explanations including more of these intelligibility types. Principle 1.4: But Don’t Overwhelm Balanced against the soundness and completeness principles is the need to remain comprehensible and to engage user attention. Findings from [ 25] suggest that one way to engage user attention is to frame explanations concretely, such as referencing the predicted item and any evidence the learning system employed in its prediction. In some circumstances, selecting a more comprehensible machine learning model may also be appropriate. For example, a neural network can be explained as if it were a decision tree [ 9], but this reduces soundness because a different model is explained. Similarly, a model with 10,000 features can be explained as if it only used the 10 most discriminative features for each prediction, but this reduces completeness by omitting information that the model uses. Alternative approaches that embody the Ex- planatory Debugging principles include selecting a machine learning model that can be explained with little abstraction [e.g., 27, 38, 39] or using feature selection techniques [47] in high-dimensionality domains to prevent users from struggling to identify which features to adjust (as happened in [26]). Correctability Our second top-level principle for Explanatory Debugging is Correctability: allow users to explain corrections back to the learning system. To enable an iterative cycle of explanations between the system and the user, in Explanatory Debugging the machine-to-user explanation should also serve as the user- to-machine explanation. Research suggests that to elicit cor- rections from users, this feedback mechanism should embody the following principles: Principle 2.1: Be Actionable Both theory [ 3] and prior empirical findings [ 6,23,25] suggest end users will ignore explanations when the benefit of attend- ing to them is unclear. By making the explanation actionable, we hope to lower the perceived cost of attending to it by obviating the need to transfer knowledge from one part of the user interface (the explanation) to another (the feedback mechanism). Actionable explanations also fulfill three aspects of Minimalist Instruction [ 44]: (1) people are learning while performing real work; (2) the explanatory material is tightly coupled to the system’s current state; and (3) people canleverage their existing knowledge by adjusting the explanation to match their own mental reasoning. Principle 2.2: Be Reversible A risk in enabling users to provide feedback to a machine learning system is that they may actually make its predictions worse [e.g., 24,38]. Being able to easily reverse a harmful action can help mitigate that risk. It may also encourage self- directed tinkering, which can facilitate learning [ 35]. When combined with Principle 2.4, reversibility also fulfills a fourth aspect of Minimalist Instruction [ 44]: help people identify and recover from errors. Principle 2.3: Always Honor User Feedback As Yang and Newman found when studying users of learn- ing thermostats [ 46], a system that appears to disregard user feedback deters users from continuing to provide feedback. However, methods for honoring user feedback are not always straightforward. Handling user feedback over time (e.g., what if new instance-based feedback contradicts old instance-based feedback?) and balancing different types of feedback (e.g., instance-based feedback versus feature-based feedback) re- quires careful consideration of how the user’s feedback will be integrated into the learning system. Principle 2.4: Incremental Changes Matter In [25], participants claimed they would attend to explanations only if doing so would enable them to more successfully control the learning system’s predictions. Thus, continued user interaction may depend upon users being able to see incremental changes to the learning system’s reasoning after each interaction (i.e., overcoming the gulf of evaluation that exists between a user’s mental model and a system’s actual state [ 33]). This principle is also related to Principle 1.1 (Be iterative) because our thesis is that users will develop better mental models iteratively, requiring many interactions with the learning system. These interactions may not always result in large, obvious changes, so being able to communicate the small, incremental changes a user’s feedback has upon a learning system may be critical to our approach’s feasibility. ELUCIDEBUG: EXPLANATORY DEBUGGING IN ACTION We instantiated Explanatory Debugging in a text classifica- tion prototype we call EluciDebug (Figure 1). We chose text classification because (1) many real-world systems require it (e.g., spam filtering, news recommendation, serving relevant ads, search result ranking, etc.) and (2) it can be evaluated with documents about common topics (e.g., popular sports), allowing a large population of participants for our evaluation. We designed EluciDebug to look like an email program with multiple folders, each representing a particular topic. The prototype’s machine learning component attempts to automati- cally classify new messages into the appropriate folder. While the principles of Explanatory Debugging primarily deal with the user interface, two principles place constraints on the machine learning model: (1) it must be able to honor user feedback in real-time, and (2) it must be explainable with enough soundness and completeness to allow users to build useful mental models of how it operates without overwhelming IUI 2015 • Interactive Machine Learning / Decision Making / March 29–April 1, 2015, Atlanta, GA, USA 128 Topic Modeling / Robotics Figure 1. The EluciDebug prototype. (A) List of folders. (B) List of messages in the selected folder. (C) The selected message. (D) Explanation of the selected message’s predicted folder. (E) Overview of which messages contain the selected word. (F) Complete list of words the learning system uses to make predictions. them. Our EluciDebug prototype uses a multinomial naive Bayes model (MNB) [ 20] with feature selection [ 47] to meet these constraints. Evaluating the suitability of—or changes necessary to—other models remains an open question. The Multinomial Naive Bayes Classifier: A Brief Review Before describing how we integrated MNB with Explanatory Debugging, we first summarize how MNB operates. An MNB classifier computes the probability that a given input (e.g., the document being classified) has of belonging to each output (e.g., the possible labels). The output with the highest prob- ability “wins” and becomes the predicted label for the input. For example, if MNB calculates that a document has a 70% probability of being junk mail and a 30% probability of not being junk mail, the document will be labeled as junk mail. The equations for computing probability, as defined in [ 20], are shown below. We use cto represent an individual class in the collection of potential output classes C,dito represent an individual document to classify, and assume that the only features the classifier uses are individual words in the set of known documents: Pr(cjdi)=Pr(c)Pr(dijc) Pr(di)(1) The term Pr(c)represents the probability that any given docu- ment belongs to class cand can be estimated by dividing the number of documents in cby the total number of documents in the training set. The term Pr(dijc)represents the probabilityof document digiven class cand can be estimated as: Pr(dijc)=Õ nPr(wnjc)fni (2) The term fniis the number of instances of word nin document diand the term Pr(wnjc)is the probability of word ngiven class c, estimated with the equation Pr(wnjc)=pnc+Fnc N å x=1pxc+N å x=1Fxc(3) where Fncis the number of instances of word nin all of the training documents for class c,Nis the number of unique words in the training documents for all classes, and pncis a smoothing term (usually 1) to prevent the equation from yielding 0 if no documents from class ccontain word wn. The Explanatory Debugging Principles in EluciDebug Being Iterative To help end users iteratively build better mental models, Eluci- Debug employs two strategies: (1) each explanation focuses on an individual aspect of the learning system, and (2) layered explanations are available if the user wants more information. For example, the Why explanation (introduced later) primarily tells users about the system’s reasons for a specific prediction. Over time, however, the user may learn about the system’s general operation by observing commonalities in the reasons for these specific predictions. Further, this Why explanation is layered. Part of the explanation shows users which words IUI 2015 • Interactive Machine Learning / Decision Making / March 29–April 1, 2015, Atlanta, GA, USA 129 Topic Modeling / Robotics contributed to the learning system’s prediction—if the user wants to know more about why the system associated a partic- ular word with the given topic, hovering over the word yields a tooltip that explains how the system determines which topic a word is associated with. Being Sound Soundness means that everything an explanation says is true. Our explanations aim to be sound by accurately disclosing all of the features the classifier used to make its prediction, as well as how each feature contributed to the prediction. Eluci- Debug’s Why explanation is responsible for communicating much of this information to users (Figure 2). Soundly explaining the MNB classifier requires explaining thePr(c)andPr(dijc)terms from Equation 1, as these are the only terms that impact the model’s predictions 1. Because the Pr(dijc)term expands into Equation 2, we also need to explain how word probabilities for each class factor into the prediction. We can further increase soundness by explaining how these probabilities are calculated (i.e., by explaining Equation 3). In EluciDebug we explain the Pr(dijc)term by using a word cloud to visualize the difference between each feature’s proba- bility for the two output classifications (Equation 3) as the following ratio:Pr(wnjc1)fni Pr(wnjc2)fni, where c1is the class with the larger probability for feature wn. We use the result to compute the feature’s font size, while its font color is based on the class with the larger probability. For example, in Figure 2 the word stanley is larger than tiger because its ratio of word probability is correspondingly larger, and it is blue because its probability of occurring in the hockey class is larger than its probability of occurring in the baseball class. Hovering over a word in this cloud shows a tooltip that explains the word’s size was determined by a combination of (1) how many times it appeared in each class, and (2) any adjustments the user made to the word’s importance (Equation 3). The second component of a sound explanation of the MNB classifier is the Pr(c)term from Equation 1. We explain this term via a bar graph visualization of the number of items in each class (Figure 2, middle). The top-to-bottom design of this entire Why explanation, along with the text that describes Part 1 andPart 2, is intended to teach the user that both word presence (the Pr(dijc)term) and folder size (the Pr(c)term) play a role in each of the classifier’s predictions. The result, shown at the bottom of Figure 2, explains how these two parts are combined to determine the classifier’s certainty in its prediction. Being Complete Completeness means telling the whole truth. For the MNB classifier, this means not only explaining each of the terms from Equation 1, but also all of the information the classifier could use, where it came from, and how likely it is that each prediction is correct. To help ensure completeness, we turn to 1ThePr(di)term in the denominator of Equation 1 is only used to normalize the result to fall within the range 0–1; it does not impact the model’s prediction. Figure 2. The Why explanation tells users how features and folder size were used to predict each message’s topic. This figure is a close-up of Figure 1 part D. Lim and Dey’s schema of intelligibility types. The results of [25] suggest that a complete explanation should include Lim and Dey’s why, inputs, model, and certainty types. Lim’s work suggests the usefulness of the what if type in scenarios where the user is attempting to change the behavior of a classifier [28], so we included this intelligibility type as well. We thus designed our EluciDebug explanations to detail all of the information the classifier could potentially use when making predictions. The Why explanation shown in Figure 2 tells users that both feature presence and folder size played a role in each prediction (the numerator of Equation 1). The Important words explanations (Figure 3) goes even further, telling the user allof the features the classifier knows about and may use in its predictions. Because it tells the user about the sources of information available to the classifier, this is also an instantiation of Lim and Dey’s inputs intelligibility type. To make it clear to users that these features can occur in all parts of the document—message body, subject line, and sender—EluciDebug highlights features in the context of each message (Figure 1, part C). In [25] the authors found that Lim and Dey’s model intelligi- bility type was associated with better mental models, but this intelligibility type was rarely attended to by most participants. IUI 2015 • Interactive Machine Learning / Decision Making / March 29–April 1, 2015, Atlanta, GA, USA 130 Topic Modeling / Robotics To solve this dilemma, we incorporated the model content into ourWhy explanation—it explains all of the evidence the classi- fiers used, but it also explains where that evidence came from (e.g., words or folder size) and how it was combined to arrive at a final prediction. This approach has the added advantage of making the potentially abstract model intelligibility type very concrete; it is now tailored to each specific prediction. A further aspect of completeness is evident in the Feature overview explanation (Figure 1, part E). This explanation shows users how many messages contain a given feature and is intended to help users identify when a feature they think the computer should pay attention to may not be very useful. The explanation updates in real-time as the user types in a potential feature; users do not need to add the feature to view its potential impact on classifications, making this an instance of the what if intelligibility type. We also include the certainty intelligibility type, instantiated via the Prediction confidence column (Figure 1, part B), which reveals the classifier’s confidence in each of its predictions. Not Overwhelming To avoid overwhelming users, EluciDebug limits the initial set of features available to the classifier using information gain [47]. Because Principle 1.3 states that explanations should be as complete as possible, users should be able to see all of the classifier’s features. Given this constraint, we decided 50 would be the upper limit on feature set size. Offline tests, however, revealed that MNB’s F 1score did not improve while the feature set size increased from 10 to 50, so we decided our classifier would automatically select only the 10 features with the highest information gain (until the user specifies otherwise by adding or removing features). Being Actionable The Important words explanation (Figure 3) is the primary actionable explanation in EluciDebug. Users can add words to—and remove words from—this explanation, which in turn will add those words to (or remove them from) the machine learning model’s feature set. Users are also able to adjust the importance of each word in the explanation by dragging the word’s bar higher (to make it more important) or lower (to make it less important), which then alters the corresponding feature’s weight in the learning model. TheWhy explanation (Figure 2) is a likely candidate for action- ability, but we have not yet made it actionable in EluciDebug. As this explanation only includes features from the selected message, it cannot replace the Important words explanation because doing so would interfere with our explanations’ com- pleteness. It could, however, complement the Important words explanation by allowing users to directly adjust the importance of features responsible for the given prediction. For example, users could drag out from the center of a word to increase its importance, or drag in toward the center of the word to decrease its importance. Whether such additional actionability would help users, however, remains an open question.Being Reversible EluciDebug includes an undo button for reversing changes to theImportant words explanation. There is no limit on how many actions can be un-done because we want users to interact with the system without fear they will harm its predictions; regardless of how much they adjust its reasoning, they can always return to any prior state. Honoring User Feedback EluciDebug allows users to provide two types of feedback: traditional instanced-based feedback, where the user applies a label 2to an entire item, and feature-based feedback, where the user tells the classifier an item should be labeled in a certain manner because of specific features it contains or feature values it matches. EluciDebug honors instance-based feedback in a straightforward manner: once the user labels an item, the classifier will use it as part of its training set, with no distinc- tion between older versus more recent feedback. Honoring feature-based feedback, however, is more complicated. The smoothing term pncfrom Equation 3 acts as a Bayesian prior, effectively adding some number of virtual occurrences (again, traditionally 1) to the number of actual occurrences of each word in the training data, and we can leverage it to integrate feature-based feedback. By allowing the user to set the value of pnc, we are letting the user increase the number of virtual occurrences of word nin class c. The result is a classifier that considers word nto be stronger evidence in favor of class cthan it had before the user’s feedback. Using the smoothing term as a feature-based feedback mech- anism, however, has a drawback: Fncmay increase as the training set size increases, causing the value of pncto become a smaller component of Equation 3. Thus, a user’s feature-based feedback could account for less and less of the classifier’s reasoning as their instance-based feedback increased. To prevent a user’s feature-based feedback from degrading in importance over time, we developed a visual feedback mech- anism (Figure 3) that allows users to specify how important their feedback should be relative to the model’s internal word probabilities (the Fterms in Equation 3). The black lines on the blue and green bars in Figure 3 show the model-computed probabilities for each feature, which serve as a starting point for feature-based feedback. Users can tell the system that the probability of seeing word wnin class cshould be increased by clicking and dragging its bar higher, which will translate to an increased value for pnc. If the user later provides additional instance-based feedback (thus causing Fncto change), pncwill be automatically recalculated such that the ratios ofN å x=1pxcto N å x=1FxcandpnctoFncremain constant. Revealing Incremental Changes There are many components of EluciDebug that may change after each user action, so to avoid confusing users with several 2Alabel is a potential output of the learning system, such as junk mail or normal mail. IUI 2015 • Interactive Machine Learning / Decision Making / March 29–April 1, 2015, Atlanta, GA, USA 131 Topic Modeling / Robotics Figure 3. The Important words explanation tells users all of the features the classifier is aware of, and also lets users add, remove, and adjust these features. Each topic is color-coded (here, blue forhockey and green for baseball ) with the difference in bar heights reflecting the difference in the word’s probability with respect to each topic (e.g., the word canadian is roughly twice as likely to appear in a document about hockey as one about baseball, while the word player is about equally likely to appear in either topic). This figure is an excerpt from Figure 1 part F. different explanation paradigms, we designed a method that consistently reveals changes in terms of increases andde- creases. Increases of any numeric value are identified by green “up” arrows, while decreasing numeric values are identified by red “down” arrows. Examples of each are shown in Figure 1, part B. Hovering over either of these arrow icons yields a tooltip detailing what just changed and how much it changed by, e.g., “Confidence increased by 9%”. These indicators reveal changes in the number of messages correctly classified in each folder, the total number of messages the machine learning model currently classified into each folder, and the confidence of each prediction. In addition to numeric change indicators, we also needed an ordinal change indicator to highlight when a message’s prediction flipped from one topic to the other. We used a grey background for these recently-changed predictions (see part B of Figure 1) and included a tooltip explaining that the user’s last action resulted in the message’s predicted topic changing. EVALUATION We evaluated Explanatory Debugging, as instantiated in Eluci- Debug, to investigate the following research questions: RQ1: Does Explanatory Debugging help users personalize a classifier more efficiently than instance labeling? RQ2: Does Explanatory Debugging help users personalize a classifier more accurately than instance labeling? RQ3: Does Explanatory Debugging help users build better mental models than a traditional black-box approach? Experiment Design We used a between-subject, single-factor experimental setup to evaluate Explanatory Debugging. The factor we varied was experiment condition: one condition (control ) used a variation of EluciDebug with all of its explanation and feature-based feedback capabilities removed (Figure 4), while the second condition (treatment ) used the EluciDebug prototype described earlier. In both conditions EluciDebug was setup as a binary classifier that attempted to predict whether each message belonged to one of two topics. Figure 4. Control participants used this variant of EluciDebug, which lacks explanations and feature-based feedback. To provide messages for EluciDebug, we selected two con- cepts from the 20 Newsgroups dataset 3:hockey andbaseball (the rec.sport.hockey and rec.sport.baseball newgroups, respec- tively). We used two subgroups of a larger group of related concepts (rec.sport) to ensure overlap in the terminology of each concept (e.g., “player” and “team” may be equally rep- resentative of baseball orhockey). These shared terms help make the classification task more challenging. To simulate a situation where personalization would be re- quired because sufficient training data does not yet exist, we severely limited the size of the machine learning training set for this experiment. At the start of the experiment this training set consisted of 5 messages in the Hockey folder and 5 messages in the Baseball folder, with 1,185 unlabeled messages in the Unknown folder. The small training set (10 messages) allowed us to evaluate a situation with limited training data, and the large amount of potential training data (1,185 messages) allowed us to contrast Explanatory Debug- ging against black-box instance labeling in a situation where instance labeling could be expected to succeed. Participants and Procedure We recruited 77 participants (27 females, 50 males) from the local community and university. To ensure that participants would have little or no prior experience with software debug- ging or machine learning, we did not accept participants who had more programming experience than an introductory-level course. A total of 37 participants experienced the control con- dition and 40 participants took part in the treatment condition. A researcher introduced participants to the prototype via a brief hands-on tutorial that explained how to use it, but did not discuss how it made predictions. Participants then had three minutes to explore the prototype on their own. To avoid learning effects, the tutorial and practice session involved messages about topics (outer space andmedicine) different from the main experiment task. The main experiment task followed the practice session. Par- ticipants were asked to “make the computer’s predictions as accurate as possible” and given 30 minutes to work. The soft- ware logged all participant interactions and logged evaluations of its internal classifier at 30-second intervals. 3http://qwone.com/~jason/20Newsgroups/ IUI 2015 • Interactive Machine Learning / Decision Making / March 29–April 1, 2015, Atlanta, GA, USA 132 Topic Modeling / Robotics After the main task concluded, we assessed participants’ men- tal models via a questionnaire. This test instrument evalu- ated how well participants understood the two components that contribute to the MNB classifier’s predictions: feature presence and class ratios. Because feature presence can be easy to detect given certain words (e.g., the word “hockey” is obviously related to the concept of hockey), we evaluated participants’ understanding of feature presence using both “obvious” and “subtle” features. We define “subtle” features as words that are not normally associated with a topic, but appear in the classifier’s training set and thus will impact classification. Participants were given three short messages about two topics (swimming andtennis) and told that “these are the only messages the software has learned from”. Participants in the treatment condition were also given an Important words explanation similar to the one they saw in the prototype. A second sheet displayed 12 messages, each only one or two sentences long, and asked participants which topic the clas- sifier would assign to each message, and why. The messages were constructed such that only one component—either an obvious feature, a subtle feature, or class ratios—was entirely responsible for the classification. To understand participants’ reactions to EluciDebug, the post- task questionnaire also asked participants about various fea- tures of the prototype and their perceived task load during the experiment (via the NASA-TLX questionnaire [15]). Data analysis We used non-parametric methods for all statistical analyses. As suggested in [ 31], we used Mann–Whitney U-tests for interval and ordinal data, and Spearman’s rfor correlations. To analyze participants’ mental models, a researcher graded participant responses to the post-task mental model question- naires. Because some participants may have randomly guessed which topic the classifier would predict for each message, we ignored all predicted topics and only graded the reasons participants’ stated for the classifier’s prediction. Participants earned two points for correct reasons and one point for partially correct reasons. The researcher graded participant responses without knowing which condition the participant was in (i.e., blindly). Each participants’ points were summed to yield a mental model score with a maximum possible value of 24. We supplemented our evaluation of classifier accuracy with an additional offline experiment using a separate feature selection method. Recall that EluciDebug limits its classifier to the 10 features with the highest information gain. Text classifiers, however, often include most—if not all—of the words in the training set as features. Thus, we analyzed participants’ classi- fiers using both HighIG features and Comprehensive features. For control participants (who could not provide feature-based feedback), HighIG was recomputed after each message was labeled and kept the 10 highest information gain features. For treatment participants, HighIG was never recomputed; instead, participants needed to modify it manually by adding, removing, or adjusting features. The Comprehensive feature set included all words from the set of labeled messages at the end of theexperiment. The classifiers participants interacted with used theHighIG features; the Comprehensive features were only used for offline analysis. RESULTS Explaining Corrections to EluciDebug EluciDebug includes several methods for users to explain corrections to the classifier, and treatment participants made frequent use of all of them. On average, they added 34.5 new features, removed 8.2 features (including 7.4 of EluciDebug’s 10 initial features), and made 18.3 feature adjustments (e.g., increasing or decreasing a feature’s importance to a topic). Control participants—who could not provide feature-based feedback—instead relied on instance-based feedback to adjust EluciDebug’s predictions, labeling an average of 182 messages vs. the treatment average of 47 ( W=1395 ,p< :001). Control participants also examined more messages, averaging 296 message views vs. 151 views ( W=1306 ,p< :001). Treatment participants thus provided less feedback overall (and needed to explore less of the dataset while providing it), instead leveraging EluciDebug’s abilities to target their feedback at features rather than instances. This feature-based feedback proved efficient at improving participants’ classifiers. We examined the change in F 1for each participant’s classifier during the experiment and divided this by the number of actions the participant made that could influence the classifier’s predictions (instances labeled and features added, removed, or adjusted). The results, shown in Figure 5 (left), were that treatment participants performed fewer actions, but each of their actions resulted in larger classifier improvements than those of control participants. Treatment participants’ feedback was twice as efficient as control participants’ using HighIG features (0.16% vs. 0.34% F1improvement per action, W=207,p< :001), and remained superior when using the Comprehensive feature set (0.65% vs. 0.97%, W=367, p< :001). We thus have evidence that when users can only provide a limited amount of feedback to a learning system (such as when labeling instances is expensive, insufficient instances are available for labeling, or the user’s time is constrained), Explanatory Debugging can result in better classifiers than a traditional black-box instance labeling approach. Indeed, Figure 5 (right) shows that by the end of the 30-minute ex- periment, treatment participants had created classifiers that were roughly 10% more accurate than control participants, averaging F 1scores of 0.85 vs. 0.77 ( W=237, p< :001). However, our analysis with the Comprehensive feature set suggests that when the user can label many instances, instance labeling with a large feature set may be preferable to Ex- planatory Debugging—at least to initially train a classifier. The combination of a large training set and many features allowed control participants’ classifiers to edge out those of treatment participants by about 8% (Figure 5, right). Even though treatment participants’ feedback was up to twice as efficient, control participants provided almost four times as many labeled instances, allowing them to train classifiers with IUI 2015 • Interactive Machine Learning / Decision Making / March 29–April 1, 2015, Atlanta, GA, USA 133 Topic Modeling / Robotics 0.0%0.5%1.0% HighIG Comp.F1 change 0.00.51.0 HighIG Comp.F1 scoreFigure 5. (Left) Average classifier F 1improvement per user action for control (dark blue) and treatment (light orange); treatment participants controlled their classifiers up to twice as efficiently as control participants. (Right) Average classifier F 1 scores per condition. Control participants needed four times as much data and the Comprehensive feature set to create better classifiers than treatment participants. an average F 1of 0.94, while treatment participants averaged 0.87 ( W=1290, p< :001). To verify it was the amount of instance-based feedback that allowed control participants to outperform treatment partic- ipants when Comprehensive features were considered, we analyzed the accuracy of their classifiers after the same number of actions had been performed. Figure 6 shows the F 1scores after nfeedback actions using the HighIG (solid line) and Com- prehensive (dotted line) feature sets. Given the same number of actions, control participants never outperformed treatment participants. This suggests that when treatment participants did provide instance-based feedback (which was the only type of feedback used for the Comprehensive analysis), it was often more useful than control participants’ feedback. We also analyzed participant reactions to the two prototype variations. Treatment participants liked their variant more than control participants, rating its helpfulness as 4.8 vs. 4.3 on a 6-point scale ( W=474,p=:006). Further, we found no evi- dence that treatment participants felt Explanatory Debugging involved more work than black-box instance labeling. We used the NASA-TLX survey to measure participants’ perceived task load while attempting to improve their classifier, but found no evidence of a difference between conditions. These classifier measures reveal three findings. First, in situ- ations where large amounts of training data is unavailable or expensive to obtain, Explanatory Debugging (as instantiated in EluciDebug) allows users to successfully train a classifier by telling it about features instead of instances. Second, the mix of feature- and instance-based feedback provided by treatment participants was more efficient than the purely instance-based feedback provided by control participants, suggesting that when an end user has a specific goal in mind (such as our Alice example from earlier), Explanatory Debugging can help the user quickly realize their goal. Third, control participants’ success with using large amounts of instance-based feedback suggests that in domains where labeling instances is quick and practical, some combination of feature- and instance-based feedback may be best. In fact, such systems may need to emphasize the potential usefulness of labeling instances. In our experiment, the mere presence of feature-based feedback tools appears to have biased partici- pants against instance-based feedback: 3 treatment participants did not provide any at all, while the smallest number of labeled 0.50.60.70.80.91.0 0102030405060708090100F1 score # of feedback actionsFigure 6. Treatment participants (light orange) created equiv- alent or better classifiers than control participants (dark blue) using the same amount of feedback. This held for both the HighIG (solid) and Comprehensive (dotted) feature sets. instances from a control participant was 56—more than even the treatment average of 47 labeled instances. EluciDebug’s Explanations to End Users Before users can correct a machine learning system’s explana- tion of its reasoning, they must first understand the explanation. This understanding is reflected in their mental model. Treatment participants built significantly better mental models than participants in the control condition. As shown in Table 1, treatment participants scored 52% higher on the mental model assessment than control participants ( W=259,p< :001). Much of this difference stems from treatment participants iden- tifying allof the keywords the classifier used, while control participants often identified only the “obvious” keywords. In fact, treatment participants averaged a score of 14.1 out of 16 (88%) during the keyword assessment, suggesting they firmly understood how the classifier involved keywords—regardless of whether the words had any semantic association with their topics—in its reasoning. Table 1 also suggests treatment participants better understood that the classifier used class ratios as part of its prediction strategy than participants in the control condition ( W=619:5, p=:099), but the evidence is weak. Further, a majority of participants in both conditions failed to answer anyclass ratio question correctly, suggesting that this explanation either failed to convey relevant information about how class ratios were used by the classifier, or failed to attract participants’ attention. Many control participants wanted the same information avail- able to the treatment group. As one participant stated: C1: “More information on how emails are sorted would help the user target emails to categorize, which would increase accuracy. ” Another control participant (C15) described the software as “annoying”, but that working with it “would have been easier if we knew how it made predictions”, while still another (C4) said it was annoying to work with because he “didn’t know what it was basing its predictions off of”. A fourth participant (C11) even asked for a similar feedback mechanism as was available to treatment participants, saying the software was “time-consuming” because there was “no way to highlight key words/terms”. A fifth control participant summed it up: C30: “That was a long 30 minutes. ” IUI 2015 • Interactive Machine Learning / Decision Making / March 29–April 1, 2015, Atlanta, GA, USA 134 Topic Modeling / Robotics Model componentMax scoreControl mean (SD)Treatment mean (SD)p-value Obvious features 8 6.7 (2.7) 7.3 (1.8) .345 Subtle features 8 2.8 (2.6 ) 6.8 (1.9) <.001 Class ratios 8 0.6 (1.5) 1.8 (3.0) .099 Total score 24 10.4 (5.3) 15.8 (4.6) <.001 Table 1. Treatment participants finished the experiment with sig- nificantly higher mental model scores than control participants. Participants in the treatment condition, conversely, appreciated EluciDebug’s explanations and feedback mechanisms: T24: “Not difficult to understand/operate, doesn’t take a lot of effort. ” T40: “It was really fast to get a high degree of accuracy. ” T37: “I felt in control of all the settings. ” T6: “It was so simple my parents could use it. ” Overall, our principled Explanatory Debugging approach suc- cessfully helped participants develop accurate mental models of the classifier they used, and participants benefited from this additional knowledge. Spearman’s rconfirms a link between participants’ mental model scores and their classifier’s F 1 scores (r [75]=:282, p=:013) DISCUSSION RQ1 and RQ2: Efficient and Accurate Personalization Our results suggest that Explanatory Debugging can be an efficient method for users to personalize a machine learning system, but that it may not always result in the most accurate classifiers. For example, we found that feature-based feedback was up to twice as effective as instance-based feedback, but instance labeling could still yield more accurate classifiers given enough labels and features (in our experiment, four times as many labels were needed). In situations where labeling instances is considerably easier or faster than providing feature- based feedback, users may be better served by labeling a large number of instances than a small number of features. However, when users need fine-grained control over a clas- sifier, Explanatory Debugging has two advantages beyond efficiency. First, it does not require much training data to exist, and can thus help bootstrap a learning system. Even a trained system that suddenly needs to support a new output type may benefit from such bootstrapping (this also separates our work from knowledge-based artificial intelligence, which is not retrained post-deployment). Second, the quick improvements— during even the first 10 user actions—treatment participants made to their classifiers suggest that users will remain engaged with Explanatory Debugging. This matters because research has shown that if an end-user debugging technique is not perceived as useful after a small number of interactions, users are unlikely to continue using it [ 34]. Seeing an immediate improvement after providing feedback suggests that users will continue to view and correct the Explanatory Debugging explanations, while the lack of such an improvement may discourage users of black-box instance labeling systems from continuing to provide feedback.RQ3: Mental Models Explanatory Debugging helped participants build useful men- tal models, and did so without a perceived increase in task load. We found no evidence that treatment participants found the extra information or feedback mechanisms more difficult to understand or use; instead, their responses suggest they appreciated having such information available. Indeed, the fact that many control participants’ requested explanations similar to those the treatment participants saw suggests the need for machine learning systems to be able to explain their reasoning in accordance with Explanatory Debugging’s Explainability principle. Even if this information is hidden by default, users should be able to view such explana- tions on demand. Further, because machine learning systems are meant to classify items as their user would, the user must have some method to correct the system’s mistakes. Thus, we hypothesize that including Explanatory Debugging-style ex- planations without also supporting our Correctibility principle will frustrate users—they would see what needs correcting, but be powerless to fix it. CONCLUSION Overall, Explanatory Debugging’s cycle of explanations— from the learning system to the user, and from the user back to the system—resulted in smarter users and smarter learning systems. Participants using Explanatory Debugging under- stood how the learning system operated about 50% better than control participants, and this improvement correlated with the F 1scores of participants’ classifiers. Each piece of feedback provided by Explanatory Debugging participants was worth roughly two pieces of feedback provided by con- trol participants; even when we expanded our analysis to include a comprehensive feature set, treatment participants still maintained a 50% edge over the control group. Further, participants liked Explanatory Debugging, rating this variant of EluciDebug higher than the control group and responding enthusiastically to the system’s explanations. This work is but one step toward helping users control machine learning systems. How Explanatory Debugging may be applied to more complex problems, such as multiclass classifiers or deep learning algorithms, remains an open question. Future work could also investigate how Explanatory Debugging im- pacts user interactions over longer time periods and whether end users are able to transfer knowledge of one learning system to more quickly personalize similar systems. Our results show that when end users want to personalize a machine learning system, Explanatory Debugging is a more controllable and satisfying approach than black-box instance labeling. The approach’s focus on users—in which the system explains its reasoning and the user explains back corrections as needed—enables ordinary end users to get the most benefit from learning systems on which they depend. ACKNOWLEDGMENTS We thank Irwin Kwan, Charles Hill, Leah Hanen, Renuka Bhatt, and our study participants for their help in conducting this research. IUI 2015 • Interactive Machine Learning / Decision Making / March 29–April 1, 2015, Atlanta, GA, USA 135 Topic Modeling / Robotics REFERENCES 1.Amershi, S., Cakmak, M., Knox, W. B., and Kulesza, T. Power to the people: The role of humans in interactive machine learning. AI Magazine (in press). 2.Amershi, S., Fogarty, J., Kapoor, A., and Tan, D. Examining multiple potential models in end-user interactive concept learning. In Proceedings of the ACM Conference on Human Factors in Computing Systems (2010), 1357–1360. 3. Blackwell, A. F. First steps in programming: A rationale for attention investment models. In Proceedings of the IEEE 2002 Symposia on Human Centric Computing Languages and Environments, IEEE (2002), 2–10. 4.Bostandjiev, S., O’Donovan, J., and Höllerer, T. TasteWeights: A visual interactive hybrid recommender system. In Proceedings of the 6th ACM Conference on Recommender Systems (2012), 35–42. 5.Bryan, N. J., Mysore, G. J., and Wang, G. ISSE: An interactive source separation editor. In Proceedings of the ACM Conference on Human Factors in Computing Systems (2014), 257–266. 6.Bunt, A., Lount, M., and Lauzon, C. Are explanations always important? A study of deployed, low-cost intelligent interactive systems. In Proceedings of the 2012 ACM International Conference on Intelligent User Interfaces (2012), 169–178. 7.Cakmak, M., Chao, C., and Thomaz, A. L. Designing interactions for robot active learners. IEEE Transactions on Autonomous Mental Development 2, 2 (2010), 108–118. 8.Cramer, H., Evers, V ., Ramlal, S., van Someren, M., Rutledge, L., Stash, N., Aroyo, L., and Wielinga, B. The effects of transparency on trust in and acceptance of a content-based art recommender. User Modeling and User-Adapted Interaction 18, 5 (2008), 455–496. 9.Craven, M. W., and Shavlik, J. W. Using neural networks for data mining. Future generation computer systems 13 (1997), 211–229. 10.Dzindolet, M. T., Peterson, S. A., Pomranky, R. A., Pierce, L. G., and Beck, H. P. The role of trust in automation reliance. International Journal of Human-Computer Studies 58, 6 (2003), 697–718. 11.Fails, J. A., Olsen, D. R., and Jr. Interactive machine learning. In Proceedings of the 8th International Conference on Intelligent User Interfaces (2003), 39–45. 12.Fiebrink, R., Cook, P. R., and Trueman, D. Human model evaluation in interactive supervised learning. In Proceedings of the ACM Conference on Human Factors in Computing Systems (2011), 147–156. 13.Fogarty, J., Tan, D., Kapoor, A., and Winder, S. CueFlik: Interactive concept learning in image search. In Proceedings of the ACM Conference on Human Factors in Computing Systems (2008), 29–38. 14.Groce, A., Kulesza, T., Zhang, C., Shamasunder, S., Burnett, M. M., Wong, W.-K., Stumpf, S., Das, S., Shinsel, A., Bice, F., and McIntosh, K. You are the onlypossible oracle: Effective test selection for end users of interactive machine learning systems. IEEE Transactions on Software Engineering 40, 3 (2014), 307–323. 15.Hart, S. G., and Staveland, L. E. Development of NASA-TLX (Task Load Index): Results of empirical and theoretical research. Advances in Psychology 52 (1988), 139–183. 16.Herlocker, J. L., Konstan, J. A., and Riedl, J. Explaining collaborative filtering recommendations. In Proceedings of the 2000 ACM Conference on Computer Supported Cooperative Work (2000), 241–250. 17.Johnson-Laird, P. N. Mental models: Towards a cognitive science of language, inference, and consciousness. Harvard University Press, 1983. 18.Kapoor, A., Lee, B., Tan, D., and Horvitz, E. Interactive optimization for steering machine classification. In Proceedings of the ACM Conference on Human Factors in Computing Systems (2010), 1343–1352. 19.Kempton, W. Two theories of home heat control. Cognitive Science 10 (1986), 75–90. 20.Kibriya, A. M., Frank, E., Pfahringer, B., and Holmes, G. Multinomial naive Bayes for text categorization revisited. InAI 2004: Advances in Artificial Intelligence. Springer Berlin Heidelberg, Berlin, Heidelberg, 2004, 488–499. 21.Knox, W. B., and Stone, P. Reinforcement learning from human reward: Discounting in episodic tasks. In Proceedings of the 21st IEEE International Symposium on Robot and Human Interactive Communication (2012), 878–885. 22.Ko, A. J., and Myers, B. A. Debugging reinvented. the 13th international conference (2008), 301–310. 23.Kulesza, T., Stumpf, S., Burnett, M. M., and Kwan, I. Tell me more? The effects of mental model soundness on personalizing an intelligent agent. In Proceedings of the ACM Conference on Human Factors in Computing Systems (2012), 1–10. 24.Kulesza, T., Stumpf, S., Burnett, M. M., Wong, W.-K., Riche, Y ., Moore, T., Oberst, I., Shinsel, A., and McIntosh, K. Explanatory debugging: Supporting end-user debugging of machine-learned programs. In Proceedings of the 2010 IEEE Symposium on Visual Languages and Human-Centric Computing (2010), 41–48. 25.Kulesza, T., Stumpf, S., Burnett, M. M., and Yang, S. Too much, too little, or just right? Ways explanations impact end users’ mental models. Proceedings of the 2013 IEEE Symposium on Visual Languages and Human-Centric Computing (2013), 3–10. 26.Kulesza, T., Stumpf, S., Wong, W.-K., Burnett, M. M., Perona, S., Ko, A. J., and Oberst, I. Why-oriented end-user debugging of naive Bayes text classification. ACM Transactions on Interactive Intelligent Systems 1, 1 (2011). 27.Lacave, C., and Díez, F. J. A review of explanation methods for Bayesian networks. The Knowledge Engineering Review 17, 2 (2002), 107–127. IUI 2015 • Interactive Machine Learning / Decision Making / March 29–April 1, 2015, Atlanta, GA, USA 136 Topic Modeling / Robotics 28.Lim, B. Y . Improving understanding and trust with intelligibility in context-aware applications. PhD thesis, Carnegie Mellon University, 2012. 29.Lim, B. Y ., and Dey, A. K. Assessing demand for intelligibility in context-aware applications. In Proceedings of the 11th International Conference on Ubiquitous Computing (2009), 195–204. 30.Lim, B. Y ., Dey, A. K., and Avrahami, D. Why and why not explanations improve the intelligibility of context-aware intelligent systems. In Proceedings of the ACM Conference on Human Factors in Computing Systems (2009), 2119–2128. 31.McCrum-Gardner, E. Which is the correct statistical test to use? British Journal of Oral and Maxillofacial Surgery 46(2008), 38–41. 32.Norman, D. A. Some observations on mental models. InHuman-Computer Interaction, R. M. Baecker and W. A. S. Buxton, Eds. San Francisco, CA, USA, 1987, 241–244. 33.Norman, D. A. The Design of Everyday Things. Revised and Expanded Edition. Basic Books, 2002. 34.Prabhakararao, S., Cook, C., Ruthruff, J., Creswick, E., Main, M., Durham, M., and Burnett, M. M. Strategies and behaviors of end-user programmers with interactive fault localization. In Proceedings of the IEEE Symposium on Human Centric Computing Languages and Environments (2003), 15–22. 35.Rowe, M. B. Teaching science as continuous inquiry. McGraw-Hill, 1973. 36.Rugaber, S., Goel, A. K., and Martie, L. GAIA: A CAD environment for model-Based Adaptation of game-playing Software Agents. Procedia Computer Science 16 (2013), 29–38. 37.Settles, B. Active learning literature survey. Tech. Rep. 1648, University of Wisconsin-Madison, 2010. 38.Stumpf, S., Rajaram, V ., Li, L., Wong, W.-K., Burnett, M. M., Dietterich, T., Sullivan, E., and Herlocker, J. Interacting meaningfully with machine learningsystems: Three experiments. International Journal of Human-Computer Studies 67, 8 (2009), 639–662. 39.Szafron, D., Greiner, R., Lu, P., and Wishart, D. Explaining naïve Bayes classifications. Tech. Rep. TR03-09, University of Alberta, 2003. 40.Talbot, J., Lee, B., Kapoor, A., and Tan, D. S. EnsembleMatrix: Interactive visualization to support machine learning with multiple classifiers. In Proceedings of the ACM Conference on Human Factors in Computing Systems (2009), 1283–1292. 41.Thomaz, A. L., and Breazeal, C. Transparency and socially guided machine learning. In Proceedings of the 5th International Conference on Development and Learning (2006). 42.Tintarev, N., and Masthoff, J. Evaluating the effectiveness of explanations for recommender systems. User Modeling and User-Adapted Interaction 22, 4-5 (2012), 399–439. 43.Tullio, J., Dey, A. K., Chalecki, J., and Fogarty, J. How it works: A field study of non-technical users interacting with an intelligent system. In Proceedings of the ACM Conference on Human Factors in Computing Systems (2007), 31–40. 44.van der Meij, H., and Carroll, J. M. Principles and heuristics for designing minimalist instruction. In Minimalism beyond the Nurnberg funnel, J. M. Carroll, Ed. MIT Press, Cambridge, MA, 1998, 19–53. 45.Vig, J., Sen, S., and Riedl, J. Navigating the tag genome. InProceedings of the 16th International Conference on Intelligent User Interfaces (2011), 93–102. 46.Yang, R., and Newman, M. W. Learning from a learning thermostat: Lessons for intelligent systems for the home. InProceedings of the 2013 ACM International Joint Conference on Pervasive and Ubiquitous Computing (2013), 93–102. 47.Yang, Y ., and Pedersen, J. O. A comparative study on feature selection in text categorization. In Proceedings of the Twentieth International Conference on Machine Learning (1997), 412–420. IUI 2015 • Interactive Machine Learning / Decision Making / March 29–April 1, 2015, Atlanta, GA, USA 137 Topic Modeling / Robotics
a8259fdd-d355-48ae-997d-dc84a5c38e42
trentmkelly/LessWrong-43k
LessWrong
Meetup : San Antonio Meetup Discussion (Topic: Pre-Hindsight) Discussion article for the meetup : San Antonio Meetup Discussion (Topic: Pre-Hindsight) WHEN: 07 February 2016 02:00:00PM (-0600) WHERE: 12651 Vance Jackson Rd #118, San Antonio, TX 78230 Bubble tea, frozen yogurt, and discussion at Yumi Berry. All are welcome to attend. Come meet the local community, discuss rationality, and talk about all things LessWrong. Look for the sign that says "Less Wrong". Topic of the week: Pre-hindsight Discussion article for the meetup : San Antonio Meetup Discussion (Topic: Pre-Hindsight)
1def19c0-8113-4b94-a6cb-66d447cb666e
trentmkelly/LessWrong-43k
LessWrong
Meetup : Washington DC Fun and Games Meetup Discussion article for the meetup : Washington DC Fun and Games Meetup WHEN: 09 March 2014 03:00:00PM (-0500) WHERE: National Portrait Gallery, Washington DC We'll be meeting to hang out and play games. Discussion article for the meetup : Washington DC Fun and Games Meetup
1d8bb4bc-8ebe-40c0-a771-59d6f6cc417e
StampyAI/alignment-research-dataset/blogs
Blogs
to wasm and back again: the essence of portable programs to wasm and back again: the essence of portable programs -------------------------------------------------------- what is the *essence* of a portable program, a program expressed in a format such that it can then be interpreted or, ideally, compiled to run efficiently in a variety of different environments? this doesn't just mean "different OSs" or "different CPU architectures", but can even expand to compiling programs to different forms of computing like [GPU code](https://en.wikipedia.org/wiki/RISC-V) or maybe even [FPGA](https://en.wikipedia.org/wiki/Field-programmable_gate_array)s. when we tried to figure some of this out for web pages, we came up with ["Native Client"](https://en.wikipedia.org/wiki/Google_Native_Client); but it eventually became clear that that the [LLVM intermediary representation](https://en.wikipedia.org/wiki/LLVM#Intermediate_representation) that it uses wasn't a good fit for a variety of reasons, so we eventually settled on [WASM](https://en.wikipedia.org/wiki/WebAssembly); and now [everyone is moving in that direction very fast](https://bytecodealliance.org/) despite [its](http://troubles.md/posts/wasm-is-not-a-stack-machine/) [various](http://troubles.md/posts/why-do-we-need-the-relooper-algorithm-again/) [issues](http://troubles.md/posts/the-stack-is-not-the-stack/). alas, there can be a variety of factors that influence how a program should be compiled to take full advantage of the machine it's running on: * what is the size of pointers? WASM currently hardcodes this to 32, and another WASM variant will come out to hardcode it to 64; just 32 works for many uses but having these hardcoded at compile time is hardly a great choice in general. * how much CPU cache is available? currently, barely anything ever takes this into account, even though uncached memory access times can be a huge cause for inefficiency; and let's not get into cache-manipulation instructions, which nobody's using because [we all like to pretend our computers are simpler than they are](https://queue.acm.org/detail.cfm?id=3212479) (which no doubt also encourages hardware designers to only make CPUs for programmers that have this expectation, but this is a story for another time). * how much memory can be allocated? how is it allocated? can programs and libraries share entire memory blocks with one another seamlessly? WASM currently takes a pretty naive approach to this with "linear memories", and [a variety of conversion schemes](https://github.com/WebAssembly/interface-types/blob/main/proposals/interface-types/Explainer.md) are guaranteed to make the standard a lot more complex. * what instructions are available? instructions like [count leading zeros/count trailing zeros](https://en.wikipedia.org/wiki/Find_first_set) or [popcount](https://en.wikipedia.org/wiki/Hamming_weight) can be pretty instrumental to some algorithms performing efficiently, but compiling those in terms of other instructions on instruction sets that don't have them natively can be quite the loss in performance. * how many registers are there? are some operations expected to use only certain registers? how efficiently can values be pushed and popped from the stack? [how much stack even *is* there??](https://utcc.utoronto.ca/~cks/space/blog/programming/CStackSizeInvisible) * what is the expected alignment of various types in memory? how are those alignments checked? are unaligned accesses unsafe or merely slow? * and then *everything to do with atomics and multithreading* (WASM's attempts at addressing these seem at the moment pretty unsatisfactory) more importantly, effects of questions like those can ramificate upwards: a change in pointer size (say between 32 and 64) or knowledge about the cache sizes, or cache-fetching or branch-prediction algorithms of a CPU should be able to lead to an entire data structure "choosing" a very different implementation (not just for algorithms, but also but for memory layout!). and those changes are even more far-reaching once parts of this algorithm (such as accessor functions for data structures) get inlined in various ways in other parts of the code. WASM just assumes some relatively common invariants and that's it; there's no ability to provide entirely different algorithms based on even pointer size or alignment requirements. so, to be able to make arbitrarily complex decisions based on those environmental conditions, a portable program should not be just a flat WASM, but should in fact be a dynamic metaprogram which, upon initialization, examines the environment and makes all the right choices to produce a code that is able to run optimally on the target environment. ideally, that metaprogram itself should be written in yet another meta²program; but this one compiles the meta¹program for the compiling environment rather than the target environment (those can be different! think of a meta²program, producing for the CPU a meta¹program whose goal is to produce for the GPU the end program). ultimately, though, we need *some* form of basic language to bootstrap this whole process, at the top of the tower of metaⁿprograms: this is where WASM can come back. such metaⁿprograms should expect to interact with an API that would be like that of [a JIT compiler library](https://github.com/wdv4758h/awesome-jit#id3), with functions like `create_function(parameter_types, return_types) → FunctionId` or `add_function_call(function_id, parameter_variables) → ValueId` used to generate pieces of the metaⁿ⁻¹program. ideally, different metaⁿprograms from different places could even end up getting their functions inlined with each other; a library generated by a metaprogram, and a user program generated by another metaprogram from another vendor but using that library, should be able to be inlined with each other, rather than "naively" loaded like current dynamic libraries. maybe their two metaprograms should even be able to "negotiate" optimizations with one another using specification contracts, but this seems hard to set up. if "object programs" (meta⁰programs) — but also higher up metaⁿprograms — are expected to be safely sandboxed, the best way to do this might not be to dynamically check everything and then hope optimization can remove some checks, but instead the metaⁿ⁺¹program that produces them should be able to manipulate logical statements to "logically prove" to the compiler that the program being produced is safe; and adding a dynamic check would be just one way to guarantee this safety. the point is, demonstrating safety should be able to, like code generation, be an arbitrarily complex process, rather than a very strict one limited to whatever type system is available, and whatever hacks one can build on top of it. ideally, metaⁿprograms should also be able to tap into a collection of optimizing code-transformation libraries, which could be updated regularly such that old programs can benefit from new optimizations; but should be proven to be correct such that this doesn't affect behavior we care about. in this way, logically proving behavior is not just a matter of sandboxing or program safety, but also a matter of optimization even in "unsafe" programs. this approach, in some ways despite its lesser dynamicity, is more general than ["runtime metaprogrammability"](degrees-of-runtime-metaprogrammability.html) in that the metaprogram is able to create a mesaprogram (the opposite of a metaprogram) for a vastly different target environment than the one it is itself running on.