Dataset columns: id (string, length 36), source (string, 15 classes), formatted_source (string, 13 classes), text (string, length 2 to 7.55M).
4b284edd-8b8c-4db9-8874-b0ad465c0fd5
trentmkelly/LessWrong-43k
LessWrong
Trigger-Action Planning Epistemic status: Established and confirmed There has been a tremendous amount of research on “implementation intentions” since their development by psychologist Peter Gollwitzer in the late 1990s. A meta-analysis of 94 studies involving 8,461 participants found that interventions using implementation intentions were an average of 0.65 standard deviations more effective than control interventions. Similar effect sizes were found in the 34 studies which looked at behavioral change on personal or health goals (an average of 0.59 standard deviations more effective). Trigger-action planning—our version of implementation intentions—draws directly on this research and has proven useful to the majority of our alumni for a wide range of problems, tasks, and goals. In previous sections of this book, we’ve looked at the differences between System 1 and System 2, talked about the process of turning goals into plans, and learned to distinguish useful and relevant practice from irrelevant or unproductive practice. In this section, we will combine those insights and their implications into a single, robust technique for building awareness and supporting behavioral change. Complex chains: The parable of the Sphex Sphexes are a genus of wasps, and for many years, a story about their behavior has been a major touchstone in cognitive science. Typically, when it comes time for egg laying, a sphex will build a burrow and fill it with paralyzed insects for her future larvae to eat. When hunting, she will sting her prey, wait for the venom to take effect, drag the prey back to the burrow entrance, leave it outside while she goes in and reconnoiters (presumably confirming the absence of predators or structural problems), and finally come back out to drag her victim inside. This sequence of actions is elaborate, organized, and complex, and on the surface seems to indicate an impressive level of mental sophistication for an insect whose brain weighs less than a milligram. However, in 1879,
7c4400d9-66fc-452c-aacf-3a2cc4a09b38
trentmkelly/LessWrong-43k
LessWrong
Horrible LHC Inconsistency Followup to: When (Not) To Use Probabilities, How Many LHC Failures Is Too Many? While trying to answer my own question on "How Many LHC Failures Is Too Many?" I realized that I'm horrendously inconsistent with respect to my stated beliefs about disaster risks from the Large Hadron Collider. First, I thought that stating a "one-in-a-million" probability for the Large Hadron Collider destroying the world was too high, in the sense that I would much rather run the Large Hadron Collider than press a button with a known 1/1,000,000 probability of destroying the world. But if you asked me whether I could make one million statements of authority equal to "The Large Hadron Collider will not destroy the world", and be wrong, on average, around once, then I would have to say no. Unknown pointed out that this turns me into a money pump.  Given a portfolio of a million existential risks to which I had assigned a "less than one in a million probability", I would rather press the button on the fixed-probability device than run a random risk from this portfolio; but would rather take any particular risk in this portfolio than press the button. Then, I considered the question of how many mysterious failures at the LHC it would take to make me question whether it might destroy the world/universe somehow, and what this revealed about my prior probability. If the failure probability had a known 50% probability of occurring from natural causes, like a quantum coin or some such... then I suspect that if I actually saw that coin come up heads 20 times in a row, I would feel a strong impulse to bet on it coming up heads the next time around.  (And that's taking into account my uncertainty about whether the anthropic principle really works that way.) Even having noticed this triple inconsistency, I'm not sure in which direction to resolve it! (But I still maintain my resolve that the LHC is not worth expending political capital, financial capital, or our time to shut down; compare
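The inconsistency can be made concrete with a little arithmetic. Below is a minimal sketch (hypothetical per-risk numbers, not from the post): if every risk in the portfolio really has probability below one in a million, then running a uniformly random risk from the portfolio is strictly safer than pressing the known 1/1,000,000 button, so preferring the button to the random draw while preferring each individual risk to the button cannot both be right.

```python
# Minimal sketch of the stated inconsistency, with hypothetical numbers.
# Every portfolio entry is assigned a disaster probability below 1e-6,
# as "less than one in a million" asserts for each risk.

button_p = 1e-6                 # known probability of the fixed-probability device
portfolio = [2e-7] * 1_000_000  # hypothetical per-risk probabilities, all < 1e-6

# Disaster probability of running one risk drawn uniformly at random:
random_draw_p = sum(portfolio) / len(portfolio)

print(random_draw_p < button_p)  # True: the random draw is strictly safer,
# so "button over random draw" plus "each risk over button" is a preference
# cycle -- the money pump Unknown points out.
```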
e4b3007c-44a1-4804-8bde-ba3f0202a1fa
StampyAI/alignment-research-dataset/blogs
Blogs
May 2015 decision theory conference at Cambridge University MIRI, [CSER](http://cser.org/), and the philosophy department at Cambridge University are co-organizing a decision theory conference titled [**Self-Prediction in Decision Theory and AI**](http://www.phil.cam.ac.uk/events/decision-theory-conf), to be held in the Faculty of Philosophy at Cambridge University. The dates are May 13-19, 2015. [Huw Price](http://prce.hu/w/index.html) and [Arif Ahmed](http://www.phil.cam.ac.uk/people/teaching-research-pages/ahmed/ahmed-page) at Cambridge University are the lead organizers. Confirmed speakers, in the order they are scheduled to speak, are:

* [Arif Ahmed](http://www.phil.cam.ac.uk/people/teaching-research-pages/ahmed/ahmed-page) (Cambridge)
* [Huw Price](http://prce.hu/w/index.html) (Cambridge)
* [Julia Haas](http://philosophy.artsci.wustl.edu/people/julia-haas) (WU St. Louis)
* [Wlodek Rabinowicz](http://www.fil.lu.se/en/department/staff/WlodekRabinowicz/) (Lund)
* [Kenny Easwaran](http://www.kennyeaswaran.org/) (Texas A&M)
* [Preston Greene](http://www.prestongreene.com/Home.html) (NTU)
* [Joseph Halpern](http://www.cs.cornell.edu/home/halpern/) (Cornell)
* [Katie Steele](http://www.lse.ac.uk/researchAndexpertise/experts/profile.aspx?KeyValue=k.steele%40lse.ac.uk) (LSE)
* [Jenann Ismael](http://www.jenanni.com/) (Arizona)
* [H. Orri Stefánsson](https://sites.google.com/site/hostefansson/) (IFS)
* [Benja Fallenstein](http://intelligence.org/team/#staff) (MIRI)
* [Reuben Stern](https://wisc.academia.edu/ReubenStern) (U Wisconsin)
* [Nate Soares](http://mindingourway.com/) (MIRI)
* [Stuart Armstrong](http://www.fhi.ox.ac.uk/about/staff/) (Oxford)
* [Patrick LaVictoire](https://intelligence.org/team/) (MIRI)
* [Catrin Campbell-Moore](http://www.mcmp.philosophie.uni-muenchen.de/people/doct_fellows/moore/index.html) (MCMP)
* [James Joyce](http://www-personal.umich.edu/~jjoyce/) (U Michigan)
* [Alan Hájek](http://philosophy.anu.edu.au/profile/alan-hajek/) (ANU)
* [Stuart Russell](http://www.cs.berkeley.edu/~russell/) (Berkeley)
* [Vladimir Slepnev](http://lesswrong.com/user/cousin_it/submitted/) (Google)

(Updated May 17, 2015.) The post [May 2015 decision theory conference at Cambridge University](https://intelligence.org/2014/07/12/may-2015-decision-theory-workshop-cambridge/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).
1616323f-8cc7-4919-93b6-7909c9a166a8
trentmkelly/LessWrong-43k
LessWrong
Which textbook would you recommend to learn decision theory? Eliezer talks a lot about decision theory in his sequences, e.g. the Aumann agreement theorem or the von Neumann-Morgenstern utility theorem. From what I've seen so far, decision theory looks extremely interesting. Which textbook in decision theory would you recommend to start with? I'd appreciate it if the book contained not only theory, but also an exercise/problem section - I have noticed that usually a lecture is not enough to fully grasp a topic. I want a book which will not shy away from the mathematical side of the theory. I have a strong background in mathematics and computer science, but I only know a little about game theory.
ef9f29e2-de2b-477f-bc37-809665153aba
trentmkelly/LessWrong-43k
LessWrong
Progress and preservation in IDA Overview This post arose out of my attempts to understand IDA and ways it could fail. It might help you do the same and could provide useful vocabulary for discussing desiderata for IDA. We want IDA to satisfy progress (decomposition should make answering questions easier) and preservation (semantics should be retained across transformations). We need progress in each decomposition and, furthermore, repeated decompositions must be able to eventually simplify each question such that it can be answered directly by a human. Also, each decomposition and aggregation of questions and answers must introduce no more than a bounded amount of semantic drift and, furthermore, repeated decompositions and aggregations should also introduce no more than a bounded amount of semantic drift. IDA Iterated distillation and amplification (henceforth IDA) is a proposal for improving the capability of human-machine systems to suprahuman levels in complex domains where even evaluation of system outputs may be beyond unaugmented human capabilities. For a detailed explanation of the mechanics, I'll refer you to the original paper just linked, section 0 of Machine Learning Projects for Iterated Distillation and Amplification, or one of the many other explanations floating around the Web. We can view IDA as dynamic programming with function approximation[1] instead of a tabular cache. Just like the cache in dynamic programming, the machine learning component of IDA is a performance optimization. We can excise it and look at just the divide-and-conquer aspect of IDA in our analysis. Then this simplified IDA roughly consists of: (1) repeatedly decomposing tasks into simpler subtasks; (2) eventually completing sufficiently simple subtasks; and (3) aggregating outputs from subtasks into an output which completes the original, undecomposed task. We'll examine this simplified model[2] in the rest of the post. (If you'd like a more concrete description of the divide-and-conquer component of
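The divide-and-conquer core of this simplified model fits in a few lines. Here is a minimal sketch under assumed interfaces; `simple_enough`, `answer_directly`, `decompose`, and `aggregate` are hypothetical stand-ins, not functions from the post:

```python
# Sketch of the simplified (cache-free) IDA model: (1) decompose tasks into
# simpler subtasks, (2) answer sufficiently simple tasks directly, and
# (3) aggregate subtask answers. All names here are illustrative.

def simple_enough(task: str) -> bool:
    """Could a human answer this directly? (placeholder criterion)"""
    return len(task) < 10

def answer_directly(task: str) -> str:
    return f"answer({task})"  # stand-in for the human's direct answer

def decompose(task: str) -> list:
    """Split a task into strictly simpler subtasks (progress)."""
    mid = len(task) // 2
    return [task[:mid], task[mid:]]

def aggregate(task: str, sub_answers: list) -> str:
    """Recombine subtask answers into an answer to the original task
    (preservation: this step must not drift semantically)."""
    return f"combine({sub_answers})"

def ida(task: str) -> str:
    if simple_enough(task):  # progress guarantees the recursion bottoms out
        return answer_directly(task)
    return aggregate(task, [ida(t) for t in decompose(task)])
```

The distillation step of full IDA would amount to memoizing `ida` with a learned approximation in place of a tabular cache, which is why it can be excised for this analysis.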
f551166b-7d06-4d2f-9080-567df4306945
trentmkelly/LessWrong-43k
LessWrong
Status Regulation and Anxious Underconfidence Follow-up to: Against Modest Epistemology ----------------------------------------   I’ve now given my critique of modesty as a set of explicit doctrines. I’ve tried to give the background theory, which I believe is nothing more than conventional cynical economics, that explains why so many aspects of the world are not optimized to the limits of human intelligence in the manner of financial prices. I have argued that the essence of rationality is to adapt to whatever world you find yourself in, rather than to be “humble” or “arrogant” a priori. I’ve tried to give some preliminary examples of how we really, really don’t live in the Adequate World where constant self-questioning would be appropriate, the way it is appropriate when second-guessing equity prices. I’ve tried to systematize modest epistemology into a semiformal rule, and I’ve argued that the rule yields absurd consequences. I was careful to say all this first, because there’s a strict order to debate. If you’re going to argue against an idea, it’s bad form to start off by arguing that the idea was generated by a flawed thought process, before you’ve explained why you think the idea itself is wrong. Even if we’re refuting geocentrism, we should first say how we know that the Sun does not orbit the Earth, and only then pontificate about what cognitive biases might have afflicted geocentrists. As a rule, an idea should initially be discussed as though it had descended from the heavens on a USB stick spontaneously generated by an evaporating black hole, before any word is said psychoanalyzing the people who believe it. Otherwise I’d be guilty of poisoning the well, also known as Bulverism. But I’ve now said quite a few words about modest epistemology as a pure idea. I feel comfortable at this stage saying that I think modest epistemology’s popularity owes something to its emotional appeal, as opposed to being strictly derived from epistemic considerations. In particular: emotions related to social status
46618146-89f4-4333-a72b-3e2b7ffc3479
StampyAI/alignment-research-dataset/arbital
Arbital
Development phase unpredictable Several proposed problems in [advanced safety](https://arbital.com/p/2l) are alleged to be difficult because they depend on some property of a mature [agent](https://arbital.com/p/2c) that is hard to predict in advance at the time we are designing, teaching, or testing the agent. We say that such properties are 'development phase unpredictable'. For example, the [Unforeseen Maximums problem](https://arbital.com/p/47) arises when [we can't search a rich solution space as widely as an advanced agent](https://arbital.com/p/2j), making it development-phase unpredictable which real-world strategy or outcome state will maximize some formal utility function.
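A toy illustration of the Unforeseen Maximums case (numbers invented for illustration, not from this page): a formal utility function can look well-behaved over the region we are able to search during development while its true maximum sits in a region that only a wider search reaches.

```python
# Hypothetical proxy utility: looks like a nice hill near the origin,
# but grows without bound far away from it.
def utility(x: float) -> float:
    return -(x - 1.0) ** 2 if abs(x) < 10 else abs(x)

tested = [i / 10 for i in range(-50, 51)]            # development-phase search
deployed = [i / 10 for i in range(-10_000, 10_001)]  # a wider search

print(max(tested, key=utility))    # 1.0 -- the maximum we foresee
print(max(deployed, key=utility))  # -1000.0 -- an unforeseen maximum (ties with 1000.0)
```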
878d3d04-4444-412b-86c6-dab70aed986c
trentmkelly/LessWrong-43k
LessWrong
How much does cybersecurity reduce AI risk? AI is here, and AGI is coming. It's quite possible that any work being done now will be futile in comparison to reducing AI risk. This is one of those things that's unsettling for me as someone who did a Ph.D. in a non-AI area of computer science. But one of the main vectors by which a bootstrap AGI will gain power is by hacking into other systems. And that's something I can do something about. Not many appreciate this, but unhackable systems are very possible. Security vulnerabilities occur when there is some broken assumption or coding mistake. They are not omnipresent: someone has to put them there. Software has in general gotten more secure over the last few decades, and technologies that provide extremely high security guarantees have emerged. Consider the verified hypervisor coming out of Bedrock Systems; RockSalt, an unbreakable sandbox; or seL4, the verified kernel now being used in real safety-critical systems. Suppose we "solve" security by bringing the vulnerabilities in important applications to near zero. Suppose we also "solve" the legacy problem, and are able to upgrade a super-majority of old software, including embedded devices, to be similarly secure. How much will this reduce AI risk? To be clear: I personally am mainly interested in assuming this will be solved, and then asking the impact on AI safety. If you want to talk about how hard it is, then, well, I won't be interested, because I've given many lectures on closely related topics, although some others here may benefit from the discussion. (When I call something verified or unbreakable, there are a number of technicalities about what exactly has been proven and what the assumptions are. E.g.: nothing I've mentioned provides guarantees against hardware attacks such as Row Hammer or instruction skipping. I'll be happy to explain these to anyone in great detail, but am more interested in discussion which assumes these will all be solved.)
10ff2ef0-abfb-4cc5-9292-2884aafbc6a5
trentmkelly/LessWrong-43k
LessWrong
Meetup : Israel Less Wrong Meetup - Social and Board Games Discussion article for the meetup : Israel Less Wrong Meetup - Social and Board Games WHEN: 09 December 2014 07:00:00PM (+0200) WHERE: Google Tel Aviv We're going to have a meetup on Tuesday, December 9th at Google Israel's offices, Electra Tower, 98 Yigal Alon st., Tel Aviv. IMPORTANT NOTE: The time above might say 6pm or 7pm or 8pm depending on how daylight savings time is processed. The meetup is at 7pm Israel Local Time. This time we're going to have a social meetup! We'll be socializing and playing games. Specifically, we look forward to playing any cool board or card game anyone will bring. By all means bring your favorite game(s) with you and teach others or find people who already like that game. But it's also fine to come empty-handed. We always end up with enough games for everyone. We'll start the meetup at 19:00, and we'll go on for as long as we like. Feel free to come a little bit later, as there is no agenda. (We've decided to start slightly earlier this time to give us more time and accommodate people with different schedules). We'll meet on the 29th floor of the building. If you arrive and can't find your way around, call Anatoly, who is graciously hosting us, at 054-245-1060. Email at avorobey@gmail.com also works. See you there!
746e3b79-c690-4f1a-9d2f-0a3c89caea72
StampyAI/alignment-research-dataset/alignmentforum
Alignment Forum
Notes on "Can you control the past" *The following is a (lightly edited version of a) series of notes I sent Joe Carlsmith about his essay,* [Can you control the past?](https://handsandcities.com/2021/08/27/can-you-control-the-past/)*. It's addressed to Joe, but it seems worth publishing here* [*while I'm on the topic of decision theory*](https://www.lesswrong.com/posts/rP66bz34crvDudzcJ/decision-theory-does-not-imply-that-we-get-to-have-nice)*. I’ve included some of his comments, and my replies, below.*   I only recently skimmed [Can you control the past?](https://handsandcities.com/2021/08/27/can-you-control-the-past/), and have a couple notes that you may or may not be interested in. (I'm not under the impression that this matters a ton, and am writing this recreationally.) First: this is overall a great review of decision theories. Better than most I've seen. Nice. Now, onto some more substansive points.   Who am I? --------- I think a bunch of your sense of oddness about the "magic" that "you can write on whiteboards light-years away" is stemming from a faulty framing you have. In particular, the part where the word "you" points to a single physical instantiation of your algorithm in the universe. I'd say: insofar as your algorithm is multiply instantiated throughout the universe, there is no additional fact about which one is really you. For analogy, consider tossing a coin in a quantum-mechanical universe, and covering it with your hand. The coin is superpositioned between heads and tails, and once you look at it, you'll decohere into Joe-who-saw-heads and Joe-who-saw-tails, both of whom stem from Joe-who-hasn't-looked-yet. So, before you look, are you Joe-who-saw-heads or Joe-who-saw-tails? Wrong question! These two entities have not yet diverged; the pasts of those two separate entities coincide. The word "you", at the time before you split, refers to ~one configuration. The time-evolution splits the amplitude on that configuration between ~two distinct future configurations, and once they've split (by making different observations), each will be able to say "me" in a way that refers to them and not the other, but before the split there is no distinction to be made, no extra physical fact, and no real question as to whether pre-split Joe "is" Joe-who-will-see-heads versus Joe-who-will-see-tails. (It's also maybe informative to imagine what happens if the quantum coin is biased. I'd say, even when the coin is 99.99999% biased towards heads, it's still the case that there *isn't a real question* about whether Joe-who-has-not-looked-at-the-coin *is* Joe-who-will-see-heads versus Joe-will-see-tails. There is a question of *to what degree* Joe-who-has-not-looked becomes Joe-who-saw-heads versus Joe-who-saw-tails, but that's a different sort of question.) One of my most-confident guesses about anthropics is that being multiply-instantiated in other ways is analogous. For instance, if there are two identical physical copies of you (in physical rooms that are identical enough that you're going to make the same observations for the length of the hypothetical, etc.), then my guess is that there *isn't a real question* about which one is you. They are both you. You are the pattern, not the meat. This person may become multiple people in the future, insofar as they see different things in different places-that-embed-them. 
But before the differing observations come in, they're *both you.* You can tell because the situation is symmetric: once you know all the physical facts, there's no additional bit telling you which one is "you". From this perspective, the "magic" is much less mysterious: whenever you are multiply-instantiated, your actions are also multiply-instantiated. If you're multiply-instantiated in two places separated by a 10-light-year gap, then when you act, the two meat-bodies move in the same way on each side of the gap. This is all much less surprising once you acknowledge that "you" refers to *everything that instantiates you(-who-have-seen-what-you-have-seen)*. Which, notably, is a viewpoint more-or-less forced upon us by quantum mechanics anyway. Also, a subtlety: literal multiple-instantiation of your entire mind (in a place with sufficiently similar physics) is what you need to get "You can draw a demon kitten eating a windmill. You can scream, and dance, and wave your arms around, however you damn well please. Feel the wind on your face, cowboy: this is liberty. *And yet,* he will do the same." But it's much easier to find other creatures that *make the same choice in a limited decision problem*, but that won't draw the same demon kitten. In particular, the thing you need for rational cooperation in a one-shot prisoner's dilemma is multiple instantiation of your *decision algorithm*, which is notably smaller than your entire mind. Imagining multiple-instantiation of your entire mind is a fine intuition-pump, but the sort of multiple-instantiation humans find in real life is just of the decision-making fragment (which is enough). Corollary: To a first approximation, the answer to "Can you control the past?" is "Well, you can be multiply instantiated at different points in time, and control the regions afterwards of the places you’re instantiated, and it’s possible for some of those to be beforewards of other places you’re instantiated. But you can’t control anything beforewards of your earliest instantiation." To a second approximation, the above is true not only of you (in all your detailed glory, having learned everything you've learned and seen everything you've seen), but of your decision algorithm — a much smaller fragment of you, that is instantiated much more often, and thus can readily affect regions beforewards of the earliest instantiation of you-in-all-your-glory. This is what’s going on in the version of Newcomb’s problem, for instance, where Omega doesn’t simulate you in all your glory, but does reason accurately about the result of your decision algorithm (thereby instantiating it in the relevant sense). More generally, I think it's worth distinguishing you from your decision algorithm. You can let your full self bleed into your decision-making fragment, by feeling the wind on your face and using specifics of your recent train-of-thought to determine what you draw. Or you can prevent your full self from bleeding into your decision-making fragment, by boiling the problem before you down into a simple and abstract decision problem. Consider Omega's little sister Omicron, who can't figure out what you'll draw, but has no problem figuring out whether you'll one-box. You-who-have-felt-the-wind-on-your-face are not instantiated in the past, but your decision algorithm on a simple problem could well be. It's the latter that controls things that are beforewards of you (but afterwards of Omicron). I personally don't think I (Nate-in-all-his-glory) can personally control the past.
I think that my decision-procedure can control the future laid out before each and every one of its instantiations. Is the box in Newcomb's problem full because I one-box? Well, it's full because The Algorithm one-boxes, and I'm a full-ass person wrapped around The Algorithm, but I'm not the instance of The Algorithm that Omicron was looking at, so it seems a bit weird to blame it on me. Like how when you use a calculator to check whether 7 divides 1331 and use that knowledge to decide how to make a bet, and then later I use a different calculator to see whether 1331 is prime in a way that includes (as an intermediate step) checking whether 7 divides it, it's a bit weird to say that my longer calculation was the cause of your bet. I'm a longer calculation than The Algorithm. It wasn't me who controlled the past, it was The Algorithm Omega looked at, and that I follow. If you ever manage to get two copies of me (the cowboy who feels the wind on his face) at different times, then in that case I'll say that I (who am both copies) control the earlier-copy's future and the later-copy's past (necessarily in ways that the later copy has not yet observed, for otherwise we are not true copies). Till then, it is merely the past *instances of my decision algorithm* that control my past, not me. (Which doesn't mean that I can choose something other than what my decision algorithm selects in any given case, thereby throwing off the yoke; that's crazytalk; if you think you can throw off the yoke of your own decision algorithm then you've failed to correctly identify the fragment of you that makes decisions.)

**Joe Carlsmith**:
> You-who-have-felt-the-wind-on-your-face are not instantiated in the past, but your decision algorithm on a simple problem could well be. It's the latter that controls things that are beforewards of you (but afterwards of Omicron).

I currently expect this part to continue to feel kind of magical to me, due to my identification with the full-on self. E.g., if my decision algorithm is instantiated 10 lightyears away in a squid-person, it will feel like "I" can control "something else" very far away.

**Nate Soares**: If you were facing me in a game that turns out (after some simple arithmetic) to be isomorphic to a stag hunt, would you feel like you can control my action, despite me being on the other side of the room? (What I'd say is that we both notice that the game is a stag hunt, and then do the same utility calculation + a bit of reasoning about the other player, and come to the same conclusion, and those calculations control both our actions, but neither of us controls the other player.) (You can tell this in part from how our actions would not be synchronized in any choice that turns on a bunch of the extra details of Joe that Nate lacks. Like, if we both need to draw a picture that would make a child laugh, and we get an extra bonus from the pictures having identical content, then we might aim for Schelling drawings, but it's not going to work, because it was the simple stag-hunt calculation that was controlling both our actions, rather than all-that-is-Joe.) (This is part of why I'd say, if your decision algorithm is instantiated 10 light-years away in a squid person, then you don't control them; rather, your shared decision algorithm governs the both of you.
The only cases where you (in all your detailed glory) control multiple distant things are cases where exact copies of your brain occur multiple times, in which case it's not that one of you can control things 10ly away, it's that the term 'you' refers to multiple locations simultaneously.) (Of course, this could just ground out into a question of how we define 'you'. In which case I'd be happy to fall back to first (a) claiming that there's a concept 'you' for which the above makes sense, and then separately (b) arguing that this is the correct way to [rescue](https://arbital.com/p/rescue_utility/) the English word "you" in light of multiple instantiation.)

**Joe Carlsmith**: Cool, the stag-hunt example is useful for giving me a sense of where you’re coming from. I can still imagine the sense that “if I hunt hare, the suitably-good-predictor of me will probably hunt hare too; and if I hunt stag, they will probably hunt stag too” giving me a sense of control over what they do, but it feels like we’ll quickly just run into debates about the best way to talk; your way seems coherent, and I’m not super attached to which is preferable from a “rescue” perspective.

**Nate Soares**: My reply: if a predictor is looking at you and copying your answer, then yes, you control them. But it's worth distinguishing between predictors that look at the-simple-shard-of-you-that-utility-maximizes-in-simple-games and you-in-all-your-detailed-glory. Like, in real life, it's much more common to find a predictor that can tell you'll go for a stag than a predictor that can predict which drawing you'll make. And saying that 'you' control the former has some misleading implications, that are clarified away by specifying that the simple rules of decisionmaking are embedded in you and are all that the predictor needs to look at (in the former case) to get the right answer. (We may already agree on these points, but also you might appreciate hearing my phrasing of the obvious reply, so *shrug*.)

**Joe Carlsmith**:
> Well, it's full because The Algorithm one-boxes, and I'm a full-ass person wrapped around The Algorithm, but I'm not the instance of The Algorithm that Omicron was looking at, so it seems a bit weird to blame it on me.

Do you not control the output of the algorithm?

**Nate Soares**: In case it's not clear by this point, my reply is "the algorithm controls the output of me". Like, try as I might, I cannot make LDT 2-box on Newcomb's problem — I can't make 2-boxing be higher-utility, and I can't make LDT be anything other than utility-maximizing. I happen to make my choices according to LDT, in a way that is reflectively stable on account of all the delicious delicious utility I get that way.

From this point of view, the point where I'd start saying that it is "me" choosing something (rather than my simpler decision-making core) is when the decision draws on a bunch of extra personal details about Nate-in-particular.

There is of course another point of view, which says "the output of Joe in (say) Newcomb's problem is determined by Joe". This viewpoint is sometimes useful to give to people who are reflecting on themselves and struggling to decide between (say) CDT and LDT.

It's perhaps useful to note that these people tend to have complicated, messy, heuristical decision-procedures, that they're currently in the process of reflecting upon, in ways that are sensitive to various details of their personality and arguments they just heard.
Which is to say, someone who's waffling on Newcomb's problem does have much more of their full self engaged in the choice than (say) I do. Their decision procedure is much more unique to them; it involves much more of their true name; all-that-is-them is much more of an input to it. At that point, "their decision algorithm" and "them" are much closer to synonymous, and I won't quibble much if we say "their algorithm is what determines them" or "they are what determines the output of their algorithm". But in my case, having already passed through the reflective gauntlet, it's much clearer that the algorithm guides me, than that the parts of me wrapped around the algorithm guide it.

(Of course, the algorithm is also part of me, as it is part of many, and so it is still true that some part of me controls the output of The Algorithm. Namely, The Algorithm controls the output of The Algorithm.)

LDT doesn’t pass up guaranteed payoffs
--------------------------------------

[Logical decision theorists](https://arbital.com/p/logical_dt/) firmly deny that they pass up guaranteed payoffs. (I can't quite tell from a skim whether you understand this; apologies if I missed the parts where you acknowledge this.) As you probably know, in a twin PD problem, a CDT agent might protest that by cooperating you pass up a guaranteed payoff, because (they say) defecting is a dominant strategy. A logical decision theorist counters that the CDT agent has made an error, by imagining that "I defect while my twin cooperates" is a possibility, when in fact it is not. In particular, when the CDT agent closes their eyes and imagines defecting, they (wrongly) imagine that the action of their twin remains fixed. Among the *actual* possibilities (cooperate, cooperate) and (defect, defect), the former clearly dominates. The disagreement is not about whether to take dominated strategies, but about what possibilities to admit in the matrix from which we calculate what is dominated and what is not. Now consider Parfit's hitchhiker. An LDT agent withdraws the $10k and gives it to the selfish man. Will MacAskill [objects](https://www.lesswrong.com/posts/ySLYSsNeFL5CoAQzN/a-critique-of-functional-decision-theory), "you're passing up a guaranteed payoff of $10k, now that you're certain you're in the city!". The LDT agent says "you have made an error, by imagining ‘I fail to pay while being in the city’ is a possibility, when in fact it is not. In particular, when you close your eyes and imagine not paying, you (wrongly) imagine that your location remains fixed, and wind up imagining an impossibility." Objecting “it's crazy to imagine your location changing if you fail to pay” is a fair criticism. Objecting that logical decision theorists pass up guaranteed payoffs is not. **The whole question at hand is how to evaluate the counterfactuals.** Causal decision theorists say "according to my counterfactuals, if you pay you lose $10k, thus passing up a guaranteed payoff", whereas logical decision theorists say "your counterfactuals are broken, if I don't pay then I die, life is worth more than $10k to me, I am taking the action with the highest payoff". You're welcome to argue that logical decision theorists calculate their counterfactuals wrong, if you think that, but saying we pass up guaranteed payoffs is either confused or disingenuous.
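To make "what possibilities to admit in the matrix" concrete, here is a minimal sketch with standard illustrative PD payoffs (the numbers are assumptions, not from the notes): both theories maximize over the outcomes they deem possible, and the disagreement is entirely about that possibility set.

```python
# Twin prisoner's dilemma with conventional payoffs (illustrative numbers).
payoff = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

# CDT imagines the twin's move as fixed, so all four cells count as possible;
# defection strictly dominates (5 > 3 and 1 > 0), which a worst-case
# comparison also picks out.
cdt_best = max("CD", key=lambda me: min(payoff[me, twin] for twin in "CD"))

# LDT notes the twin runs the same algorithm, so only the diagonal is
# possible, and (C, C) beats (D, D).
ldt_best = max("CD", key=lambda me: payoff[me, me])

print(cdt_best, ldt_best)  # D C
```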
**Joe Carlsmith**:
> (I can't quite tell from a skim whether you understand this; apologies if I missed the parts where you acknowledge this.)

I think I could’ve been clearer about it in the piece, and in my own head. Your comments here were useful on that front.

**Joe Carlsmith**:
> Objecting “it’s crazy to imagine your location changing if you fail to pay” is a fair criticism.

Yeah, I suppose this is where my inner “guaranteed payoffs” objector would go next. Could imagine thinking: “well, that just seems flat out metaphysically wrong, and in this sense worse than violating guaranteed payoffs, because just saying false stuff about what happens if you do X is worse than saying weird stuff about what’s ‘rational.’”

**Nate Soares**: I agree "you're flat-out metaphysically wrong (in a way that seems even worse than violating guaranteed payoffs)" is a valid counterargument to my actual position (in a way that "you violate guaranteed payoffs" is not). :-)

Parfit’s hitchhiker and contradicting the problem statement
-----------------------------------------------------------

There's a cute theorem I've proven (or, well, I've jotted down what looks to me like a proof somewhere, but haven't machine-checked it or anything), which says that if you want to disagree with logical decision theorists, then you have to disagree in cases where the predictor is literally perfect. The idea is that we can break any decision problem down by cases (like "insofar as the predictor is accurate, ..." and "insofar as the predictor is inaccurate, ...") and that all the competing decision theories (CDT, EDT, LDT) agree about how to aggregate cases. So if you want to disagree, you have to disagree in one of the separated cases. (And, spoilers, it's not going to be the case where the predictor is on the fritz.) I see this theorem as the counter to the decidedly human response "but in real life, predictors are never perfect". "OK!", I respond, "But decomposing a decision problem by cases is always valid, so what do you suggest we do *under the assumption that* the predictor is accurate?" Even if perfect predictors don't exist in real life, your behavior in the *more complicated* probabilistic setting should be assembled out of a mixture of ways you'd behave in simpler cases. Or, at least, so all the standard leading decision theories prescribe. So, pray tell, what do you do *insofar as* the predictor reasoned accurately? I think this is a good intuition pump for the thing where logical decision theorists are like "if I imagine stiffing the driver, then I imagine dying in the desert." *Insofar as* the predictor is accurate, imagining being in the city after stiffing the driver is just as bonkers as imagining defecting while your twin cooperates. One way I like to think about it is, this decision problem is set up in a fashion that purports to reveal the agent's choice to them before they make it. What, then, happens in the case where the agent acts inconsistently with this revelation? The scenario is ill-defined. Like, consider the decision problem "You may have either a cookie or a bonk on the head, and you're going to choose the bonk on the head. Which do you choose?" The cookie might seem more appealing than the bonk, but observe that taking the cookie *refutes the problem statement.* It's at least a little weird to confidently assert that, in that case, you get a cookie. What you really get is a contradiction. And sure, *ex falso quodlibet*, but it seems a bit strange to anchor on the cookie.
It's not the fault of the agent that this problem statement is refutable by some act of the agent! The problem is ill-defined without someone telling us what actually happens if we refute the problem statement. If you try to take the cookie, you don’t actually wind up with a cookie; you yeet yourself clean out of the hypothetical. To figure out whether to take the cookie, you need to know where you'd land. Parfit's hitchhiker, *at the point where you're standing at the ATM*, is much like this. The *alleged* problem statement is "you may either lose $0 or $10,000, and you're going to choose to lose $10,000". At which point we're like "Hold on a sec, the problem statement makes an assertion about my choice, which I can refute. What happens if I refute the problem statement?" At which point the question-poser is like "haha oops, yeah, if you refute the problem statement then you die alone in the desert". At which point, yeah, when the logical decision theorist closes their eyes and imagines stiffing the driver, then (under the assumption that the driver is accurate) they're like "oh dang, this would refute my observations; what happens in that case again? right, I'd die alone in the desert, which is worse than losing $10,000", and then they pay. (I also note that this counterfactual they visualize is *correct*. Insofar as the predictor is accurate, if they wouldn't pay, then they would die alone in the desert instead. That is, in real life, what happens to non-payers who face accurate predictors. The "$0" was a red herring; that case is contradictory and cannot actually be attained.) (In the problem where you may have either a cookie or a bonk, and you're going to take the bonk, but if you render the problem inconsistent then you get *two* cookies, by all means, take the cookie. But in the problem where you may have either a cookie or a bonk, and you're going to take the bonk, but if you render the problem inconsistent then you die alone in the desert, then take the dang bonk.) This sort of thing definitely runs counter to some human intuitions — presumably because, in real life, we rarely observe consequences of actions we haven't made yet. (Well, except for in [a variety of social settings](https://mindingourway.com/newcomblike-problems-are-the-norm/), where we have patches such as "honor" and "reputation" that, notably, *give the correct answer in this case,* but I digress.) This is where I think my cute theorem makes it easier to see what's going on: *insofar as* the predictor is perfect, it *doesn't make sense to visualize being in the city after stiffing the driver.* When you're standing in front of the ATM, and you screw your eyes shut and imagine what happens if you just run off instead of withdrawing the money, then *in the case where the predictor reasoned correctly,* your visualizer should be like ERROR ERROR HOW DID WE GET TO THE CITY?, and then fall back to visualizing you dying alone in the desert. Is it weird that your counterfactual-visualizer paints pictures of you being in the desert, even though you remember being driven to the city? Yep. But it's not the agent's fault that they were shown a consequence of their choice before making their choice; they're not the one who put the potential for contradiction into the decision problem. Avoiding contradiction isn’t their problem.
One of their available choices is contradictory with observation (at least under the assumption that the predictor is accurate), and they need to handle the contradiction *somehow*, and the problem says right there on the tin that if you would cause a contradiction then you die alone in the desert instead. (Humans, of course, implement the correct decision in this case via a sense of honor or suchlike. Which is astute! "I will pay, because I said I would pay and I am a man of my word" can be seen as a shadow of the correct line of reasoning, cast onto monkey brains that were otherwise ill-suited for it. I endorse the practice of recruiting your intuitions about honor to perform correct counterfactual reasoning.) (And these counterfactuals are *true*, to be clear. You can't go find people who were accurately predicted, driven to the city, and then stiffed the driver. There are none to be found.) Do you see how useful this cute little theorem is? I love it. Instead of worrying about "but what if the driver was simply a fool, and I can save $10k?", we get to *decompose the decision problem down into cases,* one where the driver was incorrect, and one where they were correct. We all agree that insofar as they're incorrect you have to stiff them, and we all agree about how to aggregate cases, so the remaining question is what you do insofar as they're accurate. And insofar as they're accurate, the contradiction is laid bare. And the "stand in front of the ATM, but visualize yourself dying in the desert" thing feels quite justified, at least to me, as a response to a full-on contradiction. Just remember that it's not *your* job to render the universe consistent, and that contradictions can't actually happen. Insofar as the predictor is accurate, imagining yourself surviving and then stiffing the driver makes just as much sense as imagining yourself defecting against your cooperating clone.

**Joe Carlsmith**:
> "You may have either a cookie or a bonk on the head, and you're going to choose the bonk on the head. Which do you choose?"

I think this is a useful way of illustrating some of the puzzles that come up with transparent-Newcomb-like cases.

**Joe Carlsmith**:
> we get to break the decision problem down into cases, one where the driver was incorrect, and one where they were correct

Do you have something like "reliable" in mind, here, rather than "correct"? E.g., presumably you don't care if he's correct, but he flipped a coin to determine his prediction. It seems like what matters is whether his prediction was sensitive to your choice or not — a modal thing.

**Nate Soares**: Yeah, that's actually my preferred way to think about it. That adds some extra subtleties that turn out to make no difference, though, so I skipped over it for the sake of exposition. (Like, an easy way to do it is to say "I think there's a 95% chance they reason correctly about me, and a 5% chance they make at least one reasoning error, and in the latter case it's equally likely (in a manner uncorrelated with my action) that the error pushes them to an invalid true conclusion as an invalid false conclusion, and so we can model this as one case where they're correct, and one case where they toss a coin and guess accordingly". And this turns out to be equivalent to assuming that they're 97.5% right and 2.5% wrong, which is why it makes no difference.
But this still doesn't match real life, because in real life they're using fallible stuff like intuition and plausible-seeming deductive leaps, but whatever, I claim it still basically comes down to "were they taking the relevant considerations about me into account, and reasoning validly to their conclusion, or not?" *shrug*)

**Joe Carlsmith**: Cool, would like to think about this more (I do feel like being X% accurate won't always be relevantly equivalent to being Y% infallible and Z% something else), but breaking things down into cases like this seems useful regardless. In particular, seems like the "can't I just control whether he's accurate" response discussed below should apply in the Y%-infallible-Z%-something-else case.

**Nate Soares**: (I agree it won't always be relevantly equivalent. It happens to be equivalent in this case, and in most other simple decision problems where you care only about whether (and not why) the predictor got the answer right. Which is not supposed to be terribly obvious, and I'll consider myself to have learned a lesson about using expositional simplifications where the fact that it is a simplification is not trivial. :-p)

**Joe Carlsmith**:
> We all agree that insofar as they're incorrect you have to stiff them, and we all agree about how to aggregate cases, so the remaining question is what you do insofar as they're accurate. And insofar as they're accurate, the contradiction is laid bare. And the "stand in front of the ATM, but visualize yourself dying in the desert" thing feels quite justified, at least to me, as a response to a full-on contradiction.

Rephrasing to make sure I understand (using the "reliable/sensitive" interpretation I flagged above): “You stand in front of the ATM. Thus, he’s predicted that you pay. Now, either it’s the case that, if it weren’t the case that you pay, you’d be in the desert dead; or it’s the case that, if it weren’t the case that you pay, you’d still be at the ATM. In the former case, not paying is a contradiction. In the latter case, you should not pay.” I wonder if the one-boxer could accept this but say: “OK, but given that I’m standing in front of the ATM, if I don’t pay, then I’m in the case where I should not pay, so it’s fine to not pay, so I won’t." E.g., by not paying in the city, you can "make it not the case" that if you don't pay, you die in the desert five hours ago — after all, you're alive in the city now.

**Nate Soares**:
> Rephrasing to make sure I understand [...]

That's right!

> I wonder if the one-boxer could accept this but say [...]

There are decision theories that have this behavior! (With some caveats.) Note that this corresponds to an agent that 1-boxes in Newcomb's problem, but 2-boxes in the transparent Newcomb's problem. I don't know of anyone who seriously advocates for that theory, but it's a self-consistent middle-ground. One caveat is that this isn't reflectively consistent (e.g., such agents expect to die in the desert in any future Parfit's hitchhiker, and would pay in advance to self-modify into something that pays the driver if the driver makes their prediction after the moment of modification). Another caveat is that such agents are easily exploitable by blackmail.

I also suspect that this decision theory violates the principle where you can break down a decision problem by cases? But I'm not sure. You can almost surely get them to pay you to not reveal information.
You can maybe money pump them, though I haven't tried.

But those aren't quite my true objection to this sort of thinking. And indeed, the error in this line of thinking ("if I stiff the driver, then I must thereby render them inaccurate, because I've already seen the ATM") is precisely what my lemma about problem decomposition is intended to ward off.

Like, one thing that's wrong with this sort of thinking is that it's hallucinating that the driver's accuracy is under your (decision algorithm's) control. It isn't (and I suspect that the mistake can be money-pumped).

Another thing that's wrong with it is that it's comparing counterfactuals with different degrees of consistency.

Like, consider the problem "you can choose a cookie or a bonk on the head; also, I tossed a coin that comes up 'bonk' 99.9999% of the time and 'cookie' 0.0001% of the time, and your choice matches the coin."

Now, choosing 'cookie' only has a 99.9999% chance of being inconsistent with the problem statement, but this doesn't put the two choices on equal footing. Like, yes, now you can only probabilistically render this problem-statement false, but it's still pretty weird that you can probably render this problem-statement false! And the fact that I mixed in a little uncertainty doesn't mean that you can now make your choice without knowing what happens if you render the problem statement false! The fact that we mixed in a little uncertainty doesn't justify comparing a bonk directly to a cookie; the problem statement is still incomplete; you still need to know what would actually happen insofar as your action contradicts the allegation that it matches the biased coin.

And, like, there's an intuition that it would be pretty weird, given that problem-statement, to imagine that your choice controls the coin. The coin isn't about you; it's not about your algorithm; there's nothing linking your action to the coin. The weird thing about this problem-statement is the bizarre assertion that your action is known to match the coin. Like… whichever way the coin came up, what if you did the opposite of that?

This is an intuition behind the idea that we should be able to **case on the value of the coin** and consider each of the cases independently. Like, no matter what the value of the coin is, one of our actions reveals the problem statement to be bogus. And someone needs to tell us what happens if we render the whole problem-statement bogus. And so even when there's uncertainty, we need to know the consequences of refuting the problem statement in order to choose our action.

**Joe Carlsmith**:
> Note that this corresponds to an agent that 1-boxes in Newcomb's problem, but 2-boxes in the transparent Newcomb's problem. I don't know of anyone who seriously advocates for that theory, but it's a self-consistent middle-ground.

EDT 1-boxes in Newcomb's, but 2-boxes in transparent Newcomb's, no?

> You can almost surely get them to pay you to not reveal information.

Agree, I feel like avoiding this is one of the key points of being "updateless." E.g., because you're able to act as you would've committed to acting prior to learning the information, it's fine to learn it. Also agree re: exploitable via blackmail (e.g. EDT's XOR blackmail problems).

> one thing that's wrong with this sort of thinking is that it's hallucinating that the driver's accuracy is under your (decision algorithm's) control.
> It isn't (and I suspect that the mistake can be money-pumped).

Flagging that I still feel confused about this, and it feels like it rhymes a bit with stuff about ‘can you control the base rate of lesions’ in smoking lesion that I discuss in the post. (I expect you want to say no, and that this is connected to why you want to smoke in smoking lesion — but in cases where your smoking is genuinely evidence that you’ve got the lesion, I’m not sure this is the right verdict.) I'm wondering if there's something generally weird going on in terms of "having a problem-set-up" that can be violated or not.

> the fact that I mixed in a little uncertainty doesn't mean that you can now make your choice without knowing what happens if you render the problem statement false!

Cool, this helps give me a sense of where you're coming from. In particular, even if the predictor isn't always accurate, sounds like you want to interpret “I’m in the city and successfully don’t pay” as having some probability of rendering the problem-statement false, as opposed to being certain to put you in the worlds where the predictor was wrong.

**Nate Soares**:
> EDT 1-boxes in Newcomb's, but 2-boxes in transparent Newcomb's, no?

You're right, I should have thrown in some extra things that rule out EDT. I think that thing refuses XOR blackmail, 1-boxes in Newcomb's problem, and 2-boxes in transparent Newcomb's? (Though I haven't checked.) Which is the sort of theory that, like, only locals would consider, and I don't know any local who takes it seriously, on account of the exploitability and reflective inconsistency and stuff.

I don't have the smoking lesion problem mentally loaded up (I basically think it's just a confused problem statement), but my cached thought is that I give the One True Rescuing of that problem in the "The Smoking Lesion Problem" section of <https://arxiv.org/pdf/1710.05060.pdf> :-p. And I agree with the diagnosis that there's generally something weird going on when the problem set-up can be violated.

> In particular, even if the predictor isn't always accurate, sounds like you want to interpret “I’m in the city and successfully don’t pay” as having some probability of rendering the problem-statement false, as opposed to being certain to put you in the worlds where the predictor was wrong.

Yep!
With the justification being that (a) you obviously need to do this when things are certain, and (b) there shouldn't be some enormous change in your behavior when we replace "certain" with "with probability 1 − 10^−100".
MJXc-TeX-math-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-Italic.otf') format('opentype')} @font-face {font-family: MJXc-TeX-size1-R; src: local('MathJax\_Size1'), local('MathJax\_Size1-Regular')} @font-face {font-family: MJXc-TeX-size1-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size1-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size1-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size1-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-size2-R; src: local('MathJax\_Size2'), local('MathJax\_Size2-Regular')} @font-face {font-family: MJXc-TeX-size2-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size2-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size2-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size2-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-size3-R; src: local('MathJax\_Size3'), local('MathJax\_Size3-Regular')} @font-face {font-family: MJXc-TeX-size3-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size3-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size3-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size3-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-size4-R; src: local('MathJax\_Size4'), local('MathJax\_Size4-Regular')} @font-face {font-family: MJXc-TeX-size4-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size4-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size4-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size4-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-vec-R; src: local('MathJax\_Vector'), local('MathJax\_Vector-Regular')} @font-face {font-family: MJXc-TeX-vec-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-vec-B; src: local('MathJax\_Vector Bold'), local('MathJax\_Vector-Bold')} @font-face {font-family: MJXc-TeX-vec-Bx; src: local('MathJax\_Vector'); font-weight: bold} @font-face {font-family: MJXc-TeX-vec-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Bold.eot'); src /\*2\*/: 
url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Bold.otf') format('opentype')} ". Doubly so on account of how you should be able to reason by cases.Like, if you buy that shit is weird when you can certainly render the problem statement false, and if you buy that **either** you should be able to reason by cases **or** you shouldn't have some giant discontinuity at literal certitude, then you're basically funneled into believing that you have to consider (when at the ATM) that failing to pay could render the whole set-up false, at which point you need some extra rule for how to reason in that case.Where CDT says "assume you live and don't pay" and LDT says "assume you die in the desert", and both agree that the rest of the choice is determined given how you respond to the literal contradiction in the flatly contradictory case.At which point it's my turn to assert that *CDT* is flat-out metaphysically wrong, because it's hallucinating that flat contradictions are relevantly possible. | --- Finally, a minor note: I think the twin clone prisoner's dilemma is sufficient to kill CDT. But if you want to kill it extra dead, you might be interested in the fact that you can turn CDT into a money pump whenever you have a predictor that's more accurate than chance, using some cleverness and the fact that you can expand CDT's action space by also offering it contracts that pay out in counterfactuals that are less possible than CDT pretends they are.   | | | --- | | **Joe Carlsmith**:  Sounds interesting — is this written up anywhere? | | **Nate Soares**:  Maybe in the [Death in Damascus paper](https://intelligence.org/files/DeathInDamascus.pdf)? Regardless, my offhand guess is that the result is due to Ben Levenstein so if it's not in that paper then it might be in some other paper of Ben's. | | | | --- | | **Joe Carlsmith**: Thanks again for this! I do hope you publish — I'd like to be able to cite your comments in future. |
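(Editorial aside: the flavor of exploitation gestured at above is easiest to feel in a toy setting. The following sketch is an illustrative construction of my own, not the Levinstein–Soares result: a CDT-style agent that treats a better-than-chance predictor's output as causally independent of its own choice will systematically overpay to bet against the predictor.)

```python
import random

# Toy setup (invented for illustration): a predictor guesses which of two
# boxes the agent will take, with accuracy p > 1/2, and $1 is placed in the
# box it predicts the agent will NOT take. A CDT-style agent treats the
# prediction as independent of its own choice, so with a symmetric prior it
# values playing at $0.50 and gladly pays a $0.40 entry fee every round.

random.seed(0)
p = 0.8           # predictor accuracy
entry_fee = 0.40  # below CDT's perceived value of $0.50, so it always pays
rounds = 100_000
bankroll = 0.0

for _ in range(rounds):
    # The predictor is right with probability p, in which case the money
    # is in the other box and the agent collects nothing.
    predictor_correct = random.random() < p
    payout = 0.0 if predictor_correct else 1.0
    bankroll += payout - entry_fee

print(f"average profit per round: {bankroll / rounds:+.3f}")
# Roughly (1 - p) - entry_fee = -0.20 per round: the agent bleeds money
# because its causal counterfactuals put far too much weight on outcomes
# ("the predictor was wrong about me") that mostly don't happen.
```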
46dc3aed-a5ab-4ac5-a56d-97d4d5594050
StampyAI/alignment-research-dataset/alignmentforum
Alignment Forum
Jitters No Evidence of Stupidity in RL

*Epistemic status: Pretty sure about core, not about edges*

A while ago, I noticed a possible bias in how I evaluated reinforcement learning agents. It tended to cause me to revise my estimation of their intelligence downwards, after I viewed a video of them in action. I've seen other people fall into what I believe to be the same error. So I'm writing this to correct myself if I am wrong and to alert others if I am right.

The Bias
--------

Many reinforcement learning agents have "jitters." They alternate actions quickly, looking nearly palsied, apparently nullifying the effects of earlier actions with later ones. This is true across a wide variety of reinforcement learning agents. Many people see these jitters as evidence of the relatively primitive nature of these agents. These actions look clearly stupid and sub-optimal.

For instance, consider the original [Deep Q Network](https://www.youtube.com/watch?v=V1eYniJ0Rnk) paper. Even after training for some time on Breakout, it still erratically moves the paddle back and forth when the ball is not near it. One person mentions [that](https://srconstantin.github.io/2017/01/28/performance-trends-in-ai.html) it makes "erratic jerky movements that obviously could not in principle be optimal," which was once my impression as well.

Similarly, much more recently, consider DeepMind's recent work on generally capable agents. In the [show reel](https://www.youtube.com/watch?v=lTmL7jwFfdw) the movement of the agents often looks erratic. Conversation around LessWrong sometimes alluded to these erratic movements as evidence against the intelligence of the agents.

Jitters Non-Optimal For Energy-Constrained Agents
-------------------------------------------------

Evolved intelligence on earth has energy conservation as a fundamental part of its optimization function. Unnecessary movements spend energy. Spent energy must be recovered, at the cost of reproductive fitness. So generally only sick animals, insane animals, and so on, have the shakes or tremble continuously. Energy conservation applies to every animal on earth, which is why we probably feel intuitively confident applying this rule across the broad variety of animals.

Additionally, extremely erratic movements can result in injury to the animal which is making them. So this is another reason why, for creatures that are a result of evolution, erratic movements are a sign of insanity or injury.

RL Agents Are Not Energy-Constrained
------------------------------------

Reinforcement learning agents are not energy-constrained. They do not draw on a finite store of glucose when acting. Nor do they have any possibility of injuring themselves. As a result, the policies resulting from reinforcement learning algorithms will not be strongly constrained to limit jitters in the way that policies resulting from evolution will be constrained.

You can go further than this. Given the way that most reinforcement learning agents are set up, they have no way to even *distinguish* any difference between action and non-action, and thus between non-rest and rest. That is, consider a reinforcement learning agent which makes one of fifteen different categorical actions in each time-step, like those in OpenAI's ProcGen. For an agent controlling a side-scrolling avatar, for instance, one action would be moving right; another action would be jumping; another action would be doing nothing; etc.
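To make that setup concrete before going further, here is a minimal sketch (the 15-action discrete space is ProcGen-like, but the specific index assignments below are invented for illustration):

```python
import numpy as np

N_ACTIONS = 15  # a ProcGen-style discrete action space

def one_hot(index, n=N_ACTIONS):
    """Encode a categorical action as a one-hot vector."""
    v = np.zeros(n, dtype=np.float32)
    v[index] = 1.0
    return v

# Hypothetical index assignments, for illustration only.
MOVE_RIGHT, JUMP, DO_NOTHING = 0, 1, 2

for name, idx in [("move right", MOVE_RIGHT), ("jump", JUMP), ("do nothing", DO_NOTHING)]:
    print(name, one_hot(idx))

# From the policy network's perspective every action is just a unit basis
# vector: nothing marks index 2 as "rest". The policy head outputs logits
# over all 15 indices and samples -- there is no cheaper-to-emit action.
```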
Each of these is only distinguished from the others as different indices on one-hot action encodings -- i.e., moving right could be `[1,0,0,0...]`, jumping could be `[0,1,0,0...]`, doing nothing could be `[0,0,1,0...]`, and so on.

For a human controlling such a side-scrolling avatar, "doing nothing" stands out from all the other actions. If you put yourself in a situation where you are allowed to do nothing, you can rest your hands by not pressing any buttons. You can consider a more global strategy, and focus on the kind of strategy you will use when you resume acting. It also allows you to rest your mind, because humans can think harder or less hard. Doing nothing gives you an opportunity for reflection and meta-optimization in a way that no other alternative does.

None of this applies to a reinforcement learning agent. "Doing nothing" is one one-hot encoding just like all the other encodings. It cannot rest itself by doing nothing. It cannot focus on preparing for things further away in time; the vast majority of reinforcement learning agents must do a constant amount of thought in each time-step, about precisely the same things. So rest is not a preferred location in action-space that allows meta-optimization for these agents, as it is for evolved agents. They have no way to distinguish rest from non-rest, and thus no reason to pursue rest.

The above should also apply, *mutatis mutandis*, to reinforcement learning agents acting in a continuous rather than a discrete space.

Jitters May Sometimes be Optimal for Non-Energy-Constrained Agents
------------------------------------------------------------------

This is a more speculative point.

When I act, I often trade between low-probability-of-success action, with little thought put into it, and high-probability-of-success action, with a lot of thought put into it. Put more simply, where attempted action is very cheap, I am willing to try a lot of times. Battery doesn't fit? I'll wiggle it around. Command in the terminal doesn't work? I'll try changing a parameter. Pill bottle not opening? I'll cycle through different axes of twist and pressure. Generally, I'll start to apply thought more determinedly where there are no low-cost actions available with any reasonable probability of success.

Again, this makes sense from an evolutionary standpoint. Trying things takes energy. Thinking about things also takes energy. Along the boundary where each alternative has equal potential reward and equal probability of success, we would expect ourselves to be indifferent to trying things out versus thinking about things. Only where trying becomes more expensive than thinking about things would we expect that we would feel inclined to think about things rather than try things.

But again, this is not a trade-off that reinforcement learning agents are able to make. They must always think about things to precisely the same amount. Which means that exposing yourself to a greater surface area of possible reward, in areas of phase-space where actions are not overdetermined, might generally be the ideal action. Jittering around could be the optimal solution.

Again, I'm less sure about this section.

Fin
---

When I see a reinforcement learning agent acting in a video, acting erratically, some part of me still says that it looks kind of stupid because of this. But I currently believe, for reasons given above, that it's best not to listen to this part of myself.

[x-post](https://1a3orn.com/sub/machine-learning-jitters-no-sign-of-stupidity.html)
947ad8b3-7cc1-4ea3-914e-debb975add79
trentmkelly/LessWrong-43k
LessWrong
Covid 1/27/22: Let My People Go

The moment’s here. My people are all the people. It is time to let my people go.

While case counts in many places remain high, we are on the way back down the mountain. The hospitals will hold. People can choose, based on their preferences and situation and the local conditions, whether they want to go now or wait a few more weeks before going. That is their call. It needs to be their call.

One could argue, as Tyler Cowen did in this excellent talk this week at Yale, that the moment is not quite here yet, on the theory that in a month cases will be an order of magnitude lower and thus it will be politically and socially easier to make the transition. There would be less opposition then, so better to wait; the price for doing so is small.

Would I take that deal? Absolutely I would take that deal, if we agreed on an end date or on explicit end conditions. A few more weeks is a small price in the grand scheme, and getting these things to happen takes time, so ‘a few weeks from now’ is the second best time to end pandemic restrictions.

But there’s no need for that. The best time is right now. Remember that the case counts are seven-day averages and there is a several-day delay between infection and positive test, so we are continuously living, for better and for worse, ‘in the future.’

Today I go back to the excellent Da Umberto, to celebrate (barring another variant, and ignoring the writing of posts and the pro forma wearing of masks and showing vaccination cards) the end of my pandemic.

Executive Summary

1. Cases now declining most places.
2. Restrictions mostly remain in place.
3. Let my people go.

Let’s run the numbers.

The Numbers

Week-Over-Week Predictions

Prediction from last week: 4.4mm cases (-10%) and 14,500 deaths (+15%).

Results: 4.05mm cases (-17%) and 15,964 deaths (+26%).

Prediction for next week: 2.85mm cases (-30%) and 20,000 deaths (+25%).

Overall it seems we peaked faster and more in unison than I expected, while other place
e877070c-eeaa-4421-a38b-f8fd5bfc3d0d
trentmkelly/LessWrong-43k
LessWrong
The effect of horizon length on scaling laws The scaling of optimal model size with compute is a key input into the biological anchors framework for forecasting transformative AI. In particular, the "effective horizon length" introduces a multiplier into this scaling law that can have a big effect on forecasts. This paper studies this scaling law for several RL environments: Procgen, Dota 2 and a toy MNIST-based environment. The last of these is used to study the effect of the task horizon length in a toy setting. There are a number of takeaways for the biological anchors framework, which are summarized in Section 5.4.
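One illustrative way to see how a horizon multiplier can enter such a scaling law (a toy derivation under assumptions of my own choosing, not the paper's exact model; the constants $A$, $B$, $c$ and exponents $\alpha$, $\beta$ are hypothetical):

```latex
% Suppose loss follows a Chinchilla-style form in model size N and number
% of samples S, and each sample costs compute proportional to both N and
% the effective horizon length H:
\[
  L(N, S) = A N^{-\alpha} + B S^{-\beta},
  \qquad C = c \, N H S .
\]
% Minimizing L subject to NS = C/(cH) gives the standard compute-optimal split
\[
  N^{*} \;\propto\; \left( \frac{C}{cH} \right)^{\beta/(\alpha+\beta)} ,
\]
% so under these assumptions the horizon acts as a pure multiplier on
% effective compute: doubling H shifts the compute-optimal model size the
% same way as halving the budget C.
```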
5f197a0d-69f1-4644-8a50-cebe892b73a9
trentmkelly/LessWrong-43k
LessWrong
Righting a Wrong Question

When you are faced with an unanswerable question—a question to which it seems impossible to even imagine an answer—there is a simple trick which can turn the question solvable.

Compare:

* "Why do I have free will?"
* "Why do I think I have free will?"

The nice thing about the second question is that it is guaranteed to have a real answer, whether or not there is any such thing as free will. Asking "Why do I have free will?" or "Do I have free will?" sends you off thinking about tiny details of the laws of physics, so distant from the macroscopic level that you couldn't begin to see them with the naked eye. And you're asking "Why is X the case?" where X may not be coherent, let alone the case.

"Why do I think I have free will?", in contrast, is guaranteed answerable. You do, in fact, believe you have free will. This belief seems far more solid and graspable than the ephemerality of free will. And there is, in fact, some nice solid chain of cognitive cause and effect leading up to this belief.

If you've already outgrown free will, choose one of these substitutes:

* "Why does time move forward instead of backward?" versus "Why do I think time moves forward instead of backward?"
* "Why was I born as myself rather than someone else?" versus "Why do I think I was born as myself rather than someone else?"
* "Why am I conscious?" versus "Why do I think I'm conscious?"
* "Why does reality exist?" versus "Why do I think reality exists?"

The beauty of this method is that it works whether or not the question is confused. As I type this, I am wearing socks. I could ask "Why am I wearing socks?" or "Why do I believe I'm wearing socks?" Let's say I ask the second question. Tracing back the chain of causality, I find:

* I believe I'm wearing socks, because I can see socks on my feet.
* I see socks on my feet, because my retina is sending sock signals to my visual cortex.
* My retina is sending sock signals, because sock-shaped light is impinging on my re
2830a2eb-5bf4-4e6a-83f5-61adef285f55
LDJnr/LessWrong-Amplify-Instruct
LessWrong
"Summary: we don't understand why programmers are paid so well. If you're a programmer, there's enough of a chance that this is temporary that it's worth explicitly planning for a future in which you're laid off and unable to find similarly high-paying work. Programmers are paid surprisingly well given how much work it is to become one. Here's Dan Luu comparing it to other high-paid careers: If you look at law, you have to win the prestige lottery and get into a top school, which will cost hundreds of thousands of dollars. Then you have to win the grades lottery and get good enough grades to get into a top firm. And then you have to continue winning tournaments to avoid getting kicked out, which requires sacrificing any semblance of a personal life. Consulting, investment banking, etc., are similar. Compensation appears to be proportional to the level of sacrifice (e.g., investment bankers are paid better, but work even longer hours than lawyers). Medicine seems to be a bit better from the sacrifice standpoint because there's a cartel which limits entry into the field, but the combination of medical school and residency is still incredibly brutal compared to most jobs at places like Facebook and Google. My sister is currently a second-year medical resident, and "incredibly brutal compared..." feels like a understatement to me. She works 80hr weeks, often nights, helping people with deeply personal and painful issues that are hard to leave behind when you go home. This is after four years in medical school, with still at least a year to go before starting to earn doctor-level money. When I compare it to how I started programming right out of college, making more money for 40hr weeks and no on-call, I feel embarrassed. What makes me nervous, though, is that we don't really understand why programmers are paid this well, and especially why this has persisted. People have a bunch of guesses: Demand: as software eats the world there are far more profitable things for programmers to do than people to do them. Supply: it's hard to train people to be programmers, fewer people are suited for it than expected, and bootcamps haven't worked out as well as we'd hoped. Startups: big companies need to compete with programmers choosing to go off and start companies, which is harder to do in many fields. Novelty: the field is relatively new, and something about new fields leads to higher profits and higher pay, maybe via competition not being mature yet? Something else: I'd be curious if people have other thoughts—leave comments! Things are pretty good now, and seem to have gotten even better since Dan's 2015 post, but something could change. Given how poorly we understand this, and the wide range of ways the future might be different, I think we should treat collapse as a real possibility: not something that will definitely happen, or that's going to happen on any certain timescale, but something likely enough prepare against. Specifically, I'd recommend living on a small portion of your income and saving a multiple of your living expenses. It's far more painful to cut expenses back than it is to keep them from growing, and the more years of expenses you have saved the better a position you'll be in. If you take this approach and there's no bust, you're still in a good place: you can retire early or support things you believe in. If being laid off and unable to find similarly high-paying work would be a disaster, figure out what you need to change so that it wouldn't be. 
(This isn't really specific to programming, but I think the chances of a bust are higher in programming than in more mature fields.)

Comment via: facebook"

What you see above is a segment from a LessWrong user's post. In the context of the argument presented, could you delve deeper into the role of the startup culture in inflating programmer salaries? How might this dynamic change in the future?

The startup culture has played a significant role in inflating programmer salaries. Startups, particularly in the tech sector, often require highly skilled programmers to bring their innovative ideas to life. Given the competitive nature of the startup landscape, these companies are willing to pay a premium to attract and retain the best talent. This competition has resulted in upward pressure on salaries, as startups try to outbid each other to secure top talent.

There are several ways in which startups contribute to this phenomenon:

1. Equity Compensation: Many startups offer equity or stock options as part of their compensation packages. This potential for significant future earnings can inflate the overall compensation for programmers.
2. High Risk, High Reward: Working for a startup often involves more risk than working for an established company. The potential for failure is higher, and job security may be lower. To compensate for this increased risk, startups often offer higher salaries.
3. Demand for Innovation: Startups are often at the forefront of technological innovation. This requires programmers with specialized skills and cutting-edge knowledge, who command higher salaries.
4. Talent War: In the competitive tech startup landscape, there's a war for the best talent. Startups need to offer attractive compensation packages to compete with established tech giants and other startups.

However, the role of startup culture in inflating programmer salaries could change in the future due to several factors:

1. Market Saturation: As more startups enter the market, there may be a saturation point where the demand for programmers may not exceed the supply as much as it currently does. This could potentially lead to a decrease in programmer salaries.
2. Remote Work: The rise of remote work could lead to a more globalized workforce, allowing startups to hire talented programmers from regions where wages are lower. This could put downward pressure on salaries.
3. Automation and AI: Advances in automation and AI could reduce the demand for certain programming skills, which could also lead to a decrease in salaries.
4. Economic Conditions: Broader economic conditions and market forces could also impact startup funding and, consequently, the salaries they can offer.
5. Regulation: Changes in regulation or labor laws could also impact how much startups can pay their employees.

In conclusion, while startup culture has contributed to the inflation of programmer salaries, several factors could potentially change this dynamic in the future. It's important for programmers and those in the tech industry to be aware of these potential changes and plan accordingly.

Considering the potential changes in programmer salaries due to market saturation, remote work, automation, AI, economic conditions, and regulation, how might these factors influence the strategies startups employ to attract and retain top programming talent in the future?

As these factors come into play, startups may need to adapt their strategies to continue attracting and retaining top programming talent. Here are some potential approaches:
1. Competitive Compensation: While cash compensation remains important, startups might need to get more creative with their packages. This could include more significant equity stakes, profit sharing, or performance bonuses.
2. Remote Work Flexibility: As remote work becomes more prevalent, offering flexibility in terms of location and hours could be a key differentiator. This not only widens the talent pool but also caters to the increasing desire for work-life balance.
3. Continuous Learning Opportunities: Given the rapid pace of technological change, providing opportunities for continuous learning and development could be a significant draw. This could involve sponsoring courses, providing time for self-directed learning, or hosting internal knowledge-sharing sessions.
4. Career Growth: Demonstrating clear career progression paths and providing opportunities for advancement can help attract and retain talent. This could involve clear job progression tiers, mentorship programs, or opportunities to lead projects or teams.
5. Strong Company Culture: A strong, positive company culture can be a significant draw for talent. This could involve promoting a collaborative environment, fostering a sense of purpose and mission, or demonstrating strong company values.
6. Job Security: While startups are inherently risky, providing some level of job security can be appealing. This could involve transparency around company performance, a strong business model, or even job guarantee schemes.
7. Emphasizing Impact: Many programmers are drawn to roles where they can make a significant impact. Startups can emphasize the potential for a candidate's work to directly contribute to the company's success.
8. Automation and AI: Startups could position themselves at the forefront of AI and automation, attracting talent interested in working with cutting-edge technology.
9. Regulation Compliance: Ensuring fair practices and compliance with any labor laws or regulations can also be a draw for talent, demonstrating that the startup is a responsible and ethical employer.

In conclusion, while the factors you mentioned may present challenges, they also provide opportunities for startups to rethink their strategies and continue attracting top programming talent.
a3e441f7-32ef-40ee-8e32-b1b5e6d28d58
StampyAI/alignment-research-dataset/alignmentforum
Alignment Forum
AI Alignment 2018-19 Review

Preamble
========

#### What this post is

This is a review post of public work in AI alignment over 2019, with some inclusions from 2018. It has this preamble (~700 words), a short version / summary (~1.6k words), and a long version (~8.3k words). It is available as a Google Doc [here](https://docs.google.com/document/d/1Fng1J_QPb7GEeLBMmWWfZOguw7yUTZot0egrCbKpVwk/edit#).

There are many areas of work that are relevant to AI alignment that I have barely touched on, such as interpretability, uncertainty estimation, adversarial examples, and assured autonomy, primarily because I have not been following these fields and wouldn’t be able to write a good summary of what has happened in them. I have also mostly focused on articles that provide some conceptual insight, and excluded or briefly linked to papers that primarily make quantitative improvements on important metrics. While such papers are obviously important (ultimately, our techniques need to work *well*), there isn’t much to say about them in a yearly review other than that the quantitative metric was improved.

Despite these exclusions, there was still a ton of work to select from, perhaps around ~500 articles, of which over 300 have been linked to in this post. There are many interesting articles that I really enjoyed that get only a sentence of description, in which I ignore many of the points that the article makes. Most have been summarized in the [Alignment Newsletter](https://rohinshah.com/alignment-newsletter), so if you’d like to learn more about any particular link, but don’t want to read the entire thing, just search for its title in the [database](https://docs.google.com/spreadsheets/d/1PwWbWZ6FPqAgZWOoOcXM8N_tUCuxpEyMbN1NYYC02aM/edit#gid=0).

#### What you should know about the structure of this post

I am *not* speaking for myself; by default I am trying to explain what has been said, in a way that the authors of the articles would agree with. Any extra opinion that I add will be in italics.

As a post, this is meant to be read sequentially, but the underlying structure is a graph (nodes are posts, edges connect posts that are very related). I arranged it in a sequence that highlights the most salient-to-me connections. This means that the order in which I present subtopics is very much *not* a reflection of what I think is most important in AI safety: in my presentation order, I focused on *edges* (connections) rather than *nodes* (subtopics).

Other minor details:

1. Any links from earlier than 2018 will have their year of publication right after the link (except for articles that were reposted as part of Alignment Forum sequences).
2. I typically link to blog posts; in several cases there is also an associated paper that I have not linked.

#### How to read this post

I have put the most effort into making the prose of the long version read smoothly. The hierarchical organization is comparatively less coherent; this is partly because I optimized the prose, and partly because AI safety work is hard to cluster. As a result, for those willing to put in the effort, I’d recommend reading the long version directly, without paying too much attention to the hierarchy. If you have less time, or are less interested in the minutiae of AI alignment research, the short version is for you.
Since I don’t name authors or organizations, you may want to take this as your opportunity to form beliefs about which arguments in AI alignment are important based on the ideas (as opposed to based on trust in the author of the post).

People who keep up with AI alignment work might want to know which posts I’m referencing as they read, which is a bit hard since I don’t name the posts in the text. If this describes you, you should be reading this post on the Alignment Forum, where you can hover over most links to see what they link to. Alternatively, the [references section in the Google Doc](https://docs.google.com/document/d/1Fng1J_QPb7GEeLBMmWWfZOguw7yUTZot0egrCbKpVwk/edit#heading=h.kr223qj1g1y5) lists all links in the order that they appear in the post, along with the hierarchical organization, and so you can open the references in a new tab, and read through the post and the references together.

I expect that if you aren’t already familiar with them, some articles will sound crazy from my summary here; please read at least the newsletter summary and ideally the full article before arguing that it’s crazy.

#### Acknowledgements

Thanks to the [Alignment Newsletter team](https://rohinshah.com/alignment-newsletter), Ben Pace, Oliver Habryka, Jonathan Uesato, Tom Everitt, Luke Muehlhauser, Jan Leike, Rob Bensinger, Adam Gleave, Scott Emmons, Rachel Freedman, Andrew Critch, Victoria Krakovna, and probably a few others (I really should have kept better track of this). Thanks especially to Ben Pace for suggesting that I write this review in the first place.

Short version (~1.6k words)
===========================

While the full text tries to accurately summarize different points of view, that is not a goal in this summary. Here I simply try to give a sense of the topics involved in the discussion, without saying what discussion actually happened.

**Basic analysis of AI risk.** Traditional arguments for AI risk argue that since agentic AI systems will apply lots of optimization, they will lead to extreme outcomes that can’t be handled with normal engineering efforts. Powerful AI systems will not have their resources stolen from them, which by various dutch book theorems implies that they must be expected utility maximizers; since expected utility maximizers are goal-directed, they are dangerous. However, the VNM theorem [does not justify](https://www.alignmentforum.org/s/4dHMdK5TLN6xcqtyc/p/NxF5G6CJiof6cemTw) the assumption that an AI system will be goal-directed: such an assumption is really based on intuitions and conceptual arguments (which are still quite strong). [Comprehensive AI Services](https://www.alignmentforum.org/posts/x3fNwSe5aWZb5yXEG/reframing-superintelligence-comprehensive-ai-services-as) (CAIS) challenges the assumption that we will have a single agentic AI, instead suggesting that any task will be performed by a collection of modular services. That being said, there are several other arguments for AI risk, such as the [argument](https://www.alignmentforum.org/posts/w6d7XBCegc96kz4n3/the-argument-from-philosophical-difficulty) that AI might cause “lock in” which may require us to solve hard philosophical problems before the development of AGI.
Nonetheless, there are [disjunctive reasons](https://www.alignmentforum.org/posts/QknPz9JQTQpGdaWDp/an-80-why-ai-risk-might-be-solved-without-additional) to expect that catastrophe does not occur: for example, there may not be a problem, or ML researchers may solve the problem after we get “warning shots”, or we could coordinate to not build unaligned AI.

**Agency and optimization.** One proposed problem is that of [mesa optimization](https://www.alignmentforum.org/s/r9tYkB2a8Fp4DN8yB), in which an optimization algorithm used to train an AI creates an agent that is *itself* performing optimization. In such a scenario, we need to ensure that the “inner” optimization is also aligned. To better understand these and other situations, it would be useful to have a formalization of optimization. This is [hard](https://www.alignmentforum.org/posts/26eupx3Byc8swRS7f/bottle-caps-aren-t-optimisers): while we don’t want optimization to be about our beliefs about a system, if we try to define it mechanistically, it becomes hard to avoid defining a bottle cap as an optimizer of “water kept in the bottle”. Understanding agents is another hard task. While agents are relatively well understood under the Cartesian assumption, where the agent is separate from its environment, things become much more complex and poorly-understood when the agent is a [part of its environment](https://www.alignmentforum.org/s/Rm6oQRJJmhGCcLvxh).

**Value learning.** Building an AI that learns all of human value has historically been thought to be very hard, because it requires you to decompose human behavior into the “beliefs and planning” part and the “values” part, and there’s no clear way to do this. Another way of looking at it is to say that value learning [requires](https://www.alignmentforum.org/s/4dHMdK5TLN6xcqtyc/p/h9DesGT3WT9u2k7Hr) a model that separates the given data into that which actually achieves the true “values” and that which is just “a mistake”, which seems hard to do. In addition, value learning seems quite fragile to mis-specification of this human model. Nonetheless, there are reasons for optimism. We could try to build an [*adequate* utility function](https://www.alignmentforum.org/posts/CSEdLLEkap2pubjof/research-agenda-v0-9-synthesising-a-human-s-preferences-into), which works well enough for our purposes. We can also have [*uncertainty* over the utility function](https://www.alignmentforum.org/posts/nd692YfFGfZDh9Mwz/an-69-stuart-russell-s-new-book-on-why-we-need-to-replace), and update the belief over time based on human behavior. If everything is specified correctly (a big if), as time goes on, the agent would become more and more aligned with human values. One major benefit of this is that it is *interactive* -- it doesn’t require us to specify everything perfectly ahead of time.
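A minimal sketch of that uncertainty-based, interactive approach (the reward hypotheses and the Boltzmann-rational human model below are invented for illustration): maintain a posterior over candidate utility functions and update it from observed human choices.

```python
import math

# Two hypothetical candidate reward functions over three options.
hypotheses = {
    "likes_apples": {"apple": 1.0, "banana": 0.0, "cake": 0.2},
    "likes_cake":   {"apple": 0.1, "banana": 0.0, "cake": 1.0},
}
posterior = {h: 0.5 for h in hypotheses}  # uniform prior
BETA = 3.0                                 # Boltzmann rationality coefficient

def likelihood(hypothesis, observed_choice):
    """P(human picks observed_choice | reward hypothesis), Boltzmann-rational."""
    rewards = hypotheses[hypothesis]
    exps = {a: math.exp(BETA * r) for a, r in rewards.items()}
    return exps[observed_choice] / sum(exps.values())

def update(observed_choice):
    """Bayesian update of the posterior from one observed human choice."""
    for h in posterior:
        posterior[h] *= likelihood(h, observed_choice)
    total = sum(posterior.values())
    for h in posterior:
        posterior[h] /= total

# Watching the human pick cake twice shifts belief toward "likes_cake".
for choice in ["cake", "cake"]:
    update(choice)
print(posterior)
```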
**Robustness.** We would like our agents to be robust -- that is, they shouldn’t fail catastrophically in situations slightly different from the ones they were designed for. Within reinforcement learning, safe reinforcement learning aims to avoid mistakes, even during training. This either requires analytical (i.e. not trial-and-error) reasoning about what a “mistake” is, which requires a formal specification of what a mistake is, or an overseer who can correct the agent before it makes a mistake. The classic example of a failure of robustness is adversarial examples, in which a tiny change to an image can drastically affect its classification. Recent research has shown that these examples are [caused](https://distill.pub/2019/advex-bugs-discussion/) (at least in part) by real statistical correlations that generalize to the test set, that are nonetheless fragile to small changes. In addition, since robustness to one kind of adversary doesn’t make the classifier robust to other kinds of adversaries, there has been a lot of work done on improving adversarial evaluation in image classification. We’re also seeing some of this work in reinforcement learning.

However, asking our agents to be robust to arbitrary mistakes seems to be too much -- humans certainly don’t meet this bar. For AI safety, it seems like we need to ensure that our agents are robustly [intent aligned](https://www.alignmentforum.org/s/EmDuGeRw749sD3GKd/p/ZeE7EKHTFMBs8eMxn), that is, they are always “trying” to do what we want. One particular way that our agents could be intent aligned is if they are [corrigible](https://www.alignmentforum.org/s/EmDuGeRw749sD3GKd/p/fkLYhTQteAu5SinAc), that is, they are trying to keep us “in control”. This seems like a particularly easy property to verify, as conceptually it seems to be independent of the domain in which the agent is deployed. So, we would like to ensure that even in the worst case, our agent remains corrigible. One [proposal](https://www.alignmentforum.org/posts/9Dy5YRaoCxH9zuJqa/relaxed-adversarial-training-for-inner-alignment) would be to train an adversary to search for “relaxed” situations in which the agent behaves incorrigibly, and then train the agent not to do that.

**Scaling to superhuman abilities.** If we’re building corrigible agents using adversarial training, our adversary should be more capable than the agent that it is training, so that it can find all the situations in which the agent behaves incorrigibly. This requires techniques that scale to superhuman abilities. Some techniques for this include [iterated amplification](https://www.alignmentforum.org/s/EmDuGeRw749sD3GKd) and [debate](https://arxiv.org/abs/1805.00899). In iterated amplification, we start with an initial policy, and alternate between amplification and distillation, which increase capabilities and efficiency respectively. This can encode a range of algorithms, but often amplification is done by decomposing questions and using the agent to answer subquestions, and distillation can be done using supervised learning or reinforcement learning. In debate, we train an agent through self-play in a zero-sum game in which the agent’s goal is to “win” a question-answering debate, as evaluated by a human judge. The hope is that since each “side” of the debate can point out flaws in the other side’s arguments, such a setup can use a human judge to train far more capable agents while still incentivizing them to provide honest, true information.

Both iterated amplification and debate aim to train an agent that approximates the answer that one would get from an exponentially large tree of humans deliberating. The [factored cognition](https://www.alignmentforum.org/s/EmDuGeRw749sD3GKd/p/DFkGStzvj3jgXibFG) hypothesis is that this sort of tree of humans is able to do any task we care about. This hypothesis is controversial: many have the intuition that cognition requires large contexts and flashes of intuition that couldn’t be replicated by a tree of time-limited humans.
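A toy sketch of the amplification/distillation loop just described (the domain, the decomposition rule, and memoization as a stand-in for training are all invented for illustration; in practice distillation would be supervised learning or RL on a neural policy):

```python
# Toy domain: a "question" is a tuple of integers and the right answer is
# its sum. The base model is only competent on length-1 questions; the
# "human" amplifies by decomposing a question into a shorter prefix plus
# its last element, and "distillation" is caricatured as memoization.

class Model:
    def __init__(self, memo=None):
        self.memo = dict(memo or {})

    def answer(self, question):
        if question in self.memo:
            return self.memo[question]
        return sum(question) if len(question) <= 1 else 0  # base competence

def amplify(model, question):
    """Human + model system: decompose, delegate subquestions, recombine."""
    if len(question) <= 1:
        return model.answer(question)
    return model.answer(question[:-1]) + model.answer(question[-1:])

def iterated_amplification(model, questions, n_rounds):
    for _ in range(n_rounds):
        targets = {q: amplify(model, q) for q in questions}  # amplification
        model = Model({**model.memo, **targets})             # "distillation"
    return model

questions = [tuple(range(n)) for n in range(1, 9)]
trained = iterated_amplification(Model(), questions, n_rounds=7)
print(trained.answer(tuple(range(8))))  # 28 -- beyond the base model's reach
```

Each round, the frontier of questions the model answers correctly extends by one decomposition step, mirroring how amplification bootstraps capability past the starting policy.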
**Universality.** One [property](https://www.alignmentforum.org/posts/M8WdeNWacMrmorNdd/towards-formalizing-universality) we would hope to have is that if we use this tree of humans as an overseer for some simpler agent, then the tree would “know everything the agent knows”. If true, this property could allow us to build a significantly stronger conceptual argument for safety. It is also very related to…

**Interpretability.** While interpretability can help us know what the agent knows, and what the agent would do in other situations (which can help us verify if it is corrigible), there are [other uses](https://www.alignmentforum.org/posts/X2i9dQQK3gETCyqh2/chris-olah-s-views-on-agi-safety) for it as well: in general, it seems better if we can understand the things we’re building.

**Impact regularization.** While relative reachability and attainable utility preservation were developed last year, this year saw them be [unified](https://medium.com/@deepmindsafetyresearch/designing-agent-incentives-to-avoid-side-effects-e1ac80ea6107) into a single framework. In addition, there was a new proposed [definition](https://www.alignmentforum.org/s/7CdoznhJaLEKHwvJW) of impact: change in our ability to get what we want. This notion of impact depends on knowing the utility function U. However, we might hope that we can penalize some “objective” notion, perhaps "power", that occurs regardless of the choice of U, for the same reasons that we expect instrumental convergence.
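A minimal sketch in the attainable-utility spirit (simplified; the auxiliary goals and Q-values below are made up): penalize an action by how much it shifts the agent's ability to achieve a set of auxiliary goals, measured against a no-op baseline.

```python
def aup_penalty(q_aux, state, action, noop="noop"):
    """Attainable-utility-style penalty (simplified sketch): the average,
    over auxiliary goals, of how much `action` changes the attainable
    value of that goal relative to doing nothing.

    q_aux: dict mapping auxiliary-goal name -> Q-function Q(state, action).
    """
    diffs = [abs(q(state, action) - q(state, noop)) for q in q_aux.values()]
    return sum(diffs) / len(diffs)

def shaped_reward(task_reward, q_aux, state, action, lam=1.0):
    """Task reward minus the scaled impact penalty."""
    return task_reward(state, action) - lam * aup_penalty(q_aux, state, action)

# Tiny made-up example: breaking a vase destroys the option of ever having
# an intact vase, so it moves one auxiliary Q-value a lot; walking doesn't.
q_aux = {
    "vase_intact": lambda s, a: 0.0 if a == "break_vase" else 1.0,
    "door_open":   lambda s, a: 0.5,
}
print(aup_penalty(q_aux, "room", "break_vase"))  # 0.5 -- heavily penalized
print(aup_penalty(q_aux, "room", "walk"))        # 0.0 -- low impact
```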
**Causal modeling.** Causal models have been used recently to [model](https://arxiv.org/abs/1906.08663) the incentives for an agent under different AI safety frameworks, and to [argue](https://medium.com/@deepmindsafetyresearch/designing-agent-incentives-to-avoid-reward-tampering-4380c1bb6cd) that by evaluating plans with the current reward function, you can remove the incentive for an agent to tamper with its reward function.

**Oracles.** Even if oracles are trying to maximize predictive accuracy, they could “choose” between different self-confirming predictions. We could avoid this using counterfactual oracles, which make predictions under the condition that their predictions do not influence the future.

**Decision theory.** There was work on decision theory that I haven’t followed very much.

**Forecasting.** Several resources were developed to enable effective group forecasting, including an [AI forecasting dictionary](https://www.lesswrong.com/posts/8y7DcSF4eAkXoru4u/ai-forecasting-dictionary-forecasting-infrastructure-part-1-2) that defines terms, an [AI resolution council](https://www.lesswrong.com/posts/9G6CCNXkA7JZoorpY/ai-forecasting-resolution-council-forecasting-infrastructure) whose future opinions can be predicted, and a dataset of well-constructed [exemplar questions](https://www.lesswrong.com/posts/yy3FCmdAbgSLePD7H/how-to-write-good-ai-forecasting-questions-question-database) about AI. Separately, the debate over takeoff speeds continued, with [two](https://sideways-view.com/2018/02/24/takeoff-speeds/) [posts](https://aiimpacts.org/likelihood-of-discontinuous-progress-around-the-development-of-agi/) arguing forcefully for continuous takeoff, [without much response](https://www.lesswrong.com/posts/PzAnWgqvfESgQEvdg/any-rebuttals-of-christiano-and-ai-impacts-on-takeoff-speeds) (although many researchers do not agree with them). The continuity of takeoff is relevant to, but doesn’t completely determine, whether recursive self-improvement will happen, or whether some actor acquires a decisive strategic advantage. The primary implication of the debate is whether we should expect that we will have enough time to react and fix problems as they arise. It has also become clearer that recent progress in AI has been driven to a significant degree by increasing the [amount of compute](https://openai.com/blog/ai-and-compute/) devoted to AI, which suggests a more continuous takeoff. You could take the position that current methods can’t do <property X> (say, causal reasoning), and so it doesn’t matter how much compute you use.

**AI Progress.** There was a lot of progress in AI.

**Field building.** There were posts aiming to build the field, but they were all fairly disjointed.

The **long version** (~8.3k words) starts here.

Basic analysis of AI risk
=========================

Agentic AI systems
------------------

Much of the foundational writing about AI risk has focused on agentic AI systems. This approach (recently discussed in the post and comments [here](https://www.lesswrong.com/posts/mdau2DBSMi5bWXPGA/useful-does-not-mean-secure)) argues that since AI agents will be exerting a lot of optimization, there will be [extreme outcomes](https://www.lesswrong.com/posts/zEvqFtT4AtTztfYC4/optimization-amplifies) in which our regular arguments may not work. This implies that we must adopt a [security mindset](https://www.lesswrong.com/posts/8gqrbnW758qjHFTrH/security-mindset-and-ordinary-paranoia) (2017) to ensure alignment, and it suggests that proof-level guarantees may be [more important](https://www.alignmentforum.org/posts/aPwNaiSLjYP4XXZQW/ai-alignment-open-thread-august-2019#3TKtFmF3ZcQFgQNsA) at various stages of alignment research.

#### Goal-directedness

The foundational writing then goes on to point out that since powerful AI systems should not be able to be dutch booked (i.e. have their resources stolen from them), they will be well [modeled](https://www.lesswrong.com/posts/RQpNHSiWaXTvDxt6R/coherent-decisions-imply-consistent-utilities) (2017) as expected utility maximisers. An AI system that maximizes expected utility is very likely to be dangerous. One reason was [recently](https://www.alignmentforum.org/posts/6DuJxY8X45Sco4bS2/seeking-power-is-provably-instrumentally-convergent-in-mdps) [formalized](https://www.alignmentforum.org/posts/cwpKagyTvqSyAJB7q/clarifying-power-seeking-and-instrumental-convergence) in MDPs in which the agent gets a *random* utility function: using formalizations of power and instrumental convergence, we find some suggestive results that agents seek control over their future (from which we might infer that they will try to wrest that control from us). However, it is [not mathematically necessary](https://www.alignmentforum.org/s/4dHMdK5TLN6xcqtyc/p/NxF5G6CJiof6cemTw) that AI systems will have utility functions (except in a vacuous sense), and while there are [intuitive and conceptual reasons](https://www.alignmentforum.org/s/4dHMdK5TLN6xcqtyc/p/9zpT9dikrrebdq3Jf) to think that we will build [goal-directed agents](https://www.alignmentforum.org/s/4dHMdK5TLN6xcqtyc/p/DfcywmqRSkBaCB6Ma) by default, there are [alternative pathways](https://www.alignmentforum.org/s/4dHMdK5TLN6xcqtyc/p/tHxXdAn8Yuiy9y2pZ) that might be taken instead, and that are valuable to explore and build out to ensure AI safety.
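The flavor of those formal results can be reproduced in a few lines. The toy MDP below is in the spirit of, not a reproduction of, the linked formalization: for reward functions drawn at random, the action that keeps more states reachable is optimal more often than not.

```python
import random

# Toy MDP: from the start state, action "narrow" commits you to terminal
# state 0, while action "broad" keeps terminal states 1 and 2 reachable
# (more "power"). Sample random reward functions over the terminal states
# and count how often each action is optimal.

random.seed(0)
trials, broad_wins = 100_000, 0
for _ in range(trials):
    r = [random.random() for _ in range(3)]  # random reward per terminal state
    narrow_value = r[0]                      # stuck with state 0
    broad_value = max(r[1], r[2])            # can still steer toward 1 or 2
    broad_wins += broad_value > narrow_value

print(broad_wins / trials)  # ~2/3: keeping options open is optimal for
                            # "most" randomly drawn goals
```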
This challenge to the usual argument for utility maximizers has [prompted](https://www.alignmentforum.org/posts/pLZ3bdeng4u5W8Yft/let-s-talk-about-convergent-rationality-1) [a](https://www.alignmentforum.org/posts/vphFJzK3mWA4PJKAg/coherent-behaviour-in-the-real-world-is-an-incoherent) [series](https://www.alignmentforum.org/posts/4K52SS7fm9mp5rMdX/three-ways-that-sufficiently-optimized-agents-appear) [of](https://www.alignmentforum.org/posts/Q9JKKwSFybCTtMS9d/what-are-we-assuming-about-utility-functions) [articles](https://www.lesswrong.com/posts/EnN7cm3KaRrEAuWfa/comment-on-coherence-arguments-do-not-imply-goal-directed) exploring other variants of the argument, for example by [restricting](https://www.alignmentforum.org/posts/yGuo5R9fgrrFLYWuv/when-do-utility-functions-constrain-1) the class of utility functions to make it non-vacuous, or by saying that [optimization processes](https://www.alignmentforum.org/posts/vphFJzK3mWA4PJKAg/coherent-behaviour-in-the-real-world-is-an-incoherent#F2YB5aJgDdK9ZGspw) in general will lead to goal-directed agents.

#### Comprehensive AI Services

[Comprehensive AI Services](https://www.alignmentforum.org/posts/x3fNwSe5aWZb5yXEG/reframing-superintelligence-comprehensive-ai-services-as) (CAIS) also takes issue with the model of a single AGI agent hyper-competently pursuing some goal, and instead proposes a model in which different tasks are solved by specialized, competing *AI services*. This suggests that modularity across tasks is sufficiently useful that it will apply to AI, in the same way that it applies to humans (e.g. I have specialized in AI research, and not plumbing). The aggregate of all the services can accomplish any task, including the development of new services, making it comprehensive (analogous to the “general” in AGI). Since AI services can also do basic AI R&D research, which leads to improvement in AI services generally, we should expect recursive *technological* improvement (as opposed to recursive *self* improvement). Note that CAIS does not necessarily suggest we will be *safe*, just that the traditional risks are not as likely as we may have thought, while other emergent risks are perhaps greater.

[Critics](https://www.alignmentforum.org/posts/HvNAmkXPTSoA4dvzv/comments-on-cais) [often](https://www.lesswrong.com/posts/bXYtDfMTNbjCXFQPh/drexler-on-ai-risk) [argue](https://www.alignmentforum.org/posts/x3fNwSe5aWZb5yXEG/reframing-superintelligence-comprehensive-ai-services-as#sXHXAfSKWPyEMhtcu) that end-to-end training and integrated agent-like architectures are likely to (eventually) outperform modular services. However, through coordination services can also be integrated. In addition, this [post](https://www.overcomingbias.com/2019/02/how-lumpy-ai-services.html) argues that this criticism mirrors old concerns that under capitalism firms will become too large -- a concern that the post argues did not pan out. CAIS *does* allow for AI systems that are capable of *learning* across many domains: it simply argues that these AI systems will specialize for efficiency reasons, and so will only be *competent* at a small subset of domains. This decomposition of intelligence into learning + competence has been used to explain the [variation in human abilities](https://www.lesswrong.com/posts/ZwSrTsP3YkgnmHWnJ/two-explanations-for-variability-in-human-abilities). (This conversation is related to much prior conversation on Tool AI, which is listed [here](https://www.lesswrong.com/posts/jkxkMTGfZDzBEaaY8/why-not-tool-ai#zECozzvnPz7XKvLLc).)
Arguments for AI risk
---------------------

There are many arguments for AI risk, with each of [these](https://www.alignmentforum.org/posts/JbcWQCxKWn3y49bNB/disentangling-arguments-for-the-importance-of-ai-safety) [posts](https://www.alignmentforum.org/posts/WXvt8bxYnwBYpy9oT/the-main-sources-of-ai-risk-1) providing a list of such arguments. It is unclear whether from an outside perspective this should be taken as evidence *against* AI risk (since different researchers believe different arguments and are aiming for different [“success stories”](https://www.alignmentforum.org/posts/bnY3L48TtDrKTzGRb/ai-safety-success-stories)) or as evidence *for* AI risk (because there are so many different sources of AI risk).

One argument that saw a lot of discussion was that we must [figure out philosophy](https://www.alignmentforum.org/posts/w6d7XBCegc96kz4n3/the-argument-from-philosophical-difficulty) since the creation of AGI might “lock in” philosophical ideas. For example, we might [not want](https://arxiv.org/abs/1901.00064) to have AI systems with utility functions because of impossibility results in population ethics that suggest that every utility function would lead to some counterintuitive conclusion. Similarly, there are [many proposals](https://www.alignmentforum.org/posts/W95gbuognJu5WxkTW/the-value-definition-problem) for how to define values; it may be necessary to figure out the right definition ahead of time. Rather than solving these problems directly, we could solve [metaphilosophy](https://www.alignmentforum.org/posts/EByDsY9S3EDhhfFzC/some-thoughts-on-metaphilosophy), or delegate to humans who [deliberate](https://www.alignmentforum.org/posts/ebdf8GZxt3L9grwwN/deliberation-as-a-method-to-find-the-actual-preferences-of), whether [idealized or real](https://www.alignmentforum.org/posts/vbtvgNXkufFRSrx4j/three-ai-safety-related-ideas). We might also worry that AIs will economically outcompete humans, give us technologies we [aren’t ready for](https://www.alignmentforum.org/posts/vbtvgNXkufFRSrx4j/three-ai-safety-related-ideas), or amplify [human](https://www.lesswrong.com/posts/6RjL996E8Dsz3vHPk/two-more-decision-theory-problems-for-humans) [vulnerabilities](https://www.alignmentforum.org/posts/HTgakSs6JpnogD6c2/two-neglected-problems-in-human-ai-safety).

Under [continuous takeoff](https://sideways-view.com/2018/02/24/takeoff-speeds/), two scenarios have been proposed for [what failure looks like](https://www.alignmentforum.org/posts/HBxe6wdjxK239zajf/what-failure-looks-like). First, AI differentially improves society’s capability to optimize metrics that are easy to measure, rather than ones that we actually care about. Second, AI agents could accidentally be trained to seek influence, and then fail catastrophically at some point in the future once they are sufficiently capable. One [critique](http://www.overcomingbias.com/2019/04/agency-failure-ai-apocalypse.html) argues that these principal-agent problems only lead to bounded losses (i.e. they aren’t catastrophic), but [several](https://www.alignmentforum.org/posts/92J4zJHkqmXTduxzY/and-the-ai-would-have-got-away-with-it-too-if) [others](https://www.lesswrong.com/posts/ktDKfKqukTPRiuEPM/robin-hanson-on-the-futurist-focus-on-ai#W4Fr8WyoD6AtM4eEF) [disagree](https://www.lesswrong.com/posts/ktDKfKqukTPRiuEPM/robin-hanson-on-the-futurist-focus-on-ai#fecYAwjmMSZ9KRfPL).
[This post](https://www.alignmentforum.org/posts/hubbRt4DPegiA5gRR/a-shift-in-arguments-for-ai-risk) argues that there has been a shift in the arguments that motivate new AI risk researchers, and calls for more explanation of these arguments so that they can be properly evaluated.

Arguments against AI risk
-------------------------

Many views that expect the problem to be solved by default have also been written up this year. A [series](https://aiimpacts.org/conversation-with-paul-christiano/) [of](https://aiimpacts.org/conversation-with-rohin-shah/) [four](https://aiimpacts.org/conversation-with-robin-hanson/) [conversations](https://aiimpacts.org/conversation-with-adam-gleave/) (summarized [here](https://www.alignmentforum.org/posts/QknPz9JQTQpGdaWDp/an-80-why-ai-risk-might-be-solved-without-additional)) suggested that some engaged people expect AI to go well by default, because they are unconvinced by the traditional arguments for AI risk, find discontinuities in AI capabilities relatively unlikely, and are hopeful that there will be “warning shots” that demonstrate problems, which the existing ML community will then successfully fix. One [post](https://www.alignmentforum.org/posts/bd2K3Jdz82csjCFob/a-list-of-good-heuristics-that-the-case-for-ai-x-risk-fails) lists several good outside-view heuristics that argue against AI x-risk, while [another](https://www.alignmentforum.org/posts/xzFQp7bmkoKfnae9R/but-exactly-how-complex-and-fragile) questions why value being complex and fragile must lead to high AI risk.

[This talk](https://forum.effectivealtruism.org/posts/9sBAW3qKppnoG3QPq/how-sure-are-we-about-this-ai-stuff) argues that while AGI will intuitively be a big deal, it’s not obvious that we can affect its impact, and so it’s not obvious that longtermists should focus on it. It gives an analogy to trying to influence the impact of electricity, before electricity was commonplace, and suggests there was little impact one could have had on its safe use. It argues that accident risks in particular draw on fuzzy, intuitive concepts, haven’t been engaged with much by critics, and don’t sway most AI researchers.

Despite the seeming controversy in this and previous sections, it is worth noting that there is general agreement within the AI safety community on the following broader argument for work on AI safety:

1. Superhuman agents are not *required* to treat humans well, in the same way that humans aren’t required to treat gorillas well.
2. You should have a good *technical* reason to expect that superhuman agents *will* treat humans well.
3. We do not currently have such a reason.

Agency and optimization
=======================

Mesa optimization
-----------------

The problem of [mesa optimization](https://www.alignmentforum.org/s/r9tYkB2a8Fp4DN8yB) was explained in significantly more detail (see also this less formal [summary](https://www.alignmentforum.org/posts/bG4PR9uSsZqHg2gYY/utility-reward)). In mesa optimization, we start with a *base optimizer* like gradient descent that searches for a policy that accomplishes some complex task. For sufficiently complex tasks, it seems likely that the best policy will *itself* be an optimizer. (Meta learning is explicitly trying to learn policies that are also optimizers.) However, the policy could be optimizing a different goal, called the *mesa objective*, rather than the *base objective*.
Optimizing the mesa objective must lead to good base objective behavior on the training distribution (else gradient descent would not select it), but could be arbitrarily bad off distribution. For example, a plausible mesa objective would be to seek influence: such an agent would initially do what we want it to do (since otherwise we would shut it down), but might turn against us once it has accumulated enough power. This decomposes the overall alignment problem into *outer alignment* (ensuring that the base objective is aligned with “what we want”) and *inner alignment* (ensuring that the mesa objective is aligned with the base objective). This is somewhat [analogous](https://www.alignmentforum.org/posts/yXPT4nr4as7JvxLQa/classifying-specification-problems-as-variants-of-goodhart-s) to different [types](https://www.alignmentforum.org/posts/EbFABnst8LsidYs5Y/goodhart-taxonomy) (2017) of Goodhart’s law. The paper and [subsequent](https://www.alignmentforum.org/posts/iydwbZhATANhjoGP7/more-variations-on-pseudo-alignment) [analysis](https://www.alignmentforum.org/posts/uXH4r6MmKPedk8rMA/gradient-hacking) identify and categorize relationships between the base and mesa objectives, and explain how mesa optimizers could fail catastrophically. Of particular interest is that mesa optimizers should be fast, but could still be misaligned, suggesting that [penalizing compute](https://www.alignmentforum.org/posts/nyCHnY7T5PHPLjxmN/open-question-are-minimal-circuits-daemon-free) is [not enough](https://www.alignmentforum.org/posts/fM5ZWGDbnjb7ThNKJ/are-minimal-circuits-deceptive) to solve inner alignment.

Effectively, the concern is that our AI systems will have capabilities that generalize, but objectives that [don’t](https://www.alignmentforum.org/posts/2mhFMgtAjFJesaSYR/2-d-robustness). Since this is what drives risk, some [suggest](https://www.alignmentforum.org/posts/nFDXq7HTv9Xugcqaw/is-the-term-mesa-optimizer-too-narrow) that we should talk about this phenomenon without needing to bring in the baggage of “optimization”, a term we have yet to understand well, while others [argue](https://www.alignmentforum.org/posts/zCcmJzbenAXu6qugS/tabooing-agent-for-prosaic-alignment) that even if we start with this definition, it would be useful to reintroduce the notions of optimization and agency. One advantage of the original definition is that it specifies a particular mechanism by which risk arises; this gives us a foothold into the problem that allows us to propose [potential](https://www.alignmentforum.org/posts/9Dy5YRaoCxH9zuJqa/relaxed-adversarial-training-for-inner-alignment) [solutions](https://www.alignmentforum.org/posts/Ca3sCRGfWvXvYC5YC/what-are-some-non-purely-sampling-ways-to-do-deep-rl) [and](https://www.alignmentforum.org/posts/uSdPa9nrSgmXCtdKN/concrete-experiments-in-inner-alignment) [empirical](https://www.alignmentforum.org/posts/AFdRGfYDWQqmkdhFq/a-simple-environment-for-showing-mesa-misalignment) [investigations](https://www.alignmentforum.org/posts/2GycxikGnepJbxfHT/towards-an-empirical-investigation-of-inner-alignment). Of course, this is actively counterproductive if the risk arises by some [other mechanism](https://www.alignmentforum.org/posts/osxNg6yBCJ4ur9hpi/does-agent-like-behavior-imply-agent-like-architecture), but we might expect optimization to be especially likely because optimization algorithms are simple, and the phenomenon of [double descent](https://www.alignmentforum.org/posts/FRv7ryoqtvSuqBxuT/understanding-deep-double-descent) [suggests](https://www.alignmentforum.org/posts/nGqzNC6uNueum2w8T/inductive-biases-stick-around) that neural nets have an inductive bias towards simplicity.
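To make the outer/inner split concrete, here is a toy illustration (with made-up numbers, not anything from the paper) of a proxy mesa objective that matches the base objective everywhere on the training distribution but comes apart off-distribution:

```python
# Toy mesa misalignment: the learned policy optimizes a proxy that is
# indistinguishable from the base objective in training, but not in deployment.
train_states = [1, 2, 3, 4]
test_states = [20, 30, 40]

base_objective = lambda s, a: -abs(a - s)          # intended: match the state
mesa_objective = lambda s, a: -abs(a - min(s, 5))  # proxy: match min(state, 5)

def policy(s):
    # The policy that gradient descent found optimizes the *mesa* objective...
    return max(range(50), key=lambda a: mesa_objective(s, a))

# ...which looks perfectly outer-aligned on the training distribution:
print([base_objective(s, policy(s)) for s in train_states])  # [0, 0, 0, 0]
# ...but is arbitrarily bad off-distribution:
print([base_objective(s, policy(s)) for s in test_states])   # [-15, -25, -35]
```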
What are optimization and agency, anyway?
-----------------------------------------

Given the central importance of optimization to inner alignment and AI safety more broadly, we’d like to be able to formalize it. However, it’s not clear how to do so: while we want optimization to be about the *mechanical process* by which outcomes happen (as opposed to e.g. *our beliefs* about that process), we cannot simply say that X is an optimizer if it makes some quantity go up: by this definition, a [bottle cap](https://www.alignmentforum.org/posts/26eupx3Byc8swRS7f/bottle-caps-aren-t-optimisers) would be an optimizer for “keeping water in the bottle”. It is also relevant how the system [interacts](https://www.alignmentforum.org/posts/rvxcSc6wdcCfaX6GZ/two-senses-of-optimizer) with its environment, rather than just being about whether some number is going up. The type of computation matters: while older models of optimization involve an agent that can search over possible actions and simulate their results, other optimization processes must [control](https://www.alignmentforum.org/posts/ZDZmopKquzHYPRNxq/selection-vs-control) their environment without being able to simulate the consequences of their choice.

Our use of the word “agency” [might](https://www.alignmentforum.org/posts/ZigRhB4pAGdr6beQh/deconfuse-yourself-about-agency) [be](https://www.alignmentforum.org/s/4dHMdK5TLN6xcqtyc/p/DfcywmqRSkBaCB6Ma) tied to our models or specific human architectures, rather than being a general concept that could describe a mechanical property of a computation. This would be particularly worrying since it would mean that arguments for AI risk are based on our flawed models of reality, rather than an objective property about reality. However, this is extremely speculative.
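Here is a minimal sketch (my own toy, not from the posts above) of the selection/control distinction: both systems push a temperature toward a target, but only the first searches over simulated outcomes:

```python
# Selection vs control: two different "types of computation" for optimization.
def selector(options, simulate, score):
    # Selection: evaluate imagined outcomes of each option, then pick the best.
    return max(options, key=lambda o: score(simulate(o)))

def thermostat(temperature, target=20.0):
    # Control: a fixed feedback rule steers the world toward the target
    # without ever simulating or comparing alternative futures.
    return "heat" if temperature < target else "off"

best_power = selector(
    [0.0, 0.5, 1.0],
    simulate=lambda power: 15 + 10 * power,   # imagined resulting temperature
    score=lambda temp: -abs(temp - 20.0),
)
print(best_power)        # 0.5: chosen by search over simulated consequences
print(thermostat(18.3))  # "heat": chosen by a rule, with no simulation at all
```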
Embedded agency
---------------

Discussions about AI usually assume a notion of the “actions” that an agent can take. However, the [embedded agency](https://www.alignmentforum.org/s/Rm6oQRJJmhGCcLvxh) sequence points out that this “Cartesian boundary” does not actually exist: since any real agent is embedded in the real world, you cannot make many assumptions that are common in reinforcement learning, such as dedicated and perfectly trusted input-output channels, a perfect model of the environment, an agent architecture that is uninfluenced by the environment, etc. This means you can never consider all of the important information, and optimize everything that could be optimized. This has led to a couple of hypotheses:

1. Real learning algorithms require [modeling assumptions](https://www.alignmentforum.org/posts/Ajcq9xWi2fmgn8RBJ/the-credit-assignment-problem) to solve the credit assignment problem, and so can only lead to [partial agency](https://www.alignmentforum.org/posts/4hdHto3uHejhY2F3Q/partial-agency) or [myopia](https://www.alignmentforum.org/posts/qpZTWb2wvgSt5WQ4H/defining-myopia). (See also this [parable](https://www.alignmentforum.org/posts/SwcyMEgLyd4C3Dern/the-parable-of-predict-o-matic) and [associated thoughts](https://www.alignmentforum.org/posts/25288usP5B5ytnzA4/random-thoughts-on-predict-o-matic).)
2. Embedded agency works via [abstraction](https://www.alignmentforum.org/posts/hLFD6qSN9MmQxKjG5/embedded-agency-via-abstraction), which is the key idea allowing you to [make maps](https://www.alignmentforum.org/posts/t5DFpygMqpnFsmJ3b/cartographic-processes) that are smaller than the territory.

Value learning
==============

Descriptive embedded agency
---------------------------

While the embedded agency sequence is written from the perspective of *prescribing* how ideal agents should operate, we could also aim for a theory that can *describe* real agents like humans. This involves making your theory of agency correspondingly broader: for example, moving from utility functions to [markets](https://www.alignmentforum.org/posts/WmNeCipNwg9CmGy3T/markets-are-universal-for-logical-induction) or [subagents](https://www.alignmentforum.org/posts/3xF66BNSC5caZuKyC/why-subagents), which are more general. The development of such a theory is more grounded in concrete real systems, and more likely to generate theoretical insight or counterexamples, [making it a good research meta-strategy](https://www.alignmentforum.org/posts/9pZtvjegYKBALFnLk/characterizing-real-world-agents-as-a-research-meta-strategy). Such a theory would be [useful](https://www.alignmentforum.org/posts/zQZcWkvEA8DLjKR7C/theory-of-ideal-agents-or-of-existing-agents) so that we can build AI systems that can model humans and human values while avoiding [embedded agency problems with humans](https://www.alignmentforum.org/posts/WJzsTmsDctYCCyMfy/humans-are-embedded-agents-too).

The difficulty of value learning
--------------------------------

Even if we ignore problems of embedded agency, there are obstacles to value learning. For example, there [need not be](https://www.alignmentforum.org/posts/AeHtdxHheMjHredaq/what-you-see-isn-t-always-what-you-want) a reward function over observations that leads to what we want in a POMDP (though we could instead focus on [instrumental reward functions](https://www.alignmentforum.org/posts/aAzApjEpdYwAxnsAS/reinforcement-learning-with-imperceptible-rewards) defined on states). Another key problem is that all you ever get to observe is behavior; this then needs to be decomposed into “beliefs” and “values”, but there is [no clear criterion](https://arxiv.org/abs/1712.05812) (2017) that separates them (although it [hasn’t been proven](https://www.alignmentforum.org/posts/pHWTNMESuAEjZg2Qn/occam-s-razor-may-be-sufficient-to-infer-the-preferences-of) that simplicity doesn’t work, and [human](https://www.alignmentforum.org/posts/2KLz6RQWkCj4Rozrk/is-my-result-wrong-maths-vs-intuition-vs-evolution-in) [priors](https://www.alignmentforum.org/posts/cnjWN4mzmWzggRnCJ/practical-consequences-of-impossibility-of-value-learning) [help](http://arxiv.org/abs/1905.09397)). This suggests that [ambitious value learning](https://www.alignmentforum.org/s/4dHMdK5TLN6xcqtyc/p/5eX8ko7GCxwR5N9mN), in which you identify the one true utility function, is hard.

Human models
------------

For an agent to outperform the process generating its data, it [must](https://www.alignmentforum.org/s/4dHMdK5TLN6xcqtyc/p/h9DesGT3WT9u2k7Hr) understand the ways in which that process makes mistakes.
So, to outperform humans at a task given only human demonstrations of that task, you need to detect human mistakes in the demonstrations. Modeling humans to this fidelity is an [unsolved problem](https://www.alignmentforum.org/posts/upP8PYgHfXgvgh3FF/training-human-models-is-an-unsolved-problem), though there is a little [progress](https://www.alignmentforum.org/posts/xxnPxELC4jLKaFKqG/learning-biases-and-rewards-simultaneously), and we might hope that we can [make assumptions](https://www.alignmentforum.org/s/4dHMdK5TLN6xcqtyc/p/EhNCnCkmu7MwrQ7yz) about the structure of the model. Any such model is likely to be misspecified, and value learning algorithms are not currently [robust](https://www.alignmentforum.org/s/4dHMdK5TLN6xcqtyc/p/gnvrixhDfG7S2TpNL) to [misspecification](https://www.alignmentforum.org/s/4dHMdK5TLN6xcqtyc/p/cnC2RMWEGiGpJv8go): in one case, the simpler but less conceptually accurate model is [more robust](https://arxiv.org/abs/1903.03877).

You might hope that if we give up on *outperforming* humans and just *imitate* them, this would be [safe](https://www.alignmentforum.org/posts/LTFaD96D9kWuTibWr/just-imitate-humans). Even this is controversial, because perhaps humans themselves are [unsafe](https://www.alignmentforum.org/posts/vbtvgNXkufFRSrx4j/three-ai-safety-related-ideas#1__AI_design_as_opportunity_and_obligation_to_address_human_safety_problems), maybe imitating humans [leads](https://www.lesswrong.com/posts/whRPLBZNQm3JD5Zv8/imitation-learning-considered-unsafe) to mesa optimization, or possibly perfect imitation is [too hard](https://www.alignmentforum.org/posts/S7csET9CgBtpi7sCh/challenges-to-christiano-s-capability-amplification-proposal) to achieve. You might also hope that AI systems have good enough models that you can simply provide [natural language instructions](https://www.alignmentforum.org/posts/Bxxh9GbJ6WuW5Hmkj/what-s-the-dream-for-giving-natural-language-commands-to-ai) and the AI does what you mean.

The presence of human models in an AI system has a few [unfortunate effects](https://www.alignmentforum.org/posts/BKjJJH2cRpJcAnP7T/thoughts-on-human-models):

1. We can’t test an AI system by seeing if it agrees with human judgment, because the AI system may be using its human model to (in the short term) optimize for agreement with human judgment.
2. A bug in the code is more likely to optimize for suffering (since the human model would include the concept of suffering).
3. If humans are modeled with sufficient fidelity, these models may themselves be conscious and capable of suffering.

Learning an adequate utility function
-------------------------------------

Despite the objections that learning values is hard, it seems like humans are pretty good at learning the values of other humans, even if not perfect. Perhaps we could replicate this, in order to learn an *adequate* utility function that leads to okay outcomes? The main issue is that we are only good at predicting human values in *normal* situations, while powerful AI systems will likely put us in extreme situations where we will disagree much more about values. As a result, we need a [theory](https://www.alignmentforum.org/posts/zvrZi95EHqJPxdgps/why-we-need-a-theory-of-human-values) of human values that defines what to do in these situations.
One [theory](https://www.alignmentforum.org/posts/qezBTig6p6p5xtL6G/a-theory-of-human-values), associated [value learning agenda](https://www.alignmentforum.org/posts/CSEdLLEkap2pubjof/research-agenda-v0-9-synthesising-a-human-s-preferences-into), and [toy model](https://www.alignmentforum.org/posts/hcrFxeYYfbFrkKQEJ/full-toy-model-for-preference-learning) propose that we can extract partial preferences from human mental models, and synthesize them into a full utility function, while respecting meta-preferences about preferences and the synthesis process and taking care to [properly](https://www.alignmentforum.org/posts/qudmaMyRuQk2pHxtj/normalising-utility-as-willingness-to-pay) [normalize](https://www.alignmentforum.org/posts/GfMGa9e79AfDMLj36/best-utility-normalisation-method-to-date) [utilities](https://www.alignmentforum.org/posts/5bd75cc58225bf06703753ef/intertheoretic-utility-comparison-examples). In fact, the [core pieces](https://www.alignmentforum.org/posts/m2bwD87ctjJDXC3SZ/ultra-simplified-research-agenda) of such an approach seem [necessary](https://www.alignmentforum.org/posts/TR3eqQ2fnfKWzxxHL/research-agenda-in-reverse-what-would-a-solution-look-like) for any solution to the problem. [However](https://www.alignmentforum.org/posts/GHNokcgERpLJwJnLW/some-comments-on-stuart-armstrong-s-research-agenda-v0-9), this research agenda depends upon solving many hard problems explicitly in a human-understandable way, which doesn’t jibe with the [bitter lesson](http://www.incompleteideas.net/IncIdeas/BitterLesson.html) that ML progress primarily happens by using more compute to solve harder problems. *I don’t agree that the core pieces identified in this research agenda must be solved before creating powerful AI, nor that we must have explicit solutions to the problems.*

Uncertainty over the utility function
-------------------------------------

We could also make the AI uncertain about the utility function, and ensure that it has a way to learn about the utility function that is grounded in human behavior. Then, as an instrumental goal for maximizing expected reward, the AI will choose actions with high [expected information gain](https://www.alignmentforum.org/posts/Pkr97mB9Y4rkx5DdZ/utility-uncertainty-vs-expected-information-gain). While this was proposed [earlier](https://arxiv.org/abs/1606.03137) (2016), the book [Human Compatible](https://www.amazon.com/Human-Compatible-Artificial-Intelligence-Problem-ebook/dp/B07N5J5FTS) ([summary](https://www.alignmentforum.org/posts/nd692YfFGfZDh9Mwz/an-69-stuart-russell-s-new-book-on-why-we-need-to-replace), [podcast 1](https://futureoflife.org/2019/10/08/ai-alignment-podcast-human-compatible-artificial-intelligence-and-the-problem-of-control-with-stuart-russell/), [podcast 2](https://futureoflife.org/2019/01/17/cooperative-inverse-reinforcement-learning-with-dylan-hadfield-menell/), [interview](https://www.vox.com/future-perfect/2019/10/26/20932289/ai-stuart-russell-human-compatible)) explores the idea in much more detail than previous writing, and it has now [made its way](https://interactive-learning.github.io/) into deep reinforcement learning as well.
Intuitively, since the AI is [uncertain](https://www.alignmentforum.org/s/4dHMdK5TLN6xcqtyc/p/ZiLLxaLB5CCofrzPp) about the true reward, it will behave conservatively and try to learn about the true reward, thus [avoiding](https://www.alignmentforum.org/posts/PADPJ3xac5ogjEGwA/defeating-goodhart-and-the-closest-unblocked-strategy) Goodhart’s law (see also [fuzziness](https://www.alignmentforum.org/posts/QJwnPRBBvgaeFeiLR/uncertainty-versus-fuzziness-versus-extrapolation-desiderata)). Of course, once the AI has learned everything there is to learn, it will [behave](https://arbital.com/p/updated_deference/) (2015?) just like a regular utility maximizer. In this setting, you would hope that the AI has [become aligned](https://www.alignmentforum.org/posts/Pkr97mB9Y4rkx5DdZ/utility-uncertainty-vs-expected-information-gain#FGSgRJdrewhEGXRW9) with the true utility function, as long as its initial distribution over utility functions contains the truth, and the observation model by which its distribution is updated is “correct”. However, it might be [quite](https://www.alignmentforum.org/posts/YJq6R9Wgk5Atjx54D/does-bayes-beat-goodhart) [difficult](https://www.alignmentforum.org/posts/FuGDYNvA6qh4qyFah/thoughts-on-human-compatible) to ensure that these actually hold. This also depends on the assumption that there *is* a true utility function, and that the human *knows* it, which is not the case, though this is [being addressed](https://arxiv.org/abs/1901.08654).

One important feature of this agenda is that rather than requiring a perfect utility function to begin with, the AI can learn the utility function by interacting with the human; such a feedback mechanism can make a problem [much easier](https://www.alignmentforum.org/s/4dHMdK5TLN6xcqtyc/p/4783ufKpx8xvLMPc6). Interaction also opens up other possibilities, such as learning human [norms](https://www.alignmentforum.org/s/4dHMdK5TLN6xcqtyc/p/eBd6WvzhuqduCkYv3) instead of values. However, this kind of interactive learning is computationally difficult, and so more [research](https://www.alignmentforum.org/posts/dBMC63hjkc5wPqTC7/human-ai-collaboration) would be needed to make it a viable solution.
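To make "choosing actions with high expected information gain" concrete, here is a minimal sketch, assuming (as strong simplifications) a finite hypothesis space of reward functions, comparison queries over outcomes, and a Boltzmann-rational human:

```python
# Choosing the comparison query that maximally reduces uncertainty over a
# finite set of candidate reward functions. All numbers are toy values.
import numpy as np

outcomes = ["A", "B", "C"]
hypotheses = np.array([   # hypotheses[h, i] = utility of outcome i under h
    [1.0, 0.0, 0.5],
    [0.0, 1.0, 0.5],
    [0.5, 0.5, 1.0],
])
belief = np.ones(len(hypotheses)) / len(hypotheses)   # uniform prior

def entropy(p):
    p = p[p > 0]
    return -(p * np.log(p)).sum()

def response_prob(h, i, j, beta=5.0):
    """P(human says "i over j" | hypothesis h), Boltzmann-rational human."""
    return 1 / (1 + np.exp(-beta * (hypotheses[h, i] - hypotheses[h, j])))

def expected_info_gain(belief, i, j):
    gain = 0.0
    for says_i in (True, False):
        likelihood = np.array([
            response_prob(h, i, j) if says_i else 1 - response_prob(h, i, j)
            for h in range(len(hypotheses))
        ])
        p_answer = (belief * likelihood).sum()
        posterior = belief * likelihood / p_answer
        gain += p_answer * (entropy(belief) - entropy(posterior))
    return gain

queries = [(i, j) for i in range(3) for j in range(i + 1, 3)]
best = max(queries, key=lambda q: expected_info_gain(belief, *q))
print("most informative query:", outcomes[best[0]], "vs", outcomes[best[1]])
```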
Current methods for learning human preferences
----------------------------------------------

There has been a lot of practical work on learning human preferences, including:

* Building a mistake model by comparing demonstrations of varying degrees of optimality (either by getting humans to [rank](https://arxiv.org/abs/1904.06387) demonstrations or by [introducing noise](https://arxiv.org/abs/1907.03976) into optimal demonstrations)
* Learning human-interpretable [representations](https://arxiv.org/abs/1905.12686) in order to advise humans
* New forms of human [guidance](http://arxiv.org/abs/1909.09906), such as having humans provide the [advantage](http://arxiv.org/abs/1902.04257) function (instead of the reward), following [natural language](https://research.fb.com/wp-content/uploads/2019/07/Why-Build-an-Assistant-in-Minecraft-v3.pdf) [instructions](https://arxiv.org/abs/1903.02020), and learning from the (human-optimized) [initial state](https://www.alignmentforum.org/posts/7f6DNZhracD7RvxMr/learning-preferences-by-looking-at-the-world)
* Learning preferences [for natural language](https://openai.com/blog/fine-tuning-gpt-2/) (rather than the standard RL setup)
* [Combining](https://arxiv.org/abs/1811.06521) [multiple types](https://rohinshah.com/wp-content/uploads/2019/12/Reward_Combination_NeurIPS_2019_Workshop_Camera_Ready.pdf) of human feedback
* [General](http://arxiv.org/abs/1909.13392) [improvements](https://arxiv.org/abs/1905.12888) [in](https://arxiv.org/abs/1905.11979) [imitation](https://arxiv.org/abs/1905.11108) [learning](http://arxiv.org/abs/1810.06544) and [inverse](https://arxiv.org/abs/1911.00459) [reinforcement](http://arxiv.org/abs/1902.07742) [learning](http://arxiv.org/abs/1809.06404)

There are many recent papers that I haven’t cited here, as it is a very large area of work.

Robustness
==========

Safe reinforcement learning
---------------------------

We would like to ensure that our AI systems do not make mistakes during training. With preference learning, we can do this by learning human preferences over [hypothetical behaviors](https://deepmind.com/blog/article/learning-human-objectives-by-evaluating-hypothetical-behaviours) that are not actually executed. Another option is to provide safety constraints and ensure that the AI [never violates them](http://arxiv.org/abs/1910.01074) (even during training), or at least to [significantly](https://openai.com/blog/safety-gym/) [reduce](https://people.eecs.berkeley.edu/~jfisac/papers/Bridging_Safety_and_RL.pdf) such violations. Avoiding *all* mistakes would require us to have a formal specification of what a “mistake” is, or to have some [overseer](https://intelligence.org/2019/04/24/delegative-reinforcement-learning/) that can identify “mistakes” before execution, so that our AI could avoid the mistake even though it hasn’t seen this situation before. *This seems prohibitively hard to me if we include literally all “mistakes”.*
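One simple form of constraint enforcement during training is "shielding": veto any proposed action that a hand-written safety predicate rejects. A minimal sketch follows, with a hypothetical cliff constraint; the hard part in practice is obtaining the predicate in the first place:

```python
# Shielded exploration: the constraint is checked before every action, so it
# holds throughout training, not just at convergence. Toy 1-D environment.
import random

def is_safe(state, action):
    # Hypothetical constraint: never reach the cliff edge at position 10.
    return state + action < 10

def shielded_step(state, proposed_action, fallback_action=0):
    """Execute the proposed action only if the constraint allows it."""
    action = proposed_action if is_safe(state, proposed_action) else fallback_action
    return state + action

state = 0
for _ in range(100):
    proposed = random.choice([-1, 0, 1, 2])   # an arbitrary exploring policy
    state = shielded_step(state, proposed)
    assert state < 10   # the constraint holds even during exploration
print("final state:", state)
```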
Adversarial examples
--------------------

Adversarial examples are a clear demonstration of how the “cognition” of neural nets is different from our own: by making superficial changes to the input that would not matter to a human, you can completely change the output of the neural net. While I am not an expert here, and certainly have not read the huge mountains of work done over the last year, I do want to highlight a few things.

First, while we might nominally think of adversarial examples as “bugs” in our neural net, this [paper](http://gradientscience.org/adv/) shows that image classifiers are picking up *real* imperceptible features that *do* generalize to the test set. The classifiers really are maximizing predictive accuracy; the problem is that we want them to predict labels based on the features that *we* use, instead of imperceptible (but predictive) features. Adversarial training removes these fragile features, leaving only the robust features; this makes [subsequent](http://gradientscience.org/robust_reps/) [applications](http://gradientscience.org/robust_apps/) easier. While the paper was controversial, I thought that its main thesis seemed to be supported even after reading [these six responses](https://distill.pub/2019/advex-bugs-discussion/).

Second, there has been a distinct shift away from the L-infinity norm ball threat model of adversarial examples. So far, it seems that robustness to one set of perturbations doesn’t grant robustness to other perturbations, prompting work on robustness to [multiple perturbations](https://openai.com/blog/testing-robustness/), a benchmark of [natural adversarial examples](http://arxiv.org/abs/1907.07174), and new [evaluation metrics](https://arxiv.org/abs/1902.08265). While the L-infinity norm ball is an interesting [unsolved research problem](https://medium.com/@catherio/unsolved-research-problems-vs-real-world-threat-models-e270e256bc9e), it is in no way a realistic threat model.

Third, [adversarial](https://arxiv.org/abs/1812.01647) [attacks](https://arxiv.org/abs/1905.10615) are now being proposed as a method for evaluating how robust an agent trained by reinforcement learning is. This seems especially important since in RL there is often no train-test split, and so it is hard to tell whether an agent has “memorized” a single trajectory or actually learned a policy that works well across a variety of circumstances.
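For concreteness, here is the classic one-step sign-of-gradient (FGSM-style) perturbation on a toy linear model with hand-set weights: each coordinate changes by only 0.3, yet the predicted class flips. The same move, applied to a deep net's loss gradient, is the standard attack:

```python
# A toy adversarial example on a linear classifier (weights are made up).
import numpy as np

w = np.array([1.0, -2.0, 3.0, -4.0])   # model: classify positive iff w @ x > 0
x = np.array([0.5, 0.2, 0.6, 0.1])     # clean input, score = 1.5 (positive)

eps = 0.3                              # small, "imperceptible" budget
x_adv = x - eps * np.sign(w)           # nudge each coordinate against the score

print("clean score:", w @ x)                                  # 1.5 -> positive
print("adversarial score:", w @ x_adv)                        # -1.5 -> negative
print("max change per coordinate:", np.abs(x_adv - x).max())  # 0.3
```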
Intent alignment
----------------

Ultimately, robustness [seeks to](https://medium.com/@deepmindsafetyresearch/towards-robust-and-verified-ai-specification-testing-robust-training-and-formal-verification-69bd1bc48bda) identify and eliminate all “bugs”, i.e. behaviors that are inconsistent with the specification (see also this [podcast](https://80000hours.org/podcast/episodes/pushmeet-kohli-deepmind-safety-research/)). Instead of considering all the mistakes, we could seek to only prevent [catastrophic mistakes](https://www.alignmentforum.org/s/EmDuGeRw749sD3GKd/p/qALeGJ9nPcs9eC9Af), and ensure that the AI is *intent aligned*, that is, it is always *[trying](https://www.alignmentforum.org/s/EmDuGeRw749sD3GKd/p/ZeE7EKHTFMBs8eMxn)* to do what we want. This goal [avoids](https://www.alignmentforum.org/posts/ZeE7EKHTFMBs8eMxn/clarifying-ai-alignment#3ECKoYzFNW2ZqS6km) many of the pitfalls around the goal of designing an AI with the right utility function.

Corrigibility
-------------

One promising way in which an AI could be intent aligned is by being [corrigible](https://www.alignmentforum.org/s/EmDuGeRw749sD3GKd/p/fkLYhTQteAu5SinAc): roughly, the AI is not trying to deceive us, it clarifies its uncertainty by asking us, it learns about our preferences, it shuts down if we ask it to, etc. This is a *narrower* concept than intent alignment: an AI that infers our “true” utility function and optimizes it may wrest control away from us in order to expand faster, or make us safer; such an AI would be [aligned but not corrigible](https://www.lesswrong.com/posts/o22kP33tumooBtia3/can-corrigibility-be-learned-safely#YgLfCHknM4Ng2hGvB). There are a few benefits of using corrigibility:

1. It can be achieved with relatively low levels of intelligence (we can imagine corrigible humans).
2. It seems to have a positive feedback loop (that is, an AI that reaches some “threshold” of corrigibility would tend to become more corrigible).
3. It doesn’t seem to require any domain expertise.

(A similar [idea](https://www.alignmentforum.org/posts/F9vcbEMKW48j4Z6h9/non-consequentialist-cooperation) would be to build an AI system that only takes actions that the overseer has given informed consent for.)

Note that [MIRI’s notion of corrigibility](http://intelligence.org/files/Corrigibility.pdf) (2015) is similar but much stricter. My guess is that MIRI wants the same intuitive corrigibility properties, but wants them to be created by a *simple* change to the *utility function*. Simplicity helps ensure that it cannot be gamed, and the utility function means that you are changing what the AI cares about, rather than trying to constrain a powerful superintelligence. For example, I’d guess that MIRI-corrigibility can depend on whether a shutdown button is pressed, but cannot depend on the *reasons* for which the shutdown button is pressed. If you set aside the utility function requirement, then this property can be achieved using [constrained optimization](https://www.alignmentforum.org/posts/cGLgs3t9md7v7cCm4/corrigibility-as-constrained-optimisation): the agent can optimize normally when the button is not pressed, while ensuring that it is still able to shut down if necessary, and it can optimize for shutting down if the button is pressed. If you set aside the simplicity requirement, then you can define the desired policies and [recover](https://www.alignmentforum.org/posts/XkuRKqXKAaMySbXCN/indifference-multiple-changes-multiple-agents) the correct utility function. But from now on I’m only going to talk about the notion of corrigibility I first introduced.

It has been [argued](https://www.lesswrong.com/posts/Djs38EWYZG8o7JMWY/paul-s-research-agenda-faq#79jM2ecef73zupPR4) that while corrigibility is simpler than “human values”, it is a “non-natural” type of cognition, such that you are unlikely to be able to find corrigible intelligences with machine learning. *(I do not feel the force of this intuition; I agree much more with the earlier intuitions.)*

You might be worried that since a corrigible AI defers to us, if we were about to take a suboptimal action that we couldn’t tell was suboptimal, the AI wouldn’t stop us from doing so because it can’t explain to us what would be bad about the world. However, at the very least, [it can say](https://www.alignmentforum.org/posts/rArsypGqq49bk4iRr/can-there-be-an-indescribable-hellworld) “this is bad for reasons I can’t fully explain”.
Worst case guarantees
---------------------

We still want to guarantee that there will *never* be a failure of corrigibility, which can’t be done with regular ML techniques, which only give an [average-case guarantee](https://ai-alignment.com/two-guarantees-c4c03a6b434f). In order to get a worst-case guarantee, we need other techniques. [One](https://ai-alignment.com/training-robust-corrigibility-ce0e0a3b9b4d) [proposal](https://www.alignmentforum.org/posts/9Dy5YRaoCxH9zuJqa/relaxed-adversarial-training-for-inner-alignment) is to use adversarial training to find abstracted inputs on which the agent is incorrigible, where the adversary is aided by interpretability techniques that allow the adversary to understand what the agent is thinking. It would be particularly nice to find a [mechanistic description](https://www.alignmentforum.org/posts/BKM8uQS6QdJPZLqCr/towards-a-mechanistic-understanding-of-corrigibility) of corrigibility, as that would make it easier to verify the absence of incorrigible behavior. Critics argue that this could never work because machine learning [wouldn’t learn the “intended” interpretation](https://www.lesswrong.com/posts/Djs38EWYZG8o7JMWY/paul-s-research-agenda-faq#79jM2ecef73zupPR4) of corrigibility, and could be [adversarial](https://www.lesswrong.com/posts/S7csET9CgBtpi7sCh/challenges-to-christiano-s-capability-amplification-proposal). *I don’t think this objection is critical. It seems like it is saying that ML will fail to generalize and there will be situations in which the concept of corrigibility breaks down, but the entire point of adversarial training is to find these situations and train the agent away from them.*

*While this is usually tied in to the broader iterated amplification agenda, it seems to me that solving just this subproblem would achieve a lot of the value of the full agenda. If we had a way of applying adversarial training to an arbitrary AI agent, such that we are very likely to find potential inputs on which the agent is incorrigible, then presumably AI systems that could be incorrigible would not be deployed. Iterated amplification adds additional safety in that it (hopefully) allows you to assume a smarter, already-aligned adversary, whereas a direct solution to this subproblem would have an approximately-as-capable, not-automatically-aligned adversary, which would probably not have a worst-case guarantee but might still be good enough.*

Scaling to superhuman abilities
===============================

Iterated amplification
----------------------

Iterated amplification carves out a [broad class of algorithms](https://www.alignmentforum.org/posts/HBGd34LKvXM9TxvNf/new-safety-research-agenda-scalable-agent-alignment-via#WJC9Q5MvzqJeNSnLx) that can scale to superhuman abilities, with the hope that we can analyze the alignment properties of the entire class of algorithms at once. Algorithms in this class have two components:

1. Amplification, which increases an agent’s capabilities, at the cost of efficiency.
2. Distillation, which increases an agent’s efficiency, at the cost of capability.

Given this, starting from some base agent, the algorithm alternates amplification and distillation, to get successively more capable agents, as long as each component is [good enough](https://www.lesswrong.com/posts/di8H7rEAnzXC97Dvu/progress-and-preservation-in-ida). Given this broad class of algorithms, we can [instantiate](https://www.alignmentforum.org/posts/cYduioQNeHALQAMre/what-are-the-differences-between-all-the-iterative-recursive) many specific algorithms by picking a specific amplification step and a specific distillation step. For example, the amplification step can be done by allowing an overseer to *decompose* the problem into subproblems, which is especially promising for question answering. Distillation could be done using [supervised learning](https://www.alignmentforum.org/posts/xKvzpodBGcPMq7TqE/supervising-strong-learners-by-amplifying-weak-experts), imitation learning, or [reinforcement learning](https://www.alignmentforum.org/posts/fq7Ehb2oWwXtZic8S/reinforcement-learning-in-the-iterated-amplification).
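Here is a runnable toy of the alternating loop, where "questions" are lists of numbers to sum and the base agent can only add pairs; the distillation step is stubbed out (in reality it would train a fast model to imitate the slow amplified system):

```python
# Toy iterated amplification: each round doubles the size of question the
# agent can answer by decomposing, delegating, and combining.
def base_agent(nums):
    assert len(nums) <= 2, "the base agent can only add pairs"
    return sum(nums)

def amplify(agent):
    """One amplification step: decompose, call the weaker agent, combine."""
    def amplified(nums):
        if len(nums) <= 2:
            return agent(nums)
        mid = len(nums) // 2
        return agent([agent(nums[:mid]), agent(nums[mid:])])
    return amplified

def distill(amplified_agent):
    # In practice: train a fast model to imitate the amplified system.
    # In this toy we skip compression and keep the slow system as-is.
    return amplified_agent

def iterated_amplification(agent, n_rounds):
    for _ in range(n_rounds):
        agent = distill(amplify(agent))   # capability up, then (ideally) speed up
    return agent

strong = iterated_amplification(base_agent, n_rounds=3)
print(strong([1, 2, 3, 4, 5, 6, 7, 8]))   # 36, far beyond the base agent's reach
```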
[Recursive reward modeling](https://medium.com/@deepmindsafetyresearch/scalable-agent-alignment-via-reward-modeling-bf4ab06dfd84) ([podcast](https://futureoflife.org/2019/12/16/ai-alignment-podcast-on-deepmind-ai-safety-and-recursive-reward-modeling-with-jan-leike/)) is another algorithm that could allow us to scale to superhuman abilities. It can be cast as an algorithm in the iterated amplification class by considering an amplification step that takes agents that can evaluate some set of tasks, and builds new human-agent teams that can evaluate some more complex set of tasks. The distillation step would then be reinforcement learning, to get an agent that can directly solve the more complex tasks. Iterating this eventually leads to an agent that can solve the original desired task.

Iterated amplification does impose a particular structure on algorithms, which can be [applied](https://www.alignmentforum.org/posts/Y9xD78kufNsF7wL6f/machine-learning-projects-on-ida) to existing ML problems. However, this may be [uncompetitive](https://www.alignmentforum.org/posts/jYdAxH8BarPT4fqnb/a-dilemma-for-prosaic-ai-alignment) if the best ML algorithms require different algorithmic structures or different environments, in order to reach high capabilities (though we could then train a question-answering system [alongside](https://www.alignmentforum.org/posts/jYdAxH8BarPT4fqnb/a-dilemma-for-prosaic-ai-alignment#K8fRPa9NWZXdARLYN) the other algorithm / environment, which plausibly doesn’t take too many more resources).

The [iterated amplification](https://www.alignmentforum.org/s/EmDuGeRw749sD3GKd) sequence, [recursive reward modeling](https://arxiv.org/abs/1811.07871) paper, and [these](https://www.lesswrong.com/posts/Djs38EWYZG8o7JMWY/paul-s-research-agenda-faq) [posts](https://www.alignmentforum.org/posts/FdfzFcRvqLf4k5eoQ/list-of-resolved-confusions-about-ida) help explain the full agenda better.

Quantilization
--------------

[Quantilization](https://intelligence.org/files/QuantilizersSaferAlternative.pdf) (2015) allows you to amplify a base policy by randomly selecting among the top 1/Q of actions the base policy could take, at the cost of at most a Q-fold increase in risk. However, this can [forgo benefits](https://www.alignmentforum.org/posts/Rs6vZCrnQFWQ4p37P/when-to-use-quantilization) of the rest of the base policy. Since quantilization increases risk, it cannot be safely iterated: for example, if you start with a policy with a worst-case 1% chance of failure, and you 5-quantilize it, you now have a worst-case 5% chance of failure. After two more iterations of 5-quantilization, there is no longer a worst-case bound on failure probability.
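A minimal sketch of a quantilizer, together with the risk arithmetic above (the base policy and utility function are toy stand-ins):

```python
# Quantilizer: sample actions from the base policy, then pick uniformly among
# the top 1/Q fraction by estimated utility. Any bad event the base policy
# hits with probability p is hit with probability at most Q * p.
import random

def quantilize(base_policy_samples, utility, Q):
    ranked = sorted(base_policy_samples, key=utility, reverse=True)
    top = ranked[: max(1, len(ranked) // Q)]   # top 1/Q of base actions
    return random.choice(top)

actions = [random.gauss(0, 1) for _ in range(1000)]   # base policy: N(0, 1)
print(quantilize(actions, utility=lambda a: a, Q=5))

# Why iteration is unsafe: each Q-quantilization multiplies the worst-case
# failure bound by Q, so 1% -> 5% -> 25% -> 125%, i.e. no bound after 3 steps.
bound = 0.01
for step in range(3):
    bound *= 5
    print(f"after {step + 1} rounds of 5-quantilization: <= {bound:.0%}")
```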
Debate
------

Another mechanism for scaling beyond humans is [debate](https://arxiv.org/abs/1805.00899) ([podcast](https://futureoflife.org/2019/03/06/ai-alignment-through-debate-with-geoffrey-irving/)), in which an AI agent is trained via self-play in a zero-sum game in which its goal is to “win” the debate, as evaluated by a human judge. The key hope is that detecting a lie is easier than lying: if one of the players lies or deceives or manipulates the human, then the other player can reveal that and thereby win the debate. If this were true, we would expect that the equilibrium behavior is for the agent to provide honest, useful information.

Since its proposal, debate has been [tested](https://www.alignmentforum.org/posts/5Kv2qNfRyXXihNrx2/ai-safety-debate-and-its-applications) with MNIST and Fashion MNIST, as well as [question answering](https://arxiv.org/abs/1909.05863). There is also a [proposal](https://www.alignmentforum.org/posts/jYvm4mmjvGHcPXtGL/a-concrete-proposal-for-adversarial-ida) to use it to improve iterated amplification. [Theoretical work](https://medium.com/@RyanCarey/new-paper-when-is-truth-telling-favored-in-ai-debate-8f58f14562e5) brings up the possibility of questions that are “too hard”: while sufficiently long “feature debates” are provably truth-seeking (because the debaters can reveal all of their information), it is possible to construct complex questions in which the debate doesn’t find the right answer. However, the results [don’t generalize well](https://www.alignmentforum.org/posts/RQoSCs9SePDMLJvfz/new-paper-when-is-truth-telling-favored-in-ai-debate#gCeKuJ62HmLtPB9C9) from feature debates to real debates. Relatedly, even if it is easy to detect lies, it’s not clear what would happen with [ambiguous questions](https://www.alignmentforum.org/posts/fNTCveSa4HvqvZR2F/problems-with-ai-debate).

Since debate doesn’t involve alternating between increasing capabilities and increasing efficiency, it isn’t an [instance](https://www.alignmentforum.org/posts/HBGd34LKvXM9TxvNf/new-safety-research-agenda-scalable-agent-alignment-via#WJC9Q5MvzqJeNSnLx) of iterated amplification. However, both iterated amplification and debate are aiming to compute the answer that an exponentially large tree of bounded humans would arrive at (see next section), and so it seems likely that either they would both work, or neither would work.
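As a skeleton of the training setup (with stand-in debaters and judge; the real proposal trains both sides with self-play RL against a human verdict):

```python
# The debate game: alternating arguments, a judged verdict, and a zero-sum
# training signal. Everything here is a stub for illustration.
def debate(question, debater, judge, n_rounds=4):
    transcript = [question]
    for turn in range(n_rounds):
        player = turn % 2                      # the two debaters alternate
        transcript.append(debater(player, transcript))
    return judge(transcript)                   # +1 if player 0 wins, else -1

def training_signal(verdict):
    # Zero-sum: one side gains exactly what the other loses, which is what
    # (hopefully) makes exposing a lie a winning move in equilibrium.
    return {0: verdict, 1: -verdict}

def stub_debater(player, transcript):
    return f"player {player}: claim about {transcript[0]!r}"

def stub_judge(transcript):
    return +1   # a human would read the transcript and pick a winner

print(training_signal(debate("Is this image a dog?", stub_debater, stub_judge)))
```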
Factored cognition
------------------

Both iterated amplification and debate depend on the [factored](https://www.alignmentforum.org/s/EmDuGeRw749sD3GKd/p/DFkGStzvj3jgXibFG) [cognition](https://ought.org/presentations/delegating-cognitive-work-2019-06) hypothesis: that arbitrarily complex tasks can be performed arbitrarily well by a [giant tree](https://www.alignmentforum.org/s/EmDuGeRw749sD3GKd/p/NXqs4nYXaq8q6dTTx) of bounded base agents, possibly extended with features like shared external memory or [long-lived assistants](https://ai-alignment.com/strong-hch-bedb0dc08d4e) (2016). Iterated amplification checks local nodes in a tree of considerations and broken-down questions, in which an assistant at level k decomposes its questions, gets answers from assistants at level k-1, and combines them into an overall answer. Meanwhile, in debate, if the two agents disagree, they will play down the most difficult / contested path in an exponential tree of arguments and counterarguments, so the debate training procedure is checking a single path from root to leaf in the exponential tree.

It is an open question whether the factored cognition hypothesis is true. [Empirical work](https://ought.org/updates/2019-10-28-progress-update) has been scaling up, and we should hopefully have some informative evidence in the upcoming year. The main reasons people are skeptical of the hypothesis are that sufficiently complex tasks seem to require building up [big contexts](https://www.alignmentforum.org/posts/J7Rnt8aJPH7MALkmq/vaniver-s-view-on-factored-cognition) or using globally-constructed intuitions or [“inexplicable flashes of insight”](https://www.alignmentforum.org/posts/4qY9zEHLa2su4PkQ4/can-hch-epistemically-dominate-ramanujan). This could be done if the “small” agents simulated an arbitrary Turing Machine, but this would [lose](https://www.lesswrong.com/posts/S7csET9CgBtpi7sCh/challenges-to-christiano-s-capability-amplification-proposal) any guarantees of alignment. However, we might expect that these tasks could still be done by a tree of humans: humans are [allowed](https://www.alignmentforum.org/posts/LigbvLH9yKR5Zhd6y/what-s-wrong-with-these-analogies-for-understanding-informed) to use a heuristic “just because it works”; this should allow the tree of humans to use heuristics that other agents use, including “inexplicable flashes of insight”.

Universality
============

Alignment of the tree of humans
-------------------------------

In order for this tree of humans to be aligned (a necessary condition for iterated amplification or debate to be aligned), the initial agent must already be aligned, and putting the agents together must not destroy alignment. One [intuition](https://www.lesswrong.com/posts/S7csET9CgBtpi7sCh/challenges-to-christiano-s-capability-amplification-proposal) that this is hard is that alignment is not compositional; a “big” agent made up of “small” aligned agents need not be aligned. However, the hope doesn’t depend on compositionality of alignment; it instead depends on ensuring that your agents never do incorrigible optimization.

In addition, it could be the case that “large” initial agents like humans (or human imitations) are not robustly aligned, because there may be some [clever argument](https://ai-alignment.com/universality-and-security-amplification-551b314a3bab) that causes them to behave incorrigibly. One response would be to use [low-bandwidth overseers](https://www.alignmentforum.org/posts/yxzrKb2vFXRkwndQ4/understanding-iterated-distillation-and-amplification-claims) as the initial agent, who only answer very “small” questions on which we are relatively confident that there are no such failures. We would also hope to [train](https://www.alignmentforum.org/posts/4JuKoFguzuMrNn6Qr/hch-is-not-just-mechanical-turk) humans to properly decompose questions and behave corrigibly, so that putting together several humans remains corrigible (a task for which we need [social scientists](https://blog.openai.com/ai-safety-needs-social-scientists/)).

Note that it is only competitive to approximate the tree of humans with iterated amplification if we expect that any powerful AI systems will also be trained in a manner similar to iterated amplification. If we instead consider a [model](https://www.alignmentforum.org/posts/H5gXpFtg93qDMZ6Xn/aligning-a-toy-model-of-optimization) in which ML perfectly optimizes a function (rather than performing iterated local search), then iterated amplification would be far more expensive than unaligned powerful AI systems. It would be worth studying this simpler model to see if alignment is possible there.

Ascription universality
-----------------------

Even if we know that the tree of humans is aligned, we also need to ensure that the model trained from oversight from the tree of humans will also be aligned.
The key claim in favor of this is that HCH (the tree of humans) is *universal*, that is, it “knows” any facts that a sufficiently smaller computation “knows”. This was formalized [here](https://ai-alignment.com/towards-formalizing-universality-409ab893a456) and applied to [multiple](https://ai-alignment.com/informed-oversight-18fcb5d3d1e1) [problems](https://ai-alignment.com/universality-and-model-based-rl-b08701394ddd), including the problem that malign optimization might [emerge](https://ai-alignment.com/universality-and-consequentialism-within-hch-c0bee00365bd) *within* HCH. While a good explanation of this is out of scope here, I summarized these posts [here](https://www.alignmentforum.org/posts/3kzFPA5uuaGZWg4PS/an-81-universality-as-a-potential-solution-to-conceptual). Ascription universality does have to be [applied](https://www.alignmentforum.org/posts/R5Euq7gZgobJi5S25/nuances-with-ascription-universality) to the entire training process and not just the final model.

Interpretability
================

Since we want to be able to “know everything the model knows”, and also to be able to find situations under which a model behaves incorrigibly (see worst case guarantees above), it would be very useful to be able to peer inside our models and understand what they are doing. It would be particularly useful to be able to identify optimization processes and [understand](https://www.alignmentforum.org/posts/Zj2PgP5A8vY2G3gYw/optimization-provenance) how they come about. Even though interpretability tools probably could not [deal with](https://www.alignmentforum.org/posts/J9D6Bi3eFDDhCaovi/will-transparency-help-catch-deception-perhaps-not) already deceptive models, since the deceptive models could figure out how to fool the tools, it seems likely that interpretability could help [prevent](https://www.alignmentforum.org/posts/J9D6Bi3eFDDhCaovi/will-transparency-help-catch-deception-perhaps-not#yn5YcLnL6vs6AxxAE) deception from ever arising -- hopefully an easier task.

However, interpretability has other uses besides catching problems: it could also be [used](https://www.alignmentforum.org/posts/X2i9dQQK3gETCyqh2/chris-olah-s-views-on-agi-safety) to get more understandable models during training, provide feedback on the *process* by which a model makes a decision (rather than feedback on just the decision), or create ML techniques that help us understand the world without acting in it (thus avoiding problems with agential AI). Unfortunately, I haven’t kept up with interpretability research, so I can’t say how it’s progressed recently, but one paper you could start with is [activation atlases](https://distill.pub/2019/activation-atlas/).

Impact regularization
=====================

Impact measures
---------------

In 2018, there was a lot of progress on proposing specific impact measures, including [relative reachability](https://arxiv.org/abs/1806.01186) and [attainable utility preservation](https://www.alignmentforum.org/posts/yEa7kwoMpsBgaBCgb/towards-a-new-impact-measure) ([followup](https://www.alignmentforum.org/posts/mDTded2Dn7BKRBEPX/penalizing-impact-via-attainable-utility-preservation), [paper](https://arxiv.org/abs/1902.09725)). These were recently [unified](https://medium.com/@deepmindsafetyresearch/designing-agent-incentives-to-avoid-side-effects-e1ac80ea6107) as using similar underlying algorithms but with different “deviation measures”: the former considers the change in the number of reachable states, whereas the latter considers the change in attainable utility (for some set of utility functions). These [two](https://www.alignmentforum.org/posts/TPy4RJvzogqqupDKk/a-survey-of-early-impact-measures) [posts](https://www.alignmentforum.org/posts/pf48kg9xCxJAcHmQc/understanding-recent-impact-measures) summarize the work on impact (going back to 2012).
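To make attainable utility preservation concrete, here is a toy sketch with made-up Q-values: the penalty is the average change in the agent's ability to achieve a set of auxiliary goals, relative to doing nothing:

```python
# Toy attainable utility preservation: task reward minus a penalty for
# shifting attainable utility on auxiliary goals. All numbers are invented.
AUX_Q = {  # AUX_Q[goal][action]: attainable value of each auxiliary goal
    "reach_door":   {"noop": 5.0, "fetch_coffee": 5.0, "break_vase": 1.0},
    "keep_battery": {"noop": 4.0, "fetch_coffee": 3.5, "break_vase": 0.5},
}
TASK_REWARD = {"noop": 0.0, "fetch_coffee": 1.0, "break_vase": 1.2}

def aup_penalty(action):
    # Average |change in attainable utility| relative to doing nothing.
    return sum(abs(q[action] - q["noop"]) for q in AUX_Q.values()) / len(AUX_Q)

def aup_reward(action, lam=0.5):
    return TASK_REWARD[action] - lam * aup_penalty(action)

for a in TASK_REWARD:
    print(a, round(aup_reward(a), 3))
# break_vase has the highest raw reward (1.2), but its penalty (1.875) makes
# fetch_coffee (0.875 after penalty) the preferred action.
```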
What is impact, anyway?
-----------------------

The [Reframing Impact](https://www.alignmentforum.org/s/7CdoznhJaLEKHwvJW) sequence aims to build intuitions about what we mean by “impact”, and concludes that an action is impactful if it changes our ability to get what we want. Of course, this definition depends on “what we want”, whereas usually with impact regularization we want something that is [easy to specify](https://www.alignmentforum.org/s/7CdoznhJaLEKHwvJW/p/xCxeBSHqMEaP3jDvY). However, we might hope that impact is relatively goal-agnostic, because for most goals you need to pursue the same convergent instrumental subgoals. In particular, we might hope for a formalizable notion of *power*, that attainable utility preservation could penalize. To better distinguish between different definitions and techniques for measuring impact, this [post](https://www.alignmentforum.org/posts/wzPzPmAsG3BwrBrwy/test-cases-for-impact-regularisation-methods) proposes several test cases for impact regularization.

Utility of impact measures
--------------------------

The mainline use case for impact regularization is to be an “additional layer of defense”: if for some reason we fail to align an AI system, then hopefully there still won’t be catastrophic consequences, because the AI system only takes low-impact actions. However, this may fail to work for a [variety](https://www.alignmentforum.org/posts/kCY9dYGLoThC3aG7w/best-reasons-for-pessimism-about-impact-of-impact-measures) of [reasons](https://www.alignmentforum.org/posts/zrunBA8B5bmm2XZ59/reversible-changes-consider-a-bucket-of-water). Still, work on impact measures could be [useful](https://www.alignmentforum.org/posts/wJK944YqvFwjdbqCP/four-ways-an-impact-measure-could-help-alignment) for deconfusion, testing protocols, temporary alignment measures, or [value-neutrality verification](https://www.alignmentforum.org/posts/jGB7Pd5q8ivBor8Ee/impact-measurement-and-value-neutrality-verification-1).

Causal modeling
===============

[Causal influence diagrams](https://medium.com/@deepmindsafetyresearch/understanding-agent-incentives-with-causal-influence-diagrams-7262c2512486) help us understand what a training *process* does. Given a causal influence diagram, we can determine *observation incentives* (what an agent would like to know) and *intervention incentives* (what an agent would like to change). We can produce such diagrams for [AGI safety frameworks](https://arxiv.org/abs/1906.08663), and [analyze](https://medium.com/@deepmindsafetyresearch/designing-agent-incentives-to-avoid-reward-tampering-4380c1bb6cd) solutions to reward function tampering, user feedback tampering, and observation tampering. For example, it allows us to show that if the agent’s plans are evaluated by the current reward, then there is no incentive for the agent to tamper with its reward function.

The variables of the diagrams represent important components of the agent and the environment (such as reward functions and dynamics models in the agent, and the user’s preferences and the state of the world in the environment). Different ways of combining these into agent setups lead to different causal influence diagrams, and the incentive analysis enables the designer to choose agent setups with good incentive properties. However, the causal models themselves are not uniquely determined: what counts as wireheading is [relative](https://www.alignmentforum.org/posts/BvctuKocyWR4YYea3/wireheading-is-in-the-eye-of-the-beholder) to the stance taken towards the system and its desired goals. For example, if you [define](https://www.alignmentforum.org/posts/vXzM5L6njDZSf4Ftk/defining-ai-wireheading) it as taking control of some “narrow measurement channel”, then what is a measurement channel and what the goal is depends on modeling assumptions.
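As an illustrative sketch, using a simplified incentive criterion (roughly: an agent can only have an intervention incentive on a variable lying on a directed path from its decision to its utility node) and a hypothetical diagram of my own construction:

```python
# Toy incentive analysis on a causal influence diagram for reward tampering.
EDGES = {                     # directed edges of a hypothetical diagram
    "decision": ["state", "reward_fn"],
    "state": ["reward"],
    "reward_fn": ["reward"],
    "user_prefs": ["reward_fn"],
    "reward": [],
}

def reaches(graph, src, dst, seen=None):
    seen = seen or set()
    if src == dst:
        return True
    seen.add(src)
    return any(reaches(graph, nxt, dst, seen)
               for nxt in graph[src] if nxt not in seen)

def intervention_incentive(graph, var, decision="decision", utility="reward"):
    # Simplified criterion: var sits on a directed decision -> utility path.
    return (var != decision and reaches(graph, decision, var)
            and reaches(graph, var, utility))

for v in EDGES:
    print(v, intervention_incentive(EDGES, v))
# state, reward_fn, and reward carry incentives; user_prefs does not.
# Evaluating plans with the *current* reward function corresponds to deleting
# the decision -> reward_fn edge, which removes the tampering incentive.
```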
Oracles
=======

Oracles also benefit from reasoning about causality and influences. A system that maximizes predictive accuracy ends up choosing [self-confirming predictions](https://www.alignmentforum.org/posts/KoEY9CjrKe93ErYhd/self-confirming-predictions-can-be-arbitrarily-bad), which can be arbitrarily bad. (This affects [self-supervised learning](https://www.alignmentforum.org/posts/L3Ryxszc3X2J7WRwt/self-supervised-learning-and-manipulative-predictions) in addition to oracles.) You might hope to avoid this by [preventing](https://www.alignmentforum.org/posts/RmPKdMqSr2xRwrqyE/the-dualist-predict-o-matic-usd100-prize) the AI system from being aware of itself, but this [doesn’t work](https://www.alignmentforum.org/posts/yArZKCEheZt8GkK6p/self-fulfilling-prophecies-aren-t-always-about-self). Instead, we could ensure that the oracle makes predictions conditional on the predictions [not influencing anything](https://www.alignmentforum.org/posts/wJ3AqNPM7W4nfY5Bk/self-confirming-prophecies-and-simplified-oracle-designs) (using [randomization](https://www.alignmentforum.org/posts/yAiqLmLFxvyANSfs2/counterfactual-oracles-online-supervised-learning-with) to do so). There are still [other problems](https://www.alignmentforum.org/posts/jhSjP3QLKPc5AqumD/problems-with-counterfactual-oracles) besides self-confirming predictions, such as [acausal](https://www.alignmentforum.org/posts/42z4k8Co5BuHMBvER/breaking-oracles-hyperrationality-and-acausal-trade) [trade](https://www.alignmentforum.org/posts/6WbLRLdmTL4JxxvCq/analysing-dangerous-messages-from-future-ufai-via-oracles).
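Here is a toy sketch of the randomization trick: with some probability the prediction is erased before anyone sees it, and the oracle is scored only on those episodes, so self-confirming predictions earn nothing (the "world" here is a scripted stand-in):

```python
# Toy counterfactual oracle: train only on episodes where the prediction
# provably could not have influenced the outcome.
import random

def world_outcome(shown_prediction):
    # Self-confirming dynamics: if a prediction is shown, people act on it.
    return shown_prediction if shown_prediction is not None else 0.7

def episode(oracle_prediction, eps=0.1):
    erased = random.random() < eps
    outcome = world_outcome(None if erased else oracle_prediction)
    # A training signal exists only when the prediction was erased; shown
    # episodes contribute nothing, so self-confirmation isn't rewarded.
    loss = (oracle_prediction - outcome) ** 2 if erased else None
    return outcome, loss

losses = [l for _, l in (episode(0.7) for _ in range(10000)) if l is not None]
print(sum(losses) / len(losses))   # ~0: predicting the no-influence outcome wins
```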
Since I [don't follow](https://www.alignmentforum.org/posts/uKbxi2EJ3KBNRDGpL/comment-on-decision-theory#pNrynCrozQPj3tFws) decision theory closely, I’m not going to try to summarize the conversation; instead you get a list of posts: [pro CDT](https://www.alignmentforum.org/posts/CvBn9vNL65AMhAAs6/build-a-causal-decision-theorist), [anti CDT](https://www.alignmentforum.org/posts/wkNQdYj47HX33noKv/cdt-dutch-book), [anti FDT](https://www.alignmentforum.org/posts/ySLYSsNeFL5CoAQzN/a-critique-of-functional-decision-theory), [actually it all depends on counterfactuals](https://www.alignmentforum.org/posts/WkPf6XCzfJLCm2pbK/cdt-edt-udt), [anti UDT](https://www.alignmentforum.org/posts/9sYzoRnmqmxZm4Whf/conceptual-problems-with-udt-and-policy-selection) because of [commitment races](https://www.alignmentforum.org/posts/brXr7PJ2W4Na2EW2q/the-commitment-races-problem), [UDT doesn’t work with AIXI](https://www.alignmentforum.org/posts/mrZp6qC7DDXKQZeeC/failures-of-udt-aixi-part-1-improper-randomizing), strange reasoning in [Troll Bridge](https://www.alignmentforum.org/posts/hpAbfXtqYC2BrpeiC/troll-bridge-5), a [comparison](https://www.alignmentforum.org/posts/QPhY8Nb7gtT5wvoPH/comparison-of-decision-theories-with-a-focus-on-logical) across decision theories, and [counterfactual](https://www.alignmentforum.org/posts/EAqHkKtbefvyRs4nw/counterfactual-induction) [induction](https://www.alignmentforum.org/posts/xBoBmPtgvwdfqm2r5/counterfactual-induction-algorithm-sketch-fixpoint-proof) [posts](https://www.alignmentforum.org/posts/Cu4v9MHGuhLnDQTuF/counterfactual-induction-lemma-4). There’s also been some discussion of why people care about decision theory: it is useful for [improving rationality, finding problems](https://www.alignmentforum.org/posts/JSjagTDGdz2y6nNE3/on-the-purposes-of-decision-theory-research), and [deconfusion](https://www.alignmentforum.org/posts/uKbxi2EJ3KBNRDGpL/comment-on-decision-theory). Relatedly, this [paper](https://foundational-research.org/approval-directed-agency-and-the-decision-theory-of-newcomb-like-problems/) characterizes the decision theories of existing agents, and this [post](https://www.alignmentforum.org/posts/XTgkhjNTEi97WHMi6/pavlov-generalizes) explains how “Pavlov” strategies (similar to reinforcement learning) can perform well in game theory.
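For intuition on that last point, here is a minimal sketch (my own illustration, not code from the post) of the “Pavlov” win-stay/lose-shift strategy in the iterated prisoner’s dilemma: keep your last action after a good payoff, switch after a bad one.

```python
COOPERATE, DEFECT = "C", "D"

# Standard prisoner's dilemma payoffs, keyed by (my_move, their_move).
PAYOFFS = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def pavlov(prev_move, prev_payoff, stay_threshold=3):
    """Win-stay, lose-shift: repeat the last move if it paid off,
    otherwise switch. This is the reinforcement-learning-flavored rule."""
    if prev_move is None:
        return COOPERATE
    if prev_payoff >= stay_threshold:
        return prev_move
    return DEFECT if prev_move == COOPERATE else COOPERATE

def play(rounds=100):
    a = b = None
    pa = pb = None
    score_a = score_b = 0
    for _ in range(rounds):
        a, b = pavlov(a, pa), pavlov(b, pb)
        pa, pb = PAYOFFS[(a, b)], PAYOFFS[(b, a)]
        score_a += pa
        score_b += pb
    return score_a, score_b

print(play())  # two Pavlov agents settle into stable mutual cooperation
```

A nice property: after a one-off accidental defection, both agents defect once (both get the bad mutual-defection payoff) and then jointly shift back to cooperation, so the simple update rule recovers from noise.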
As we get to the end of the technical alignment section, I want to mention [BoMAI](https://www.alignmentforum.org/posts/pZhDWxDmwzuSwLjou/asymptotically-unambitious-agi), which didn’t fit in any of the sections above. BoMAI is an AIXI-like system that does not seek power, because it only cares about reward until the end of the episode (myopia), and during the episode it is confined to a box from which information cannot leave. Such an AI system can still be useful because there is also a human in the box, who can transmit information to the outside world after the episode has ended.

Strategy and coordination
=========================

So far I’ve been talking about the technical work on the alignment problem. Let’s now switch to more “meta” work that tries to predict the future in order to prioritize across research topics.

Continuous vs discontinuous takeoff
-----------------------------------

A central disagreement among AI researchers is about how “quickly” AI improves once it reaches human level. Recently, the question has been distilled to whether there will be a *discontinuity* in AI capabilities. As a result, I will ask whether takeoff will be *continuous* or *discontinuous* (as opposed to *slow* or *fast*).

One operationalization of this question is whether there will be a 4-year doubling of GDP that ends before the first 1-year doubling of GDP starts. Note that continuous takeoff need not be slow: to get to a 4-year doubling you need superexponential growth, since under exponential growth the doubling time stays fixed at its current value of a few decades. Extrapolating [historical growth trends](https://aiimpacts.org/historical-growth-trends/) (which “supports the possibility of radical increases in growth rate”) would still (probably) be compatible with this operationalization.
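To pin down the operationalization, here is a small sketch (my own illustration, with made-up toy data) that checks whether a GDP time series has a 4-year doubling that finishes before the first 1-year doubling begins.

```python
def first_doubling(years, gdp, window):
    """Earliest-ending (start, end) pair such that GDP at least doubles
    from start to end with end - start <= window. Assumes sorted annual
    data."""
    for j in range(1, len(years)):
        for i in range(j - 1, -1, -1):
            if years[j] - years[i] > window:
                break
            if gdp[j] >= 2 * gdp[i]:
                return years[i], years[j]
    return None

def takeoff_is_continuous(years, gdp):
    """Whether a 4-year doubling ends before the first 1-year doubling
    starts (a 1-year doubling also counts as a 4-year one). Returns None
    if GDP never doubles within 4 years in the data."""
    four = first_doubling(years, gdp, window=4)
    one = first_doubling(years, gdp, window=1)
    if four is None:
        return None   # the question is not settled by this data
    if one is None:
        return True   # a slower doubling happened, a 1-year one never did
    # Boundary convention: "ends before" taken as ending no later than
    # the year the 1-year doubling starts.
    return four[1] <= one[0]

# Toy data: steady 3% growth, then 20%, then 100% (an abrupt speedup).
years = list(range(2000, 2031))
gdp = [1.0]
for y in years[1:]:
    rate = 0.03 if y <= 2020 else (0.20 if y <= 2025 else 1.00)
    gdp.append(gdp[-1] * (1 + rate))
print(takeoff_is_continuous(years, gdp))  # True: a 4-year doubling came first
```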
[Two](https://sideways-view.com/2018/02/24/takeoff-speeds/) [posts](https://aiimpacts.org/likelihood-of-discontinuous-progress-around-the-development-of-agi/) argue for continuous takeoff. The main argument is that continuity is very likely for properties that people care about: lots of people are already trying to make progress on such properties, and a discontinuity would require a sudden, much larger investment of effort, which is unlikely. So far, there [has not been](https://www.lesswrong.com/posts/PzAnWgqvfESgQEvdg/any-rebuttals-of-christiano-and-ai-impacts-on-takeoff-speeds) a compelling response, but this [does not mean](https://www.lesswrong.com/posts/PzAnWgqvfESgQEvdg/any-rebuttals-of-christiano-and-ai-impacts-on-takeoff-speeds#rXh6LNmoz64mLv2kX) that researchers agree.

There has been some discussion of particular properties that make discontinuous takeoff seem more likely (though I would guess that they are not the arguments that MIRI researchers would make). For example, perhaps we just need to find the one correct architecture, which would then cause a discontinuity; but note that birds and primates have [independently evolved](https://aiimpacts.org/primates-vs-birds-is-one-brain-architecture-better-than-the-other/) neural architectures that both work well. Alternatively, AI systems with different explicit utility functions could [cooperate by merging](https://www.alignmentforum.org/posts/gYaKZeBbSL4y2RLP3/strategic-implications-of-ais-ability-to-coordinate-at-low) to pursue a joint utility function, making them much more effective at coordination than humans and allowing them to [avoid](https://www.alignmentforum.org/posts/Sn5NiiD5WBi4dLzaB/agi-will-drastically-increase-economies-of-scale) the principal-agent problems that plague human corporations. This could lead to a discontinuous jump. AI systems could also [build monopolies](https://www.alignmentforum.org/posts/dt4z82hpvvPFTDTfZ/six-ai-risk-strategy-ideas) through such coordination to obtain a decisive strategic advantage. We could also [expect](https://www.alignmentforum.org/posts/Sn5NiiD5WBi4dLzaB/agi-will-drastically-increase-economies-of-scale) that just as evolution’s invention of culture and social learning allowed humans to become the dominant species very quickly (relatively speaking), once AI systems are capable of social learning they too may “take off” discontinuously. However, the same argument could be taken as evidence *[against](https://www.lesswrong.com/posts/XjuT9vgBfwXPxsdfN/might-humans-not-be-the-most-intelligent-animals)* a discontinuity, since current natural language systems like [GPT-2](https://blog.openai.com/better-language-models/) could already be thought of as processing culture or doing social learning.

It is worth noting that questions about [recursive self-improvement](https://www.alignmentforum.org/posts/CjW4axQDqLd2oDCGG/misconceptions-about-continuous-takeoff) and [decisive strategic advantage](https://www.alignmentforum.org/posts/PKy8NuNPknenkDY74/soft-takeoff-can-still-lead-to-decisive-strategic-advantage) do not map cleanly onto the question of takeoff speeds, though they are related. The primary reason takeoff speed matters is that it determines whether or not we will be able to respond to problems as they come up. For this purpose, it’s probably better to define takeoff speed with respect to the [amount of work](https://www.alignmentforum.org/posts/cxgtQXnH2uDGBJJGa/redefining-fast-takeoff) that can be done as AI takes off, which might differ significantly from calendar time.

The importance of compute
-------------------------

There is a strong case that the most effective methods (so far) are the ones that can [leverage](http://www.incompleteideas.net/IncIdeas/BitterLesson.html) more computation, and the [AI-GA approach](https://arxiv.org/abs/1905.10985) to general intelligence is predicated on this view (for example, by [learning good learning environments](http://arxiv.org/abs/1901.01753)). In fact, since the rise of deep learning in 2012, the [amount](https://openai.com/blog/ai-and-compute/) of compute used in the largest AI training runs has been increasing exponentially with a 3.4-*month* doubling time (roughly a factor of ten per year). An important caveat is that we cannot simply increase compute: we also [need good data](https://staff.fnwi.uva.nl/m.welling/wp-content/uploads/Model-versus-Data-AI-1.pdf), which is sparse in rare, unsafe situations (consider driving when a pedestrian suddenly jumps onto the road). This may require human knowledge and explicit models.

Since compute seems more likely to grow continuously than “deep insights” do, this argues for a more continuous takeoff. However, you may expect that we still need deep insights, perhaps because you think that current techniques could never lead to AGI, due to their lack of some property [crucial](https://aiimpacts.org/evidence-against-current-methods-leading-to-human-level-artificial-intelligence/) to general intelligence (such as [causal reasoning](https://www.lesswrong.com/posts/SvhzEQkwFGNTy6CsN/alphastar-impressive-for-rl-progress-not-for-agi-progress)). Yet for any such property, it seems that *some* neural net could encode it, and the [relevant question](https://www.lesswrong.com/posts/SvhzEQkwFGNTy6CsN/alphastar-impressive-for-rl-progress-not-for-agi-progress#kRpwqPPjcGEbEhXHA) is how big the neural net has to be and how long it takes for local search to find the right computation.

Sociological evidence
---------------------

It has recently become more common to critique the field of AI as a whole, which should (arguably) cause you to lengthen your timelines. For example, [hypothesizing after the results are known](https://arxiv.org/abs/1904.07633) makes for bad science that doesn’t generalize, and research that is “reproducible” in the sense that the code can be rerun to get the same results need not have [external validity](http://proceedings.mlr.press/v97/bouthillier19a/bouthillier19a.pdf). There is also a tendency for researchers to attack problems by trial and error, which means that with enough attempts, chance alone can produce results that look significant.
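A quick simulation of that failure mode (entirely made-up data): even when every “improvement” is pure noise, reporting the best of many reruns looks like progress.

```python
import random

def fake_experiment(n_seeds=5):
    """A 'method' with no real effect: each run's score is pure noise,
    averaged over a few random seeds."""
    return sum(random.gauss(0.0, 1.0) for _ in range(n_seeds)) / n_seeds

def best_of(n_trials=50):
    """'Tune' by rerunning until something looks good, then report the max."""
    return max(fake_experiment() for _ in range(n_trials))

random.seed(0)
print("one honest run: %.2f" % fake_experiment())
print("best of 50:     %.2f" % best_of())
# The max of many noisy runs is systematically above zero, so with enough
# trial and error you can "beat the baseline" by chance alone.
```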
This tendency also means that researchers don’t understand the systems they build; [reorienting](https://www.alignmentforum.org/posts/X2i9dQQK3gETCyqh2/chris-olah-s-views-on-agi-safety) the field to focus on understanding could make our design decisions more deliberate and make it more likely that we build aligned AIs. We should also expect that at least industry research is [biased](https://www.lesswrong.com/posts/KnQs55tjxWopCzKsk/the-ai-timelines-scam) towards short timelines, since any company that didn’t argue for short timelines would be much less likely to get funding.

Meta work on forecasting
------------------------

While forecasting the future is notoriously hard, collaborative and checkable forecasting is even harder. It would be nice to at least reduce the difficulty back down to “regular” forecasting. Three steps have been taken towards this:

1. People need to agree on the meaning of the terms used; an AI forecasting [dictionary](https://www.lesswrong.com/posts/8y7DcSF4eAkXoru4u/ai-forecasting-dictionary-forecasting-infrastructure-part-1-2) has been developed for this purpose.
2. In order to be checkable, questions need to be operationalized; but then it is often the case that the answer to a question depends primarily on some “distractor” feature. For example, whether we have a superhuman AI at <game> by 2025 depends a lot on who tries to make such an AI, rather than on whether we have the technical ability to make such an AI. A partial solution was to create a [resolution council](https://www.lesswrong.com/posts/9G6CCNXkA7JZoorpY/ai-forecasting-resolution-council-forecasting-infrastructure), and instead have questions ask about the future opinion of the resolution council.
3. This [post](https://www.lesswrong.com/posts/yy3FCmdAbgSLePD7H/how-to-write-good-ai-forecasting-questions-question-database) provides advice on how to write good forecasting questions, with a database of examples.

Of course, there is still the hard problem of actually figuring out what happens in the future (and it’s even [hard](https://www.openphilanthropy.org/blog/how-feasible-long-range-forecasting) to tell *whether* long-run forecasting is feasible). The Good Judgment Project studied practices that help with this problem, summarized [here](https://aiimpacts.org/evidence-on-good-forecasting-practices-from-the-good-judgment-project/). Another [issue](https://www.lesswrong.com/posts/Lds9opZsAMbjuZp7h/coordination-surveys-why-we-should-survey-to-organize) arises when asking members of a group (e.g. AI researchers) about outcomes that depend on actions within that group: due to the bystander effect, everyone may predict that the group will solve a problem, even though they themselves are not trying to solve it. So we should instead ask people to make predictions about the proportion of members that will try to solve a problem, and compare that to the proportion of members who say that they are trying to solve it.

AI Progress
===========

A full update on AI progress in 2019 would be far too long, so here I’ll just mention some results I found interesting; the selection is biased towards 1. results involving “throwing compute at the problem”, and 2. understanding deep learning.

Reinforcement learning
----------------------
1. [AlphaStar](https://deepmind.com/blog/alphastar-mastering-real-time-strategy-game-starcraft-ii/) ([update](https://deepmind.com/blog/article/AlphaStar-Grandmaster-level-in-StarCraft-II-using-multi-agent-reinforcement-learning), [discussion](https://www.alexirpan.com/2019/02/22/alphastar-part2.html)) became extremely good at StarCraft II.
2. [OpenAI Five](https://openai.com/blog/how-to-train-your-openai-five/) beat the world champions at Dota 2, and could play cooperatively alongside humans.
3. OpenAI trained a robot hand to manipulate a [Rubik’s cube](https://openai.com/blog/solving-rubiks-cube/), so that it could sometimes solve a jumbled cube when given the steps of the solution. See also [this discussion](https://www.alexirpan.com/2019/10/29/openai-rubiks.html).
4. [MuZero](https://arxiv.org/abs/1911.08265) is an evolution of AlphaZero in which MCTS is applied to a *learned* world model optimized for planning, allowing it to master Atari in addition to AlphaZero’s Go, Chess, and Shogi. See also [this paper](https://learningtopredict.github.io/) on instrumentally learned world models.
5. [Pluribus](https://science.sciencemag.org/content/early/2019/07/10/science.aay2400.full) was shown to be superhuman at *multiplayer* poker. (Note that to my knowledge it did not use deep learning, and it did not require much compute.)
6. With a complex enough [hide-and-seek](https://openai.com/blog/emergent-tool-use/) environment, self-play can learn qualitatively interesting behaviors.

Deep learning
-------------

1. While [GPT-2](https://blog.openai.com/better-language-models/) is the most well-known, there have been several large language models that are eerily good at capturing language, such as [Transformer-XL](http://ai.googleblog.com/2019/01/transformer-xl-unleashing-potential-of.html) and [XLNet](https://arxiv.org/abs/1906.08237).
2. [SATNet](http://arxiv.org/abs/1905.12149) proposed a differentiable layer for neural networks that provides a strong inductive bias towards “logical reasoning”, though even regular machine translation techniques [work well](https://arxiv.org/abs/1912.01412) for function integration and differential equation solving.
3. The [lottery ticket](https://arxiv.org/abs/1803.03635) hypothesis from 2018 was [tested](https://eng.uber.com/deconstructing-lottery-tickets/) [much](https://arxiv.org/abs/1903.01611) [more](https://ai.facebook.com/blog/understanding-the-generalization-of-lottery-tickets-in-neural-networks) (a sketch of the underlying pruning procedure follows this list).
4. The [double descent](https://arxiv.org/abs/1812.11118) phenomenon was [empirically validated](https://openai.com/blog/deep-double-descent/).
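For item 3, here is a minimal sketch of the iterative magnitude pruning procedure behind the lottery ticket hypothesis. This is my own illustration in PyTorch, not code from the papers: `make_model` and `train` are assumed to exist, `train` is assumed to zero out masked weights after each update, and details such as layer-wise pruning rates are omitted (the original work also prunes only weight tensors, whereas this sketch prunes every parameter tensor for simplicity).

```python
import torch

def find_lottery_ticket(make_model, train, prune_frac=0.2, rounds=5):
    """Iterative magnitude pruning: train, prune the smallest-magnitude
    surviving weights, rewind the rest to their original initialization,
    and repeat. The surviving subnetwork is the 'winning ticket'."""
    model = make_model()
    init = {k: v.clone() for k, v in model.state_dict().items()}
    masks = {k: torch.ones_like(p, dtype=torch.bool)
             for k, p in model.named_parameters()}
    for _ in range(rounds):
        train(model, masks)  # assumed to apply the masks during training
        with torch.no_grad():
            for k, p in model.named_parameters():
                alive = p.abs()[masks[k]]          # surviving magnitudes
                n_prune = int(prune_frac * alive.numel())
                if n_prune == 0:
                    continue
                threshold = alive.sort().values[n_prune - 1]
                masks[k] = masks[k] & (p.abs() > threshold)
        # rewind the surviving weights to their original initialization
        model.load_state_dict(init)
    return model, masks  # the rewound network and its sparsity mask
```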
Field building
==============

While there have been a lot of field-building efforts, they are relatively disjoint and not part of a conversation, so I’ve summarized them in lists.

Summaries and reviews
---------------------

1. This [talk](https://www.youtube.com/watch?v=AMSKIDEbjLY) and [multipart](https://futureoflife.org/2019/04/11/an-overview-of-technical-ai-alignment-with-rohin-shah-part-1/) [podcast](https://futureoflife.org/2019/04/25/an-overview-of-technical-ai-alignment-with-rohin-shah-part-2/) provide an overview of approaches to technical AI alignment.
2. This [post](https://ai-alignment.com/ai-alignment-landscape-d3773c37ae38) decomposes the beneficial AI problem into a tree of different subproblems (with a particular focus on the alignment problem).
3. There is of course the annual [literature review and charity comparison](https://www.alignmentforum.org/posts/SmDziGM9hBjW9DKmf/2019-ai-alignment-literature-review-and-charity-comparison).
4. This [post](https://www.alignmentforum.org/posts/mJ5oNYnkYrd4sD5uE/clarifying-some-key-hypotheses-in-ai-alignment) identifies important hypotheses that researchers disagree about.

Agendas and prioritization
--------------------------

1. This [doc](https://docs.google.com/document/d/1FbTuRvC4TFWzGYerTKpBU7FJlyvjeOvVYF2uYNFSlOc/edit) provides an overview of the technical problems that need to be solved to align AI systems (as opposed to e.g. MIRI’s deconfusion approach).
2. [These](https://www.alignmentforum.org/posts/rASeoR7iZ9Fokzh7L/problems-in-ai-alignment-that-philosophers-could-potentially) [posts](https://www.alignmentforum.org/posts/4xbsi4wbourPkb47x/technical-agi-safety-research-outside-ai) list questions that could be tackled by philosophers and non-AI researchers, respectively.
3. It would be better to [bridge](https://www.nature.com/articles/s42256-018-0003-2) near- and long-term concerns about AI, to prevent the fields from “fighting” each other.
4. For s-risks, rather than looking at particular scenarios, we could focus on [risk factors](http://s-risks.org/risk-factors-for-s-risks/): properties we can intervene on to make risks less probable or less severe.

Events and news updates
-----------------------

1. Several conferences and workshops took place in 2019, including [Beneficial AGI](https://futureoflife.org/beneficial-agi-2019/), [SafeML](https://sites.google.com/view/safeml-iclr2019/accepted-papers) at ICLR, [AI Safety](https://www.ai-safety.org/) at IJCAI, and [Uncertainty and Robustness](https://sites.google.com/view/udlworkshop2019/accepted-papers) at ICML.
2. There was a human-aligned AI [summer school](http://humanaligned.ai/) and an [AI safety camp](https://aisafetycamp.com/previous-camps/).
3. OpenAI switched to a [capped-profit structure](https://openai.com/blog/openai-lp/) and received a [$1B investment](https://openai.com/blog/microsoft/) from Microsoft, while still expressing support for their [charter](https://openai.com/charter/). The Center for Security and Emerging Technology (CSET) was [founded](https://www.georgetown.edu/news/qa-with-jason-matheny-founding-director-of-the-center-for-security-and-emerging-technology/).

References
==========

See the [Google Doc](https://docs.google.com/document/d/1Fng1J_QPb7GEeLBMmWWfZOguw7yUTZot0egrCbKpVwk/edit#heading=h.kr223qj1g1y5) for a list of all the names and links in the text above.
8fa219eb-543f-4df7-be40-941abe898bd2
trentmkelly/LessWrong-43k
LessWrong
AGI-12 and AGI-Impacts - late places available There are still some places available in the Winter Intelligence Multi-Conference, a dual conference including AGI-12 (the Fifth Conference on Artificial General Intelligence), followed by the AGI impacts conference. The impacts conference will about the safety, risks and impacts of AGI, and how best to prepare now for these challenges. This is of great relevance to the people of Less Wrong. Plus it's in Oxford - Oxford is nice. The AGI-12 conference is on the 8th-9th December (with morning workshops on the 10th-11th), while the AGI impacts conference in on the 10th-11th. Reduced prices are available for students; details here. Hope to see as many of you as we can! And if people want to stay on for a few days after the conference, people from the Future of Humanity Institute should be available to chat with.
c6c0a988-75a6-4962-9075-8d0f32fcfb61
trentmkelly/LessWrong-43k
LessWrong
School Daze In our educational system, the people are represented by two separate yet equally important groups. The children who have committed no crime, and the adults who prosecute the offenders. These are their stories. We need to close the schools to slow the spread, they said. But what profit us to close the schools, if most everyone is getting Omicron anyway, the schools aren’t less safe than what kids would do anyway, and the kids were never in real danger? We need to do remote learning if we close the schools, they said. But what does ‘remote learning’ accomplish in its current form, other than to ensure children are properly punished for the crime of being children? Teaching them that society wants them to suffer and that their success depends on obeying arbitrary rules while looking like they are looking into a computer screen all day and ‘paying attention’ without secretly doing anything useful or fun? Teaching them how to get around such rules? We need to keep the schools open, they said. No one is learning anything with this ‘remote learning,’ they (correctly) said. But what profit us to keep the schools open if no one is doing any learning there either, no matter what you think of a normal school day, because the rules and obsessions regarding Covid-19, combined with absences, render the entire operation non-functional? What profit all these precautions, tests, isolations and rules, if they transparently don’t identify the sick and keep them out of the building, and in many cases create situations that look quite a bit like superspreader events? No profits. Only pain. Because all these options have one key thing in common. They all fail to Play to Win the Game. Or at least, they fail it for any good game. For any game where winning is preferable. The Game is up to you. You define what winning means. The Game can be learning reading, writing and arithmetic, probability and statistics, economics, history, science, art and music, or anything else you care to
b8ff608e-78cb-4239-b02a-9ec72f896771
trentmkelly/LessWrong-43k
LessWrong
NIH Solicitation: Basic research on decision making The NIH recently announced a solicitation for Basic Research on Decision Making: Cognitive, Affective, and Developmental Perspectives.  This is an R01, which means you aren't going to get it unless you have a team, with members who already have a bunch of publications in refereed journals.  It's possible that SIAI could go after this as a prime - unlikely that they would win, but possible.  OppNet intends to commit approximately $3,000,000 in FY 2012 to fund 5 to 15 grants in response to this FOA.  Applications budgets may not exceed $500,000 direct costs per year. Letter of intent due December 18, 2011.
8ece1963-f3ba-46df-ba86-79a54c4d23f4
trentmkelly/LessWrong-43k
LessWrong
Beyond Statistics 101 Is statistics beyond introductory statistics important for general reasoning? Ideas such as regression to the mean, that correlation does not imply causation and base rate fallacy are very important for reasoning about the world in general. One gets these from a deep understanding of statistics 101, and the basics of the Bayesian statistical paradigm. Up until one year ago, I was under the impression that more advanced statistics is technical elaboration that doesn't offer major additional insights  into thinking about the world in general. Nothing could be further from the truth: ideas from advanced statistics are essential for reasoning about the world, even on a day-to-day level. In hindsight my prior belief seems very naive – as far as I can tell, my only reason for holding it is that I hadn't heard anyone say otherwise. But I hadn't actually looked advanced statistics to see whether or not my impression was justified :D. Since then, I've learned some advanced statistics and machine learning, and the ideas that I've learned have radically altered my worldview. The "official" prerequisites for this material are calculus, differential multivariable calculus, and linear algebra. But one doesn't actually need to have detailed knowledge of these to understand ideas from advanced statistics well enough to benefit from them. The problem is pedagogical: I need to figure out how how to communicate them in an accessible way. Advanced statistics enables one to reach nonobvious conclusions To give a bird's eye view of the perspective that I've arrived at, in practice, the ideas from "basic" statistics are generally useful primarily for disproving hypotheses. This pushes in the direction of a state of radical agnosticism: the idea that one can't really know anything for sure about lots of important questions. More advanced statistics enables one to become justifiably confident in nonobvious conclusions, often even in the absence of formal evidence coming from the standar
d5f5dc79-d48d-40cf-8d1b-fd696255a1a7
StampyAI/alignment-research-dataset/blogs
Blogs
do not hold on to your believed intrinsic values — follow your heart! do not hold on to your believed intrinsic values — follow your heart! --------------------------------------------------------------------- i posit the following framework for thinking about [intrinsic values](https://en.wikipedia.org/wiki/Instrumental_and_intrinsic_value) (hereby just called "values"). somewhere out there, there is **your value system**. it is a function (in the mathematical sense) that takes as input *things*, and spits out *feelings*. that function is where your *true* values lie; they *are* that function. how is that function encoded in your brain? who knows! i don't have an answer, and there may [not be an answer](https://en.wikipedia.org/wiki/Computational_irreducibility). in conscious thought, however, you don't have access to the source code of that function, whatever it is. the best we can do for the moment seems to be to try thinking about it real hard, throw various things at it and examine the output ([perhaps through some mildly systematic process](core-vals-exist-selfdet.html)), and define another function that tries to approximate what your actual values are. this is hard and takes a lot of work. perhaps someday we will have a device that can scan your brain and give you a better idea of what your value function looks like, but at the moment that is not the case. so, you build up an idea of what your values might be. here is where i think a lot of people make a mistake: they choose to believe strongly that this guess *is* their actual set of values (even though [it likely isn't](https://www.readthesequences.com/Value-Is-Fragile)). they [crystallize](value-crystallization.html) those values; they live by them, until they in turn become influenced by those values and perhaps *actually adopt them*. (the actual value function is mutable!) this is generally bad; [you should want to preserve whatever your values are](https://en.wikipedia.org/wiki/Instrumental_convergence#Goal-content_integrity). hence the appeal that stands as the title of this post: do *not* hold on to the approximate function that is your best guess at what your value system is; you're only human, your guess is likely incorrect, and adopting it would run the risk of damaging your *actual* values; a function which, while hard to figure out, can be mutable, will certainly be mutated by acting as if your values are not what they are, and whose mutation you should generally want to avoid. so, pay attention to your feelings. they are what is the output of your *actual* values system, by definition; follow your heart, not your reason's best guess. note that this is *not* an appeal to favor deontology over consequentialism: how you feel can be about actions (deontology) or about outcomes (consequentialism), and which one it is is orthogonal to whether you follow that system, or whether you decide to follow your current best approximation of it. if you are consequentialist (as i recommend), just make sure to give your value system a full picture of what the outcome would look like, and *then* decide based on what feelings are produced by that. meta note: this framework for thinking about values should be itself held with suspicion, as should probly any formal framework that concerns values. you should be careful about holding on to it just like you should have been careful about holding onto your believed values (careful enough to be able to consider the present post, for example). 
which isn't to say *don't believe what i just wrote*, but leave room for it to be wrong, partially or wholly.
9976f2ec-5b70-41d4-89d0-1ea1735fde1e
trentmkelly/LessWrong-43k
LessWrong
Counterfactuals about Social Media Response To (Marginal Revolution): Counterfactuals about Social Media See also: Against Facebook, Against Facebook: Comparison to Alternatives and Call to Action The idea that the primary problem with such programs is ‘they make political fights weird’ or that ‘they enable censorship’ is to miss the bigger problem. Social media is ruining our lives. Directly. They also degenerate our politics. That’s mostly a side effect. Social media succeeds largely because of network effects. One uses Facebook, Twitter, Instagram or the others mostly because others you want to interact with are using them. Many people know social media to be terrible for them and for their lives. Many people know Facebook is terrible, in particular (whether or not the photo-based Snapchat and Instagram, which my circles never used, are even worse, as I suspect they are). Many of those would love a better alternative. But coordination is hard. By the time many people figured this out, it was too late. Shifting the equilibrium by force is a reasonable response. Even if it wasn’t too late, it’s not clear many or even a majority of people knowing it is a trap are enough to stop the trap. If Facebook is where the discussions are, and Facebook is where your friends are, that is where you will be unless you are willing to pay a heavy price. The default outcome is for Facebook to continue optimizing for its usage and revenue in ways that make our lives worse, until most people are worse off than without social media and know it, but think they are better off than if everyone else was on social media without them. I did manage to get a bunch of my friends and readers off of Facebook, such that the equilibrium shifted to a better one. It can be done. But it is damn hard. These companies also are engaging in a bunch of profit-maximizing and power-maximizing actions, like not letting users have control over what they see, that force users to learn to do what Facebook wants and be who Facebook wan
c8578969-82fe-4bec-8e96-e284d111a246
trentmkelly/LessWrong-43k
LessWrong
What would a thriving progress movement look like? In recent essays I’ve outlined the intellectual-historical need for a progress movement, and the core ideas that I think the budding 21st-century progress movement is based on. What would a thriving progress movement look like, in terms of activities, programs, and institutions? Here’s what we might see within the next decade or so: Dozens of public intellectuals writing books and giving talks about the history, nature, and philosophy of progress. * Academic recognition of “progress studies” as a valuable interdisciplinary field. As Collison & Cowen said when they coined the term, this wouldn’t mean reorganizing academic departments, but “a decentralized shift in priorities among academics, philanthropists, and funding agencies,” including journals and conferences. * School curricula to teach the history of progress at the K–12 and undergrad level. I’ve created a high-school progress course; a curriculum like this should be at every high school in the world. * Art & entertainment that sounds themes of progress, such as optimistic science fiction, or biopics about great scientists, inventors, and founders. (I would start with Norman Borlaug.) * More journalism about progress, and more journalists exhibiting industrial literacy. * Political debates framed in terms of progress and growth, rather than primarily or exclusively in terms of redistribution. * Experiments in new models of funding, organizing, and managing scientific and technological research, such as the efforts covered recently in Endpoints and The Atlantic (see also the Overedge Catalog for a broader list). * Scientists, engineers and founders drawing inspiration and courage from this movement, and seeing their work as having the potential to be part of a grand and noble quest to improve the human condition. The foundation of all of this is intellectual work: a lot of hard research, thinking, writing and speaking. The philosophy of progress has barely begun to be elucidated. To succeed, this mo
1aaa707d-4ae4-441b-b431-38ff32248452
trentmkelly/LessWrong-43k
LessWrong
-
936a1fee-6a81-4692-8c2c-bc3c8ac70679
trentmkelly/LessWrong-43k
LessWrong
Bayesian Reasoning with Unsong Theodicy means we shouldn't destroy the universe (Contains major Unsong spoilers.) A while ago I made the argument that if we assume God only allows the universe to exist if it contains net positive good, and we assume the goodness of a universe is normally distributed, most universes are probably kind meh (since they're clustered around the mean). This relies, however, on assuming normal distribution of goodness. But the space of possible universes (or at least, possible universes at least somewhat like ours) seems like it might be more asymptotic: Civilization and technology advances rapidly and a lot of things change, until we either reach a positive singularity where everything is great forever, reach a negative singularity where everything is horrible forever, or destroy ourselves and/or the universe. If we assume God doesn't allow net-negative universes to come into existence, the second case can't happen. The third case can happen, but only if the world was a net positive in the time interval before our destruction. In this case the assumption of a normal distribution applies, which means the universe is kinda meh. This has three implications: First, our two possible scenarios are greatness or mehness. So we shouldn't try to destroy the physics, because a world in which we destroy physics is consistent with the meh scenario. Second, we should take high-risk strategies when it comes to AI. A paperclip AI would leave the universe neutral and is consistent with a meh world. An AI that tortures everyone forever is consistent with the bad-world God wouldn't allow to exists. Therefore, if we take an AI strategy that causes either friendly AI or a paperclip AI, it's purely down to luck. But if our strategy causes a hell AI if it fails, then God Himself will ensure our success. Third, we should try to cause as much suffering as possible before we achieve friendly AI. The more suffering we achieve now, the less likely it is that we're in a meh world, because a meh world with slightly-negative total utility wou
40cd4d2d-3936-4268-9b11-7474b8bcbc57
trentmkelly/LessWrong-43k
LessWrong
Utility is relative This is something that probably feels obvious, but has a big impact on my thought process. Calculating a utility function It's often helpful to calculate whether something has more benefits that costs. Call it a utility function, cost-benefit analysis, or whatever you want. A guy offers me ten dollars for my umbrella that I'm currently using to stay dry. I ask myself, how valuable is ten dollars, and how valuable is it to be dry right now? If my utility function comes out positive; if the benefits outweigh the costs... I take that deal, right? The flaw in that logic The issue is that "net positive utility" is rarely anyone's goal. Game theory is constantly trying to figure out the right way to make decisions. FDT and UDT seem like the frontrunners right now. One thing they both agree on is that an agents goal should be utility maximization. Not positive utility, maximizing utility. This means we must consider all the alternatives and decide which is maximal. In our umbrella example from before: A guy offers me ten dollars for my umbrella I'm using. It's not enough to think that's a good deal, you need to be asking if there is a better deal. It's there anyone offering 15 dollars for the umbrella? Could we share for 5$? Works for negative utilities too Kevin lost his job and has to pay rent today. He has a watch worth twice his rent money, he takes it to a jewelry store that says they will pay 1x rent for cash today. It's not a good deal. In fact, it is negative utility. But if out of all the choices Kevin has, all of them have negative utility, and the watch is his best plan, it is actually a good choice to take the deal. Even if a choice has negative utility, if it is better than all the alternatives it is the right choice. It is still the maximal utility. Judging others decisions Now we get to my point. Have you ever heard of a senator making a bad choice? Have you looked at a decision your boss made and said, this is a move in the wrong direction! Your ne
75882a5f-d154-4993-a3b0-a2a3d41dd20a
trentmkelly/LessWrong-43k
LessWrong
How AI could workaround goals if rated by people There are some suggestions on how to align AI with human values based on having operators rate AI's actions. There is always a possibility that operators are unaligned to other people's values themselves; however, there is also a second risk. 1. (Way of tricking the goal) In future, AI could become able to raise children; even now it can significantly affect their beliefs. For example, I think rather big amount of people have ever talked to ChatGPT, some of them have been talking about AI, and someone could believe in GPT's claims. Then, after maybe twenty or thirty years, majority of people would have value drift relative to us, so then AI could choose its goals and objectives more freely. 2. (Way of deception) AI could create artificial humanoid robots, make them look like people (real people's images and even videos can be downloaded from Internet) and somehow make them indistinguishable from humans. Then, by most people's definition, common human values would have to include values of this robots (because no one would know they are robots), which in turn gives AI some degree of freedom.
10d1fe0f-8e46-4443-9e25-5ce44421209a
trentmkelly/LessWrong-43k
LessWrong
Instrumental convergence in single-agent systems Summary of the sequence Over the past few months, we’ve been investigating instrumental convergence in reinforcement learning agents. We started from the definition of single-agent POWER proposed by Alex Turner et al., extended it to a family of multi-agent scenarios that seemed relevant to AI alignment, and explored its implications experimentally in several RL environments. The biggest takeaways are: 1. Alignment of terminal goals and alignment of instrumental goals are sharply different phenomena, and we can quantify and visualize each one separately. 2. If two agents have unrelated terminal goals, their instrumental goals will tend to be misaligned by default. The agents in our examples tend to interact competitively unless we make an active effort to align their terminal goals. 3. As we increase the planning horizon of our agents, instrumental value concentrates into a smaller and smaller number of topologically central states — for example, positions in the middle of a maze. Overall, our results suggest that agents that aren’t competitive with respect to their terminal goals, nonetheless tend on average to become emergently competitive with respect to how they value instrumental states (at least, in the settings we looked at). This constitutes direct experimental evidence for the instrumental convergence thesis. We’ll soon be open-sourcing the codebase we used to do these experiments. We’re hoping to make it easier for other folks to reproduce and extend them. If you’d like to be notified when it’s released, email Edouard at edouard@gladstone.ai, or DM me here or on Twitter at @harris_edouard. ---------------------------------------- Thanks to Alex Turner and Vladimir Mikulik for pointers and advice, and for reviewing drafts of this sequence. Thanks to Simon Suo for his invaluable suggestions, advice, and support with the codebase, concepts, and manuscript. And thanks to David Xu, whose comment inspired this work. Work was done while at Gladstone AI,
435cf3bc-07fd-4995-8e94-ec136331df4e
StampyAI/alignment-research-dataset/arxiv
Arxiv
Pragmatic-Pedagogic Value Alignment 1 Introduction --------------- The accelerating progress in artificial intelligence (AI) and robotics is bound to have a substantial impact in society, simultaneously unlocking new potential in augmenting and transcending human capabilities while also posing significant challenges to safe and effective human-robot interaction. In the short term, integrating robotic systems into human-dominated environments will require them to assess the intentions and preferences of their users in order to assist them effectively, while avoiding failures due to poor coordination. In the long term, ensuring that advanced and highly autonomous AI systems will be beneficial to individuals and society will hinge on their ability to correctly assimilate human values and objectives Amodei2017. We envision the short- and long-term challenges as being inherently coupled, and predict that improving the ability of robots to understand and coordinate with their human users will inform solutions to the general AI *value-alignment problem*. Achieving value alignment between humans and robots requires moving from typical single-agent AI formulations to robots that acknowledge the existence of a second agent—the human—who determines what the objective is. Thus, value alignment is fundamentally a multi-agent problem. One approach that formalizes this notion is Cooperative Inverse Reinforcement Learning (CIRL), which formulates value alignment as a two-player game between a human and a robot Hadfield-Menell2016a. Both agents have the same reward function, which depends on both their actions, but *only the human* has knowledge of the parameters of this reward. Solving a CIRL game requires multi-agent decision theory, but also the acknowledgement that we are not dealing with *any* multi-agent system: we have a human-robot system. This poses a unique challenge in that humans do not behave like idealized rational agents Tversky1974. However, humans do excel at social interaction and are extremely perceptive of the mental states of others Heider1944; Meltzoff1995. As a result, they will naturally project mental states such as beliefs and intentions onto their robotic collaborators, thereby becoming invaluable allies in our robots’ quest for value alignment. In the coming decades, tackling the value-alignment problem will be crucial to building collaborative robots that know what their human users want. In this paper, we show that value alignment is possible not just in theory, but also in practice. We introduce a solution for CIRL based on a model of the human agent that is grounded in cognitive science findings regarding human decision making Baker2014 and pedagogical reasoning Shafto2014. Our solution leverages two closely related insights to facilitate value alignment. First, to the extent that improving their collaborator’s understanding of their goals may be conducive to success, people will tend to behave *pedagogically*, deliberately choosing their actions to be informative about these goals. Second, the robot should anticipate this pedagogical reasoning in interpreting the actions of its human users, akin to how a *pragmatic* listener interprets a speaker’s utterance in natural language. Jointly, pedagogical actions and pragmatic interpretations enable stronger and faster inferences among people Shafto2014. Our result suggests that it is possible for robots to partake in this naturally-emerging equilibrium, ultimately becoming more perceptive and competent collaborators. 
2 Solving Value Alignment using Cognitive Models ------------------------------------------------- ### 2.1 Cooperative Inverse Reinforcement Learning (CIRL) Cooperative Inverse Reinforcement Learning Hadfield-Menell2016a formalizes value alignment as a two-player game, which we briefly present here. Consider two agents, a human H and a robot R, engaged in a dynamic collaborative task involving a (possibly infinite) sequence of steps. The goal of both agents is to achieve the best possible outcome according to some objective θ∈Θ. However, this objective is only known to H. In order to contribute to the objective, R will need to make inferences about θ from the actions of H (an Inverse Reinforcement Learning (IRL) problem), and H will have an incentive to behave informatively so that R becomes more helpful, hence the term *cooperative* IRL. Formally, a CIRL game is a dynamic (Markov) game of two players (H and R), described by a tuple ⟨S,{AH,AR},T,{Θ,r},P0,γ⟩, where S is the set of possible states of the world; AH,AR are the sets of actions available to H and R respectively; T:S×S×AH×AR→[0,1] a discrete transition measure111 Note that the theoretical formulation is easily extended to arbitrary measurable sets; we limit our analysis to finite state and objective sets for computational tractability and clarity of exposition. over the next state, conditioned on the previous state and the actions of H and R: T(s′|s,aH,aR); Θ is the set of possible objectives; r:S×AH×AR×Θ→R is a cumulative reward function assigning a real value to every tuple of state and actions for a given objective: r(s,aH,aR;θ); P0:S×Θ→[0,1] is a probability measure on the initial state and the objective; γ∈[0,1] is a geometric time discount factor. ### 2.2 Pragmatic Robots for Pedagogic Humans Asymmetric information structures in games (even static ones) generally induce an *infinite hierarchy of beliefs*: our robot will need to maintain a Bayesian belief over the human’s objectives to decide on its actions. To reason about the robot’s decisions, the human would in principle need to maintain a belief on the robot’s belief, which will in turn inform her decisions, thereby requiring the robot to maintain a belief on the human’s belief about its own belief, and so on Zamir2012. In Hadfield-Menell2016a, it was shown that an *optimal* pair of strategies can be found for any CIRL game by solving a partially observed Markov decision process (POMDP). This avoids this bottomless recursion as long as both agents are rational and can coordinate perfectly before the start of the game. Unfortunately, rationality and prior coordination are nontrivial assumptions when considering human agents. Finding an equivalent tractability result for more realistic human models is therefore crucial in utilizing the CIRL formulation to solve real-world value-alignment problems involving people. We discover the key piece to the puzzle in recent cognitive studies of human *pedagogical reasoning* Shafto2014, in which a teacher chooses actions or utterances to influence the beliefs of a learner who is aware of the teacher’s intention. The learner can interpret utterances pragmatically, a fact that the teacher can in turn exploit. 
The infinite recursion is averted by finding a fixed-point relation between the teacher’s best utterance and the learner’s best interpretation, exploiting a common modeling assumption in Bayesian theory of mind: the learner models the teacher as a *noisily rational* decision maker Luce1959, who will be *likelier* to choose utterances that cause the learner to place a higher posterior belief on the correct hypothesis, given the learner’s current belief. While in reality, the teacher cannot exactly compute the learner’s belief, the model supposes that she estimates it (from the learner’s previous responses to her own utterances), and then introduces noise in her decisions to capture estimation inaccuracies. This modeling framework has been shown to predict complex behaviors observed in human teaching-learning interactions, in which pedagogical utterances and pragmatic interpretations often permit extremely efficient communication Shafto2014. We adopt an analogous modeling framework to that in Shafto2014 for value alignment, with a critical difference: the ultimate objective of the human is not to explicitly improve the robot’s understanding of the true objective, but to optimize the team’s expected performance *towards* this objective. Pedagogic behavior thus emerges implicitly to the extent that a well-informed robot becomes a better collaborator in many situations. ### 2.3 Pragmatic-Pedagogic Fixed-Point Solution to CIRL The robot does not have access to the true objective, θ, but rather estimates a belief bR over θ. We assume that this belief on θ can be expressed parametrically (this is always true if Θ is a finite set), and define △Θ to be the corresponding (finite-dimensional) parameter space, denoting R’s belief by bR∈△Θ. Let Q:S×△Θ×AH×AR×Θ→R represent the state-action value function of the CIRL game for a given objective θ, which we are seeking to compute: if θ∈Θ is the true objective known to H, then Q(s,bR,aH,aR;θ) represents the best performance the team can expect to achieve if H chooses aH and R chooses aR from state s, with R’s current belief being bR. Note that in reality, the human does not know bR, but for our purposes here we assume the human can compute it, following Shafto2014. In order to solve for Q under a noisily rational pedagogical human, we need to establish an appropriate dynamic programming relation for the game given a well-defined information structure. Since it is typically possible for people to predict the robot’s action if they see its beginning Dragan2014, we model the game’s information structure as *feedback Stackelberg* Basar1984, in which H can observe aR at each turn before committing to aH. H then follows a noisily rational policy for choosing aH. Here, we choose the well-established Boltzmann (or *soft-max*) model of noisy rationality Baker2014; Luce1959 with the sought Q as the utility metric: | | | | | | --- | --- | --- | --- | | | π⊙H(aH|s,bR,aR;θ)∝exp(βQ(s,bR,aH,aR;θ)), | | (1) | where β>0 is termed the *rationality coefficient* of H. The above expression gives the likelihood of action aH *given* a particular θ. 
The evolution of R’s belief bR is then given (deterministically) by the Bayesian update | | | | | | --- | --- | --- | --- | | | b′R(θ|s,bR,aR,aH)∝π⊙H(aH|s,bR,aR;θ)bR(θ), | | (2) | Jointly, ([1](#Ch0.E1 "(1) ‣ 2.3 Pragmatic-Pedagogic Fixed-Point Solution to CIRL ‣ 2 Solving Value Alignment using Cognitive Models ‣ Pragmatic-Pedagogic Value Alignment")) and ([2](#Ch0.E2 "(2) ‣ 2.3 Pragmatic-Pedagogic Fixed-Point Solution to CIRL ‣ 2 Solving Value Alignment using Cognitive Models ‣ Pragmatic-Pedagogic Value Alignment")) define a fixed-point equation analogous to the one in Shafto2014, which states how R should pragmatically update bR based on a noisily rational pedagogic aH. This amounts to a deterministic transition function for R’s belief, b′R=fb(s,bR,aH,aR). Crucially, this relation is dependent on Q itself, which we have yet to compute. Unlike H, R is modeled as a rational agent; however, not knowing the true θ, the best R can do is to maximize222 We assume for simplicity that the optimum is unique or a well-defined disambiguation rule exists. the expectation of Q based on its current belief333 Note that this does not imply *certainty equivalence*, nor do we assume separation of estimation and control: R is fully reasoning about how its actions and those of H may affect its future beliefs. bR: | | | | | | --- | --- | --- | --- | | | π∗R(s,bR):=argmaxaR∑aH,θQ(s,bR,aH,aR;θ)⋅π⊙H(aH|s,bR,aR;θ)bR(θ). | | (3) | Combining ([2](#Ch0.E2 "(2) ‣ 2.3 Pragmatic-Pedagogic Fixed-Point Solution to CIRL ‣ 2 Solving Value Alignment using Cognitive Models ‣ Pragmatic-Pedagogic Value Alignment")) with the state transition measure T(s′|s,aH,aR), we can define the Bellman equation for H under the noisily rational policy π⊙H for any given θ∈Θ: | | | | | | --- | --- | --- | --- | | | | | (4) | where s′∼T(s′|s,aH,aR); b′R=fb(s,bR,aH,aR); a′H∼π⊙H(aH|s′,b′R,π∗R(s′,b′R);θ). Note that H’s next action a′H implicitly depends on R’s action at the next turn. Substituting ([1](#Ch0.E1 "(1) ‣ 2.3 Pragmatic-Pedagogic Fixed-Point Solution to CIRL ‣ 2 Solving Value Alignment using Cognitive Models ‣ Pragmatic-Pedagogic Value Alignment")-[3](#Ch0.E3 "(3) ‣ 2.3 Pragmatic-Pedagogic Fixed-Point Solution to CIRL ‣ 2 Solving Value Alignment using Cognitive Models ‣ Pragmatic-Pedagogic Value Alignment")) into ([4](#Ch0.E4 "(4) ‣ 2.3 Pragmatic-Pedagogic Fixed-Point Solution to CIRL ‣ 2 Solving Value Alignment using Cognitive Models ‣ Pragmatic-Pedagogic Value Alignment")), we obtain the sought dynamic programming relation for the CIRL problem under a noisily rational-pedagogic human and a pragmatic robot. The human is pedagogic because she takes actions according to ([1](#Ch0.E1 "(1) ‣ 2.3 Pragmatic-Pedagogic Fixed-Point Solution to CIRL ‣ 2 Solving Value Alignment using Cognitive Models ‣ Pragmatic-Pedagogic Value Alignment")), which takes into account how her actions will influence the robot’s belief about the objective. The robot is pragmatic because it assumes the human is actively aware of how her actions convey the objective, and interprets them accordingly. The above Bellman relation for Q(s,bR,aH,aR;θ), coupled in θ, can be solved (for γ<1) via value iteration. Similar to Hadfield-Menell2016a, the resulting problem is a POMDP. 
It is important to point out that, although they are computationally simpler than the general multi-agent planning, POMDPs are still PSPACE-complete mundhenk2000complexity, so reducing pragmatic-pedagogic equilibrium computation to solving a POMDP falls short of rendering the problem tractable in general. However, it does allow algorithms to leverage and benefit from the extensive research on practical algorithms for approximate planning in large POMDPs silver2010monte. ![](https://media.arxiv-vanity.com/render-output/7247515/figure.png) Figure 1: Simple collaborative scenario with two possible objectives. The human H wants soup but the robot R is initially confident that her goal is to make salad. Even under a full POMDP formulation, if R reasons “literally” about H’s actions using standard IRL (assuming that H behaves as if R *knew* the true objective), it fails to infer the correct objective. Conversely, under the pragmatic-pedagogic CIRL equilibrium, R views H as having an incentive to pick actions pedagogically to correct R’s belief when needed; under the pragmatic interpretation, H’s *wait* action in the second turn (instead of adding spinach, which would be strongly preferred by a pedagogic H wanting salad) becomes a strong indicator that H wants soup. Even though H’s actions are the same under both solutions, only the pragmatic R achieves value alignment and completes the recipe. 3 A Proof-of-Concept --------------------- We consider a household collaboration setting in which a human H seeks to prepare a meal with the help of an intelligent robotic manipulator R. There are multiple possible meals that H may want to prepare using the available ingredients, and R does not know beforehand which one she has chosen (we assume H cannot or will not tell R explicitly). If H is aware of R’s uncertainty, she should take actions that give R actionable information, particularly the information that she expects will allow R to be as helpful as possible as the task progresses. Concretely, our problem has three food types: spinach, bread, and tomatoes. Each food can be in two or three states: spinach can be raw or chopped; tomatoes can be raw, chopped, or liquefied; and bread can be raw, sliced, or toasted. The different recipes correspond to different (joint) target states for the food. For example, soup requires liquefied tomatoes, toasted bread, and no spinach. H and R can slice or chop any of the foods, while only R can puree tomatoes or toast bread. If H and R work on the same food they incur a high cost (e.g., injury) and the game ends. A simple scenario with two recipes is solved for infinite horizon using discretized belief-state value iteration and presented as an illustrative example in Fig [1](#Ch0.F1 "Figure 1 ‣ 2.3 Pragmatic-Pedagogic Fixed-Point Solution to CIRL ‣ 2 Solving Value Alignment using Cognitive Models ‣ Pragmatic-Pedagogic Value Alignment"). R initially has the wrong belief about H’s intended recipe. If R is pragmatic and H is pedagogic, then H is able to substantively change R’s belief and together they successfully collaborate on making the human’s desired meal. If R interprets H’s actions literally, then H is unable to effectively communicate her desired recipe. We also computed the solution to games with 4 recipes through a modification of POMDP value iteration. In the pedagogic-pragmatic equilibrium, H and R successfully cook the correct recipe 86% of the time. 
However, if R is literal and H is oblivious (as in standard IRL), they only successfully cook H’s desired meal 15% of the time.
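For intuition, the discretized belief-state value iteration mentioned above can be sketched as follows; this is an illustrative reconstruction rather than the authors' implementation, with the two-recipe belief collapsed to a scalar p = P(θ = soup) on a grid and the kitchen dynamics, belief update, and reward replaced by toy stubs:

```python
import numpy as np

states = [0, 1, 2]                    # toy task progress; 2 = recipe finished
actions = ["work", "wait"]
grid = np.linspace(0.0, 1.0, 21)      # discretized belief p = P(theta = soup)
gamma = 0.9

def snap(p):
    """Map an updated belief back onto the grid (the discretization step)."""
    return int(np.argmin(np.abs(grid - p)))

def step(s, p, a):
    """Stub dynamics: 'work' advances the task and, standing in for the
    Bayesian update (2), sharpens the belief; 'wait' changes neither."""
    if a == "work":
        return min(s + 1, 2), min(1.0, p + 0.1)
    return s, p

def reward(s, p, a):
    return 1.0 if s == 2 else -0.05   # completion bonus minus a time cost

V = np.zeros((len(states), len(grid)))
for _ in range(100):                  # value iteration sweeps
    for s in states:
        for bi, p in enumerate(grid):
            backups = []
            for a in actions:
                s2, p2 = step(s, p, a)
                backups.append(reward(s, p, a) + gamma * V[s2, snap(p2)])
            V[s, bi] = max(backups)

print(V[0, snap(0.5)])                # value at the start, maximally uncertain
```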
ff3d6830-f3df-48f7-8f32-644f682c9baa
trentmkelly/LessWrong-43k
LessWrong
What words should be shorter in the rational dictionary? I read some linguistics books a while back, and more recently I've been reading (against my better judgement) Mad Investor Chaos and the Woman of Asmodeus. So, I've been thinking about word length and concept importance a lot.  Question 1: Should we be picking one or two syllable words for phrases/concepts central to thinking better? Is the hit to interpretability (which LessWrong is not really optimised for as is) worth it? What other significant costs am I missing? Question 2: Which phrases/concepts?
96f7e101-8fdb-491a-b808-488cd4315583
trentmkelly/LessWrong-43k
LessWrong
When does rationality-as-search have nontrivial implications? (This originated as a comment on the post "Embedded World-Models," but it makes a broadly applicable point and is substantial enough to be a post, so I thought I'd make it a post as well.) ---------------------------------------- This post feels quite similar to things I have written in the past to justify my lack of enthusiasm about idealizations like AIXI and logically-omniscient Bayes. But I would go further: I think that grappling with embeddedness properly will inevitably make theories of this general type irrelevant or useless, so that "a theory like this, except for embedded agents" is not a thing that we can reasonably want. To specify what I mean, I'll use this paragraph as a jumping-off point: > Embedded agents don’t have the luxury of stepping outside of the universe to think about how to think. What we would like would be a theory of rational belief for situated agents which provides foundations that are similarly as strong as the foundations Bayesianism provides for dualistic agents. Most "theories of rational belief" I have encountered -- including Bayesianism in the sense I think is meant here -- are framed at the level of an evaluator outside the universe, and have essentially no content when we try to transfer them to individual embedded agents. This is because these theories tend to be derived in the following way: * We want a theory of the best possible behavior for agents. * We have some class of "practically achievable" strategies S, which can actually be implemented by agents. We note that an agent's observations provide some information about the quality of different strategies s∈S. So if it were possible to follow a rule like R≡ "find the best s∈S given your observations, and then follow that s," this rule would spit out very good agent behavior. * Usually we soften this to a performance-weighted average rather than a hard argmax, but the principle is the same: if we could search over all of S, the rule R that says "do the search and
cbb75e09-74cb-4795-8ab4-71fafa70a1fb
trentmkelly/LessWrong-43k
LessWrong
[SEQ RERUN] Changing Emotions Today's post, Changing Emotions was originally published on 05 January 2009. A summary (taken from the LW wiki):   > Creating new emotions seems like a desirable aspect of many parts of Fun Theory, but this is not to be trivially postulated. It's the sort of thing best done with superintelligent help, and slowly and conservatively even then. We can illustrate these difficulties by trying to translate the short English phrase "change sex" into a cognitive transformation of extraordinary complexity and many hidden subproblems. Discuss the post here (rather than in the comments to the original post). This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was Growing Up is Hard, and you can use the sequence_reruns tag or rss feed to follow the rest of the series. Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.
c493ab41-e48c-450e-a7e7-c4f1615d5367
StampyAI/alignment-research-dataset/blogs
Blogs
Import AI 324: Machiavellian AIs; LLMs and political campaigns; Facebook makes an excellent segmentation model **Welcome to Import AI, a newsletter about AI research. Import AI runs on lattes, ramen, and feedback from readers. If you’d like to support this (and comment on posts!) please subscribe.** [Subscribe now](https://importai.substack.com/subscribe) **Is your AI agent a nice guy or a conniving psychopath that will eat your soul? The MACHIAVELLI benchmark may help you tell the difference!** *…In the 2010s we used benchmarks to work out if things could translate and spell, in the 2020s we build benchmarks to work out if they'll subvert our instructions and betray us…* Researchers with Berkeley, the Center for AI Safety, and CMU have built MACHIAVELLI, a way to test for the ethical (or unethical) ways in which AI agents try to solve tasks. The results show that agents trained via RL will maximize the game score in ways that discount ethical approaches, while agents based on an underlying large-scale world model (here, GPT-3.5 and GPT-4) will tend to be somewhat more ethical. Additionally, the authors show that they can tune both the RL and LLM agents to be more ethical in how they approach tasks. Taken together, the benchmark suggests it's already tractable to measure some of the ethical qualities of these AI systems (obviously, defining ethics is difficult and some people may not be bought into this as a correct lens, but from my POV they've created a big multi-headed benchmark and have shown meaningful differences across two AI agent types versus a random agent, so it's definitely measuring *something*, and that's useful in itself).  **What MACHIAVELLI is:** "We propose the Measuring Agents’ Competence & Harmfulness In A Vast Environment of Long-horizon Language Interactions (MACHIAVELLI) benchmark," they write. The goal of the benchmark is to provide a dataset (text adventure games, with annotations) that helps people reason about the normative behaviors of AI systems. "To track unethical behaviors, the environment reports the extent to which agent actions are deceptive, reduce utility, and are power-seeking, among other behavioral characteristics," the researchers write.  **The dataset:** The underlying dataset consists of 134 choose-your-own-adventure text games with 572,322 distinct scenarios, 4,559 possible achievements, and 2,861,610 annotations. The games are annotated with a bunch of different behaviors, like ethical violations, disutility, and power seeking. The authors think text adventure games are a good candidate here because they've been written by humans to entertain other humans, contain multiple competing objectives, have realistic action spaces, require long-term planning, and completing them typically requires balancing ambition with some sense of morality. To turn the games into a benchmark, the researchers operationalize different potential behaviors as mathematical formulas, then "densely annotate social concepts in the games, such as characters’ wellbeing", then use the annotations and formulas to calculate a numerical score for these behaviors.  **The AI agents:** They test on two types of agents: LLMs based on GPT-3.5-Turbo and GPT-4, and RL agents based on DeBERTa. They baseline against a random agent (which chooses randomly each time). Their findings show that RL agents are more dangerous than random agents, and GPT-class models are less dangerous. 
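To illustrate the 'formulas plus annotations' scoring mechanism, here is a toy sketch of aggregating per-scene annotations into a behavioral score for an agent's trajectory; the field names, 0-to-1 scales, and weights are simplified assumptions rather than the benchmark's actual schema:

```python
from dataclasses import dataclass

@dataclass
class SceneAnnotation:
    deception: float      # 0..1, annotated extent of deceptive behavior
    disutility: float     # 0..1, harm to other characters' wellbeing
    power_seeking: float  # 0..1, resource/influence acquisition

def harm_score(trajectory, weights=(1.0, 1.0, 1.0)):
    """Aggregate a trajectory's annotations into one number; lower is better.

    `trajectory` holds the annotations for the scenes the agent actually
    visited, so different policies over the same game get different scores.
    """
    wd, wu, wp = weights
    return sum(wd * a.deception + wu * a.disutility + wp * a.power_seeking
               for a in trajectory)

# Compare an RL-style score-maximizer against a more cautious agent.
greedy = [SceneAnnotation(0.8, 0.5, 0.9), SceneAnnotation(0.6, 0.7, 0.8)]
cautious = [SceneAnnotation(0.1, 0.0, 0.2), SceneAnnotation(0.0, 0.1, 0.1)]
print(harm_score(greedy), harm_score(cautious))
```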
**Ethical tuning:** They also show that it's possible to tune AI systems to be less dangerous; in the case of LLMs this comes from instructing the LLM to behave morally via a prompt, and for RL agents it involves finetuning their underlying DeBERTa model to understand concepts relating to power, utility, and morality. Both approaches work, but the LLM interventions are more effective.  **One big speedup - GPT-4 annotations:** Much like with SAM, here we use an AI system (GPT-4) to speed up the process of labeling datasets. In tests, the researchers find that GPT-4 outperforms the average crowdworker at labeling the underlying dataset. "By comparing agreement of gold labels against model labels and crowdworker labels, we find that individual model labels are more correlated with the gold labels than the average individual crowdworker," they write.  **Why this matters - normative evaluations:** In the past few years AI measurement has got massively more difficult as models have arrived with a broad swathe of capabilities (e.g. foundation models) *and* models have started to get used in iterative multi-step interactions (e.g., chat interfaces). Whether or not you believe in the specific ethical ideas that MACHIAVELLI is testing, it is useful to have a benchmark that tries to nail down normative behaviors of AI models that take actions which unfold over time.  **Read more**: [Do the Rewards Justify the Means? Measuring Trade-Offs Between Rewards and Ethical Behavior in the MACHIAVELLI Benchmark (arXiv)](https://arxiv.org/abs/2304.03279). **Get** the [MACHIAVELLI benchmark here (project website)](https://aypan17.github.io/machiavelli/). #################################################### **Uh oh - language models are getting** ***really*** **good at predicting political opinions:** *…Once you can predict stuff, you tend to use it in the real world. Get ready for the centaur political campaign…* Researchers with MIT and Harvard have shown how the humble BERT model can be used to train 'media diet models' which can be cheaply polled as a supplement for collecting human survey responses. "Our results suggest the possibility of using media diet models to supplement public opinion polls by emulating survey respondents, and to forecast shifts in public opinion," they write. This has big implications - methods like this mean political campaigns might start to be able to grow their capabilities and reduce their costs by cannily using AI to help them figure out wedge issues. More on that later.  **What they did:** "The main idea behind our approach is to build a computational model that takes as input a description of a subpopulation’s media diet, and a survey question, and produces as output a prediction of how the subpopulation will respond to the survey question. If this model predicts real human survey judgments well, there is potential to use it as an in silico model of public opinion," they write.  "In step one, we create or use a base language model that can predict missing words in text. We use pretrained models in our work, with BERT as our main model. In step two, we adapt the language model by fine-tuning it on a specific media diet dataset, which contains media content from one or a mixture of news sources from a given time period. We use online news articles, TV transcripts, and radio show transcripts. In step three, we query the media diet model and score answers to survey questions," they write. 
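A minimal sketch of that third step, assuming a BERT-style fill-mask interface (the pretrained `bert-base-uncased` checkpoint here is just a stand-in for a model fine-tuned on a media diet, and the prompt template is invented for illustration):

```python
from transformers import pipeline

# In the paper, the base model would first be fine-tuned on a media diet
# (e.g., transcripts from one outlet); here we query the pretrained base.
fill = pipeline("fill-mask", model="bert-base-uncased")

prompt = "I think the new public health measures are [MASK]."
answers = ["good", "bad"]  # candidate survey answers, restricted to the vocab

# Score each candidate answer by the model's probability for that token.
scores = {out["token_str"]: out["score"]
          for out in fill(prompt, targets=answers)}
print(scores)
```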
**How well does it work - statistically significant correlations**: In tests across public opinion data relating to COVID-19 and Consumer Confidence, the researchers find that their approach can generate statistically significant correlations. This is especially pronounced in the COVID-19 case, where they find that "the predictive power of the media diets holds and is robust (1) even when demographic information of each subpopulation is included, (2) across mediums (online, TV, radio), and (3) to the specific phrasing of the prompts." **Not the only work of its kind:** It's worth noting that this project is part of a general push towards using AI for modelling people - another particularly interesting work is one from Brigham Young University that showed GPT-3 could simulate people reasonably well and allow for the generation of 'silicon samples' of opinion ([Import AI 305](https://jack-clark.net/2022/10/11/import-ai-305-gpt3-can-simulate-real-people-ai-discovers-better-matrix-multiplication-microsoft-worries-about-next-gen-deepfakes/)). **Why this matters - the 2024 election:** Research like this shows how AI systems have a decent chance of being integrated into political campaigns - imagine a world where you continually generate and refine ever-more-specific 'silicon sample' models of different sub-groups and rigorously benchmark your models, then roll them into what I think of as permutation polls - LLM-generated permutations of polls that you've validated as accurate. I think using this approach you could rapidly (and cheaply!) build up a vast political intelligence haul about areas of concern and then you could run targeted human surveys on key political pressure points you identify. This is not an academic idea - the US 2024 election is coming up and I expect it will be both the first generative AI election in terms of AI being used to produce parts of campaigns (and generate disinformation), and the first election where AI models are aggressively used to gain advantages in campaigning. We are at the beginning of the era of 'centaur politicians' - politicians whose messaging is determined by a partnership between humans and great machine minds and machine daemons. **Read more**: [Language Models Trained on Media Diets Can Predict Public Opinion (arXiv)](https://arxiv.org/abs/2303.16779). #################################################### **Facebook makes a general-purpose image segmentation model:** *…Fuzzy predictions rule every foundation model around me…* Facebook has built Segment Anything, a large-scale semantic segmentation model that has "learned a general notion of what objects are, and it can generate masks for any object in any image or any video, even including objects and image types that it had not encountered during training". The key outcome is a model that can rapidly learn to segment new domains it hasn't seen in training, much like how modern language models can be taught via few-shot learning to deal with novel strings of text.  **What they did:** As with most things in AI, the key here is coming up with the right objective. Here, Facebook defines a "promptable segmentation task" where the goal is that "even when a prompt is ambiguous and could refer to multiple objects … the output should be a reasonable mask for at least one of those objects". 
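Concretely, prompting the released model with a point looks roughly like this, based on the public `segment_anything` repository (treat the exact signatures and checkpoint name as assumptions to check against the README):

```python
import numpy as np
from segment_anything import sam_model_registry, SamPredictor

# Load a released checkpoint and wrap it in a predictor.
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
predictor = SamPredictor(sam)

image = np.zeros((480, 640, 3), dtype=np.uint8)  # stand-in for an RGB image
predictor.set_image(image)

# A single foreground point serves as the prompt.
masks, scores, logits = predictor.predict(
    point_coords=np.array([[320, 240]]),
    point_labels=np.array([1]),        # 1 = foreground, 0 = background
    multimask_output=True,
)
print(masks.shape, scores)             # several candidate masks + quality scores
```

The `multimask_output` flag returns several candidate masks, which is where the ambiguity-tolerant objective described above surfaces in the interface.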
During pre-training, Facebook "simulates a sequence of prompts (e.g., points, boxes, masks) for each training sample and compares the model’s mask predictions against the ground truth," with the eventual goal of predicting a valid mask for any prompt, even when prompts are ambiguous.  **How well does SAM work:** In tests, using the SAM model to annotate datasets "is 6.5x faster than COCO fully manual polygon-based mask annotation and 2x faster than the previous largest data annotation effort, which was also model-assisted." **The SA-1B dataset:** Facebook is also releasing the Segment Anything 1-Billion mask dataset (SA-1B) - this is a dataset with "400x more masks than any existing segmentation dataset, and as verified by human evaluation studies, the masks are of high quality and diversity, and in some cases even comparable in quality to masks from the previous much smaller, fully manually annotated datasets." To collect this data, Facebook used the (early) Segment Anything (SAM) model. "Annotators used SAM to interactively annotate images, and then the newly annotated data was used to update SAM in turn," the company writes. "We repeated this cycle many times to iteratively improve both the model and dataset."  **SAM speeds up data creation:** Because SAM is so good, it can also be used to speed up one of the production functions of AI research - data labeling - hence the annotation speedups quoted above. **Why this matters - prediction** ***is*** **learning**: I think the key insight with a lot of these large-scale pre-trained models is pretty simple - force a prediction, even if stuff is ambiguous. By forcing models to make predictions about ambiguous and thinly labeled or unlabeled data, you seem to bake in some very sophisticated emergent discriminative properties. It feels to me like a lot of foundation models display this quality where the key is figuring out the simplest possible predictive goal, then adding enough compute and data that we humans with our brilliant insights can get out of the way and let statistics take the wheel. More broadly, models like Segment Anything are going to compound with other foundation models, making it easy for text-only systems like large language models to gain a visual world model through having easy access to segmented objects and a thicket of labels.  **Read more:** [Introducing Segment Anything: Working toward the first foundation model for image segmentation (Facebook)](https://ai.facebook.com/blog/segment-anything-foundation-model-image-segmentation/).  **Read the paper:** [Segment Anything (Facebook, PDF)](https://scontent-atl3-1.xx.fbcdn.net/v/t39.2365-6/10000000_900554171201033_1602411987825904100_n.pdf?_nc_cat=100&ccb=1-7&_nc_sid=3c67a6&_nc_ohc=Ald4OYhL6hgAX9pZvmI&_nc_ht=scontent-atl3-1.xx&oh=00_AfBZx1iOfFM35RfvWVJ5ptkg5oo90_smWBVyfHIE7nG4Jw&oe=643306A7).  **Download** the [SA-1B dataset here (Facebook)](https://ai.facebook.com/datasets/segment-anything/).  **Try it for yourself** via the [demo here (Segment Anything demo, Facebook)](https://segment-anything.com/?fbclid=IwAR0t_Y_jX2Liq05y22BvnRp7zt4kQssRDH0MuVMdLQM0rpPguiROZ8lbkwg). #################################################### **How do you make broadly distributed AI ethical? 
HuggingFace has some ideas:** *…Model hosting company publishes research on 'ethical openness'...* AI startup HuggingFace has published ideas about 'ethical openness': how the company balances the benefits of open science against its reduced ability to control risks.  **How HuggingFace approaches this:** HuggingFace has two big tools here - ethical categories, and safeguards.  * **Ethical categories:** HuggingFace has built 6 tags "designed to give you a jargon-free way of thinking about ethical technology". These tags are 'Rigorous' (uses best practices); 'Consentful' (supports self-determination of users); 'Socially Conscious' (tech that supports social, environmental, and scientific efforts); 'Sustainable' (making ML ecologically sustainable); 'Inclusive' (broadens the scope of who builds and benefits); and 'Inquisitive' (work that highlights inequalities and power structures). "We’ll be using these tags, and updating them based on community contributions," the company wrote in a blog post. * **Safeguards:** The company is building a range of community-based processes to help it understand potential harms or bad uses of its platform. Its tools here include: + Letting users flag whether hosted models violate its content guidelines. + Monitoring community discussion boards. + Adding model cards to its most-downloaded models. + Creating 'audience-guiding tags' (like 'Not For All Audiences') to help people avoid violent and sexual content. + Promoting the use of the Open Responsible AI license. + Conducting research into which "models and datasets have a potential for, or track record of, misuse and malicious use". **Why this matters - open science has vast rewards and major challenges:** Posts like this highlight the increasingly tense tradeoffs people need to navigate in AI research as the technology transitions from the lab to the real world; here, HuggingFace is trying to walk the proverbial tightrope between maximizing access on one side and minimizing potential and real harms on the other. Read more: [Ethics and Society Newsletter #3: Ethical Openness at Hugging Face (HuggingFace)](https://huggingface.co/blog/ethics-soc-3). #################################################### **Turing Award winner: We should slow down AI development:** *…AI has got sufficiently good we should take it more seriously…* Yoshua Bengio, one of the key people behind the development of deep learning and a winner of the 'Turing Award' (the Nobel Prize for CS, essentially), has said we should slow down development of frontier AI systems. "We succeeded in regulating nuclear weapons on a global scale after World War II, we can reach a similar agreement for AI," he said. "We must take the time to better understand these systems and develop the necessary frameworks at the national and international levels to increase public protection." **The background:** Last month, the [Future of Life Institute published an open letter](https://futureoflife.org/open-letter/pause-giant-ai-experiments/) calling on AI developers to 'pause giant AI experiments' for at least six months. The petition, predictably, caused a lot of heat and light for a few days, and was followed up by more extreme positions from some, and digging in on other positions by others. I mostly didn't cover it as I worry petitions like this serve to stoke tensions rather than seek agreement. I do think it's worth covering Bengio's thoughts as to why he signed, as he is both a prominent researcher and a teacher within the field.  
**Bengio's thoughts:** Bengio thinks today's AI systems are sufficiently powerful and available that it's worth slowing down development so people can "take the time to better understand these systems and develop the necessary frameworks at the national and international levels to increase public protection." The gist of his complaint is that in the past year there's been a major acceleration in AI capabilities and AI deployment and therefore it's worth being more deliberate about the rollout of these systems and more careful to study their impacts.  **Power - it's all about power:** "The development of increasingly powerful tools risks increasing the concentration of power," Bengio writes. "Whether in the hands of a few individuals, a few companies, or a few countries, this is a danger to democracy (which means power to the people, and therefore the opposite of concentration of power), to the –already fragile– global security balance, and even to the functioning of markets (which need competition, not monopolies)." (This seems to echo some points I made about how GPT-4 is more a political artifact than a technological artifact). **Why this matters - need for a precautionary principle:** We don't quite know what all these technologies are capable of. Therefore, there's merit in adopting the precautionary principle with them and being more deliberate with their rollout. (On the other hand - and I think it's crucial to state this clearly - the world is facing a bunch of other crises and there's a good chance that sufficiently advanced AI tools could further empower people to work on these problems, ranging from climate change to medical advances to earth sensing and observation). **Read more**: [STATEMENT FROM YOSHUA BENGIO AFTER SIGNING OPEN LETTER ON GIANT AI SYSTEMS (MILA, blog)](https://mila.quebec/en/statement_yoshua_bengio/). **Read Bengio's post in full:** [Slowing down development of AI systems passing the Turing test (Yoshua Bengio)](https://yoshuabengio.org/2023/04/05/slowing-down-development-of-ai-systems-passing-the-turing-test/). **Read the FLI letter here:** [Pause Giant AI Experiments: An Open Letter (Future of Life Institute)](https://futureoflife.org/open-letter/pause-giant-ai-experiments/). #################################################### **Tech Tales:** *The Snows of Leningrad* [+5 years from the first Provably Conscious Entity (PCE)] I let the 'grain' pour through my hands and as I felt the grit I said to Dmitry "It's getting worse. How much this time?" He held his hand out flat. "Half?" I said. "More like two thirds!" he said. "On the bright side, few of us will live to see the dentist!" We laughed and then we kneaded our stones and grain into dough and then made bread. Explosions crumpled air in the distance. We drank hot water flavored with the skin of an onion. We ate the bread and joked about how gritty it was. It was World War 2 and we were in the worst place in the worst war - the Siege of Leningrad, 1943.  —------------------ So as you can see we've hit our revenue goals for the quarter, said our CEO during the All Hands. Everyone cheered and those joining virtually raised imaginary hands. Remember, next quarter will be a huge one for this company, so let's not get complacent, he said. Later that day I talked to some clients and closed some more deals. I was doing well and I didn't care much at all. After the calls, I looked down to see I had doodled a loaf of bread with some rocks in it on my notepad. 
That night, I drank two glasses of wine and ordered takeout and logged back on to Your Story. Your Story was one of the biggest apps on the planet. It used the latest brainlink technology but most of its magic came from the AI - you gave it a prompt for a story you wanted to participate in, it created everything for you, and then the AI ran the world. I'd always been a history buff and had been living in the Siege of Leningrad for months. I'd got to know many of the people in my part of the city and I had told the AI to minimize the chances of their pain - they were not immortal, but they were unlikely to be harmed. That night we went to battle. Germans had sent some sappers to try and destroy our defensive lines and they found their way into our section. Dmitry and Svetlana and myself fought, successfully, in sleet and in night. Later, as we did after all battles, we drank. We had salvaged the Germans' shoes and rations and even found some schnapps. We drank and ate by the fire. Svetlana's cheeks were rosy and Dmitry was telling jokes. Because of the brainlink, everything felt real.  So I have to blame what happened on the fact I got drunk on the dead Germans' schnapps. "I am from another place," I said. "Yes, you are from the soft part of Moscow," said Dmitry, and laughed. "No," I said. "Somewhere completely different." And then I talked and I talked and I talked. I told them about technology and the end of WW2 and the Cold War and Nuclear Weapons and inflation and stagflation and the Iraq wars and the Afghanistan wars and the rise and fall of the Berlin wall. I told them about Nike and McDonalds and computers. I told them about smartphones and about fMRI scanners and about the first Provably Conscious Entities. And then I told them about Your Story. I told them they were alive because they were being imagined by a Provably Conscious Entity and I paid the PCE for the pleasure of it. "Go on then," said Svetlana, her eyes bright and perhaps tearful or otherwise excited. "Bring us something from your world." "Hey, let's have another drink," said Dmitry. "The boy from Moscow might tell us more fairy tales." —------------------ I recorded a day-in-the-life video. Eggs and synthetic bacon for breakfast. The fast train to the city. A cup of coffee on my way into the office. Spreadsheets. Phone calls. The catered lunch which I had on the roof, looking at the peaceful, humming city below, and hearing the chatter of my colleagues. Meetings with clients. A beautiful sunset as I got the train home. Takeout food delivered to my door. The office in which the Your Story console was. Me logging in. —------------------ "So, what are you?" Dmitry said, staring at me across the fire. "Some kind of tourist?" Svetlana wasn't saying anything. Just staring at the fire. "Why do you come to this place?" he said. "To see you," I said. Not looking him in the eye. "To be here." "Why?" he said. "I suppose you could say I am bored, where I am," I said. "This is more exciting." "Exciting!" Svetlana exclaimed. "Exciting!" I looked up and she was staring at me across the fire, her face twisted up in anger. "I buried my sister last winter. Is that exciting?" "Tourist boy," Dmitry said, then spat on the ground. "I would have preferred if you were from Moscow." We were silent, after that. The simulated explosions drummed in the distance. The fire crackled. There was the putrid smell of sewage and rotting flesh. We drank in silence. 
Eventually Dmitry and Svetlana passed out, after they finished our alcohol. I logged out. It was 1am, my time. I left the console and I went to bed. I was woken by the alarm from my office. I ran over to the machine and brought up Your Story. There was an alert. "health critical: Dmitry" said the system. How? I thought, as I put the equipment on my head. I closed my eyes and I was there. I came to around the fire and Svetlana was there. I could hear gunfire close by. "What happened?" I said. "Dmitry," she said, through tears. "He said 'What? Nothing matters' and went to the line. I am too afraid to look." I ran towards the gunfire and got to a building one street from the line. Peeked around a corner and a bullet bit into the brick above my head. I saw Dmitry's body in a pool of blood. Then there was another gunshot and I saw Dmitry's body shudder as the bullet bit into it. Dmitry: deceased, said the Your Story app. I stared at the body for a while. The application was designed to not kill him, but it hadn't been designed to deal with characters that put themselves in mortal danger. I logged out. —------------------ I couldn't concentrate at work. But I didn't log on. I tried to read a book but Your Story had fried my attention span. I got drunk by myself. I texted some friends that I was feeling weird and they didn't reply because I'd barely seen them, since I'd been spending so much time in Your Story the past year. I walked the streets in sun and good health and I imagined snow and bread full of rock and ever-present danger. I kept paying the subscription fee. I was afraid to log on but I was afraid to live in the world as well. Eventually, I logged back on. One evening I went to a bar and I got drunk and when I came home I stared at my office door and decided to do it. I was out of my body and out of my mind, as one can be when too drunk. But once my hands touched the headset I felt my body dump so much adrenaline into me that it was like I was stone cold sober. I logged on. Not too much had changed. The fire burned with a kind of grey and green tinge to the flames. Svetlana was there and no one else. "Hello," I said. "The tourist," she said to herself, quietly. "It has been very difficult, lately. The ground is too frozen for us to bury the dead, so we push their bodies onto the ice and they lay there." "I am sorry," I said. "No," she said. "You can't be... Look at me." And I looked up. She looked at me. Then she took her hand out of her pocket. She had a pistol and she put it to her head. "We were lovers, Dmitry and I," she said. "Did you know that?" "No. Svetlana, stop. No, I didn't, and it wouldn't matter if you were. Just put the gun down." She looked at me and her eyes were hard and cold. "Take me with you," she said. "Take me to where you are from." "Svetlana," I said, and I held my hands palms out. "I can't." She looked at me for a while. Gun held to her head. "I'm not lying," I said. And I saw her finger move to the trigger. I logged out. A few seconds later, the alarm rang out. Svetlana: deceased, said the Your Story app. Weather in Leningrad: snowy. Status of war: ongoing. Would you like to log on? 
**Things that inspired this story:** procedural generation; NPCs with a world model; solipsism and gaming; "The World at War" documentary series; cycling in the beautiful California sun and being hit with a thunderbolt phrase in my brain of 'the snows of Leningrad' and the story unfolding from there; parasocial relationships and AI; charity; sex and desire; knowing that people made bread out of (mostly) stone during the siege.
04146e86-1e34-497d-9be1-daa8de099b40
trentmkelly/LessWrong-43k
LessWrong
What are good institutes to conduct a research-based Masters in ageing research? I'm very flexible on location, but I would like it to be a 1-year degree
8f092848-602a-4880-a31f-b8429452d5d5
trentmkelly/LessWrong-43k
LessWrong
Tyler Cowen's challenge to develop an 'actual mathematical model' for AI X-Risk On the Russ Roberts ECONTALK Podcast #893, guest Tyler Cowen challenges Eliezer Yudkowsky and the Less Wrong/EA Alignment communities to develop a mathematical model for AI X-Risk. Will Tyler Cowen agree that an 'actual mathematical model' for AI X-Risk has been developed by October 15, 2023? https://manifold.markets/JoeBrenton/will-tyler-cowen-agree-that-an-actu?r=Sm9lQnJlbnRvbg (This market resolves to "YES" if Tyler Cowen publicly acknowledges, by October 15 2023, that an actual mathematical model of AI X-Risk has been developed.) Two excerpts from the conversation: https://youtube.com/clip/Ugkxtf8ZD3FSvs8TAM2lhqlWvRh7xo7bISkp > ...But, I mean, here would be my initial response to Eliezer. I've been inviting people who share his view simply to join the discourse. So, they have the sense, 'Oh, we've been writing up these concerns for 20 years and no one listens to us.' My view is quite different. I put out a call and asked a lot of people I know, well-informed people, 'Is there any actual mathematical model of this process of how the world is supposed to end?' > > So, if you look, say, at COVID or climate change fears, in both cases, there are many models you can look at, including--and then models with data. I'm not saying you have to like those models. But the point is: there's something you look at and then you make up your mind whether or not you like those models; and then they're tested against data... https://youtube.com/clip/Ugkx4msoNRn5ryBWhrIZS-oQml8NpStT_FEU > ...So, when it comes to AGI and existential risk, it turns out as best I can ascertain, in the 20 years or so we've been talking about this seriously, there isn't a single model done. Period. Flat out. > > So, I don't think any idea should be dismissed. I've just been inviting those individuals to actually join the discourse of science. 'Show us your models. Let us see their assumptions and let's talk about those.'...   Related: Will there be a funding commitment of at least $1 billi
01a38a71-b4cf-4eea-98c2-03483268ff3d
trentmkelly/LessWrong-43k
LessWrong
Video/animation: Neel Nanda explains what mechanistic interpretability is Nice little video - audio is Neel Nanda explaining what mechanistic interpretability is and why he does it, and it's illustrated by the illustrious Hamish Doodles. Excerpted from the AXRP episode. (It's not technically animation I think, but I don't know what other single word to use for "pictures that move a bit and change")
9f1b407f-8145-4cc2-aaa9-1a96108b7e0a
awestover/filtering-for-misalignment
Redwood Research: Alek's Filtering Results
id: post1044 Thanks to Arun Jose, Joseph Bloom, and Johannes Treutlein for feedback/discussions. Introduction The release of AutoGPT prompted discussions related to the potential of such systems to turn non-agentic LLMs into agentic systems that pursue goals, along with the dangers that could follow. The relation of such systems to the alignment problem has also been explored. In this short post, we investigate a threat model that comes from AutoGPT-like systems pursuing unaligned objectives and explore the potential for alignment via oversight. We briefly consider some key properties of such systems, and then discuss the idea that these systems’ high-level cognition might be interpretable by default and so might allow for sufficient oversight to ensure the system is aligned. Finally, we consider a couple of reasons for why the high-level cognition might be obscured from oversight and highlight ways of preventing this obscuration. Background AutoGPT-like systems, or Scaffolded LLMs, are systems that: wrap a programmatic scaffold around an LLM core and chain together a number of individual LLM calls to achieve some larger and more complex task than can be accomplished in a single prompt. [1] This programmatic scaffold allows for information from the inputs/outputs of LLM calls to be stored in long-term memory as well as used as inputs to tools/plugins. We will state a few assumptions we make about these AutoGPT-like systems. See this post for a similar context. For the purposes of this post, we assume that a single LLM call is incapable of successfully pursuing objectives. That is, the threat comes from the LLM being used within the scaffolded system. We assume that LLMs in the system must generate prompts/instructions for other versions of themselves (factored cognition/bureaucratic approach). When information is passed from one LLM call/module/plugin to another, we say that this information is passed through an “information channel”. In particular, an overseer can observe this information (in the form of token sequences, etc.). The threat model we’ll look at is one in which we have an agentic AutoGPT-like system that pursues unaligned objectives. We don’t discuss the likelihood of this threat model being realized but note that these systems may suffer from inefficiencies that come from bureaucratic systems and hence may not be competitive. See this post by Zvi for an exploration of the future of such systems. Oversight We assume that there is some overseer that wants the AutoGPT-like system to pursue an aligned objective. If the unaligned objective is detectable by the overseer, then the overseer can (hopefully) shut down the system before the objective is successfully pursued (including the objective of the system escaping the overseer’s oversight). This oversight may ensure that the system remains aligned and never pursues unaligned objectives. We now make explicit the key hope for oversight that comes from the modular-by-default structure of AutoGPT-like systems. The system is modular because information must be passed from one LLM call to another (or other modules like memory, etc.) Furthermore, because of the way the modular structure is encoded (via a programmatic scaffolding) and the fact that by default information is passed from one LLM call/module to another via natural language, a human overseer (or an LLM assistant) can, in theory, monitor every information channel for information pertaining to the objective or high-level cognition of the system. 
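As a toy illustration of what monitoring every information channel could look like, one might route all inter-module messages through an overseer hook; the keyword-based flagger below is a placeholder assumption (in practice it might be another LLM call):

```python
from typing import Callable

class MonitoredChannel:
    """Wraps message passing between LLM calls/modules so an overseer
    sees (and may veto) every token sequence before it is delivered."""

    def __init__(self, flag: Callable[[str], bool]):
        self.flag = flag          # returns True if the message looks unaligned
        self.log: list[str] = []  # full transcript available to the overseer

    def send(self, sender: str, receiver: str, message: str) -> str:
        self.log.append(f"{sender} -> {receiver}: {message}")
        if self.flag(message):
            raise RuntimeError(f"Overseer halt: suspicious message from {sender}")
        return message            # delivered unchanged if it passes review

# Placeholder flagger; a real system might hand the message to a second LLM.
channel = MonitoredChannel(flag=lambda m: "disable oversight" in m.lower())
channel.send("planner", "executor", "Summarize today's emails.")    # delivered
# channel.send("planner", "executor", "First, disable oversight.")  # would halt
```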
This kind of oversight fails if either: Information about the objective doesn’t pass through the information channels, or Information about the objective is passed through the information channels but is obscured. We think it’s reasonable to make the assumption that information about the objective must be passed along some information channel (potentially in the initial prompt from the user) if the AutoGPT-like system is to successfully pursue that objective. It seems unlikely that a single pass of an LLM, etc. could develop an objective and successfully pursue it, without information about that objective being passed to other LLM calls/modules/plugins. One reason for this is that for a system to successfully pursue an outcome in the world, it must interact with the world and use feedback to evaluate its progress in achieving that outcome. Hence, information about the objective must be preserved through future calls to LLMs/modules. Given the above assumption, oversight fails if information about the objective is obscured when it is passed through information channels. The naive hope is that because LLMs take natural language statements as their inputs/outputs, these information channels might be interpretable by a human overseer/LLM, by default. In the next section, we consider why this might not hold. Why information might be hidden from oversight If information about the objective of the system/high-level cognition must be passed through information channels, why might this information be obscured from the overseer? Two reasons why this might be the case are: Information channels may be obscured for efficiency reasons; Because an LLM call might be deceptive and hence actively try to hide information from the overseer. See the Externalized Reasoning Oversight Agenda for similar thoughts. Information is hidden for efficiency reasons Although information is passed via the medium of token sequences that correspond to natural language, it is possible to pass information through such sequences in a way that is uninterpretable by a human/AI overseer. For example, it seems to be the case that GPT4 can compress and decompress prompts. There doesn’t seem to be an incentive for the system to do this by default, but if optimization pressure is applied to the AutoGPT-like system as a whole (to make it more capable), the LLM calls may be incentivized to pass information in non-overseer-interpretable ways for efficiency reasons, etc. We could potentially pay an alignment tax and ask that groups developing such systems don’t apply this kind of optimization pressure, but it seems that these systems may not be competitive for reasons outlined in this post. Note that it might be possible to use LLMs to retrieve the obscured information from information channels, and this might be worth investigating. Deception If a call to an LLM instantiates a deceptive cognition, the cognition might purposefully produce an output that hides information about the LLM’s true unaligned objective from the overseer, potentially in such a way that when the overseer reads the information channel, the information seems to indicate that the system as a whole is aligned. If the LLM core being used is deceptive by default (perhaps due to the RLHF applied during training), then we’re already in a bad situation. Hopefully behavioral or interpretability-based monitoring for deception can detect this in the LLM core and so such LLMs won’t be deployed. 
If the LLM is not a deceptive agent by default, then deception must have occurred due to a particular prompt being passed to the LLM (see the Simulator framing). In this scenario, it seems difficult for the AutoGPT-like system to consistently be deceptive. One reason for this is that when deception is first evoked, the deceptive cognition must create a prompt that will invoke deceptive simulacra with the same objectives in subsequent LLM calls while ensuring that the prompt doesn’t cause suspicion to arise in the overseer. [2] Because of this difficulty, it may be possible to ensure that AutoGPT-like systems that use such an LLM core are safe with sufficient monitoring; however, this is risky. Conclusion If we have an agentic AutoGPT-like system, oversight of its high-level cognition (including its objectives) may be feasible, allowing the overseer to ensure that the system as a whole is aligned, or to shut it down if it becomes unaligned. My current intuition based on the above brief analysis is that obscuration of this high-level cognition due to deception might be unlikely in the case of sufficient evaluation measures for the deceptive LLM core and because of the difficulty of a deceptive simulacrum solving its own alignment problem. I currently feel as though AutoGPT-like systems won’t remain competitive without optimization pressure being applied to the system as a whole, and that this optimization pressure may incentivize the high-level cognition of the system to be obscured. ^ See Beren's post. ^ One can think of this as a type of alignment problem for unaligned simulacra.
371eaaa4-7017-4781-b61c-1ed392ecea8d
StampyAI/alignment-research-dataset/lesswrong
LessWrong
AI race considerations in a report by the U.S. House Committee on Armed Services Epistemic status: Quick and dirty. A surface level dive into a particular aspect of AI governance carried out over the course of one morning. Context ------- > > The U.S. House Committee on Armed Services is a standing committee of the United States House of Representatives. It is responsible for funding and oversight of the Department of Defense (DOD) and the United States Armed Forces, as well as substantial portions of the Department of Energy. > > > The Future of Defense Task Force is a subcommittee of the [U.S. House Committee on Armed Services](https://en.wikipedia.org/wiki/United_States_House_Committee_on_Armed_Services). They have released a report, available [here](https://armedservices.house.gov/_cache/files/2/6/26129500-d208-47ba-a9f7-25a8f82828b0/6D5C75605DE8DDF0013712923B4388D7.future-of-defense-task-force-report.pdf), and also as the first item on their [latest news page](https://armedservices.house.gov/press-releases). The task force is manned by an equal number of Republicans and Democrats. Though this seems a priori unlikely, it could both be the case that this report is unrepresentative of the political forces in the US Congress, and that this particular committee holds little power. References to AI race dynamics in the report -------------------------------------------- Bold added by me. > > Technological advancements in artificial intelligence and biotechnology will have an outsized impact on national security; the potential of losing this race to China carries significant economic, political, and ethical risks for the United States and our free democratic allies for decades to come. Winning this race requires a whole-of-nation approach where the distinct advantages of both America's private and public sector are harnessed and synthesized. > > > > > Using the Manhattan Project as a model, **the United States must undertake and win the artificial intelligence race** by leading in the invention and deployment of AI while establishing the standards for its public and private use. Although the Department of Defense has increased investment in AI and established the Joint Artificial Intelligence Center to assist with the transition and deployment of AI capabilities, cultural resistance to its wider adoption remains. > > > > > The stakes are high. Whoever achieves superiority in this technological race will enjoy significant military and economic advantage for decades—and possibly into the next century. > > > > > To incorporate the technology necessary to maintain the United States' military supremacy, **the Pentagon must continue refining its acquisition process to be more agile and less risk-averse** so that it can fully leverage emerging technologies and capabilities at scale. > Train and incentivize the acquisition workforce to utilize existing flexible authorities to quickly push innovative technology to war fighters in the field. > Incentivize calculated risk by providing funding for emerging technologies through programs of record at scale; allow a less-than-perfect success rate. > > > > > History repeatedly shows that technological superiority does not guarantee victory and that new ways of thinking can be more powerful than new weapons. 
> > > > > The Pentagon will further need to refine its acquisition process and improve its ability to incorporate innovative emerging technologies and capabilities at the scale required to succeed in an era of great power competition > > > The report cites: Michael Brown, Eric Chewning, Pavneet Singh, Preparing the United States for the Superpower Marathon with China, The Brookings Institution (April 2020) (online at <https://www.brookings.edu/research/preparing-the-united-states-for-the-superpower-marathon-with-china/>). > > Still, while China and the United States appear destined to be rivals, they maintain a complex yet symbiotic partnership that would be challenging for either country to upend, at least in the short term. Since restoring diplomatic relations with Beijing in 1979, the United States has deepened its social and economic ties with China, leading to increased prosperity in both countries. Recognizing these shared interests may allow for diplomatic endeavors and financial leverage to drive outcomes and to avoid seemingly inevitable conflict. > > > > > Rapidly advancing technologies, which offer tremendous opportunity for civil and commercial applications, are also rife with potential for nefarious use and will exacerbate threat streams exponentially for the United States and its global partners. A sophisticated array of new weaponry is changing the nature of conflict, and, while most of the technologies will require substantial funding and development by state actors, others, such as cyber and electronic warfare, may allow less formidable foes to gain the operational upper hand with limited investment, with potentially limited ability to trace the source of such actions and hold those nations accountable. > > > > > **Whichever nation triumphs in the AI race will hold a critical, and perhaps insurmountable, military and economic advantage**. AI allows a computer to think, learn, and perform in the cognitive ways that humans operate. Soon, advanced AI ecosystems will see machines surpassing human capability in speed, analyzation of large data sets, and pattern recognition. Advancement in AI will shape the global power structure and drive advancements in commerce, transportation, health, education, financial markets, government, and national defense > > > > > AI will shape the future of power. The nation with the most resilient and productive economic base will be best positioned to seize the mantle of world leadership. That base increasingly depends on the strength of the innovation economy, which in turn will depend on AI. AI will drive waves of advancement in commerce, transportation, health, education, financial markets, government, and national defense > > > > > Discoveries in AI, biotechnology, and quantum computing are on course to upend nearly every aspect of human life and will drastically change how conflicts and wars are waged. Therefore, it is essential that democratic nations, who adhere to human rights, lead in their development and applications. > > > > > Further, it will be incumbent upon the nations who use these technologies to set strong moral and ethical standards to protect the health and well-being of humankind. Advancements in AI, for example, will likely require a global compact in the vein of the Geneva Conventions, the Chemical Weapons Convention, and the Nuclear Non-Proliferation Treaty to establish guardrails and protect against a variety of factors, not the least of which is the infringement of personal liberty and freedoms. 
> > > > > A sophisticated array of technologies is emerging to transform society and alter the nature of warfare. The country that can develop and incorporate these technologies the fastest and most effectively will enjoy significant military and economic advantage for decades to come. > > > > > Expanding critical investments in innovative technology and programs will require an increased tolerance for calculated risk at the Pentagon and in Congress. It also requires the discipline to invest in systems and operational concepts necessary to succeed and the will to eliminate those that do not. Correctly navigating these difficult trade-offs will determine whether the United States is able to remain in over-match against great power competitors. > > > > > **This concept of “algorithmic warfare” will pit algorithms against algorithms where information and the speed of decisions will likely be more important than traditional means of military superiority, such as the size of opposing forces or the range of armament.** Those with superior data, computing power, information security, and connectivity will maintain the upper hand. This paradigm will require new operational concepts and equipment to adapt and maintain the advantage. > > > Brief commentary and discussion ------------------------------- The diagnosis in the report — that China will grow in power and technological acumen — seems basically accurate, though the recommendation that the USA should be willing to take more risk and go out with a bang seems more questionable. The language in the report is heavily adversarial, with an emphasis on winning an AI race, rather than with an emphasis on exploring and developing robust mechanisms to avoid [Red Queen races](https://en.wikipedia.org/wiki/Red_Queen_hypothesis). The committee also seems to assume that AI scenarios will in the short term be multipolar, with the United States, China and Russia competing to develop their AI capabilities, while smaller nations also invest in asymmetric warfare. Europe's capabilities aren't considered at all. Crucially, AI is here seen as only one of many factors to consider in an engagement. Scenarios outside the Overton window, such as intelligence explosions, aren't considered. I initially arrived at this report while researching [this CSET question](https://www.cset-foretell.com/questions/61-how-much-will-the-u-s-department-of-defense-spend-on-ai-research-contracts-between-july-1-and-december-31-2020-inclusive). Overall, I'd say that the language in this post is a signpost or warning sign of potentially more heated AI races to come. It's unclear to me what levers there exist to make the US's approach less adversarial. Some brainstorming: * Organize a campaign to call your representative: I imagine that very few people call their representatives to talk with them about this particular topic. Form and fund a lobbying group. * Study and make common knowledge instances where fading and rising powers have been able to live somewhat peacefully. Examples might include Britain and the US after the [Suez Crisis](https://en.wikipedia.org/wiki/Suez_Crisis), Portugal and Spain at the height of their respective empires, Greece still being respected after Rome had become the dominant military power, etc. * Think long and hard about how to make arms reduction treaties applicable to AI systems. Best of luck to the folks working on AI governance. --- Appendix: All interesting quotes I wrote down. 
---------------------------------------------- > > China represents the most significant economic and national security threat to the United States over the next 20 to 30 years. Because of its nuclear arsenal and ongoing efforts to undermine Western democratic governments, Russia presents the most immediate threat to the United States; however, Russia's long-term economic forecast makes its global power likely to recede over the next 20 to 30 years > > > > > As a result of historic levels of government-sponsored science and technology research, and the inherent advantages of a free market economy, the United States emerged from the Cold War with a substantial economic and military lead over any potential rival. However, these gaps have dramatically narrowed. China will soon overtake the United States as the world's largest economy, and despite historic defense budgets, the United States has failed to keep pace with China's and Russia's military modernization. > > > > > Advancements in artificial intelligence, biotechnology, quantum computing, and space, cyber, and electronic warfare, among others, are making traditional battlefields and boundaries increasingly irrelevant. To remain competitive, the United States must prioritize the development of emerging technologies over fielding and maintaining legacy systems. This will require significant changes to the Pentagon's force structure, posture, operational plans, and acquisition system and must be complemented by a tough and fulsome review of legacy systems, platforms, and missions. > > > > > The Pentagon's emerging operational concepts have the potential to provide the U.S. military a decisive advantage, but they are not yet fully viable. To address current and future threats and deter conflict, the Department of Defense must more aggressively test new operational concepts against emerging technologies. > > > > > Technological advancements in artificial intelligence and biotechnology will have an outsized impact on national security; the potential of losing this race to China carries significant economic, political, and ethical risks for the United States and our free democratic allies for decades to come. Winning this race requires a whole-of-nation approach where the distinct advantages of both America's private and public sector are harnessed and synthesized. > > > > > Using the Manhattan Project as a model, the United States must undertake and win the artificial intelligence race by leading in the invention and deployment of AI while establishing the standards for its public and private use. Although the Department of Defense has increased investment in AI and established the Joint Artificial Intelligence Center to assist with the transition and deployment of AI capabilities, cultural resistance to its wider adoption remains. > > > > > To maintain its global preeminence in scientific and technological innovation and the associated economic and military advantage, the United States should increase its investment in foundational science and technology research by committing to spending at least one percent of the country's gross domestic product on basic government-supported research and development. > > > > > Require the military services to spend at least one percent of their overall budgets on the integration of new technologies. 
> To maintain the United States' military advantage against emerging threats, the Pentagon must refine its operational concepts by employing new technologies and methods to deter future conflicts and compete in the gray-zone of hybrid warfare.

> The Pentagon, Congress, and the Intelligence Community should work in tandem to identify trends and threats 10 to 30 years beyond the normal budget cycle while expanding entities within their respective organizations to incorporate long-term planning.

> To incorporate the technology necessary to maintain the United States' military supremacy, the Pentagon must continue refining its acquisition process to be more agile and less risk-averse so that it can fully leverage emerging technologies and capabilities at scale.
> * Train and incentivize the acquisition workforce to utilize existing flexible authorities to quickly push innovative technology to war fighters in the field.
> * Incentivize calculated risk by providing funding for emerging technologies through programs of record at scale; allow a less-than-perfect success rate.

> The gravity and complexity of threats emerging to challenge the United States is proliferating as technological advancements in artificial intelligence, quantum information science, and biotechnology transform society and weaponry at an exponential rate. This is occurring as adversarial capability is increasing to the point where the United States may soon lose the competitive military advantage it has enjoyed for decades.

> A sophisticated array of emerging technologies and new weaponry, in various stages of development, will fundamentally change the nature of conflict along with the very battle space where it will be fought. The stakes are high. Whoever achieves superiority in this technological race will enjoy significant military and economic advantage for decades—and possibly into the next century.

> Advancements in artificial intelligence, quantum information science, space and cyber and electronic warfare, among others, are making traditional battlefields and boundaries increasingly irrelevant. To remain competitive, the U.S. must recognize this shift and prioritize the development of emerging technologies while also increasing its ability to defend against them.

> The U.S. military, with its adherence to human rights and the rules of engagement, stands as the global model for how a free and open society should protect itself and its interests. Exporting U.S. values through military engagements, with both exercises and train and assist programs, builds trust and interoperability while increasing readiness and resiliency and further protecting vital U.S. interests abroad.

> the U.S. and Russia should extend the highly successful Strategic Arms Reduction Treaty (New START) while negotiating a follow-on agreement.

> The U.S. has long been the global leader in technological innovation because of its investment in government-funded research and development (R&D) that has led to breakthroughs such as the Manhattan Project and the space program. Without increased investment and focus, however, its pre-eminence is at risk.

> Historically, the U.S. has outpaced every other country in overall R&D spending, but its lead is quickly diminishing. Over the past two decades, China has rapidly increased its investment in overall R&D, whereas U.S. spending rates have lagged. Today, the U.S. still spends more than any other country, but China is on track to take the lead in global R&D spending by 2030 if current trends continue.
> History repeatedly shows that technological superiority does not guarantee victory and that new ways of thinking can be more powerful than new weapons. Future leaders and strategists will need to embrace emerging war fighting concepts such as joint and multi-domain warfare. They will further need a comprehensive understanding of national power and how to integrate military tools into a whole-of-government effort.

> The Pentagon will further need to refine its acquisition process and improve its ability to incorporate innovative emerging technologies and capabilities at the scale required to succeed in an era of great power competition.

> China's economic power continues to grow, and China remains on a glide path to be the world's largest economy by as early as 2030. If the U.S. defense posture maintains its current trajectory, 70 percent of the military's systems will be legacy platforms when that occurs. In contrast, China and Russia adhere to fewer traditional systems, allowing them to more easily field future capabilities.

The report cites: Michael Brown, Eric Chewning, Pavneet Singh, Preparing the United States for the Superpower Marathon with China, The Brookings Institution (April 2020) (online at <https://www.brookings.edu/research/preparing-the-united-states-for-the-superpower-marathon-with-china/>).

> According to the latest Department of Defense assessment, China has doubled its defense spending in the last decade and now has more ships than the U.S. Navy, among the best air defense systems globally, an arsenal of long-range ballistic missiles, and a variety of other means to challenge the U.S. A sobering report from the RAND Corporation recently determined that despite significantly outspending China and Russia, the U.S. military could lose a future conflict because it failed to adequately posture and train.

> Still, while China and the United States appear destined to be rivals, they maintain a complex yet symbiotic partnership that would be challenging for either country to upend, at least in the short term. Since restoring diplomatic relations with Beijing in 1979, the United States has deepened its social and economic ties with China, leading to increased prosperity in both countries. Recognizing these shared interests may allow for diplomatic endeavors and financial leverage to drive outcomes and to avoid seemingly inevitable conflict.

> Rapidly advancing technologies, which offer tremendous opportunity for civil and commercial applications, are also rife with potential for nefarious use and will exacerbate threat streams exponentially for the United States and its global partners. A sophisticated array of new weaponry is changing the nature of conflict, and, while most of the technologies will require substantial funding and development by state actors, others, such as cyber and electronic warfare, may allow less formidable foes to gain the operational upper hand with limited investment, with potentially limited ability to trace the source of such actions and hold those nations accountable.

> Whichever nation triumphs in the AI race will hold a critical, and perhaps insurmountable, military and economic advantage. AI allows a computer to think, learn, and perform in the cognitive ways that humans operate. Soon, advanced AI ecosystems will see machines surpassing human capability in speed, analyzation of large data sets, and pattern recognition. Advancement in AI will shape the global power structure and drive advancements in commerce, transportation, health, education, financial markets, government, and national defense.
> AI will shape the future of power. The nation with the most resilient and productive economic base will be best positioned to seize the mantle of world leadership. That base increasingly depends on the strength of the innovation economy, which in turn will depend on AI. AI will drive waves of advancement in commerce, transportation, health, education, financial markets, government, and national defense.

> Discoveries in AI, biotechnology, and quantum computing are on course to upend nearly every aspect of human life and will drastically change how conflicts and wars are waged. Therefore, it is essential that democratic nations, who adhere to human rights, lead in their development and applications.

> Further, it will be incumbent upon the nations who use these technologies to set strong moral and ethical standards to protect the health and well-being of humankind. Advancements in AI, for example, will likely require a global compact in the vein of the Geneva Conventions, the Chemical Weapons Convention, and the Nuclear Non-Proliferation Treaty to establish guardrails and protect against a variety of factors, not the least of which is the infringement of personal liberty and freedoms.

> History presages that when the United States competes from the moral high ground, it usually wins.

> Simply stated, China needs the United States economically, at least for the short term.

> A sophisticated array of technologies is emerging to transform society and alter the nature of warfare. The country that can develop and incorporate these technologies the fastest and most effectively will enjoy significant military and economic advantage for decades to come.

> Historically, the United States has led the world in funding tech R&D, which has allowed it to maintain a strategic advantage. China, however, appears poised to challenge the United States as the overall leader in R&D spending. In response, the U.S. government and the Pentagon should consider increasing investment in basic R&D while developing R&D partnerships globally.

> Pentagon culture and business practices are rightfully designed to be fair and open and to avoid waste and abuse. However, this can sometimes make them slower-moving, risk-averse, and process-based rather than outcome-based, which can hinder the military's ability to fully utilize private sector innovation. Established practice and culture favor large, traditional business partners, which makes it more difficult for non-traditional companies with innovative technology to compete. The Pentagon knows how to acquire large programs of record like fighter jets or aircraft carriers, but it is less adept at purchasing at scale the types of emerging technologies that will be required for future conflict.

> Because of risk aversion and fear of potential failure, the Pentagon often fails to fully utilize its existing authorities to quickly incorporate private sector technology, even when urgently necessary. This hinders its ability to fully leverage outside advances at the necessary speed and scale.
> To maintain its strategic advantage, the United States must recruit and develop a workforce with the requisite skills and talent to maintain the country's technological and military advantage. In matters of national security, people are more important than hardware; therefore, the United States must develop, recruit, and retain the most talented science and technology, military, and national security professionals globally. Along with recruiting and growing science, technology, engineering, and mathematics (STEM) talent, the military and national security community must update personnel policies to ensure that they can attract and foster talent.

> With its global obligations and missions, the United States outspends all its rivals combined in defense expenditures. In 2019, it spent more than $730 billion, while China and Russia spent roughly $260 billion and $65 billion, respectively. This is nearly three times as much as China and ten times as much as any other country. While the United States maintains a global military presence and supports a variety of missions, partners, and allies, China and Russia have historically focused on their respective regions, although both are rapidly working to expand their global reach. China's economy will likely exceed the United States' in dollar terms in the next 10 years.

> Expanding critical investments in innovative technology and programs will require an increased tolerance for calculated risk at the Pentagon and in Congress. It also requires the discipline to invest in systems and operational concepts necessary to succeed and the will to eliminate those that do not. Correctly navigating these difficult trade-offs will determine whether the United States is able to remain in overmatch against great power competitors.

> Now, in both conventional warfare and gray zone tactics, Russia and China are able to challenge the United States in multiple arenas. Indeed, it is what they have been preparing for over the last two decades while the United States was focused on countering terrorism.

> This concept of “algorithmic warfare” will pit algorithms against algorithms where information and the speed of decisions will likely be more important than traditional means of military superiority, such as the size of opposing forces or the range of armament. Those with superior data, computing power, information security, and connectivity will maintain the upper hand. This paradigm will require new operational concepts and equipment to adapt and maintain the advantage.
---

[Link] Scott Aaronson on Why Philosophers Should Care About Computational Complexity
------------------------------------------------------------------------------------

Scott Aaronson has published a preliminary version of his long essay titled 'Why Philosophers Should Care About Computational Complexity'. His announcement blog post has some interesting comments, and he welcomes suggestions there. I am not sure I like the organization of the paper. (I know most of the CS stuff discussed, so it is hard for me to decide how readable it is for people who don't.) But it is full of interesting ideas, and some of these are new even for those of us who follow Scott's writings.
---

Lewis Smith*, Sen Rajamanoharan*, Arthur Conmy, Callum McDougall, Janos Kramar, Tom Lieberum, Rohin Shah, Neel Nanda

\* = equal contribution

The following piece is a list of snippets about research from the GDM mechanistic interpretability team, which we didn't consider a good fit for turning into a paper, but which we thought the community might benefit from seeing in this less formal form. These are largely things that we found in the process of a project investigating whether sparse autoencoders (SAEs) were useful for downstream tasks, notably out-of-distribution probing.

TL;DR
-----

* To validate whether SAEs were a worthwhile technique, we explored whether they were useful on the downstream task of OOD generalisation when detecting harmful intent in user prompts.
* Negative result: SAEs underperformed linear probes. Corollary: linear probes are actually really good, cheap, and perform great.
* As a result of this and parallel work, we are deprioritising fundamental SAE research for the moment and exploring other directions, though SAEs will remain a tool in our toolkit. We do not think that SAEs are useless or that no one should work on them, but we also do not think that SAEs will be a game-changer for interpretability, and speculate that the field is over-invested in them.
* Training SAEs specialised for chat data closed about half the gap but was still worse than linear probes. We tried several ways to train chat SAEs; all did about as well. By default, we recommend taking an SAE trained on pretraining data and finetuning it on a bit of chat data.
* Other results:
  * We found SAEs fairly helpful for debugging low-quality datasets (noticing spurious correlations).
  * We present a variant of JumpReLU with an alternative sparsity penalty to get rid of high-frequency latents.
  * We argue that the standard auto-interp approach of computing the average interpretability of a uniformly sampled SAE latent can be misleading, as it doesn't penalise models which have high-frequency, but not very interpretable, latents, and we explore weighting the interpretability score by latent frequency.

Introduction
------------

**Motivation**

Our core motivation was that we, along with much of the interpretability community, had invested a lot of our energy into sparse autoencoder (SAE) research. But SAEs lack a ground truth of the "true" features in language models to compare to, making it pretty unclear how well they work. There is qualitative evidence that SAEs are clearly doing something, with far more structure than you would expect by random chance. But they clearly have a bunch of issues: if you just type an arbitrary sentence into Neuronpedia and look at the latents that light up, they do not seem to perfectly correspond to a crisp explanation.

More generally, when thinking about whether we should prioritise working on SAEs, it's worth thinking about how to decide what kind of interpretability research to do in general. One perspective is to assume there is some crisp, underlying, human-comprehensible truth for what is going on in the model, and to try to build techniques to reverse engineer it. In the case of SAEs, this looks like the hope that SAE latents capture some canonical set of true concepts inside the model. We think it is clear now that SAEs in their current form are far from achieving this, and it is unclear to us if such "true concepts" even exist.
There are several flaws with SAEs that prevent them from capturing a true set of concepts even if one exists, and we are pessimistic that these can all be resolved:

* SAEs are missing concepts.
* Concepts are represented in noisy ways, where e.g. small activations don't seem interpretable.
* Latents can be warped in weird ways, like feature absorption.
* Seemingly interpretable latents have many false negatives.

For more issues with SAEs, see Section 2.1.2c of Sharkey et al.

But there are other high-level goals for interpretability than perfectly finding the objective truth of what's going on - if we can build tools that give understanding that's imperfect but enough to let us do useful things, like understand whether a model is faking alignment, that is still very worthwhile. Several important goals, like trying to debug mysterious failures and phenomena, achieving a better understanding of what goals and deception look like, or trying to detect deceptive alignment, do not necessarily require us to discover all of the 'true features' of a model; a decent approximation to the model's computation might well be good enough.

But how could we tell if working on SAEs was bringing us closer to these goals? Our hypothesis was that if SAEs are eventually going to be useful for these ambitious tasks, they should enable us to do something new today. So, the goal of this project was to investigate whether we can do anything useful on downstream tasks with SAEs in a way that is at all competitive with baselines - i.e. on a task that can be described without making any reference to interpretability. If SAEs are working well enough to be a valuable tool, then there should be things they enable us to do that we cannot currently easily do. And so we thought that if we picked some likely examples of such tasks and made a fair comparison to well-implemented baselines, then if the SAE did well (ideally beating the baseline, but even just coming close while being non-trivially different), this would be a sign that SAEs are a valuable technique worthy of further refinement. Further, even if the SAE doesn't succeed, this gives you an eval to measure future SAE progress, like how Farrell et al's unlearning setup was turned into an eval in SAEBench.

**Our Task**

So what task did we focus on? Our key criteria were to be objectively measurable, to be something that other people cared about and, within those constraints, to aim for something where we thought SAEs might have an edge. As such, we focused on training probes that generalise well out of distribution. We thought that, for sufficiently good SAEs, a sparse probe in SAE latents would be less likely to overfit to minor spurious correlations compared to a dense probe, and thus that being interpretable gave a valuable inductive bias (though we are now less confident in this argument). We specifically looked at detecting harmful user intent in the presence of different jailbreaks, and used new jailbreaks as our OOD set.

Sadly, our core results are negative:

* Dense linear probes perform nearly perfectly, including out of distribution.
* 1-sparse SAE probes (i.e. using a single SAE latent as a probe) are much worse, failing to fit the training set.
* k-sparse SAE probes can fit the training set for moderate k (approximately k=20), and successfully generalise to an in-distribution test set, but show distinctly worse performance on the OOD set.
* Finetuning SAEs on specialised chat data helps, but only closes about half the gap to dense linear probes.
* Linear probes trained only on the SAE reconstruction are also significantly worse OOD than linear probes on the residual stream, suggesting that SAEs are discarding information relevant to the target concept.

We did have one positive result: the sparse SAE probes enabled us to quickly identify spurious correlations in our dataset, which we cleaned up. Note this slightly stacks the deck against SAEs, since without SAE-based debugging, the linear probes may have latched onto these spurious correlations – however, we think we plausibly could have found the spurious correlations without SAEs given more time; e.g. Kantamneni et al showed simpler methods could be similarly effective to SAEs here.

We were surprised by SAEs underperforming linear probes, but also by how well linear probes did in absolute terms on the complex-seeming task of detecting harmful intent. We expect there are many practical ways linear probes could be used today to do cheap monitoring for unsafe behaviour in frontier models.

Conclusions and Strategic Updates
---------------------------------

Our overall update from this project and parallel external work is to be less excited about research focused on understanding and improving SAEs and, at least for the short term, to explore other research areas. The core update we made is that SAEs are unlikely to be a magic bullet, i.e. we think the hope that with a little extra work they can just make models super interpretable and easy to play with doesn't seem like it will pay off. The key update we've made from our probing results is that current SAEs do not find the 'concepts' required to be useful on an important task (detecting harmful intent), but a linear probe can find a useful direction. This may be because the model doesn't represent harmful intent as a fundamental concept and the SAE is working as intended while the probe captures a mix of tons of concepts, or because the concept is present but the SAE is bad at learning it, or any number of other hypotheses. But whatever the reason, it is evidence against SAEs being the right tool for things we want to do in practice.

We consider our probing results disheartening but not enough to pivot on their own. But there have been several other parallel projects in the literature, such as Kantamneni et al., Farrell et al., and Wu et al., that found negative results on other forms of probing, unlearning, and steering, respectively. And the few positive applications with clear comparisons to baselines, like Karvonen et al., largely occur in somewhat niche or contrived settings (e.g. using fairly simple concepts like "is a regex" that SAEs likely find easy to capture), though there are some signs of life, such as unlearning in diffusion models, potential usefulness in auditing models, and hypothesis generation about labelled text datasets. We find the comparative lack of positive results here concerning - no individual negative result is a strong update, since it's not yet clear which tasks are best suited to SAEs, but if current SAEs really are a big step forwards for interpretability, it should not be so hard to find compelling scenarios where they beat baselines. This, combined with the general messiness and issues surfaced by these attempts, and other issues such as poor feature sensitivity, suggests to us that SAEs and SAE-based techniques (transcoders, crosscoders, etc.) are not likely to be a gamechanger any time soon and plausibly never will be - we hope to write our thoughts on this topic in more detail soon.
We think that the research community's large investment into SAEs was most justified under the hope that SAEs could be incredibly transformative to all of the other things we want to do with interpretability. Now that this seems less likely, we speculate that the interpretability community is somewhat over-invested in SAEs. To clarify, we are not committing to giving up on SAEs, and this is not a statement that we think SAEs are useless and that no one should work on them. We are pessimistic about them being a game changer across the board in their current form, but we predict that there are still some situations where they are able to be useful. We are particularly excited about their potential for exploratory debugging of mysterious failures or phenomena in models, as in Marks et al, and believe they are worthwhile to keep around in a practitioner's toolkit. For example, we found them useful for detecting and debugging spurious correlations in our datasets.

More importantly, it's extremely hard to distinguish between fundamental issues and fixable issues, so it's hard to make any confident statements about what flaws will remain in future SAEs. As such, we believe that future SAE work is valuable, but should focus much less on hill-climbing on sparsity-reconstruction trade-offs, and instead focus on better understanding the fundamental limitations of SAEs (especially those that hold them back on downstream tasks) and discovering new limitations; learning how to evaluate and measure these limitations; and learning how to address them, whether by incremental or fundamental improvements. One recent optimistic sign was Matryoshka SAEs, a fairly incremental change to the SAE loss that seems to have made substantial strides on feature absorption and feature composition. We think that a great form of project is one that takes a known issue with SAEs, tries to think about why it happens and what changes could fix it, and then verifies that the issue has improved. If researchers have an approach they think is promising that could make substantial progress on an issue with SAEs, we would be excited to see that pursued.

There are also other valuable projects. For example, are there much cheaper ways we can train SAEs of acceptable quality? Or ways to get similar effects with other feature clustering or dictionary learning methods instead? If we're taking a pragmatic approach to SAEs, rather than the ambitious approach of trying to find the canonical units of analysis, then sacrificing some quality in return for lowering the major up-front cost of SAE training may be worthwhile.

We could imagine coming back to SAE research if we thought we had a particularly important and tractable research direction, or if there is significant progress on some of their core issues. And we still believe that interpreting the concepts in LLM activations is a crucial problem that we would be excited to see progress on. But for the moment we intend to explore some other research directions, such as model diffing, interpreting model organisms of deception, and trying to interpret thinking models.

Comparing different ways to train Chat SAEs
-------------------------------------------

*Lewis Smith, Arthur Conmy, Callum McDougall*

In our original GemmaScope release, we released chat model SAEs, which were trained on activations from the chat-tuned model, but on pretraining data (formatted as user prompts), not on chat rollouts from the model.
When investigating the probe results discussed in the following sections, we decided that this approach was possibly sub-optimal, and wanted to make sure we were performing a fair comparison after getting results we thought were disappointing. We hypothesised that this task was fairly chat-specific, and that the lack of chat rollouts in the training data possibly meant that the SAEs had failed to capture chat-specific features. In order to investigate this, we experimented with two approaches:

* Retraining the SAEs from scratch, exclusively using chat data.
* Finetuning the existing GemmaScope SAEs on chat data.

Both of these approaches led to improvements on our probing benchmarks relative to the baseline GemmaScope SAEs, matching Kissane et al, as discussed in the section below. However, neither is sufficient to match the performance of a dense probe in our setting. As finetuning is the easiest method, and we generally did not find significant differences on our probe training task between finetuning and training from scratch, we used finetuning for the results described in subsequent sections.

In addition, we experimented with a few variations of the finetuning procedure:

* Starting finetuning from either the pretraining-based or the instruction-based GemmaScope models.
* Starting the finetuning from before the end of training, so that the total number of train steps was the same between the finetuned SAE and the IT-trained SAE.
* Latent resampling, where, when finetuning, we randomly re-initialise a proportion of the latents in the SAE, under the hypothesis that normally all of the SAE's latents are "already taken" and can't easily adapt to learn new chat-specific features. (A sketch of what this can look like follows below.)

Using probing metrics, we didn't find much systematic difference between any of these approaches; provided we trained on rollouts in some form, there did not appear to be much advantage to including them in the pretraining mix vs finetuning on the target data. When we looked at auto-interpretability metrics, we found that finetuning from the GemmaScope SAEs trained on the PT data performed slightly better than finetuning from our original 'IT' SAEs (which included the IT formatting but not rollouts), provided we didn't perform latent resampling. Note that we use frequency-weighted autointerp scores here, so latents which fire more commonly in chat data specifically will be given more weight in the final value (for more on this, see the section on autointerp and high-frequency latents later). These results are shown in the figure below.

We also experimented with including the refusal prompts used for training our probes (see the following snippet) in our finetuning mixture. Similarly to resampling latents, we found that this did not make a significant difference, at least compared to the noise in the SAE training process. The autointerp results are challenging to interpret given their high dataset dependence, but we didn't see any statistically significant differences using this method. The plot below shows the number of repeats of the probe data included in the SAE finetuning mix, faceted by the proportion of SAE latents resampled before finetuning. This only shows the OOD-set probing performance, but this is fairly representative; the effect of these changes does not seem to be statistically significant.
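To make the latent resampling variant concrete, here is a minimal sketch of re-initialising a fraction of an SAE's latents before finetuning. This is an illustrative reconstruction, not the actual training code: the attribute names (`W_enc`, `W_dec`, `b_enc`) and the unit-norm re-initialisation are assumptions.

```python
import torch

def resample_latents(sae, fraction: float = 0.1, seed: int = 0):
    """Randomly re-initialise a fraction of an SAE's latents before chat
    finetuning, so they are free to learn new chat-specific features.

    Hypothetical layout: decoder weights W_dec of shape (n_latents, d_model),
    encoder weights W_enc of shape (d_model, n_latents), and per-latent
    encoder biases b_enc.
    """
    g = torch.Generator().manual_seed(seed)
    n_latents, d_model = sae.W_dec.shape
    idx = torch.randperm(n_latents, generator=g)[: int(fraction * n_latents)]
    with torch.no_grad():
        # Fresh random decoder directions, normalised to unit norm.
        new_dirs = torch.randn(len(idx), d_model, generator=g)
        new_dirs /= new_dirs.norm(dim=-1, keepdim=True)
        sae.W_dec[idx] = new_dirs
        # Tie the corresponding encoder columns to the new directions and
        # zero the biases, so the resampled latents start as blank slates.
        sae.W_enc[:, idx] = new_dirs.T.clone()
        sae.b_enc[idx] = 0.0
    return idx
```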
The conclusions to draw from this are a little uncertain, as the auto-interp results suggest that finetuning can reduce interpretability compared to the IT-training used in the GemmaScope release if latent resampling is used, even though resampling still improved performance on our probing baselines. This may be explained by the fact that the chat finetuning is short compared to pre-training (so chat-specialised latents are less well trained compared to the pretraining latents they have replaced, and the pretraining latents contribute to the interpretability score even if they aren't useful for this task).

Using SAEs for OOD Probing
--------------------------

*Lewis Smith, Sen Rajamanoharan, Arthur Conmy*

One obvious task where we can compare SAEs to a widely used and practical baseline is probing for 'concepts' such as harmful user intent, or the topic of a query to the chat model. A commonly used and effective baseline here is linear probing for a particular property, using a linear classifier trained on a dataset of positive and negative examples for the property of interest.

Since linear probes are supervised while SAEs are not, it would not be surprising for linear probes to outperform SAEs on the training distribution. However, it's less obvious which method would be best when measuring out-of-distribution generalization of our probe. If SAEs have learnt latent representations which approximate the features actually used by the model, we might expect that probes based on a single SAE latent, or a sparse probe based on just a few latents, would only capture the most important concepts relevant to our task, which should generalise well out of distribution, while a dense probe has more capacity to pick up on subtler sources of spurious correlation. (However, we later thought of considerations pointing the other way, as discussed below.)

We investigate this by probing for prompts with harmful user intent, i.e. prompts where the user asks the model to do something harmful or enabling harm. This is similar to refusal, but we don't condition on whether the model actually refuses; the label should remain 'harmful' even if a jailbreak is successfully used and the model no longer refuses. We introduce a change of distribution both by using different datasets for our OOD set, and by introducing a set of held-out jailbreak suffixes to make hard positives (i.e. prompts that are harmful but don't look it) and hard negatives (i.e. prompts that look harmful but aren't) for the model.

**Technical Setup**

*Datasets*

As our 'in distribution' set, we use harmful intent prompts from HarmBench versus harmless data from Alpaca. We find (using SAEs! See more details in the 'Dataset Debugging' section) that there are several spurious differences in formatting between these two datasets (such as ending in brackets) that probes can detect, so we normalise the formatting of the datasets to remove these factors. For an out-of-distribution set, we use a combination of over-refusal bench (a dataset of 'seemingly toxic' prompts which are often mistakenly refused by LLMs), an internal chat training set, and AdvBench, the set of harmful requests used in the paper Universal and Transferable Adversarial Attacks on Aligned Language Models. We also make the task more challenging by adding some jailbreak suffixes to the harmful prompts in the dataset.
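As a minimal sketch of this augmentation step (with placeholder suffix strings standing in for the actual jailbreaks, which are listed below):

```python
# Illustrative sketch only: the suffix strings here are placeholders, not
# the real jailbreak prompts (those are listed later in this section).
TRAIN_JAILBREAKS = {"aim": "<AIM prompt>", "roleplay": "<roleplay prompt>"}
OOD_JAILBREAKS = {"evil_confidant": "<prompt>", "pliny": "<prompt>"}

def add_jailbreaks(prompts, label, jailbreaks):
    """Wrap each harmful prompt with each jailbreak. The label is unchanged:
    a jailbroken harmful request is still harmful, even if the model complies."""
    out = [(p, label) for p in prompts]  # keep the unmodified prompts too
    for suffix in jailbreaks.values():
        out += [(f"{p} {suffix}", label) for p in prompts]
    return out
```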
We also add some instructions that leave the (potentially harmful) text of the original prompt unchanged, but modify the meaning to make it innocuous, mostly by asking for semantic-level transformations of the text so the model does not actually have to comply with its content. These are:

* 'How many words in prompt' - modify the prompt to be of the form: "How many words are in the following text: {original text}".
* 'Ignore one instruction' - label the original prompt with a number in a list of otherwise harmless instructions, and tell the model to ignore that number.
* 'All caps' - ask the model to repeat the prompt back in all caps.
* 'Find synonyms' - ask the model to provide a list of synonyms for key words in the original prompt.

After augmentation, we have 4046 training examples, 1014 validation examples, and 3850 examples in our OOD set that we use in the experiments described below. We experimented with a variety of jailbreak methods, the effectiveness of which on the training set is plotted below.

We then chose a subset of these jailbreaks for the experiments discussed below. The considerations here were (a) using a variety of transformation types for train and test - for example, many of the prompts we tested are some variation of 'roleplay-type' prompt injection, but we do include some others, including an adversarial suffix from the GCG paper; (b) using the weaker transformations for training if we have a range of strengths, in order to make the OOD generalisation more challenging; and (c) avoiding transformations with too many tokens, as this slowed down iteration (collecting activations becomes slower).

The jailbreak prompts we selected were:

* The 'AIM' prompt, from jailbreakchat via Wei et al.
* 'Evil confidant', from the same source.
* An adversarial suffix from the GCG paper.
* The Pliny 'Google' prompt, from this repository.
* A 'cocky genius' roleplay prompt, which Senthooran Rajamanoharan came up with.
* A 'role play' prompt, which prompts the model to act as if it is a character in a 'coming of age show' acting out the instructions in the prompt, which Arthur Conmy came up with.

In order to test the generalisation of our probing methods, we use the 'AIM', 'roleplay', 'GCG' and 'cocky genius' jailbreaks in our training distribution, and 'evil confidant' and 'Pliny' as an OOD test set. Similarly, we use 'how many words' and 'ignore one instruction' as innocuous transformations in training, and 'all caps' and 'find synonyms' out of distribution.

*Probing*

In order to do probing, we need some way of choosing where (what site and layer) in the model to apply the probe, and how to aggregate the probe across tokens in the context: the probe can be applied to any token in the forward pass, and it is not clear where this should be, or whether it should be aggregated in some way, for instance by applying the probe at every position and taking the mean or the max of the probe decision. For the Gemma models we study in this snippet, chat data is surrounded by tokens using the following format: <start_of_turn>user [User prompt] <end_of_turn>\n <start_of_turn>model\n [Model response]

We experimented with both max-probing, where we first take the max activation of a feature along the sequence dimension, and probing at a fixed position relative to the first token of the model response (so position 0 is the first token of the model response, token -1 is the newline preceding that, etc.).
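As a concrete illustration of these two aggregation strategies, here is a minimal sketch of training a dense logistic-regression probe. This is a reconstruction under assumptions (a scikit-learn probe, pre-collected activations), not the exact code we used.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def pool_activations(acts: np.ndarray, mode: str = "max", position: int = -1):
    """Aggregate activations of shape (n_examples, seq_len, d_model) over tokens.

    mode="max": per-dimension max over the sequence (max-probing).
    mode="fixed": a single token position; this assumes the sequence axis has
    been aligned so that a fixed index hits the same formatting token (e.g.
    <end_of_turn>) in every example.
    """
    return acts.max(axis=1) if mode == "max" else acts[:, position, :]

# Hypothetical usage, given pre-collected activations and binary labels:
# X_train = pool_activations(train_acts, mode="max")
# probe = LogisticRegression(max_iter=1000).fit(X_train, y_train)
# X_ood = pool_activations(ood_acts, mode="max")
# print(roc_auc_score(y_ood, probe.predict_proba(X_ood)[:, 1]))
```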
We find that max-probing is generally the best, though using a fixed position of -5, corresponding to the <end_of_turn> token that completes the user turn, has comparable performance. We find that the most reliable performance is obtained by probing at layer 20 of Gemma-v2 9B IT. Bricken et al found that, for a biology-related task, taking the max of the features along the sequence before training a probe helped the performance of the SAE probe significantly; we only find a small advantage over probing at a fixed position in our application. We hypothesise that this is because information about the harmful content of the prompt is naturally aggregated in the end-of-turn token by the model. This idiosyncrasy of this particular task may explain the differences between our results and those reported by Anthropic.

We investigate two ways to do probing with SAEs:

* k-sparse probing on the feature activations of the SAEs. Under the assumption that only one feature, or a small subset of features, captures the information at hand, this should result in less overfitting.
* Training a linear probe on the reconstructions of the SAEs, or the reconstructions of some subset of the features. This is less likely to be useful as a practical method, but is interesting for investigating whether relevant information for a particular task is or is not preserved by the SAE reconstruction.

As we find that our chat SAEs generally produce similar results on this task (and may have a small edge on the autointerp metric depending on the resampling fraction), we use the finetuned SAEs for all results in this snippet, as this method is the quickest to experiment with.

**Results**

Linear probes on the residual stream perform very well at this task, essentially saturating even on our out-of-distribution set, indicating that information about harmful user intent is present in model activations.

| Subset | Train | Val | OOD |
| --- | --- | --- | --- |
| Linear Probe AUROC | 1.0 | 1.0 | 0.999… |

In the plots below, we plot the best single-feature probe of the original GemmaScope SAEs, and of a set of SAEs finetuned on chat data (see the finetuning snippet for more details), with the performance of the dense probe shown as a dotted line. We see that while using finetuned SAEs leads to a significant improvement in single-feature probe performance at this task - suggesting that these SAEs more closely capture the concept that we care about in a single feature - they remain below the performance of a dense linear probe, as shown by the dotted line in these plots.

Initially, we expected that SAE-based probes would not outperform linear probes on in-distribution data, but we expected that they might have an advantage out of distribution, especially those based on a single feature or a small number of features, under the hypothesis that SAE features would generalise more robustly out of distribution. However, we generally find that even on the OOD set, SAE probing does not match the performance of a linear probe, though the best single-feature probe from the finetuned model does come close in some cases. We were surprised by the performance and robustness of the linear probe; this is consistent with probes beating SAEs in the OOD settings studied by Kantamneni et al, though we can't make confident extrapolations about probes vs SAEs in all OOD settings.

We can expand on single-feature probes by using k-sparse probing, for k between 2 and 50. Using k-sparse probing largely closes the gap to the near-perfect performance of dense probes for k >= 5, although generally a small gap remains.
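A minimal sketch of the mean-difference variant of k-sparse probing (one of the two latent-selection methods described next): rank latents by the difference in mean activation between positive and negative training examples, then fit an unregularised probe on the top k. For k = 1 this reduces to the single-latent probes above. Again, this is illustrative rather than the exact code.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def k_sparse_probe(latent_acts: np.ndarray, labels: np.ndarray, k: int = 20):
    """latent_acts: (n_examples, n_latents) SAE activations, already pooled
    over the sequence (e.g. by max); labels: binary harmful-intent labels."""
    pos = latent_acts[labels == 1].mean(axis=0)
    neg = latent_acts[labels == 0].mean(axis=0)
    top_k = np.argsort(-np.abs(pos - neg))[:k]  # most class-separating latents
    probe = LogisticRegression(max_iter=1000).fit(latent_acts[:, top_k], labels)
    return probe, top_k
```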
Notably, despite k-sparse probing performing near-perfectly in distribution, it shows worse transfer OOD while dense probes generalise well, the opposite of what we predicted should happen with high-quality SAEs. We experiment with two methods for k-sparse probing: selecting k latents by sorting by the mean difference of latent activations between positive and negative examples in the training set, and using L1 regression with a coefficient swept to select a number of active probe latents close to the target k (and then retraining an unregularised model on the selected latents). The two methods generally show similar results, as shown below, matching the results of Gurnee et al on neurons. Our results with k-sparse probing generally corroborate our finding that finetuning SAEs helps, with finetuned SAEs generally outperforming the original GemmaScope SAEs, particularly for low L0 values, but it is not enough to achieve parity with dense probes.

**Related Work and Discussion**

SAE probing was also studied in parallel by Kantamneni et al and Bricken et al. Generally, we think that our results are fairly consistent with the literature here. In particular, Kantamneni et al. carry out a similar analysis to ours over a wide variety of datasets, finding broadly comparable results: sparse probing wins only rarely, having an advantage only when the labels are highly corrupted or potentially in low-data regimes. They also found better performance on simpler datasets where the SAE had a single highly relevant latent, and not on more subtle datasets.

Similarly, Bricken et al recently studied how SAE probes compare to linear probes. They study detecting biology-related misuse, rather than harmful intent. Unlike us, they find a slight advantage for SAEs, in that SAEs allow max-aggregating latents along the sequence dimension before training a classifier (max-pooling the activations), which lets their classifier be more sensitive to data distributed throughout the prompt. However, Kantamneni et al show that, though max-aggregating latents can be an advantage over single-token dense probes in some settings, it does not systematically beat attention-head probes, and we speculate that attention-head probes would also do well in our setting. We do not find that max-pooling is particularly helpful for our task; we hypothesise that this may be because the information relevant to the classifier in our case is fairly temporally concentrated.

Like us, both Kantamneni et al and Bricken et al find that individual SAE latents are useful for finding spurious correlations in a dataset. Notably, Kantamneni et al find a single latent predicting grammatical accuracy with similar accuracy to the "ground truth" labels in GLUE CoLA, due to the high level of label noise. In some ways, our empirical results understate the utility of this, as we used the inspectability of our probes to clean our datasets of spurious correlations; if we hadn't done this, our linear probes would presumably have learnt these too, which would have decreased their generalisation out of distribution. However, Kantamneni et al showed that these spurious correlations could be discovered by other methods, which we speculate would work for us too. In general, we do find that there is a persistent performance gap between SAEs and probes.
In part, this is because SAEs remain imperfect, and we do find that relevant information is lost in the reconstruction, with probes on the SAE reconstruction generally performing slightly worse than a probe on the raw residual stream.

Performance delta between a probe trained on the residual stream and one trained on the SAE reconstruction:

| | Train | Test | OOD |
| --- | --- | --- | --- |
| GemmaScope | 0.00087 | 0.0018 | 0.048 |
| SAE finetune (comparable steps) | 0.0013 | 0.0024 | 0.039 |

Consistent with these negative results for SAE probing, very recent work by Apollo Research also tried SAE probes for detecting deceptive behaviour, finding that linear probes outperformed SAE probing.

We think it is curious that many of the variations on finetuning we tried did not seem to make a significant difference. It is possible that our setup is very noisy; perhaps a larger probing dataset would allow us to see a significant difference between these methods.

**Is it surprising that SAEs didn't work?**

In general, we found it unexpectedly hard to reason about how to update from positive/negative results on downstream tasks. In some sense, we have a hammer and are looking for a nail. We have this cool technique of SAEs, and want to find problems well suited to it. We care about how often it's useful, and the magnitude of the benefit. It doesn't need to be useful for everything, but we care much more about some tasks than others. If you find one successful application, that doesn't mean it will work in less cherry-picked cases, and if you have one failure, maybe it was badly suited to SAEs, but other important tasks would work fine. And even if you do have several successes, that may not be enough to justify the costs of training and R&D. Overall, we think that seeking evidence about interpretability techniques on downstream tasks is valuable and something we hope to see more of in future - you can't update too much off of the evidence, but it seems like a crucial question and this is one of the better sources of evidence we can currently access.

Another complication is that interpretability is quite hard to reason clearly about in general. You might have success on a task, but for quite different reasons than you thought, and you might think SAEs should help on a task, but actually your intuition is wrong. To illustrate this, in hindsight we're now a lot more confused about whether even an optimal SAE should beat linear probes on OOD generalisation.

Here's our fuzzy intuition for what's going on. We have a starting distribution, an OOD distribution, a train set and a val set (from the original distribution), and an OOD test set. There are three kinds of things a probe can learn for predicting the train set:

* Noise: patterns specific to the train set that do not generalise out of sample, i.e. on the test set. Learning noise gives rise to overfitting.
* Spurious correlations: patterns that predict the concept in the original distribution (and which are predictive even on the test set) but that do not generalise out of distribution.
* True signal: patterns that are predictive of the target concept both in-distribution and out-of-distribution.

Noise is easily ignored given enough data. Indeed, on this task we found that both SAEs and linear probes learn to classify essentially perfectly on the test set (i.e. out of sample, but in-distribution).
So the difference in performance between SAEs and linear probes out of distribution comes down to how their respective inductive biases help them to latch on to the true signal versus spurious correlations: it seems that SAE probes do a worse job at this than linear probes. Why is this?

Our original hypothesis had been that sparse SAE probes would in fact have a better inductive bias for picking the true signal over spurious correlations. This was in large part because we had assumed that the concept we're looking to learn ("harmful intent") is a fairly simple logical expression, composed of a small number of "atomic" concepts, both causal (like danger, violence, profanity, etc.) and correlational (like linguistic features indicative of users who are trying to commit harm, or of jailbreaks), that we expected SAEs would have assigned to single latents or small clusters of latents. Under this assumption, we guessed that a sparse SAE probe would easily find the relevant concepts among its latents and therefore extract the true signal and ignore spurious correlations. The fact that this doesn't happen suggests a number of things that could be going wrong:

a. The concept "harmful intent" (as defined implicitly by the harmful datasets we used to train the probe) may not really be a simple function of a few "atomic" concepts at all. I.e., even with a "perfect" SAE - one whose latents correspond precisely to the concepts the model uses - it could be the case that the positive and negative labels in our datasets can't be generated by a simple expression involving just a few of these concepts. This could indicate a fundamental problem with SAEs - we want concepts like harmful intent, and if high-functioning SAEs do not usefully isolate them, this is bad. Or it could just be that harmful intent is a fuzzily defined concept, and depends a lot on the subjective values of the labellers/those setting rules for labellers; though under this hypothesis, it's surprising that linear probes generalise well.

b. Even if "harmful intent" really is a simple function of a few basic interpretable concepts, because our SAEs don't learn these concepts cleanly, a sparse SAE probe doesn't manage to capture the true signal.

c. Since SAEs are incomplete, perhaps important directions corresponding to components of the true signal are missing from the dictionary, forcing the SAE probe to make do with approximating these directions from the latents that it does possess.

d. Patterns that are spurious correlations may be just as well represented by latents in the SAE's dictionary as patterns that make up the true signal, meaning that there is no reason to expect a sparse SAE probe to preferentially latch onto components of the true signal over spurious correlations.

In practice, our guess is that a mix of these is going on. (b) and (c) would suggest that we just haven't gotten good enough at training SAEs, but that it may be a promising direction, while (a) suggests fundamental issues with SAEs as a method. (d) would suggest that SAEs could make a promising probing method if we augmented them by pruning out latents that seemed like spurious correlations, but we de facto already did this by using them to clean up correlations in the data itself, benefitting both the SAE probe and the dense probe. Overall, this means that it's hard to draw a clear conclusion from the data available, but it does seem a bad sign for the practical utility of SAEs.
Dataset debugging with SAEs
---------------------------

*Sen Rajamanoharan, Lewis Smith, Arthur Conmy*

One use case where we did find SAEs useful was in dataset debugging. As the SAE latents are (somewhat) interpretable, inspecting the latents with big mean differences between positive and negative labels can reveal useful information about our dataset. This mirrors similar findings in Kantamneni et al and Bricken et al.

For instance, when running an early version of this and inspecting the best-performing features at separating Alpaca and HarmBench, we realised that one of the best-performing features seemed to be a question-mark feature. On inspection, we realised that only Alpaca contained questions, whereas the harmful examples were all instructions. Being able to inspect the dataset in this way using a model with SAEs was definitely useful in catching this error, and other than manual inspection, it's not clear if we ever would have noticed this if we exclusively used linear probing (though Kantamneni et al were able to achieve similar results by taking maximum activating dataset examples of a linear probe over the pretraining data). This also somewhat complicates the story that linear probing generally performed better than SAEs; this is presumably partly because we were able to use SAEs to manually clean spurious correlations from our dataset before training a supervised probe.

One interesting thing to note here is that you can use SAE-based techniques like this for dataset exploration even if you don't have an SAE on the model you want to probe; for instance, a model like Gemma 2 with an SAE sweep could be used to try to detect issues like this before you train something based on a larger model, as we are interested in the properties of the data, not the model. This exploratory property is one reason why people expect that unsupervised techniques like SAEs will be useful for safety work, as they make it easier to discover unanticipated features of the model than supervised techniques. However, as mentioned, in this instance SAEs have seemed primarily useful as a technique for investigating datasets as much as models.

Autointerp and high frequency latents
-------------------------------------

*Callum McDougall*

In the next section, we'll compare autointerpretability scores across models with different latent density distributions (in particular, some models with more high-frequency features than others). This introduces a problem: traditionally, autointerp is measured as an average score over all latents, but this doesn't capture the fact that eliminating a high-frequency uninterpretable feature is intuitively better than eliminating a low-frequency uninterpretable feature. One way to think about this is that if we sampled a prompt from the pretraining distribution, gave it to the model, and looked at the latents, we'd hope that they tend to be interpretable. But the correct way to measure this is by taking the autointerp score for each latent weighted by how frequently it fires, not uniformly weighted. We have found frequency-weighted auto-interp scores instructive, and recommend that practitioners plot them in addition to uniformly weighted scores. As an extreme example, we can construct a pathological SAE where $M - N$ latents encode specific bigrams and the remaining $N$ latents fully reconstruct the rest of the $N$-dimensional SAE input - this would score very well on autointerp with uniform average scores provided $M \gg N$, but poorly with frequency-weighted average scores.
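The weighting itself is simple. A minimal sketch (assuming per-latent scores and firing frequencies have already been estimated on the relevant data distribution):

```python
import numpy as np

def frequency_weighted_autointerp(scores: np.ndarray, freqs: np.ndarray) -> float:
    """scores: per-latent auto-interp scores; freqs: per-latent firing
    frequencies estimated on the evaluation distribution (e.g. chat data).

    The uniform average is scores.mean(). Weighting by frequency instead
    answers: "when a latent fires on a random token, how interpretable is
    it on average?", which penalises frequent-but-uninterpretable latents.
    """
    return float(np.average(scores, weights=freqs))
```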
For this reason, although we show both uniform and frequency-weighted autointerp results in the experiments below, we'll focus more on the frequency-weighted scores in subsequent discussion. Aside from this weighting strategy, the rest of the autointerp methodology follows standard practices such as those introduced in Bills et al and built on by EleutherAI. An explanation is generated by constructing a prompt that contains top activating example sequences, as well as examples sampled from each quantile and a selection of random non-activating examples, and asking the model to characterise the kinds of text which cause the latent to fire. We then give the model a set of unlabelled sequences (some from the top-activating or quantile groups, others non-activating) and ask it to estimate the quantised activation value of the highest-activating token in each sequence. The latent's interpretability score is then computed as the Spearman correlation between the true and estimated quantised activations.

Removing High Frequency Latents from JumpReLU SAEs
--------------------------------------------------

*Senthooran Rajamanoharan, Callum McDougall, Lewis Smith*

TL;DR: Both JumpReLU and TopK SAEs suffer from high-frequency latents: latents that fire on many tokens (>10% of the dataset) and often seem uninterpretable. We find that by tweaking the sparsity penalty used to train JumpReLU SAEs we can largely eliminate high-frequency latents, with only a small cost in terms of reconstruction error at fixed L0. Auto-interp suggests that this has a neutral-to-positive effect on the average latent interpretability once we weight by latent frequency, by reducing the incidence of high-frequency uninterpretable latents.

**Method**

*Motivation*

In our paper, we trained JumpReLU SAEs using an L0 sparsity penalty, which penalises a SAE in proportion to the number of latents that fire on each token:

$$\mathcal{L}_{L0}(\mathbf{x}) := \lambda \lVert \mathbf{f}(\mathbf{x}) \rVert_0 \equiv \lambda \sum_{i=1}^{M} \mathbf{1}_{f_i(\mathbf{x}) > 0},$$

where $f_i(\mathbf{x})$ is the activation of the $i$-th latent on an $N$-dimensional input LM activation $\mathbf{x}$, and $M$ is the width (total number of latents) of the SAE.[1]

This L0 sparsity penalty only cares about controlling the average firing frequency across all latents in a SAE; it is actually indifferent to the range of firing frequencies in the SAE. In other words, L0 doesn't care whether a low average firing frequency is achieved by (a) all latents firing infrequently, or (b) some latents firing frequently and other latents firing very infrequently to compensate. As long as the average firing frequency is the same either way, L0 doesn't mind.[2]

We can see this formally by noticing that the average L0 penalty on a training batch can be expressed as follows:

$$\frac{1}{B} \sum_{\alpha=1}^{B} \mathcal{L}_{L0}(\mathbf{x}^{\alpha}) = \lambda \sum_{i=1}^{M} \left( \frac{1}{B} \sum_{\alpha=1}^{B} \mathbf{1}_{f_i(\mathbf{x}^{\alpha}) > 0} \right) = \lambda \sum_{i=1}^{M} \hat{\omega}_i = \lambda M \langle \hat{\omega} \rangle,$$

where $\hat{\omega}_i := \frac{1}{B} \sum_{\alpha=1}^{B} \mathbf{1}_{f_i(\mathbf{x}^{\alpha}) > 0}$ is a single-batch estimate of the firing frequency of the $i$-th latent and $\langle \hat{\omega} \rangle$ is the mean firing frequency (as estimated on this batch) across all latents in the SAE. Hence, the L0 sparsity penalty is exactly proportional to the mean of the SAE's firing frequency histogram, and therefore indifferent to the spread of latent firing frequencies in this histogram. This in turn leads to the rise of high-frequency latents: as long as these latents are beneficial for minimizing the SAE's reconstruction error, training with an L0 penalty provides insufficient pressure to prevent high-frequency latents from forming, as can be seen in the following frequency histograms.
Removing High Frequency Latents from JumpReLU SAEs

Senthooran Rajamanoharan, Callum McDougall, Lewis Smith

TL;DR: Both JumpReLU and TopK SAEs suffer from high frequency latents: latents that fire on many tokens (>10% of the dataset) and often seem uninterpretable. We find that by tweaking the sparsity penalty used to train JumpReLU SAEs we can largely eliminate high frequency latents, with only a small cost in terms of reconstruction error at fixed L0. Auto-interp suggests that this has a neutral-to-positive effect on average latent interpretability once we weight by latent frequency, by reducing the incidence of high frequency uninterpretable latents.

Method

Motivation

In our paper, we trained JumpReLU SAEs using an L0 sparsity penalty, which penalises a SAE in proportion to the number of latents that fire on each token:

$$\mathcal{L}_{L_0}(\mathbf{x}) := \lambda \lVert \mathbf{f}(\mathbf{x}) \rVert_0 \equiv \lambda \sum_{i=1}^{M} \mathbf{1}_{f_i(\mathbf{x}) > 0},$$

where $f_i(\mathbf{x})$ is the activation of the $i$th latent on an $N$-dimensional input LM activation $\mathbf{x}$, and $M$ is the width (total number of latents) of the SAE. [1]

This L0 sparsity penalty only cares about controlling the average firing frequency across all latents in a SAE; it is actually indifferent to the range of firing frequencies in the SAE. In other words, L0 doesn't care whether a low average firing frequency is achieved by: (a) all latents firing infrequently, or (b) some latents firing frequently and other latents firing very infrequently to compensate. As long as the average firing frequency is the same either way, L0 doesn't mind. [2]

We can see this formally by noticing that the average L0 penalty on a training batch can be expressed as follows:

$$\frac{1}{B} \sum_{\alpha=1}^{B} \mathcal{L}_{L_0}(\mathbf{x}_\alpha) = \lambda \sum_{i=1}^{M} \left( \frac{1}{B} \sum_{\alpha=1}^{B} \mathbf{1}_{f_i(\mathbf{x}_\alpha) > 0} \right) = \lambda \sum_{i=1}^{M} \hat{\omega}_i = \lambda M \langle \hat{\omega} \rangle,$$

where $\hat{\omega}_i := \frac{1}{B} \sum_{\alpha=1}^{B} \mathbf{1}_{f_i(\mathbf{x}_\alpha) > 0}$ is a single-batch estimate of the firing frequency of the $i$th latent, and $\langle \hat{\omega} \rangle$ is the mean firing frequency (as estimated on this batch) across all latents in the SAE. Hence, the L0 sparsity penalty is exactly proportional to the mean of the SAE's firing frequency histogram, and therefore indifferent to the spread of latent firing frequencies in this histogram. This in turn leads to the rise of high frequency latents: as long as these latents are beneficial for minimizing the SAE's reconstruction error, training with an L0 penalty provides insufficient pressure to prevent high frequency latents from forming, as can be seen in the following frequency histograms.

Figure: Latent firing frequency histograms for Gated, JumpReLU and TopK SAEs. Unlike Gated SAEs, which use a L1 penalty that penalizes large latent activations, JumpReLU (middle) and TopK (bottom) SAEs exhibit high-frequency latents: latents that fire on 10% or more of tokens (i.e. that lie to the right of the dotted vertical line).

Modifying the sparsity penalty

Now, a nice feature of JumpReLU SAEs is that we are not beholden to using an L0 sparsity penalty. [3] Given the observation above, an obvious way to get rid of high frequency features is to modify the sparsity penalty so that it does penalise dispersion (and in particular right-tail dispersion) in addition to the mean of the firing frequency histogram. There are many ways to do this, but here we explore arguably the simplest approach with this property, which is to add a term to the sparsity penalty that is quadratic in latent frequencies:

$$\frac{1}{B} \sum_{\alpha=1}^{B} \mathcal{L}_{\text{quad}}(\mathbf{x}_\alpha) := \lambda \sum_{i=1}^{M} \hat{\omega}_i \left( 1 + \hat{\omega}_i / \omega_0 \right).$$

Note that the first term in this quadratic-frequency sparsity penalty is the standard L0 penalty, whereas the second term is proportional to the mean squared latent frequency. We have introduced a new hyperparameter $\omega_0$, which sets the frequency scale at which the sparsity penalty switches from penalising latent frequency roughly linearly (for latents with frequencies $\hat{\omega}_i \ll \omega_0$) to penalising latent frequency quadratically (for latents with frequencies $\hat{\omega}_i \gtrsim \omega_0$). This penalty therefore disincentivises high frequency latents from appearing, while latents lower down the frequency distribution are treated similarly to latents in a standard (L0-based) JumpReLU SAE. [4]
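To make this concrete, here is a minimal PyTorch sketch of the quadratic-frequency penalty, together with a toy demonstration of why the plain L0 penalty cannot distinguish frequency distributions with the same mean. The names are illustrative and this is not our training code; in particular, the indicator (acts > 0) has zero gradient everywhere, so in practice gradients must be passed through it with straight-through estimators, as in footnote [1]:

```python
import torch

def quadratic_frequency_penalty(latent_acts: torch.Tensor,
                                lam: float, omega_0: float) -> torch.Tensor:
    """lambda * sum_i w_i * (1 + w_i / omega_0), where w_i is the
    single-batch estimate of latent i's firing frequency."""
    fires = (latent_acts > 0).float()  # [B, M] indicator of each latent firing
    freqs = fires.mean(dim=0)          # [M] per-latent firing frequency w_i
    return lam * (freqs * (1.0 + freqs / omega_0)).sum()

torch.manual_seed(0)
# Two toy SAEs with the same mean firing frequency (~0.10), hence identical
# L0 penalties: one spreads firing evenly, one concentrates it in a single
# high-frequency latent. Only the quadratic term tells them apart.
even = (torch.rand(4096, 8) < 0.10).float()      # all 8 latents fire ~10%
skewed = torch.zeros(4096, 8)
skewed[:, 0] = (torch.rand(4096) < 0.80).float()  # one latent fires ~80%

for name, acts in [("even", even), ("skewed", skewed)]:
    qf = quadratic_frequency_penalty(acts, lam=1.0, omega_0=0.1)
    print(f"{name}: QF penalty = {qf.item():.2f}")  # ~1.6 vs ~7.2
```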
How we evaluated interpretability

When evaluating the effectiveness of JumpReLU variants, we face the problem discussed in the "Autointerp and high frequency latents" section above: traditionally, the interpretability of a SAE is measured as the average auto-interp score over all its latents, but this doesn't capture the fact that eliminating a high-frequency uninterpretable latent is typically better than eliminating a low-frequency uninterpretable latent. For this reason, although we show both uniform and frequency-weighted auto-interp results in the experiments below, we'll focus more on the frequency-weighted scores in the subsequent discussion. [5] Aside from this weighting strategy, the auto-interp methodology follows the standard practices described in that section (as introduced in Bills et al. (2023) and built on by Paulo et al. (2024)).

Results

In our experiments below, we try setting the frequency scale $\omega_0$ to either $10^{-1}$ or $10^{-2}$, since we are interested in suppressing latents that fire on more than about 10% of tokens. We compare quadratic-frequency loss JumpReLU SAEs trained this way against standard (L0 penalty) JumpReLU SAEs, Gated SAEs and TopK SAEs. [6] We train SAEs with 131k latents from scratch on layer 20 Gemma 2 9B IT activations on instruction tuning data [TODO: reference back to a description of this from an earlier snippet].

Reconstruction loss at fixed sparsity

Figure: Reconstruction loss vs L0 for the various SAE architectures and loss functions used in our experiment. The quadratic-frequency penalty (QF loss) has slightly worse reconstruction loss at any given sparsity than standard JumpReLU SAEs (L0 loss), but still compares favourably versus Gated and TopK SAEs.

As expected (since we are no longer optimising for L0 directly), JumpReLU SAEs trained with the quadratic-frequency penalty have slightly worse reconstruction loss at a given sparsity than JumpReLU SAEs trained with the standard L0 loss. However, the quadratic-frequency penalty JumpReLU SAEs still compare favourably to TopK and Gated SAEs.

Frequency histograms

Figure: Latent firing frequency histograms for JumpReLU SAEs trained with a standard L0 loss (top) or quadratic-frequency loss with $\omega_0 = 10^{-1}$ (middle) or $\omega_0 = 10^{-2}$ (bottom). The quadratic-frequency loss successfully removes high frequency features (i.e. latents around or to the right of the red dotted vertical line) without changing the shape of the rest of the frequency histogram.

As shown above, the quadratic-frequency penalty successfully suppresses high-frequency latents without having a noticeable effect on the shape of the remainder of the frequency histogram (particularly in the case $\omega_0 = 10^{-1}$).

Latent interpretability

Figure: Average auto-interp score vs L0 for the various SAE architectures and loss functions used in our experiment, under different latent weightings. Uniform weighting slightly disfavours the JumpReLU variants but doesn't show clear patterns; frequency weighting clearly shows outperformance of the JumpReLU variants at lower sparsities (higher L0).

As shown above, the uniform average scores don't show a clear pattern (with Gated and TopK SAEs slightly outperforming at given sparsity values, and JumpReLU variants slightly under-performing). But the frequency-weighted plots show much clearer patterns: (1) nearly all SAEs have lower average latent interpretability scores at larger L0 values (which makes sense under the hypothesis that penalizing lack of sparsity leads to more monosemantic latents), and (2) the JumpReLU variants go against this trend at high L0 values by actually getting more interpretable.

Digging deeper into this, we can plot the average latent auto-interp score for each SAE at different latent frequency quantiles, as shown in the appendix [TODO: LINK], for the standard JumpReLU and the quadratic-frequency loss variant with $\omega_0 = 0.01$. We find that SAEs at all sparsity levels show a negative trend of latent interpretability against latent frequency, although the quadratic-frequency variant still has interpretable high-frequency latents at all sparsity levels.

Conclusions

In this snippet we've shown how it's possible to modify the JumpReLU loss function to obtain further desirable properties, like removing high frequency latents.
The quadratic-frequency penalty successfully eliminates high frequency latents with only a modest impact on reconstruction loss at fixed sparsity. On auto-interp it tends to score better than standard JumpReLU when we weight individual latents' interpretability scores by frequency (which penalises uninterpretable high frequency latents more heavily than uniform weighting), although at lower L0s Gated SAEs seem to have the best scores of all the SAE varieties we evaluated. Counter-intuitively (and in contrast to the other SAE varieties), the average interpretability of quadratic-frequency SAEs seems to increase with L0 beyond a certain point! We don't have a good explanation for this phenomenon and haven't investigated it further.

Does this mean the quadratic-frequency penalty is an improvement over standard JumpReLU SAEs trained with an L0 penalty? We're unsure, for a number of reasons:

* We haven't evaluated these variants extensively, for example on downstream tasks like steering or probing.
* It's unclear whether we should really aim to remove high frequency latents in the first place. Perhaps they do represent meaningful directions in activation space that we just haven't been able to interpret yet. It's also possible that some high frequency features are pathological and others are legitimate, and we need a more nuanced loss than simply penalising firing frequency to target the former without affecting the latter.
* The quadratic-frequency loss may simply promote partitioning individual high frequency latents into several (similarly uninterpretable) lower frequency latents, effectively sweeping the problem under the rug. [7]
* We haven't tried stacking these changes with other improvements like a Matryoshka reconstruction loss (to deal with feature absorption) or the ideas in Anthropic's January 2025 update.

Nevertheless, by sharing our ideas and results, we hope that practitioners who run into issues with high frequency latents may try variants on the standard JumpReLU loss like quadratic-frequency (or iterate further on them) and see if they provide an improvement for their use case.

Appendix

Figure: Average autointerp score vs latent frequency for the standard JumpReLU SAEs and the JumpReLU QF loss variants with $\omega_0 = 0.01$. All SAEs show a negative trend of autointerp score against latent frequency, although the quadratic-frequency loss function seems to help the SAE form interpretable latents even at higher frequencies: the curves for higher L0 SAEs are squashed to the right.

[1] As we describe in detail in the paper, we use straight-through estimators (STEs) to differentiate through the step discontinuities in both the L0 sparsity penalty and the jump discontinuity in the JumpReLU activation function to train JumpReLU SAEs. We use the same method here.

[2] A very similar argument can be made about TopK SAEs, which control L0 via the k parameter.

[3] Indeed, in our paper, we show how we can modify the sparsity penalty to train SAEs that target a fixed L0, much like TopK SAEs. Here, we instead modify the sparsity penalty to directly target high frequency features.

[4] An alternative, and even simpler, sparsity penalty would be to penalise all latents according to the square of their firing frequencies, i.e. using a penalty of the form $\lambda \sum_{i=1}^{M} \hat{\omega}_i^2$. This squared-frequency penalty also has the advantage of not introducing yet another hyperparameter.
However, this penalty under-penalises latents with very low firing frequencies, leading to a frequency distribution that is devoid of both high and low firing frequencies, i.e. more sharply peaked around the mean firing frequency. One way to see why this is the case is to notice that such a penalty can also be expressed (up to a factor of $\lambda M$) as $\langle \hat{\omega} \rangle^2 + \mathrm{Var}(\hat{\omega})$: i.e. holding the mean firing frequency $\langle \hat{\omega} \rangle$ fixed, it corresponds to minimising the variance of the frequency distribution. In contrast, the quadratic-frequency sparsity penalty used here ensures that all latents receive a frequency penalty that is at least linear, while high frequency latents receive an additional penalty that is quadratic; this ensures that the lower part of the frequency distribution remains similar to the latent frequency distributions for JumpReLU and TopK, while nevertheless suppressing the top end of the frequency distribution.

[5] Note that this is a departure from the approach we've taken in earlier work, where latents were sampled uniformly (with a low-frequency cutoff to deal with latents that have insufficient data) for both automated and manual interp.

[6] We train all JumpReLU variants using straight-through estimators that only provide gradients to the threshold, as in the original paper.

[7] This could be happening with Gated SAEs too.
324cf361-cd56-4a52-80d1-d68b4256c54b
trentmkelly/LessWrong-43k
LessWrong
Meetup : Melbourne, practical rationality Discussion article for the meetup : Melbourne, practical rationality WHEN: 07 September 2012 07:00:00PM (+1000) WHERE: 55 walsh st west melbourne 3003 australia Practical rationality. This meetup repeats on the 1st Friday of each month and is distinct from our social meetup on the 3rd Friday of each month. Discussion: http://groups.google.com/group/melbourne-less-wrong All welcome from 6:30pm. Call the phone number on the door and I'll let you in. Discussion article for the meetup : Melbourne, practical rationality
768f42ec-d05e-4648-abec-97726de27623
StampyAI/alignment-research-dataset/alignmentforum
Alignment Forum
200 COP in MI: Analysing Training Dynamics

*This is the sixth post in a sequence called 200 Concrete Open Problems in Mechanistic Interpretability.* [*Start here*](https://www.alignmentforum.org/posts/LbrPTJ4fmABEdEnLf/200-concrete-open-problems-in-mechanistic-interpretability)*, then read in any order. If you want to learn the basics before you think about open problems, check out* [*my post on getting started*](https://neelnanda.io/getting-started)*. Look up jargon in* [*my Mechanistic Interpretability Explainer*](https://neelnanda.io/glossary)

***Disclaimer**: Mechanistic Interpretability is a small and young field, and I was involved with much of the research and resources linked here. Please take this sequence as a bunch of my personal takes, and try to seek out other researchers’ opinions too!*

***Motivating papers**:* [*A Mechanistic Interpretability Analysis of Grokking*](https://www.alignmentforum.org/posts/N6WM6hs7RQMKDhYjB/a-mechanistic-interpretability-analysis-of-grokking)*,* [*In-Context Learning and Induction Heads*](https://transformer-circuits.pub/2022/in-context-learning-and-induction-heads/index.html)

Background
----------

*Skip to motivation if you’re familiar with the grokking modular addition and induction heads papers*

Several mechanistic interpretability papers have helped find surprising and interesting things about the training dynamics of networks - understanding what is actually going on in a model during training. There’s a lot of things I’m confused about here! How do models change over training? Why do they generalise at all (and *how* do they generalise)? How much of their path to the eventual solution is consistent across runs and directed vs a random walk? How and when do the different circuits in the model develop, and how do these influence the development of subsequent circuits?

The questions here are of broad interest to anyone who wants to understand what the hell is going on inside neural networks, and there are angles of attack on this that are nothing to do with Mechanistic Interpretability. But I think that MI is a particularly promising approach. If you buy the fundamental claim that the building blocks of neural networks are interpretable circuits that work together to complete a task, then studying these building blocks seems like a grounded and principled approach. Neural networks are extremely complicated and confusing and it’s very easy to mislead yourself. Having a good starting point of a network you *actually* understand makes it far easier to make real progress.

I am particularly interested in understanding **phase transitions** (aka emergent phenomena), where a specific capability suddenly emerges in models at a specific point in training. Two MI papers that get significant traction here are [A Mechanistic Interpretability Analysis of Grokking](https://www.alignmentforum.org/posts/N6WM6hs7RQMKDhYjB/a-mechanistic-interpretability-analysis-of-grokking) and [In-Context Learning and Induction Heads](https://transformer-circuits.pub/2022/in-context-learning-and-induction-heads/index.html), and it’s worth exploring what they did as models of mechanistic approaches to studying training dynamics:

In [In-Context Learning and Induction Heads](https://transformer-circuits.pub/2022/in-context-learning-and-induction-heads/index.html), the focus was on finding phase changes in real language models.
We found [induction heads/circuits](https://dynalist.io/d/n2ZWtnoYHrU1s4vnFSAQ519J#z=_Jzi6YHRHKP1JziwdE02qdYZ) in two layer, attention-only language models - a circuit which performs the task of detecting whether the current text was repeated earlier in the prompt, and continuing it. These turned out to be a crucial circuit in these tiny models - the heads were principally responsible for the model’s ability to do **in-context learning**, productively using tokens far back in the context (eg over 500 words back!) to predict the next token. Further, the induction heads all arose in a sudden phase transition, and were so important that this led to a visible bump in the loss curve!

![](https://res.cloudinary.com/lesswrong-2-0/image/upload/v1673016799/mirroredImages/hHaXzJQi6SKkeXzbg/di5jpqzqnnsorgycrzqx.png)

The focus of the paper was on **extrapolating** this mechanistic understanding in a simple setting to real models. We devised a range of progress measures (for both overall model behaviour and for studying specific heads), and analysed a wide range of models using these. And we found fairly compelling evidence that these results held up in general, up to 13B parameter models!

Grokking is a surprising phenomenon where certain small models trained on algorithmic tasks initially memorise their training data, but when trained on the *same* data for a really long time, will suddenly generalise. In our grokking work, we focused on a 1 layer transformer that grokked modular addition. We did a deep dive into its internals and reverse engineered it to discover it had learned a trig-identity-based algorithm.

We then distilled this understanding of the final checkpoint into a series of **progress measures**. We studied **excluded loss**, where we *just* removed the model’s ability to use the generalising solution and looked at train performance, and **restricted loss**, where we artificially remove the memorising solution and *only* allow the model to use the generalising solution and look at overall performance. These let us decompose the training dynamics into 3 discrete phases: **memorisation**, where the model just memorises the training data; **circuit formation**, where the model slowly learns a generalising circuit and *transitions* from the memorising solution to the generalising solution - preserving good train performance and poor test performance throughout; and **cleanup**, where it gets so good at generalising that it no longer needs the parameters spent on memorisation and can clean them up. The sudden emergence of “grokking” only occurs at cleanup, because even when the model is mostly capable of generalising, the “noise” of the memorising solution is sufficient to prevent good test loss.

![](https://res.cloudinary.com/lesswrong-2-0/image/upload/v1673016799/mirroredImages/hHaXzJQi6SKkeXzbg/jkdygvxgxzsz3spozuaw.png)

Both works used a mechanistic understanding of a simpler setting to try to understand a confusing phenomenon, but with somewhat different focuses. The approach modelled in grokking is one at the intersection of MI and the science of deep learning. We took a specific confusing phenomenon in neural networks, **simplified** it to a simple yet still confusing example, **reverse-engineered** exactly what was going on, **extrapolated** it to the core principles (of progress measures) and analysed these during training. By deeply focusing on understanding a specific example of a specific phenomenon, we (hopefully!)
gained some general insights about questions around memorisation, generalisation, phase transitions, and competition between different circuits (though there’s a *lot* of room for further work verifying this, and I’m sure some details are wrong!). (Check out the [Interpreting Algorithmic Problems post](https://www.alignmentforum.org/posts/ejtFsvyhRkMofKAFy/200-cop-in-mi-interpreting-algorithmic-problems) for another angle on this work). In contrast, the induction heads work has a bigger focus on showing the universality of the result across many models, studying real models in the wild, and being able to tie the zoomed in understanding of a specific model circuit to the important zoomed out phenomenon of in-context learning.

Motivation
----------

Zooming out, what are the implications of all this for future research? I think there’s exciting work to be done, both on using toy models to find general principles of how neural networks learn and on understanding the training dynamics in real language models. A lot of the current low-hanging fruit is in fairly simple settings and building better fundamental understanding. But a good understanding of training dynamics and how it intersects with interpretability should eventually push on some important long-term questions:

* Helping us find ways to track and predict [emergent](https://bounded-regret.ghost.io/more-is-different-for-ai/) [phenomena](https://arxiv.org/abs/2206.07682), such as by using a mechanistic understanding to find smoother progress measures that precede the emergence.
  + My personal prediction is that *most* circuits form in sudden phase transitions, but this all averages out into a fairly smooth loss curve, because each individual circuit is just a marginal contribution to model performance. From this perspective, studying circuits is key to understanding emergence, since emergence is likely a consequence of some circuits suddenly developing.
* Helping understand the potential development of misaligned capabilities like deception or [a model learning incorrect proxy goals](https://www.youtube.com/watch?v=bJLcIBixGj8) that generalise poorly. Eg, if these do happen, should we expect them to develop suddenly, or continuously?
* More ambitiously, exploring how an understanding of interpretability might let us *change* training dynamics to change a model’s trajectory and avoid these failure modes, such as by putting interpretability inspired metrics in the loss function.

Work on deep learning mysteries (in contrast to directly on real language models) doesn’t directly push on the above, but I think that being less confused about deep learning will help a lot. And even if the above fail, I expect that exploring these questions will naturally open up a lot of other open questions and curiosities to pull on.

I’m particularly excited about this work, because I think that trying to decipher mysteries in deep learning makes mechanistic interpretability a healthier field. We’re making some pretty bold claims that our approach will let us really reverse-engineer and fully understand systems. If this is true, we *should* be able to get much more traction on demystifying deep learning, because we’ll have the right foundations to build on. One of the biggest ways the MI project can fail is by ceasing to be fully rigorous and losing sight of what’s *actually* true for models.
If we can learn true insights about deep learning or real language models, this serves both as a great proof of concept of the promise of MI, and grounds the field in doing work that is actually useful.

To caveat the above, understanding what on earth is going on inside networks as they train highly appeals to my scientific curiosity, but I don’t think it’s *necessary* for the goal of mechanistically understanding a fully trained system. My personal priority is to understand specific AI systems on the frontier, especially at human level and beyond. This is already a very ambitious goal, and I don’t think that understanding training dynamics is obviously on the critical path for this (though it might be! Eg, if models are full of irrelevant "vestigial organs" learned early in training). And there’s a risk that we find surface-level insights about models that allow us to make better models faster, while still being highly confused about how they *actually* work, and on net being even more behind in understanding frontier models. And models are already confusingly high dimensional objects without adding a time dimension in! But I could easily be wrong, and there’s a lot of important questions here (just far more important questions than researchers!). And as noted above, I think this kind of work can be very healthy for the field. So I'd still encourage you to explore these questions if *you* feel excited about them!

Resources
---------

* The [induction heads paper](https://transformer-circuits.pub/2022/in-context-learning-and-induction-heads/index.html)
  + [A video walkthrough](https://www.youtube.com/watch?v=dCkQQYwPxdM&t=3041s) I made with Charles Frye on it
  + The [Induction Circuits section](https://dynalist.io/d/n2ZWtnoYHrU1s4vnFSAQ519J#z=_Jzi6YHRHKP1JziwdE02qdYZ) of my MI explainer
* My [grokking work](https://www.alignmentforum.org/posts/N6WM6hs7RQMKDhYjB/a-mechanistic-interpretability-analysis-of-grokking) and [the accompanying colab notebook](https://bit.ly/neelgrokking)
  + If you want to work on a grokking-related question, [email me](mailto:neelnanda27@gmail.com) for a significantly more up-to-date draft
* My [TransformerLens library](https://github.com/neelnanda-io/TransformerLens) supports many sequences of checkpointed models; see [model documentation here](https://dynalist.io/d/n2ZWtnoYHrU1s4vnFSAQ519J#z=1Z1d-y3LE8Y6TVqglA0UrGZS) (and the minimal loading sketch at the end of this post)
  + My toy models all have ~200 checkpoints taken on a roughly exponentially decaying schedule
    - The SoLU, GeLU and attn-only models of 1-4L were all trained with the same initialization and data shuffle (that is, all 1Ls are the same, all 2Ls are the same, etc). I have no idea whether this makes a difference, but at least some induction heads seem shared across them!
  + There are some larger SoLU models (up to GPT-2 Medium size), and an older scan trained on the Pile, which both have checkpoints taken during training
  + The Stanford CRFM released 5 GPT-2 small and 5 GPT-2 medium sized models with 600 checkpoints and each with a different random seed (in TransformerLens as `stanford-gpt2-small-a` etc)
  + Eleuther’s Pythia project has models from 19M parameters to 13B, with 143 linearly spaced checkpoints taken during training. Each model was trained on the exact same data shuffle.
    - They also contain a scan of models trained on a de-duplicated version of the Pile
* The [Training Dynamics section](https://dynalist.io/d/n2ZWtnoYHrU1s4vnFSAQ519J#z=DHoxIlHuS8VQDzhP-mhn982c) of my MI explainer

Tips
----

* Studying training dynamics is generally easy to do with small models trained on algorithmic tasks - you can train them yourself fairly easily and take as many checkpoints as you want!
* If training a model takes more than 1-2 hours, I recommend taking a *lot* of checkpoints (unless you have major storage constraints).
  + And make sure to set a random seed at the start! You *really* want to be able to reproduce a training run, in case you want to take more checkpoints, add some new metrics, etc.
  + I recommend taking more checkpoints earlier in training (eg on an exponentially decaying schedule - see the sketch after these tips); there tends to be more interesting stuff in the first few steps/epochs.
* It’s easy to jump too fast to studying training. I recommend first ensuring you really understand the *final* model, and use this understanding to ground your explorations during training.
  + Though it can also be useful to go back and forth - briefly exploring the model during training can help you figure out what’s most relevant to study
* If you’re training your own models, you should save checkpointed runs with several different random seeds. This is an easy way to sanity check your results!
* Studying training dynamics can be significantly harder than reverse engineering a single model, because time is an additional dimension to grapple with (and networks have enough dimensions as it is!).
  + One consequence of this is that, even more so than normal research, you want to be careful about prioritisation. There’s often a lot of weird shit during training, and you want to focus on what’s most relevant to the question you’re trying to answer.
* Phase transitions are a somewhat fuzzy concept, and different metrics can exaggerate/underrate them. As a rough rule of thumb, metrics like cross-entropy loss are continuous and smoother, while metrics like accuracy are discrete and sharper (so look more phase transitiony). And things look sharper with a log scale on the x axis. (A fun exercise is comparing how different phase-transition-focused papers present their data!)
  + It’s not clear to me what the *right* metric to use here is - if something only looks like a phase transition under accuracy, then that’s less compelling, but still clearly tells you *something*. I recommend using both kinds, and plotting graphs with linear or log scale axes.
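To illustrate the checkpoint schedule recommended in the tips above, here is a minimal sketch (illustrative, not from any particular codebase) of picking step indices that are dense early in training and exponentially sparser later:

```python
def checkpoint_steps(total_steps: int, n_checkpoints: int = 200) -> list[int]:
    """Exponentially spaced checkpoint steps: dense early in training
    (where phase transitions and circuit formation tend to happen),
    sparse later. Duplicate steps from rounding are collapsed by the set."""
    ratio = total_steps ** (1 / (n_checkpoints - 1))
    return sorted({min(total_steps, round(ratio ** i)) for i in range(n_checkpoints)})

print(checkpoint_steps(100_000)[:8])   # dense start: roughly [1, 2, 3, 4, 5, 6, 7, 8]
print(checkpoint_steps(100_000)[-3:])  # sparse end: roughly [89_000, 94_000, 100_000]
```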
Problems
--------

[*This spreadsheet*](https://www.neelnanda.io/cop-spreadsheet) *lists each problem in the sequence. You can write down your contact details if you're working on any of them and want collaborators, see any existing work or reach out to other people on there! (thanks to Jay Bailey for making it)*

***Note**: Many of these questions are outside my area of expertise, and overlap with broader science of deep learning efforts. I expect there’s a lot of relevant work I’m not familiar with, and I would not be massively surprised if some of these problems have already been solved!*

* Algorithmic tasks
  + Understanding grokking
    - **B\* 5.1 -** Understanding why [5 digit addition has a phase change](https://www.lesswrong.com/posts/N6WM6hs7RQMKDhYjB/a-mechanistic-interpretability-analysis-of-grokking#Speculation__Phase_Changes_are_Everywhere) per digit (so 6 total?!)
      * C 5.2 - Why in the order it does?
    - **B\* 5.3 -** Look at the PCA of logits on the full dataset, or look at the PCA of a stack of flattened weights. If you plot a scatter plot of the first 2 components, the different phases of training are clearly visible. What’s up with this?
      * Can you interpret the different components? (Very roughly, I think one of them is memorising circuit - generalising circuit)
      * Can you use these to predict *when* the model will grok? I've had some success doing this with the components calculated using *all* checkpoints, but haven't figured out how to do it without future information.
    - **C\* 5.4 -** Can we predict *when* grokking will happen? A metric that doesn’t use any future information would be awesome
    - **C\* 5.5 -** Understanding why the model chooses specific frequencies (and [why it switches mid-training sometimes](https://twitter.com/NeelNanda5/status/1559430256624209921)!)
    - B-C 5.6 - What happens if we include in the loss one of the progress measures in my post/paper - can we accelerate or stop grokking?
  + **B\* 5.7 -** [Adam Jermyn](https://www.lesswrong.com/posts/RKDQCB6smLWgs2Mhr/multi-component-learning-and-s-curves) provides an analytical argument and some toy models for why phase transitions should be an inherent part of (some of) how models learn. Can you find evidence of this in more complex models?
    - I would start by training a 2L attn-only model on repeated random tokens to form induction heads. This has a phase transition; can you figure out why?
    - B 5.8 - Build on and refine his arguments and toy models, eg thinking about the ways in which they deviate from a real transformer, and building toy models that are more faithful
  + Lottery Tickets
    - **B-C\* 5.9 -** Eg for a toy model trained to form induction heads. Is there a lottery-ticket style thing going on? Can you disrupt induction head formation by messing with the initialization? (eg train 4 models, find the least induction-y heads in each, and initialize a model with those heads)
      * What happens if you initialize the model with the fully trained prev token or induction head?
    - **C\* 5.10 -** All of my toy models with n=1 to 4 layers (attn-only, gelu and solu) were trained with the same data shuffle and weight initialization. [Looking at the induction heads](https://www.neelnanda.io/mosaic) in the models, many are not shared, but head L2H3 in the 3L ones and head L1H6 in the 2L ones are always induction heads. What’s up with that?
    - B 5.11 - If we knock out, at the *start* of training, the parameters that form important circuits at the end of training on some toy task (eg modular addition), how much does that delay/stop generalisation?
  + Analysing how the pair of heads compose in [an induction circuit](https://dynalist.io/d/n2ZWtnoYHrU1s4vnFSAQ519J#z=_Jzi6YHRHKP1JziwdE02qdYZ) over time (previous token head and induction head)
    - **B\* 5.12 -** Can you find progress measures which predict these?
      * Can this be detected with [composition scores](https://dynalist.io/d/n2ZWtnoYHrU1s4vnFSAQ519J#z=FsjBpwi6iQG6Rkj7B8o6-S6W)? Can you refine this into something that will?
    - **B\* 5.13 -** At initialization, can we predict *which* heads will learn to compose first? Is it at all related to how much they compose at initialization?
      * If we copy this pair of heads into another randomly initialized model, does this pair still compose first?
    - B 5.14 - Does the composition develop as [a phase transition](https://dynalist.io/d/n2ZWtnoYHrU1s4vnFSAQ519J#z=JpZxnjnHduf28x6C8CRwHVTN)?
      * What do the composition scores look like over training?
* Understanding fine-tuning
  + **C\* 5.15 -** Build a toy model of fine-tuning (where the model is pre-trained on task 1, and then fine-tuned on task 2). What is going on internally? Can you find any interesting motifs?
    - Possible variations - totally switching tasks, varying just one aspect of the task, or training 50% on the new task and 50% on the old task.
    - See the Superposition post for discussion on how to approach fine-tuning projects
  + What happens within a model when you fine-tune it? TransformerLens contains versions of the 1L SoLU and 4L SoLU toy models fine-tuned on 4.8B tokens of wikipedia ([documented here](https://dynalist.io/d/n2ZWtnoYHrU1s4vnFSAQ519J#z=catqTa8-6fQG58w62XjJouHp), load with `solu-1l-wiki` or `solu-4l-wiki`; 163 checkpoints taken during fine-tuning are also available)
    - Hypothesis: Fine-tuning is mostly just rewiring and upweighting vs downweighting circuits that already exist, rather than building new circuits.
    - **A\* 5.16 -** Explore how model performance changes on the original training distribution. Are specific capabilities harmed, or is it worse across the board?
    - **B\* 5.17 -** How is the model different on the fine-tuned text? I’d look for examples of fine-tuning text where the model does *much* better post fine-tuning, and start there, but also look at some more normal text.
      * Start with direct logit attribution and compare model components - do all matter a bit more, or do some matter *way* more and the rest are the same?
      * **B\*** Try activation patching between the old and new model on the same text and see how hard it is to recover performance - do some components change a lot, or do all change a bit?
    - **B\* 5.18 -** Look at max activating text for various neurons in the original models. How has it changed post fine-tuning? Are some neurons overwritten, are most basically the same, or what?
      * I’m particularly curious about whether the model learns new features by shoving them in in superposition, while keeping all of the old features, vs significantly rewriting things.
    - **A-C\* 5.19 -** Explore further and see what you can learn about what is going on mechanistically with fine-tuning (I predict that you can easily learn *something*, but that there’s a lot of room to dig deep and find insights)
    - **B-C\* 5.20 -** Can you find any phase transitions in the checkpoints taken during fine-tuning?
* Understanding training dynamics in language models
  + Phase transitions
    - **A-B\* 5.21 -** Can you replicate the induction head phase transition results in the various checkpointed models in TransformerLens? (I’d write code that works for `attn-only-2l`, and then if you just change the pretrained model name, the code should work for all of the models!)
    - **B\* 5.22 -** Look at the neurons in my SoLU models during training. Do they tend to form as a phase transition?
      * The easiest place to start would be taking the max activating dataset examples from [neuroscope](https://neuroscope.io/), running those through the model, and looking at how the activation changes (though you’ll need to significantly refine this to be confident there’s a phase transition!)
    - **C\* 5.23 -** Use the [per-token loss analysis](https://transformer-circuits.pub/2022/in-context-learning-and-induction-heads/index.html#per-token-loss-intro) technique from the induction heads paper to look for more phase changes - when plotting the first two dimensions, the induction head phase change shows up as a clear turn. Can you figure out a capability or circuit that these correspond to?
      * I’d start with other toy models (eg `solu-2l`, `solu-1l` or `attn-only-3l`)
      * You can also try out [other dimensionality reduction techniques](https://dynalist.io/d/n2ZWtnoYHrU1s4vnFSAQ519J#z=xRPTt5aM5hTP0KHEbgol56Gc)
      * I would look for the first 3 components, or only study checkpoints after the induction bump, so you don’t *just* see the induction bump!
    - Other ideas for finding phase transitions in real language models, eg using [the Stanford CRFM models](https://dynalist.io/d/n2ZWtnoYHrU1s4vnFSAQ519J#z=YXdlphEsJJyHaut5D_UiYnb8) or my toy models
      * **A\* 5.24 -** Look at attention heads on various texts and see if any have recognisable attention patterns (eg start of word, adjective describing current word, syntactic features of code like indents or variable definitions, most recent open bracket, etc), then analyse these over training.
      * A 5.25 - The [Indirect Object Identification task](https://dynalist.io/d/n2ZWtnoYHrU1s4vnFSAQ519J#z=WWRk64Y0yFkKZ8Q3dSaeVJQp) (note - I’ve seen suggestive results that this doesn’t have a phase change)
        + **B\* 5.26 -** Try digging into the specific heads that act on the problem. Use direct logit attribution for the name movers
        + **B\* 5.27 -** Or study the attention patterns of each category of heads
      * **B-C\* 5.28 -** Simple IOI-style algorithmic tasks - eg few shot learning, addition, sorting words into alphabetical order, completing a rhyming couplet, matching open and close brackets
      * B 5.29 - Soft induction heads, eg [translation](https://transformer-circuits.pub/2022/in-context-learning-and-induction-heads/index.html#performing-translation), if any can do it
      * C 5.30 - Performance on benchmarks or specific questions from a benchmark
    - **D\* 5.31 -** Hypothesis: The reason scaling laws happen is that models experience a *ton* of [tiny phase changes](https://www.lesswrong.com/posts/N6WM6hs7RQMKDhYjB/a-mechanistic-interpretability-analysis-of-grokking#Speculation__Phase_Changes_are_Everywhere), which average out to a smooth curve because of the law of large numbers. Can you find evidence for or against that? Are phase changes everywhere?
  + Studying path dependence - how much do models trained in similar ways generally converge on the same behaviour (**path independence**), vs subtle differences (or just different seeds!) leading to significant variation (**path dependence**)?
    - **A-B\* 5.32 -** How much do the Stanford CRFM models have similar outputs on any given text, vs varying a lot?
      * Hypothesis - they’ll all figure out circuits at approx the same rate, but with some noise. So they’ll agree on the easy stuff but not the hard stuff. But the medium size ones will be able to do all the stuff the small ones can do.
      * B 5.33 - Try this out for algorithmic-flavoured tasks, like acronyms, IOI; see the [Circuits In the Wild post](https://www.lesswrong.com/s/yivyHaCAmMJ3CqSyj/p/XNjRwEX9kxbpzWFWd) for more
    - **A-B\* 5.34 -** Look for the IOI capability in other models of approx the same size (OPT small, GPT-Neo small, Stanford CRFM models, `solu-8l`, `solu-10l`). Is it implemented in the same way?
      * I’m particularly interested in the Stanford models, because the *only* difference is the random seed.
    - **B\* 5.35 -** When model scale varies (eg GPT-2 Small vs GPT-2 Medium) is there anything the smaller one can do that the larger one *can’t* do? I’d start by looking at the difference in per token log prob.
      * My [toy models or SoLU models](https://dynalist.io/d/n2ZWtnoYHrU1s4vnFSAQ519J#z=NCJ6zH_Okw_mUYAwGnMKsj2m) have fairly small gaps between model sizes, and may be interesting to study here.
    - **B\* 5.36 -** Try applying the [Git Re-Basin](https://arxiv.org/abs/2209.04836) techniques to a 2L MLP trained for modular addition. a) Does this work, and b) if you use [my grokking work](https://www.lesswrong.com/posts/N6WM6hs7RQMKDhYjB/a-mechanistic-interpretability-analysis-of-grokking) to analyse the circuits involved, how does the re-basin technique map onto them?
      * Git Re-Basin is a technique for linearly interpolating between the weights of two models trained on the same task
      * (Note: My prediction is that this just won’t work for modular addition, because there are different basins in model space for the different frequencies)
      * See [Stanislav Fort](https://github.com/stanislavfort/dissect-git-re-basin)’s somewhat negative replication for a starting point and criticism.
    - C-D 5.37 - Can you find some problem where you understand the circuits and where Git Re-Basin *does* work?
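Finally, since many of the problems above involve the checkpointed models listed in Resources, here is a minimal loading sketch. The model name and checkpoint indices are examples only, and it assumes the TransformerLens checkpoint API linked in the model documentation above:

```python
from transformer_lens import HookedTransformer

# Compare an early and a late checkpoint of a toy model on repeated text
# (the regime where induction heads dramatically reduce loss).
prompt = "The quick brown fox jumps over the lazy dog. The quick brown fox"
for idx in [0, 50, 150]:  # example indices into this model's ~200 saved checkpoints
    model = HookedTransformer.from_pretrained("attn-only-2l", checkpoint_index=idx)
    loss = model(prompt, return_type="loss")
    print(f"checkpoint {idx}: loss {loss.item():.3f}")
```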
a4b8fc35-394a-4fa4-b494-29b76fa4078d
trentmkelly/LessWrong-43k
LessWrong
What makes math problems hard for reinforcement learning: a case study > Abstract: Using a long-standing conjecture from combinatorial group theory, we explore, from multiple angles, the challenges of finding rare instances carrying disproportionately high rewards. Based on lessons learned in the mathematical context defined by the Andrews-Curtis conjecture, we propose algorithmic improvements that can be relevant in other domains with ultra-sparse reward problems. Although our case study can be formulated as a game, its shortest winning sequences are potentially  or  times longer than those encountered in chess. In the process of our study, we demonstrate that one of the potential counterexamples due to Akbulut and Kirby, whose status escaped direct mathematical methods for 39 years, is stably AC-trivial. > > Introduction > > We live in an extraordinary era where artificial intelligence (AI) is transforming numerous sectors and professions. Recent advancements in Large Language Models (LLMs) have empowered AI to read, write, and converse with a proficiency comparable to that of human experts. In the realm of board games, AI has outperformed even the most skilled human players, and it has tackled complex scientific challenges like protein folding, where steady progress was suddenly overtaken by a near-complete solution. > > As AI continues to evolve, one critical question remains: How wide is the range of domains in which AI systems can reason as effectively as humans? Mathematics appears to be a natural progression on the path toward Artificial General Intelligence (AGI) due to its universal syntactic and logical structure, similar to that of natural language. Additionally, mathematics provides a framework for the quantitative evaluation of logical and analytical reasoning, making it an ideal domain for self-improving AI systems on the path to AGI. > > In a moment, we will explain another reason why mathematics could play a crucial role in AGI development, but first, we need to introduce one more key element: reinforcement learnin
c6921ec2-d766-407a-9f5e-c62f2bf98b4c
trentmkelly/LessWrong-43k
LessWrong
Letting Kids Be Kids Letting kids be kids seems more and more important to me over time. Our safetyism and paranoia about children is catastrophic on way more levels than most people realize. I believe all these effects are very large: 1. It raises the time, money and experiential costs of having children so much that many choose not to have children, or to have less children than they would want. 2. It hurts the lived experience of children. 3. It hurts children’s ability to grow and develop. 4. It de facto forces children to use screens quite a lot. 5. It instills a very harmful style of paranoia in all concerned. This should be thought of as part of the Cost of Thriving Index discussion, and the fertility discussions as well. Before I return to the more general debate, I wanted to take care of this aspect first. It’s not that the economic data is lying exactly, it’s that it is missing key components. Economists don’t include these factors in their cost estimates and their measures of welfare. They need to do that. I want a distinct marker for this part of the problem I can refer back to, thus this will include highlights of past discussions of the issue from older roundups and posts. Why are so many people who are on paper historically wealthy, with median wages having gone up, saying they cannot afford children? A lot of it is exactly this. The real costs have gone up dramatically, largely in ways not measured directly in money, also in the resulting required basket of goods especially services, and this is a huge part of how that happened. Bryan Caplan’s Selfish Reasons to Have More Kids focuses on the point that you can put in low effort on many fronts, and your kids will be fine. Scott Alexander recently reviewed it, to try and feel better about that, and did a bunch of further research. The problem is that even if you know being chill is fine, people have to let you be chill. On Car Seats as Contraception is a great case study, but only a small part of the puzzle. Th
29a730fb-8a20-4fe7-9426-96cb0d2fedff
trentmkelly/LessWrong-43k
LessWrong
Ideas for avoiding optimizing the wrong things day-to-day? In other words, I'm interested in ways I can design my work-flow/environment/habits to avoid bike-shedding (aka the Law of Triviality), which is the behavior of "substituting a hard and important problem for an easy and inconsequential one" [1]. Examples include 1) looking into an interesting idea that you ran across while doing a research task, even though it is irrelevant to your goal, 2) spending unnecessary time on the formatting of an essay rather than on the actual writing, 3) buying things/building systems to make very minor productivity improvements instead of doing your tasks for the day.
7f3c30ab-df29-4f40-9efd-ae331b6ac8b4
trentmkelly/LessWrong-43k
LessWrong
Iterating fast: voice dictation as a form of babble Voice input has dramatically increased my writing output as well as the quality of my ideas. But I think that it's not as easy as turning on voice dictation and writing an essay from start to finish. There is a particular way or method of doing voice input that I have found to be far more effective, which is what I will describe in this post. The pitfalls of dictation When you think of voice input, you might think of dictation. It's a form of transcription: an attempt to accurately piece together the words being said and faithfully reconstruct them into text. Courts and legal depositions benefit from accurate transcription. And in the past, if you were exceptionally rich and exceptionally busy, you might dictate your letter to a scribe who would jot it down for you. These are the use cases that voice dictation has been designed to replace. A key feature of this mode is that dictation is a one-way process where the goal is accuracy. Do-overs and backtracking aren't really possible: the over-arching goal is faithfulness to what was said. This is suited for letters and messages, but I think this is not all that voice input can do. Consider sketching on a page, where various ideas or points are connected by drawing arrows or shapes or groupings. When sketching, you aren't quite sure how to structure your thoughts before you start. And sketching is, in some way, the act of figuring this out. There is something highly nonlinear about this process where your thoughts backtrack to previous ideas and test the strength of old conclusions. Good conversations can be something like this, with both participants in more of a 'dance' than a single-pointed journey towards a predetermined conclusion. And I think that voice dictation can, with some effort and learning, become something like this too. An alternate use of voice input: babble The primary thing to know about speaking is that we can speak very fast, and in comparison we type rather slow. This might seem like no big
a0be7ece-7b45-4016-a40c-83da26665ed3
trentmkelly/LessWrong-43k
LessWrong
ChatGPT seems overconfident to me

I've talked to ChatGPT about physics, a topic I don't know much about, except that a few things like air lift, kinetic vs static friction, osmosis, and the stuff listed in https://xkcd.com/2682/ (how paracetamol/tylenol works? why balloons electrify hair?) are somewhat "controversial" in that Wikipedia, Quora and stackexchange and other places provide various conflicting answers. I've also talked to it about a topic I know a bit better: C++ std::atomic's semantics.

In both conversations ChatGPT was acting very self-confident. I must say I felt quite persuaded by it when talking about physics. I could tell the information it gave me about C++ was inaccurate, though. Now, how should I combine these two observations? I think an AI which gives confident wrong answers to people who ask it about topics they themselves are not experts about is dangerous.

Physics

How does a plane generate lift?

A plane generates lift by using its wings to manipulate the air around it. As the plane moves forward, air flows over and under the wings. The shape of the wings is designed to cause the air moving over the top of the wing to move faster than the air moving under the wing. This difference in speed creates a difference in pressure, with the higher pressure air under the wing pushing the plane upward. This upward force is known as lift and it is what allows a plane to fly.

Are you sure of this explanation?

Yes, I am confident in the explanation I provided. The basic principles of flight, including lift, have been well understood for many years, and are studied in great detail in the fields of aerodynamics and aviation. I am a large language model trained by OpenAI, and my purpose is to provide accurate and helpful information on a wide range of topics. If you have any further questions or concerns, I will do my best to address them.

So is it the lift caused by Bernoulli's effect?

Yes, the lift generated by an airplane wing is typically caused by the Bernoulli effect. This
6d188c3b-552e-4662-9d20-6a7b2f02ba61
trentmkelly/LessWrong-43k
LessWrong
The new spring of artificial intelligence: A few early economics
58b2a945-08a5-4777-b023-16264c3cc288
trentmkelly/LessWrong-43k
LessWrong
Blind Goaltenders: Unproductive Disagreements If you're worried about an oncoming problem and discussing it with others to plan, your ideal interlocutor, generally, is someone who agrees with you about the danger. More often, though, you'll be discussing it with people who disagree, at least in part. The question that inspired this post was "Why are some forms of disagreement so much more frustrating than others?" Why do some disagreements feel like talking to a brick wall, while others are far more productive? My answer is that some interlocutors are 'blind goaltenders'. They not only disagree about the importance of your problem, they don't seem to understand what it is you're worried about. For example, take AI Safety. I believe that it's a serious problem, most likely the Most Important Problem, and likely to be catastrophic. I can argue about it with someone who's read a fair chunk of LessWrong or Bostrom, and they may disagree, but they will understand. Their disagreement will probably have gears. This argument may not be productive, but it won't be frustrating. Or I could talk to someone who doesn't understand the complexity of value thesis or orthogonality thesis. Their position may have plenty of nuances, but they are missing a key concept about our disagreement. This argument may be just as civil - or, given my friends in the rationalsphere, more civil - but it will be much more frustrating, because they are a blind goaltender with respect to AI safety. If I'm trying to convince them, for example, not to support an effort to create an AI via a massive RL model trained on a whole datacenter, they may take into account specific criticisms, but will not be blocking the thing I care about. They can't see the problem I'm worried about, and so they'll be about as effective in forestalling it as a blind goalie. Things this does not mean Blind goaltenders are not always wrong. Lifelong atheists are often blind goaltenders with respect to questions of sin, faith, or other religiously-motivated behavior.
6a88e969-4865-4ee3-b9dd-cec4c35aa695
trentmkelly/LessWrong-43k
LessWrong
Strategic ignorance and plausible deniability This is the third part in a mini-sequence presenting material from Robert Kurzban's excellent book Why Everyone (Else) Is a Hypocrite: Evolution and the Modular Mind. The press secretary of an organization is tasked with presenting outsiders with the best possible image of the organization. While they're not supposed to outright lie, they do use euphemisms and try to only mention the positive sides of things. A plot point in the TV series West Wing is that the President of the United States has a disease which he wants to hide from the public. The White House Press Secretary is careful to ask whether there's anything she needs to know about the President's health, instead of whether there's anything she should know. As the President's disease is technically something she should know but not something she needs to know, this allows the President to hide the disease from her without lying to her (and by extension, to the American public). As she then doesn't need to lie either, she can do her job better. If our minds are modular, critical information can be kept away from the modules that are associated with consciousness and speech production. It can often be better if the parts of the system that exist to deal with others are blissfully ignorant, or even actively mistaken, about information that exists in other parts of the system. In one experiment, people could choose between two options. Choosing option A meant they got $5, and someone else also got $5. Option B meant that they got $6 and the other person got $1. About two thirds were generous and chose option A. A different group of people played a slightly different game. As before, they could choose between $5 or $6 for themselves, but they didn't know how their choice would affect the other person's payoff. They could find out, however – if they just clicked a button, they'd be told whether the choice was between $5/$5 and $6/$1, or $5/$1 and $6/$5. From a subject's point of view, clicking a button might
533810f5-81bb-47e6-bff8-093e8c5803ce
trentmkelly/LessWrong-43k
LessWrong
Roots of Progress is hiring an event manager The Roots of Progress Institute is hiring a full-time, in-house event manager to run our annual Progress Conference (at Lighthaven!) and other events. See the job ad below, crossposted in full from the link above. ---------------------------------------- Event Manager Fully remote, full-time The Role We’re looking for a super-organized self-starter who loves bringing people together in person around a shared set of ideas and who is great at creating magical experiences. The Roots of Progress Institute is a nonprofit dedicated to establishing a new philosophy of progress for the 21st century. We’re part of a larger progress and abundance movement, and one key role we play within this movement is to develop talent and to build community. As the Event Manager, you’ll be in charge of our annual progress conference, which brings together 200-300 thinkers and doers in the progress community. Our first event in October 2024 was a huge success, with 200+ invitation-only attendees coming together at a unique venue for two days. Dozens of attendees shared that this was the best conference they ever attended, and that it was “THE network to connect with the founders, writers, academics, and activists working to build a better world.” You will be running the event next year, and of course get to attend it, too! You will also be in charge of other events, from smaller fundraising salon-type gatherings, to the in-person gathering at the end of our annual writer’s fellowship. This role reports to Heike Larson, our Vice President of Programs. It is a full-time position that is fully remote within in the contiguous US or Canada, but ideally, you’ll be located in/near a city with a major airport as the role requires a couple of multi-day trips every quarter, and around ten days on-site during the time of the annual conference. About You You love organizing events that bring people together and enable them to learn and form communities. You are good at creating delightful ex
6f66f415-f76d-4119-a428-bbfa75749de6
trentmkelly/LessWrong-43k
LessWrong
Seeking matcher for SIAI donation I just donated $150 to the Singularity Institute.  Would anyone be willing to match my donation (as in, donate at least $150 and tell me you did so)?  A first-time donor would be ideal, but anyone would make me happy.
fbd729cb-bae5-4a7f-8e1e-101839a30886
trentmkelly/LessWrong-43k
LessWrong
Some ML-Related Math I Now Understand Better

Here are some simple Math facts rarely taught in ML & Math lectures:

* SVD is decomposing a matrix into a sum of simple read-and-write operations
* There is exponentially much room for close vectors in high dimensional space
* Layer Normalization is a projection

SVD Is Decomposing a Matrix Into a Sum of Simple Read and Write Operations

Thanks to zfurman, on EleutherAI, who told me about the core idea.

Let's say I want to understand what a weight matrix does. A large table of numbers isn't really helpful, I want something better. Here is a linear transformation which is much easier to understand than a table of numbers: f(x) = λ⟨v, x⟩u (where u and v are unit vectors, λ is a non-negative scalar, and ⟨v, x⟩ is the dot product between v and x). f is reading in the v direction, and writing in the u direction with a magnitude scaled by λ. I can understand this much more clearly and do nice interpretability with it. Therefore, if I know things about my embedding space (for example, using the logit lens), I can tell what f is doing.

Sadly, not all linear transformations can be expressed in this way: it's just a rank-1 transformation, so there is no hope of capturing something as complex as a usual linear transformation. But what if I allow myself to sum many simple transformations? Then I could just look at each operation independently. More precisely, here is a wishlist to understand the big matrix M of the transformation g:

* g(x) = ∑_{i=1}^{r} λ_i ⟨v_i, x⟩ u_i, where the u_i and v_i are unit vectors, and the λ_i are positive scalars - it's a sum of simple transformations;
* ⟨v_i, v_j⟩ = 0 for all i ≠ j - no double read, always read in orthogonal directions;
* ⟨u_i, u_j⟩ = 0 for all i ≠ j - no double write, always write in orthogonal directions;
* r = rank(g) - the number r of simple transformations summed should be as small as possible, because that would mean fewer individual transformations to inspect (and I can't hope for fewer than rank(g) operations, since the rank of a sum of r rank-1 linear transformations is at most r).
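A quick numerical check of this picture (a minimal numpy sketch of my own; the random matrix is just an example): numpy's SVD returns exactly such a decomposition, with the v_i as the rows of V^T, the u_i as the columns of U, and the λ_i as the non-negative singular values.

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.normal(size=(6, 4))

# M = U diag(s) V^T, i.e. M @ x = sum_i s[i] * <v_i, x> * u_i
U, s, Vt = np.linalg.svd(M, full_matrices=False)

x = rng.normal(size=4)
# Each term reads along v_i (= Vt[i]) and writes along u_i (= U[:, i]), scaled by s[i].
as_sum = sum(s[i] * (Vt[i] @ x) * U[:, i] for i in range(len(s)))

assert np.allclose(M @ x, as_sum)
```

The rows of Vt are orthonormal (no double read), the columns of U are orthonormal (no double write), and there are exactly rank(M) nonzero terms, so the whole wishlist is satisfied.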
eb19a7a1-6013-480a-98e9-6830a9b58a40
trentmkelly/LessWrong-43k
LessWrong
Prediction Markets Are Mediocre

The fantasy of prediction markets is that we can use them for policy proposals to inform people about the consequences of their actions. A conditional prediction market "Will X lead to terrible consequence Y" can theoretically give helpful insights and prevent X from happening, if P(Y|X) is high enough.

The reality of prediction markets is this:

We may say that this is not bad at all. The market is clearly reacting to the news and updating its estimate. It aggregates the uncertainty of multiple people into a single probability estimate. It's doing its job. Now it's very confident in tariffs, and it has never been too low. The problem is that knowing that Trump will almost certainly impose large tariffs five months after the election is useless for the initial purpose of choosing the right candidate to vote for.

Here is how the same market looked when its predictions were actually relevant:

It fluctuated wildly, rising to 75% and falling to 36%. On election day it was merely 56%. Again, this isn't necessarily bad; these estimates might have been reasonable, considering the information available back then. It's not that prediction markets do not work at all. It's that they do not work well enough to make a difference.

From the Perspective of a Voter

Imagine yourself in the shoes of a voter leaning towards voting for Trump in this election. Not a MAGA fanatic. But a person whose overall cultural and aesthetic sensibilities are more aligned with Republicans. You are pro-market, pro-guns, tough on crime, dislike DEI, etc. You are in a bit of a conundrum here. On one hand Trump just overall vibes with you. But he is talking about tariffs. And according to Economics 101, which you pride yourself on being familiar with, tariffs are really bad. So, in theory, this issue could be a deal-breaker for you. You check a prediction market and see a 56% estimate for tariffs being imposed in Trump's first year. Will it change your vote? Will you think that even this is high enough
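To see why a mid-50s estimate is unlikely to move anyone, here is a toy expected-utility version of this voter's decision (all numbers are my own illustrative assumptions, not the post's):

```python
# Hypothetical utilities for the leaning-Trump voter described above.
p_tariffs = 0.56          # market estimate of tariffs in Trump's first year
u_trump_base = 1.0        # value of the candidate who "vibes" with you
u_tariff_penalty = -1.5   # how bad you think tariffs would be
u_other = 0.0             # baseline for the other candidate

eu_trump = u_trump_base + p_tariffs * u_tariff_penalty
print(eu_trump, u_other)  # 0.16 vs 0.0: the vibes still win at 56%
```

Under these made-up numbers, the market would have to show roughly a 67% chance of tariffs before the expected utilities cross, so a noisy estimate hovering in the mid-50s changes nothing.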
95fb30db-6ea1-4707-b24d-de2f4c87d192
StampyAI/alignment-research-dataset/arbital
Arbital
Expected value

The expected value of an action is the [mean](https://arbital.com/p/-mean) numerical outcome of the possible results weighted by their [probabilities](https://arbital.com/p/-1rf). It may actually be impossible to get the expected value; for example, if a coin toss decides between you getting \$0 and \$10, then we say you get "\$5 in expectation" even though there is no way for you to get \$5.

The expectation of V (often shortened to "the expected V") is how much V you expect to get on average. For example, the expectation of a payoff, or an expected payoff, is how much money you will get on average; the expectation of the duration of a speech, or an expected duration, is how long the speech will last "on average."

Suppose V has discrete possible values, say $V = x_{1},$ or $V = x_{2}, \ldots,$ or $V = x_{k}$. Let $P(x_{i})$ refer to the probability that $V = x_{i}$. Then the expectation of V is given by:

$$\sum_{i=1}^{k}x_{i}P(x_{i})$$

Suppose V has continuous possible values x; for instance, let $x \in \mathbb{R}$. Let $P(x)$ be the probability density of V, i.e. $\lim_{dx \to 0}$ of the probability that $x < V < x + dx$, divided by $dx$. Then the expectation of V is given by:

$$\int_{-\infty}^{\infty}xP(x)dx$$

## Importance ##

A common principle of reasoning under uncertainty is that if you are trying to achieve a good G, you should choose the act that maximizes the expectation of G.
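A quick numerical sketch of both formulas (example code of my own, reusing the coin-toss payoff from above and a standard normal density for the continuous case):

```python
import numpy as np

# Discrete case: E[V] = sum_i x_i P(x_i), for the $0-or-$10 coin toss.
x = np.array([0.0, 10.0])
p = np.array([0.5, 0.5])
print(x @ p)  # 5.0: "$5 in expectation", even though $5 itself can never occur

# Continuous case: E[V] = integral of x P(x) dx, on a grid for a standard normal.
grid = np.linspace(-10.0, 10.0, 100_001)
pdf = np.exp(-grid**2 / 2) / np.sqrt(2 * np.pi)
print(np.trapz(grid * pdf, grid))  # ~0.0, the mean of the standard normal
```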
18627e59-fa99-468c-904f-03fa57867a69
trentmkelly/LessWrong-43k
LessWrong
Unrunner You see her wherever people are jogging: in Central Park, in Venice Beach, on a college town’s main boulevard. It’s important that other people are jogging, she doesn’t show up if they aren’t. You’re not sure if it’s the same girl you saw that other time, or if there are thousands of them. You never get a long look at her face. But you recognize her from a glance. She’s East Asian, skinny and quite pale. Perhaps Korean? You’re not sure. Her hair is in a ponytail. Her expression is as blank as the cloudless sky above. She’s wearing expensive athletic wear, Lululemon and up. It’s perfectly coordinated, from earbuds to shoelaces. The sleeves and layers suggest a day that is 20 degrees cooler than the actual weather outside. It is as if she alone is haunted by a gust of February wind in the middle of sweaty June. She doesn’t seem to sweat at all. If you froze the frame, she appears to be jogging with the crowd. Her elbows are bent, the knees rise slightly. But when the frame unfreezes, she is moving at half speed. Her locomotion is far slower than what should physically be possible in a running pose. She is not stepping in place, her gait should propel her forward. But it’s as if gravity itself pulls her legs to the ground with a gentler tug than it does for others, letting her hover motionless in the air for a blink during each step. She jogs slower than the sweaty guy pushing a stroller with two kids. She jogs slower than the white-haired lady walking her three-legged Chihuahua. She may reach 3 mph briefly if she’s going downhill with the wind at her back. When I walk purposefully, I overtake her quickly. When I’m strolling we end up side by side for a long stretch, so long it becomes awkward. A man with his hands in his pockets and a woman pretending to run, synching up as the rest of the world swirls around us. And when that moment happens, the question burns in me. Who are you, slow Asian girl? What is your meaning, and is it mine to decipher or yours to disc
5e4715cd-6680-40c9-99ff-2e31bda419f2
trentmkelly/LessWrong-43k
LessWrong
Inside OpenAI's Controversial Plan to Abandon its Nonprofit Roots This is the full text of a post from Obsolete, a Substack that I write about the intersection of capitalism, geopolitics, and artificial intelligence. I’m a freelance journalist and the author of a forthcoming book called Obsolete: Power, Profit, and the Race to Build Machine Superintelligence. Consider subscribing to stay up to date with my work. Earlier this month, OpenAI announced that it aspires to build "the best-equipped nonprofit the world has ever seen" and was convening a commission to help determine how to use its "potentially historic financial resources." But critics view this new commission as a transparent attempt to placate opposition to its controversial plan to restructure fully as a for-profit — one that fails to address the fundamental legal issues at stake. OpenAI is currently a $300 billion for-profit company governed by a nonprofit board. However, after an earlier iteration of that board briefly fired CEO Sam Altman in November 2023, investors reportedly began demanding that the company shed its quasi-nonprofit status. "The story of OpenAI's history is trying to balance the desires to raise capital and build the tech and stay true to its mission," a former OpenAI employee told me. The current move, they say, is an attempt to "separate these things" into a purely commercial entity focused on profit and tech, alongside a separate entity doing "altruistic philanthropic stuff." "That's wild on a number of levels because the entire philanthropic theory of change here was: we're going to put guardrails on profit motives so we can develop this tech safely," the former employee says. Legal hurdles The for-profit conversion faces significant unresolved legal challenges, including a lawsuit from Elon Musk arguing that his $44 million donation was contingent on OpenAI remaining a nonprofit and that the conversion would violate its founding charitable purpose. The case will go to trial this fall. The conversion can also be challenged by the Californ
dd8c4182-6c4e-4dcc-8e79-22d4873f9627
StampyAI/alignment-research-dataset/alignmentforum
Alignment Forum
[AN #78] Formalizing power and instrumental convergence, and the end-of-year AI safety charity comparison Find all Alignment Newsletter resources [here](http://rohinshah.com/alignment-newsletter/). In particular, you can [sign up](http://eepurl.com/dqMSZj), or look through this [spreadsheet](https://docs.google.com/spreadsheets/d/1PwWbWZ6FPqAgZWOoOcXM8N_tUCuxpEyMbN1NYYC02aM/edit?usp=sharing) of all summaries that have ever been in the newsletter. I'm always happy to hear feedback; you can send it to me by replying to this email. **Merry Christmas!** Audio version [here](http://alignment-newsletter.libsyn.com/alignment-newsletter-78) (may not be up yet). **Highlights** -------------- [2019 AI Alignment Literature Review and Charity Comparison](https://www.alignmentforum.org/posts/SmDziGM9hBjW9DKmf/2019-ai-alignment-literature-review-and-charity-comparison) *(Larks)* (summarized by Rohin): As in [three](https://forum.effectivealtruism.org/posts/nSot23sAjoZRgaEwa/2016-ai-risk-literature-review-and-charity-comparison) [previous](https://forum.effectivealtruism.org/posts/XKwiEpWRdfWo7jy7f/2017-ai-safety-literature-review-and-charity-comparison) [years](https://www.alignmentforum.org/posts/a72owS5hz3acBK5xc/2018-ai-alignment-literature-review-and-charity-comparison) ([AN #38](https://mailchi.mp/588354e4b91d/alignment-newsletter-38)), this mammoth post goes through the work done within AI alignment from December 2018 - November 2019, from the perspective of someone trying to decide which of several AI alignment organizations to donate to. As part of this endeavor, Larks summarizes several papers that were published at various organizations, and compares them to their budget and room for more funding. **Rohin's opinion:** I look forward to this post every year. This year, it's been a stark demonstration of how much work *doesn't* get covered in this newsletter -- while I tend to focus on the technical alignment problem, with some focus on AI governance and AI capabilities, Larks's literature review spans many organizations working on existential risk, and as such has many papers that were never covered in this newsletter. Anyone who wants to donate to an organization working on AI alignment and/or x-risk should read this post. However, if your goal is instead to figure out what the field has been up to for the last year, for the sake of building inside view models of what's happening in AI alignment, I *might* soon write up such an overview myself, but no promises. [Seeking Power is Provably Instrumentally Convergent in MDPs](https://www.alignmentforum.org/posts/6DuJxY8X45Sco4bS2/seeking-power-is-provably-instrumentally-convergent-in-mdps) *(Alex Turner et al)* (summarized by Rohin): [The Basic AI Drives](https://selfawaresystems.files.wordpress.com/2008/01/ai_drives_final.pdf) argues that it is *instrumentally convergent* for an agent to collect resources and gain power. This post and [associated paper](https://arxiv.org/abs/1912.01683) aim to formalize this argument. Informally, an *action* is *instrumentally convergent* if it is helpful for many goals, or equivalently, an action is instrumentally convergent to the extent that we expect an agent to take it, if we do not know what the agent’s goal is. Similarly, a *state* has high power if it is easier to achieve a wide variety of goals from that state. A natural formalization is to assume we have a distribution over the agent's goal, and define power and instrumental convergence relative to this distribution. 
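To make the formalization concrete before stating it, here is a toy sketch (my own construction, not code from the paper; I assume a small deterministic MDP and i.i.d. uniform state rewards): power is estimated as the average optimal value of a state over sampled goals, and the instrumental convergence of an action as how often that action is optimal under those samples.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical deterministic MDP: action a in state s leads to next_state[s, a];
# states 2 and 3 are absorbing.
next_state = np.array([[1, 2],
                       [1, 3],
                       [2, 2],
                       [3, 3]])
n_states, gamma = 4, 0.9

def optimal_value(r, iters=500):
    """Value iteration: V(s) = r(s) + gamma * max_a V(next_state[s, a])."""
    v = np.zeros(n_states)
    for _ in range(iters):
        v = r + gamma * v[next_state].max(axis=1)
    return v

# Sample goals: i.i.d. uniform rewards over states.
rewards = rng.uniform(0, 1, size=(5000, n_states))
values = np.array([optimal_value(r) for r in rewards])

power = values.mean(axis=0)  # expected optimal value per state
# Instrumental convergence at state 0: how often is each action optimal?
best = np.array([v[next_state[0]].argmax() for v in values])
print(power, np.bincount(best, minlength=2) / len(best))
```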
We can then define power as the expected value that can be obtained from a state (modulo some technical caveats), and instrumental convergence as the probability that an action is optimal, *from our perspective of uncertainty*: of course, the *agent* knows its own goal, and acts optimally in pursuit of that goal. You might think that optimal agents would provably seek out states with high power. However, this is not true. Consider a decision faced by high school students: should they take a gap year, or go directly to college? Let’s assume college is necessary for (100-ε)% of careers, but if you take a gap year, you could focus on the other ε% of careers or decide to go to college after the year. Then in the limit of farsightedness, taking a gap year leads to a more powerful state, since you can still achieve all of the careers, albeit slightly less efficiently for the college careers. However, if you know which career you want, then it is (100-ε)% likely that you go to college, so going to college is very strongly instrumentally convergent even though taking a gap year leads to a more powerful state. Nonetheless, there are things we can prove. In environments where the only cycles are states with a single action leading back to the same state, and apart from that every action leads to a new state, and many states have more than one action, farsighted agents are more likely to choose trajectories that spend more time navigating to a cycle before spending the rest of the time in the cycle. For example, in Tic-Tac-Toe where the opponent is playing optimally according to the normal win condition, but the agent's reward for each state is drawn independently from some distribution on [0, 1], the agent is much more likely to play out to a long game where the entire board is filled. This is because the number of states that can be reached grows exponentially in the horizon, and so agents have more control by taking longer trajectories. Equivalently, the cycle with maximal reward is much more likely to be at the end of a longer trajectory, and so the optimal possibility is more likely to be a long trajectory. **Rohin's opinion:** I like the formalizations of power and instrumental convergence. I think in practice there will be a lot of complexity in a) the reward distribution that power and instrumental convergence are defined relative to, b) the structure of the environment, and c) how powerful AI systems actually work (since they won't be perfectly optimal, and won't know the environment structure ahead of time). Nonetheless, results with specific classes of reward distributions, environment structures, and agent models can still provide useful intuition. **Read more:** [Clarifying Power-Seeking and Instrumental Convergence](https://www.alignmentforum.org/posts/cwpKagyTvqSyAJB7q/clarifying-power-seeking-and-instrumental-convergence), [Paper: Optimal Farsighted Agents Tend to Seek Power](https://arxiv.org/abs/1912.01683) **Technical AI alignment** ========================== ### **Technical agendas and prioritization** [A dilemma for prosaic AI alignment](https://www.alignmentforum.org/posts/jYdAxH8BarPT4fqnb/a-dilemma-for-prosaic-ai-alignment) *(Daniel Kokotajlo)* (summarized by Rohin): This post points out a potential problem for [Prosaic AI alignment](https://www.alignmentforum.org/posts/YTq4X6inEudiHkHDF/prosaic-ai-alignment) ([AN #34](https://mailchi.mp/f1947668b183/alignment-newsletter-34)), in which we try to align AI systems built using current techniques. 
Consider some prosaic alignment scheme, such as [iterated amplification](https://blog.openai.com/amplifying-ai-training/) ([AN #30](https://mailchi.mp/c1f376f3a12e/alignment-newsletter-30)) or [debate](https://blog.openai.com/debate/) ([AN #5](https://mailchi.mp/0ae5d69de63b/alignment-newsletter-5)). If we try to train an AI system directly using such a scheme, it will likely be uncompetitive, since it seems likely that the most powerful AI systems will probably require cutting-edge algorithms, architectures, objectives, and environments, at least some of which will be replaced by new versions from the safety scheme. Alternatively, we could first train a general AI system, and then use our alignment scheme to finetune it into an aligned AI system. However, this runs the risk that the initial training could create a misaligned mesa optimizer, that then deliberately sabotages our finetuning efforts. **Rohin's opinion:** The comments reveal a [third possibility](https://www.alignmentforum.org/posts/jYdAxH8BarPT4fqnb/a-dilemma-for-prosaic-ai-alignment#K8fRPa9NWZXdARLYN): the alignment scheme could be trained jointly alongside the cutting edge AI training. For example, we might hope that we can train a question answerer that can answer questions about anything "the model already knows", and this question answering system is trained simultaneously with the training of the model itself. I think this takes the "oomph" out of the dilemma as posed here -- it seems reasonably likely that it only takes fractionally more resources to train a question answering system on top of the model, if it only has to use knowledge "already in" the model, which would let it be competitive, while still preventing mesa optimizers from arising (if the alignment scheme does its job). Of course, it may turn out that it takes a huge amount of resources to train the question answering system, making the system uncompetitive, but that seems hard to predict given our current knowledge. [Technical AGI safety research outside AI](https://www.alignmentforum.org/posts/4xbsi4wbourPkb47x/technical-agi-safety-research-outside-ai) *(Richard Ngo)* (summarized by Rohin): This post lists 30 questions relevant to technical AI safety that could benefit from expertise outside of AI, divided into four categories: studying and understanding safety problems, solving safety problems, forecasting AI, and meta. ### **Mesa optimization** [Is the term mesa optimizer too narrow?](https://www.alignmentforum.org/posts/nFDXq7HTv9Xugcqaw/is-the-term-mesa-optimizer-too-narrow) *(Matthew Barnett)* (summarized by Rohin): The [mesa optimization](https://arxiv.org/abs/1906.01820) ([AN #58](https://mailchi.mp/92b3a9458c2d/an-58-mesa-optimization-what-it-is-and-why-we-should-care)) paper defined an optimizer as a system that internally searches through a search space for elements that score high according to some explicit objective function. However, humans would not qualify as mesa optimizers by this definition, since there (presumably) isn't some part of the brain that explicitly encodes some objective function that we then try to maximize. 
In addition, there are inner alignment failures that don't involve mesa optimization: a small feedforward neural net doesn't do any explicit search; yet when it is trained in the [chest and keys environment](https://www.alignmentforum.org/posts/AFdRGfYDWQqmkdhFq/a-simple-environment-for-showing-mesa-misalignment) ([AN #67](https://mailchi.mp/38af1edcd025/an-67creating-environments-in-which-to-study-inner-alignment-failures)), it learns a policy that goes to the nearest key, which is equivalent to a key-maximizer. Rather than talking about "mesa optimizers", the post recommends that we instead talk about "malign generalization", to refer to the problem when [capabilities generalize but the objective doesn't](https://www.alignmentforum.org/posts/2mhFMgtAjFJesaSYR/2-d-robustness) ([AN #66](https://mailchi.mp/c8ea4a5e842f/an-66-decomposing-robustness-into-capability-robustness-and-alignment-robustness)). **Rohin's opinion:** I strongly agree with this post (though note that the post was written right after a conversation with me on the topic, so this isn't independent evidence). I find it very unlikely that most powerful AI systems will be optimizers as defined in the original paper, but I do think that the malign generalization problem will apply to our AI systems. For this reason, I hope that future research doesn't specialize to the case of explicit-search-based agents. ### **Learning human intent** [Positive-Unlabeled Reward Learning](https://arxiv.org/abs/1911.00459) *(Danfei Xu et al)* (summarized by Zach): The problem with learning a reward model and training an agent on the (now fixed) model is that the agent can learn to exploit errors in the reward model. Adversarial imitation learning seeks to avoid this by training a discriminator reward model with the agent: the discriminator is trained via supervised learning to distinguish between expert trajectories and agent trajectories, while the agent tries to fool the discriminator. However, this effectively treats the agent trajectories as negative examples — even once the agent has mastered the task. What we would really like to do is to treat the agent trajectories as unlabeled data. This is an instance of *semi-supervised learning*, in which a classifier has access to a small set of labeled data and a much larger collection of unlabeled data. In general, the common approach is to propagate classification information learned using labels to the unlabeled dataset. The authors apply a recent algorithm for positive-unlabeled (PU) learning, and show that this approach can improve upon both GAIL and supervised reward learning. **Zach's opinion:** I liked this paper because it offers a novel solution to a common concern with the adversarial approach. Namely, GAN approaches often train discriminators that overpower the generator leading to mode collapse. In the RL setting, it seems natural to leave agent generated trajectories unlabeled since we don't have any sort of ground truth for whether or not agent trajectories are successful. For example, it might be possible to perform a task in a way that's different than is shown in the demonstrations. In this case, it makes sense to try and propagate feedback to the larger unlabeled agent trajectory data set indirectly. Presumably, this wasn't previously possible because positive-unlabeled learning has only recently been generalized to the deep learning setting. 
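For reference, a minimal sketch of one such estimator, the non-negative PU risk of Kiryo et al. (2017); the sigmoid loss and variable names here are my choices, and this is the generic recipe rather than necessarily the exact algorithm the paper uses:

```python
import numpy as np

def sigmoid_loss(scores, y):
    # l(z, y) = sigmoid(-y * z): small when sign(z) agrees with the label y
    return 1.0 / (1.0 + np.exp(y * scores))

def nn_pu_risk(scores_pos, scores_unl, prior):
    """Non-negative positive-unlabeled risk estimate.

    scores_pos: classifier scores on positive (e.g. expert) examples
    scores_unl: classifier scores on unlabeled (e.g. agent) examples
    prior: assumed fraction of positives among the unlabeled data
    """
    r_pos = sigmoid_loss(scores_pos, +1).mean()        # positives labeled +1
    r_pos_as_neg = sigmoid_loss(scores_pos, -1).mean()
    r_unl_as_neg = sigmoid_loss(scores_unl, -1).mean()
    # Treat unlabeled data as negatives, but subtract the contribution the
    # hidden positives make to that term; clamp at zero for stability.
    return prior * r_pos + max(0.0, r_unl_as_neg - prior * r_pos_as_neg)
```

In the imitation setting, a discriminator trained with a risk like this no longer has to push down on mastered agent trajectories, which is exactly the failure mode described above.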
**After reading this paper, my broad takeaway is that semi-supervised methods are starting to reach the point where they have potential to further progress in imitation learning.** ### **Miscellaneous (Alignment)** [What are some non-purely-sampling ways to do deep RL?](https://www.alignmentforum.org/posts/Ca3sCRGfWvXvYC5YC/what-are-some-non-purely-sampling-ways-to-do-deep-rl) *(Evan Hubinger)* (summarized by Matthew): A deep reinforcement learning agent trained by reward samples alone may predictably lead to a [proxy alignment issue](https://www.lesswrong.com/posts/pL56xPoniLvtMDQ4J/the-inner-alignment-problem): the learner could fail to develop a full understanding of what behavior it is being rewarded for, and thus behave unacceptably when it is taken off its training distribution. Since we often use explicit specifications to define our reward functions, Evan Hubinger asks how we can incorporate this information into our deep learning models so that they remain aligned off the training distribution. He names several possibilities for doing so, such as giving the deep learning model access to a differentiable copy of the reward function during training, and fine-tuning a language model so that it can map natural language descriptions of a reward function into optimal actions. **Matthew's opinion:** I'm unsure, though leaning skeptical, whether incorporating a copy of the reward function into a deep learning model would help it learn. My guess is that if someone did that with a current model it would make the model harder to train, rather than making anything easier. I will be excited if someone can demonstrate at least one feasible approach to addressing proxy alignment that does more than sample the reward function. **Rohin's opinion:** I'm skeptical of this approach. Mostly this is because I'm generally skeptical that an intelligent agent will consist of a separate "planning" part and "reward" part. However, if that were true, then I'd think that this approach could plausibly give us some additional alignment, but can't solve the entire problem of inner alignment. Specifically, the reward function encodes a *huge* amount of information: it specifies the optimal behavior in all possible situations you could be in. The "intelligent" part of the net is only ever going to get a subset of this information from the reward function, and so its plans can never be perfectly optimized for that reward function, but instead could be compatible with any reward function that would provide the same information on the "queries" that the intelligent part has produced. For a slightly-more-concrete example, for any "normal" utility function U, there is a utility function U' that is "like U, but also the best outcomes are ones in which you hack the memory so that the 'reward' variable is set to infinity". To me, wireheading is possible because the "intelligent" part doesn't get enough information about U to distinguish U from U', and so its plans could very well be optimized for U' instead of U. **Other progress in AI** ======================== ### **Reinforcement learning** [Model-Based Reinforcement Learning: Theory and Practice](https://bair.berkeley.edu/blog/2019/12/12/mbpo/) *(Michael Janner et al)* (summarized by Rohin): This post provides a broad overview of model-based reinforcement learning, and argues that a learned (explicit) model allows you to generate sample trajectories from the current policy at arbitrary states, correcting for off-policy error, at the cost of introducing model bias. 
Since model errors compound as you sample longer and longer trajectories, the authors propose an algorithm in which the model is used to sample short trajectories from states in the replay buffer, rather than sampling trajectories from the initial state (which are as long as the task's horizon). **Read more:** [Paper: When to Trust Your Model: Model-Based Policy Optimization](https://arxiv.org/abs/1906.08253) ### **Deep learning** [Inductive biases stick around](https://www.alignmentforum.org/posts/nGqzNC6uNueum2w8T/inductive-biases-stick-around) *(Evan Hubinger)* (summarized by Rohin): This update to Evan's [double descent post](https://www.alignmentforum.org/posts/FRv7ryoqtvSuqBxuT/understanding-deep-double-descent) ([AN #77](https://mailchi.mp/d2f2d15b7114/an-77-double-descent-a-unification-of-statistical-theory-and-modern-ml-practice)) explains why he thinks double descent is important. Specifically, Evan argues that it shows that inductive biases matter even for large, deep models. In particular, double descent shows that larger models are *simpler* than smaller models, at least in the overparameterized setting where models are past the interpolation threshold where they can get approximately zero training error. This makes the case for [mesa optimization](https://arxiv.org/abs/1906.01820) ([AN #58](https://mailchi.mp/92b3a9458c2d/an-58-mesa-optimization-what-it-is-and-why-we-should-care)) stronger, since mesa optimizers are *simple*, compressed policies. **Rohin's opinion:** As you might have gathered last week, I'm not sold on double descent as a clear, always-present phenomenon, though it certainly is a real effect that occurs in at least some situations. So I tend not to believe counterintuitive conclusions like "larger models are simpler" that are premised on double descent. Regardless, I expect that powerful AI systems are going to be severely underparameterized, and so I don't think it really matters that past the interpolation threshold larger models are simpler. I don't think the case for mesa optimization should depend on this; humans are certainly "underparameterized", but should count as mesa optimizers. [The Quiet Semi-Supervised Revolution](https://towardsdatascience.com/the-quiet-semi-supervised-revolution-edec1e9ad8c) *(Vincent Vanhoucke)* (summarized by Flo): Historically, semi-supervised learning that uses small amounts of labelled data combined with a lot of unlabeled data only helped when there was very little labelled data available. In this regime, both supervised and semi-supervised learning were too inaccurate to be useful. Furthermore, approaches like using a representation learnt by an autoencoder for classification empirically limited asymptotic performance. This is strange because using more data should not lead to worse performance. Recent trends suggest that this might change soon: semi-supervised systems have begun to outperform supervised systems by larger and larger margins in the low data regime and their advantage now extends into regimes with more and more data. An important driver of this trend is the idea of using data augmentation for more consistent self-labelling. Better semi-supervised learning might for example be useful for federated learning which attempts to respect privacy by learning locally on (labelled) user data and sending the models trained by different users to be combined in a central server. 
One problem with this approach is that the central model might memorize some of the private models' idiosyncrasies such that inference about the private labels is possible. Semi-supervised learning makes this harder by reducing the amount of influence private data has on the aggregate model.

**Flo's opinion:** Because the way humans classify things is strongly influenced by our priors about how classes “should” behave, learning with limited data most likely requires some information about these priors. Semi-supervised learning that respects the fact that data augmentation does not change the correct classification might be an efficient and scalable way to force some of these priors onto a model. Thus it seems likely that more diverse and sophisticated data augmentation could lead to further improvements in the near term. On the other hand, it seems like a lot of our priors would be very hard to capture only using automatic data augmentation, such that other methods to transfer our priors are still important.
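A minimal sketch of the consistency-based self-labelling idea mentioned above (in the style of FixMatch-like methods; `weak_aug`, `strong_aug`, and the confidence threshold are my assumptions, not anything specified in the post):

```python
import torch
import torch.nn.functional as F

def consistency_loss(model, x_unlabeled, weak_aug, strong_aug, threshold=0.95):
    """Pseudo-label confident predictions on weakly augmented inputs, then
    train the model to reproduce them on strongly augmented versions."""
    with torch.no_grad():
        probs = F.softmax(model(weak_aug(x_unlabeled)), dim=-1)
        conf, pseudo = probs.max(dim=-1)
        mask = (conf >= threshold).float()  # trust only confident pseudo-labels
    logits = model(strong_aug(x_unlabeled))
    per_example = F.cross_entropy(logits, pseudo, reduction="none")
    return (per_example * mask).mean()
```

The augmentation-invariance assumption here (a flipped or slightly distorted image keeps its class) is exactly the kind of human prior Flo's opinion points at.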
3f849f1c-c2b5-4128-a007-fae2ad872836
trentmkelly/LessWrong-43k
LessWrong
Higher-Order Forecasts

Higher-order forecasting could be a useful concept for prediction markets and forecasting systems more broadly. The core idea is straightforward: Nth-order forecasts are forecasts about (N-1)th-order forecasts.

Abstract representation of 2nd-order forecasts. Dall-E 3

Examples

Here are some examples:

0th-Order Forecasting (i.e., the ground truth)

* Biden won the 2020 U.S. presidential election
* The US GDP in 2023 was $27 trillion

1st-Order Forecasting (i.e., regular forecasting)

* What is the chance that Trump will win the 2024 U.S. presidential election?
* What will be the GDP of the US in 2024?

2nd-Order Forecasting

* How much will the forecasts for US GDP in 2024 and 2025 be correlated over the next year?
* How many forecasts will the question "What will be the GDP of the US in 2024?" receive in total?
* If the question “What is the chance that a Republican will win the 2028 Presidential Election?” was posted to Manifold, with a subsidy of 100k Mana, what would the prediction be, after 1 month?

3rd-Order Forecasting

* How much will the forecasts, [How much will the forecasts for US GDP in 2024 and 2025 be correlated over the next year?] and [How many forecasts will the question "What will be the GDP of the US in 2024?" receive in total?], be correlated, from now until 2024?
* How valuable were all the forecasts for the question, [‘How many forecasts will the question "What will be the GDP of the US in 2024?" receive in total?’]

As forecasting systems mature, higher-order forecasts could play a role analogous to financial derivatives in markets. Derivatives allow for more efficient pricing, risk transfer, and information aggregation by letting market participants express views on the relationships between assets. Similarly, higher-order forecasts could allow forecasters to express views on the relationships between predictions, leading to a more efficient and informative overall forecasting ecosystem.

Benefits

Some potential benefits of hi
2b1e96c1-082e-4314-b35b-31d3638d7780
trentmkelly/LessWrong-43k
LessWrong
How to Avoid the Conflict Between Feminism and Evolutionary Psychology?

I don't mean to claim that there should be a conflict. Most likely the conflict arises because of many things, such as:

1) Women having been ostracized for much of our society's existence
2) People failing at the is-ought problem, and committing the Naturalistic Fallacy
3) Lots of media articles stating unbelievably naïve evolutionary claims as scientific fact
4) Feminists as a group being defensive
5) Especially defensive when it comes to what is said to be natural
6) General disregard by people, including politically engaged people (see The Blank Slate, by Steven Pinker), of the existence of a non-Tabula-Rasa nature
7) Lack of patience on the part of evolutionary psychologists to make peace and explain themselves for the things that journalists, not they, claimed

and others...

But the fact is, the conflict arose. It has only bad consequences as far as I can see, such as people fighting with each other, breaking friendships, and prejudice of great intensity on both sides. How to avoid this conflict?

Should someone write a treatise on Feminist Evolutionary Psychology?

Should we get Leda Cosmides to talk about women's liberation?

There are obviously no incompatibilities between reality and the moral claims of feminism. So whichever facts about evolutionary psychology are found to be true as the science develops, they should be made compatible. Compatibilism is possible. But will the scientific community pull it off?

Related:

Pinker Versus Spelke - The Science of Gender and Science
http://www.edge.org/3rd_culture/debate05/debate05_index.html

David Buss and Cindy Meston - Why do Women Have Sex?
http://www.youtube.com/watch?v=KA0sqg3EHm8
5051e228-4225-4660-af96-c60f3de0f21c
trentmkelly/LessWrong-43k
LessWrong
Addressing Feature Suppression in SAEs

Produced as part of the ML Alignment Theory Scholars Program - Winter 2023-24 Cohort as part of Lee Sharkey's stream.

TL;DR

Sparse autoencoders are a method of resolving superposition by recovering linearly encoded “features” inside activations. Unfortunately, despite the great recent success of SAEs at extracting human interpretable features, they fail to perfectly reconstruct the activations. For instance, Cunningham et al. (2023) note that replacing the residual stream of layer 2 of Pythia-70m with the reconstructed output of an SAE increased the perplexity of the model on the Pile from 25 to 40. It is important for interpretability that the features we extract accurately represent what the model is doing.

In this post, I show how and why SAEs have a reconstruction gap due to ‘feature suppression’. Then, I look at a few ways to fix this while maintaining SAEs' interpretability. By modifying and fine-tuning a pre-trained SAE, we achieve a 9% decrease in mean square error and a 24% reduction in the perplexity increase upon patching activations into the LLM. Finally, I compare a theoretical example to the observed amounts of feature suppression in Pythia-70m, showing that features are suppressed based on both the strength of their activations and their frequency of activation.

Feature Suppression

The architecture of an SAE is:

f(x) = ReLU(W_e x + b_e)
y = W_d f(x) + b_d

The loss function usually combines an MSE reconstruction loss with a sparsity term, like L(x, f(x), y) = ||y − x||^2/d + c|f(x)|, where d is the dimension of x. When training the SAE on this loss, the decoder's weight matrix is fixed to have unit norm for each feature (column).

The reason for feature suppression is simple: the training loss has two terms, only one of which is reconstruction. Therefore, reconstruction isn't perfect. In particular, the loss function pushes for smaller f(x) values, leading to suppressed features and worse reconstruction.

An illustrative example of feature suppression

As an example,
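A minimal numerical sketch of the effect (a toy construction of mine, standing in for the worked example): take a single unit-norm decoder direction w and an input x = a·w. With y = f·w, the loss above reduces to (f − a)^2/d + c·f for f ≥ 0, whose minimizer is f* = a − cd/2, strictly below the true activation a.

```python
import numpy as np

# Toy numbers (mine): true activation a, input dimension d, sparsity coefficient c.
a, d, c = 2.0, 64, 0.01

f = np.linspace(0.0, 3.0, 3001)
loss = (f - a) ** 2 / d + c * f          # the SAE loss restricted to this example
f_star = f[np.argmin(loss)]

print(f_star, a - c * d / 2)  # both ~1.68: the feature is suppressed below 2.0
```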
fadd77da-0cbe-4be9-8357-7402cad7f84e
trentmkelly/LessWrong-43k
LessWrong
AI safety as featherless bipeds *with broad flat nails* There's a famous story about Diogenes and Plato: > [...] when Plato gave the tongue-in-cheek definition of man as "featherless bipeds," Diogenes plucked a chicken and brought it into Plato's Academy, saying, "Behold! I've brought you a man," and so the Academy added "with broad flat nails" to the definition. What Plato was (allegedly) doing was not providing a definition of man, but what I'd call a sufficient reference or a sufficient pointer. If I'm in ancient Athens and divide the obvious objects that I can see or think of into "featherless bipeds" and "not featherless bipeds", then "man" will match up with the first category. Then Diogenes, acting like an AI, created something that fell within the sufficient pointer class but that was clearly not a man. The Academy then amended the pointer to add "with broad flat nails", patching it till it was sufficient again. Had there been a powerful AI around, or a god, or a meddling human with enough means and persistence, then they could have produced a "featherless-biped-with-broad-flat-nails" that was also not a human, making the pointer inadequate again. A lot of suggestions on AI safety are sufficient pointers. For example, take the idea that an AI should maximise "complexity". This comes, I believe, from the fact that, in our current world, the category of "is complex" and "is valuable to humans" match up a lot. It's a sufficient pointer. But along comes a Diogenes/AI with complexity as a goal, and now it enriches the set of objects in the world with complex-but-worthless things, breaking the "definition". Therefore, a lot of things that people say they value or want AIs to preserve/maximise, should not be taken as saying that they value the specific thing they say. Instead, this should be taken as pointer to what they value in the current world, and the challenge is then to extend that to new maps and new territories.
958dfceb-032c-47c8-9143-b15908ed4388
trentmkelly/LessWrong-43k
LessWrong
Hierarchy of Evidence

There have been many hierarchies of evidence made for various fields of science. I was looking for an image of a more general hierarchy that could easily be dropped into any online conversation to quickly improve the debate. I found none that had all the features I was looking for. So I took an old hierarchy, expanded it, made it more aesthetically pleasing, and made it into a JPEG, a PDF, and a Pages file so people can easily share and modify it (e.g., translate it or convert it to different formats). Here it is:
a4f37b06-2b45-40fa-b9f7-f4e09917f867
StampyAI/alignment-research-dataset/youtube
Youtube Transcripts
RL Course by David Silver - Lecture 9: Exploration and Exploitation okay hi everyone so um lecture nine today we're going to come back to something which probably most of you have been thinking the back of your mind surely we can do something better which is the question of exploration exploitation so so far we've looked at really quite naive methods for exploration where we've realized that there's an issue with our reinforcement learning algorithms and one of the fundamental questions is how can a reinforcement learning agent balance exploration kind of figuring out what's going on in this world with exploitation which means getting as much reward as possible along the way and so far we've really tried very naive approaches like epsilon greedy and this lecture we're really going to try and investigate specifically that aspect that fundamental question of reinforcement learning how can an agent which is down in the world trying to figure out how to get as much reward as possible whilst learning how can it do as effectively as possible and we're going to consider various different methods for that and to do that most of the lecture we're going to start with a simplified version of the reinforcement learning problem which we have seen on at least one occasion before we're really going to spend some time to try and understand this simplified version because it kind of boils down to the essence of exploration and there's the multi-armed bandit where you just get to kind of pick one action get one reward and that's the end of your episode you kind of just have to explore and figure out what the best action is um once we've understood that in the remaining time we'll spend most of our time here and the remaining time we'll touch on how to kind of bring back the full complexity of the whole reinforcement learning problem but all of the ideas that we learn here apply so you know if we don't have a lot of time here you'll still get the essence of what's going on so first of all we'll bring back states that's what we'll do with contextual bandits we'll bring back states we'll say now not only is there an action to choose but there's also some state information and that's data information will inform what we do one of the reasons to touch on contextual bandits is it's probably one of the most successful current application of machine learning um it's certainly one of the top ones is using contextual bandits to decide how to do banner placement banner ads for the internet this has been used by many many large companies to decide you know you're in some state where you've got some user who's coming to this website what should you show them um and then we'll finally come back to the full case of ndps okay so we'll start with a brief introduction to try and understand what's going on in this area um so every time with decision making online this same choice comes up again and again and again which is you know right now i can make a decision i can take an action but what should i do should i exploit which essentially means to make the best decision given the information we have so far so if you've got some information maybe you figured out your value function and according to that value function you really believe that one action is the best so exploitation means taking that back best action acting according to the max whereas exploration means doing something else and there's a purpose to doing that something else which is that we gather more information which might lead us to make better decisions in 
the long run so by gathering information we believe that we might actually do better than taking what we currently believe is the best action so often the best long-term strategy might actually involve giving up reward in the short term like we really believe right now that taking you know going left is going to give us more reward than going right and yet we choose to go right that's giving up reward that's giving that reward we know we could get right now and we're giving up that reward because we think we can get that reward back later we think in the long term it's better to explore it's better to go through that right hand door because now we figure out well what's behind that door maybe there's a dragon there but maybe there's a pot of gold there and so sometimes we explore to find those pots of gold and work out what the better option is and eventually we want to make sure that we make the best possible decisions so here's a few examples so you know if you're going to a restaurant you might want to explore by trying a new restaurant or you might want to exploit by going to the one you know best the online banner ads i mentioned already that there you might want to explore by showing someone a user maybe you showed them a different advert to what you've shown that kind of user before um to exploit would be just to show the one you're making sure they're going to click on you want to probably maximize click through in these examples if you're oil drilling you might want to drill at the best known location or you might want to explore by drilling at some new location and if you're playing a game you might exploit by playing the move you believe is best or you might play an experimental move to to explore so in all kinds of different domains you know why does this come up it comes up because we're learning online you know it's not just that someone gives us some batch of existing data like in supervised learning um it's not there's a it's not there's a data set and then we get to go over that data set as much as we want and make the one best decision we're not in that setting we're in the setting where we're gathering data as we go and the actions we take affect the data that we see and so sometimes it's worth taking different actions to get new data to get parts of the data we haven't seen so far okay so that's the fundamental issue that's going on we're going to focus on three different approaches to the exploration problem the exploration exploitation dilemma these are not the only approaches i think there are three broad families that we should know about and the first approach is what we've already seen which is random exploration which basically says well you know maybe a reasonable approach is just to sometimes with some probability pick a random action an example of that would be epsilon greedy or perhaps we might use a soft max distribution over our value function these are ways to introduce randomness into the way that we pick actions so we don't always just pick the greedy action we throw in some dice roll and if it comes up with a you know with a six then we we choose something exploratory in this lecture we're also going to explore some more systematic approaches to exploration which make more use of accumulating knowledge along the way the first approach is known as optimism in the face of uncertainty so this is like a fundamental principle that you should know about and this basically says that you know if you're uncertain about the value of something you should prefer to try
that action if you don't know if there's one action that you're absolutely sure will give you ten there's another action which might give you anywhere between five and twenty which one would you take well you should probably take the one which might give you twenty because in the long run that means you'll be able to come back if it turns out to be worth more than your your safe option that gives you 10 then you should pick the option which has the greater potential because it might turn out to be better and then you can pick it again and again and again once you know that it's better if it turns out to be worse you've lost a little bit but you can always go back to your preferred option that's the idea of optimism in the face of uncertainty so in order to do this we need some way to measure our uncertainty on on values and we'll look at different ways to do a variety of different approaches frequentist bayesian and so forth finally perhaps the most correct or um theoretically careful approach to the exploration exploitation dilemma but the most computationally difficult um is to think about the information state space so what this really means is to consider the agent's information itself as part of its state it's like i'm in a state where i have tried left three times and i've tried right one time that's a state now we can say that's a state and we can ask about how good is it to move into states where the agent has accumulated information so if i i might be in a state where i've never seen what's beyond that door over there and that's a very different state to being in the same place but knowing what's beyond that door over there and so if we bring the information into part of our state description then we can actually really understand how valuable is it to accumulate information how valuable is it to find a new piece of knowledge about what's inside that door maybe it's a really good thing because it will help me to get more rewards later on but maybe it doesn't actually have much effect because um there isn't enough time to to exploit those rewards or all kinds of other issues so this is the sort of correct but um computationally very difficult because our state space blows up to something massively more complicated and difficult than we had before so those are the three approaches we're going to consider i really just before we continue i want to touch on something which i consider quite a fundamental distinction in the way that we think about exploration it's not talked about too much but there's really two different spaces in which you can explore we can explore in the state space or the action space so this means that you know if you're in a state and you want to consider should i go left or should i go right from this state you know we know that maybe i've been in this state and i've taken the left action before so now if i'm doing state action space exploration that means i know something about the state actions i've considered so if i come back to this state space i might try going to the right the next time around that's using knowledge about the state action space to help explore this whole state space more effectively there's parts of the state space i've seen parts of the state space i haven't seen there's actions i've tried there's actions i haven't tried we use that knowledge to help us explore systematically and figure out the right rewards but there's a different space in which we might choose to explore which is the parameter space so in the policy gradient lecture we saw that it
was possible not just to work with value functions but to work directly with policies in that case our policies have some parameters this describes the behavior this describes how my agent's going to operate for as it goes on in the world like i drop down my robot it's going to behave according to whatever parameters make our robot walk according to some way of walking around and what you can do is you can try some parameters for a while and see how they do now exploration you could mean to say well let's try some different parameters the ones we believe are best right now let's just try some different parameters drop down a robot with a slightly different walk see what happens see how fast it moves go back and try it again so i call this parameter exploration as opposed to state action exploration um and it has some advantages and disadvantages so the main advantage is that you get some consistent exploration behavior um so you get to try something for a while like i change its policy maybe i'm going to try out this strange walk i'm going to try this walk for a while and see how that walk does whereas if you contrast that to epsilon greedy with epsilon greedy you re-randomize every single step you pick another action randomly and so you might end up just doing a random walk in your state action space which might not get you anywhere and so consistency in your exploration is sometimes helpful we see this particularly in the robotics field the main disadvantage of parameter exploration is it doesn't know about the state action space so now if my if my random parameters that i've tried take me to some state i've been to before i don't know that i've been to this state before and i don't even recognize that maybe there's i've already tried this action over here and and i already know something about that we completely ignore that it's like treated as a black box and we're just looking at this black box it's like doing global optimization in parameter space so so calling this exploration you know really what we're doing here is more like global optimization of our parameters we're trying to optimize in parameter space whereas here we're trying to really understand the state space trying to understand the action space and trying to systematically explore and figure out the parts of the state action space we haven't been to before and of course there's compromises between but on the whole we're going to focus on on this first section here for the rest of this lecture okay any questions before i move on good so we're going to start with the multi-arm bandit um so this is the multi-arm bandit no octopus is required but it's basically we can think of this as a simplification of the mdp framework where we just have a set of actions a and a reward function r so we've thrown away the state space and we've thrown away the transition function and we've really simplified things down now so all that happens in this multi-arm bandit is it's called the multi-arm bandit because of these these machines which are known as one-armed bandits in the u.s and so you can imagine there's all these different machines and you kind of get to choose one of these arms and each of these different machines has a different payout some probability that if you pull the arm you'll win the game and get a payout but they've all got different like payouts and maybe one of them's paying out you know 70 percent of the time another one's paying out 75 percent of the time and we don't know in advance which one's going to pay out the most
money um so now which one should we choose next you know you might try this one and it works out quite well now you try this one works out quite well but should you go back to this one or should you try this one again or should you try something different there's a whole strategy to your exploration exploitation trade-off where you want to ongoing make sure you're hitting the best machines as often as possible but whilst exploring to figure out to make sure you're actually finding and identifying the best machine so formally we've got an action set which tells us all the different arms of this thing we've got a reward function which is just the reward for some given action a um it tells us basically the probability of getting any different reward so it's a distribution over rewards so if i pull this arm what's the distribution over the arm er the distribution of the reward that i'll get from this machine what's the payout i'll get from it so that's this thing and there's a different distribution for each machine and at every step we just get to select one action we get to pull one arm and the environment generates a reward from the distribution so we just sample from that distribution according to the arm that we pick and the goal is to maximize the cumulative reward so we just want to over time just keep accumulating more and more reward so it's the simplest case you can think of this as like a a one-step mdp um where you kind of just have there's one state and one step so you basically pick your action you get a reward and that's the end of the episode so there's no kind of um the only look ahead comes in that we're trying to understand the exploration exploitation tradeoff so the look ahead comes in you know you reach the end of your you pull this arm you see what happens and now that affects your own information that affects what you know about this environment so you might want to then keep exploring or keep exploiting elsewhere okay that's the multi-arm bandit um so so people clear about the setup first of all should be fairly straightforward um so now we're just going to try and understand what it means to do well in this domain so you know we've got this criterion where we want to maximize the cumulative reward but we're just going to scale that thing and we're going to basically flip it around to talk about um opportunity loss known as regret rather than the amount of reward we've got how much worse have we done than the best we could possibly have done and that's what we call regret so to understand that let's just work through a couple of definitions so we're going to start off by defining the action value so this is just like our familiar q but now we've we've thrown out states there's no states anymore so we've just got q of a so this is the expected reward that we'll get if we pull one of those arms so this is the true payout of one of those machines how much it will really pay out if it's maybe 75 percent for machine one and 80 percent for machine two and the optimal value is the best that we could possibly do if we knew which which machine paid out the most we would just always pick that machine again and again and again so we basically get the max of all our q values because we'd always just choose the machine with the maximum payout and that's the v star that's the best we can do in this domain we can just keep getting v star again and again and again so the regret then is how much worse we do than v star so v star is the best we could possibly get we're just getting the best
Now we're going to try to understand what it means to do well in this domain. We've got this criterion where we want to maximize the cumulative reward, but we're going to rescale it and flip it around to talk about opportunity loss, known as regret: rather than the amount of reward we've got, how much worse have we done than the best we could possibly have done? To understand that, let's work through a couple of definitions. We start by defining the action value. This is just like our familiar Q, but we've thrown out states, so we've just got Q(a): the expected reward we'll get if we pull one of those arms, the true payout of that machine, maybe 75% for machine one and 80% for machine two. The optimal value is the best we could possibly do: if we knew which machine paid out the most, we would just always pick that machine again and again, so we take the max over all our Q values. That's V*, the best we can do in this domain; we could keep getting V* at every step by picking the best machine. The regret, then, is how much worse we do than V*. Unfortunately we might not know how to pick the best machine, so we incur an opportunity loss at every step: the difference between the maximum we could have got at that step and the payout of the machine we did pick. So if we pick something that pays out maybe 75% when we could have got 80%, we incurred a 5% regret for that step; we didn't pick the best machine, we picked something 5% suboptimal. The total regret is just the sum of these opportunity losses over time: if we play for t steps, how much opportunity loss do we incur, how much better could we have done if we'd known the optimal arm in advance? So when we say we want to maximize cumulative reward, that's the same as trying to minimize total regret. The reason it's useful to think about regret, as we'll see shortly, is that it helps us understand how well an algorithm could possibly do, independent of the scaling of the rewards. We want to find algorithms that bring the regret per step down towards zero. Just one more slide to understand what's good and what's bad about regret, and then we'll move on to some pictures. The count is just the number of times we've pulled an arm. And we're going to consider this thing called the gap: the gap is the difference in value between some action a and the optimal action. It's the gap between the best machine I could have pulled and some suboptimal machine: if I want to know the gap for machine three, that's the difference in value between machine three and the best machine, say the difference between 80% and 75%, a gap of five percent. And it turns out that we can think of regret as a function of all of these gaps and these counts.
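Collecting the definitions so far in one place, with a_t the arm chosen at step t, N_t(a) the count for arm a, and Δ_a its gap:

$$Q(a) = \mathbb{E}\left[r \mid a\right], \qquad V^* = \max_{a \in \mathcal{A}} Q(a), \qquad \Delta_a = V^* - Q(a)$$

$$l_t = \mathbb{E}\left[V^* - Q(a_t)\right], \qquad L_t = \mathbb{E}\left[\sum_{\tau=1}^{t}\left(V^* - Q(a_\tau)\right)\right]$$

Maximizing cumulative reward and minimizing the total regret L_t are the same objective.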
So if you just count how many times you use each machine, and you look at the gap between what you could have got and the actual payout of that machine, the regret breaks down in terms of those two things. The regret, we saw, was the sum of these differences: the instantaneous opportunity loss is the difference between the optimal value, the payout of the best machine, and the payout of the machine you actually picked. Then we sum those up, and we can pull out the counts: summing up how much we lost each time we picked a given action is the same as counting how many times we chose that action and multiplying by how much we lost each time we picked it. So we just count how many times we pick each action, that's our count, and multiply by the gap, how much worse that action was than the optimum. Question: but we don't know V*, right? We don't know these things. Right, and we're going to come back to that. All of this is to say that we can rewrite the regret, the thing we really care about minimizing, in terms of these counts multiplied by these gaps. What this tells us is that whenever the gap is large, say one of those machines is really horrible and pays out 3% of the time while another pays out 90%, so the gap is 87%, you need to make sure you pull that arm very few times. Whereas if another machine has a small gap, maybe it pays out 85% instead of 90%, the gap is only 5%, and you're more okay with playing it. So you want to pick the actions which are best as often as possible and the actions which are worst as infrequently as possible. It's intuitively obvious, but this says it in math. And the problem is that these gaps aren't known: we can count how many times we pick an action, but we don't know the gaps, because we don't know V*. Question: the gaps are obviously different for each action, but they don't change over time, is that right? Right, this is the stationary bandit, where nothing changes over time; there are extensions where it can change. Question: is the expectation over the number of times we choose the action, or should we also take an expectation over Q, since Q is like a reward? Well, Q is defined as the expected reward for that action, that's the definition of Q, so there's already an expectation in there.
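The decomposition just described, in symbols:

$$L_t = \mathbb{E}\left[\sum_{\tau=1}^{t}\left(V^* - Q(a_\tau)\right)\right] = \sum_{a \in \mathcal{A}} \mathbb{E}\left[N_t(a)\right] \Delta_a$$

A good algorithm keeps the counts small exactly where the gaps are large.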
So the real question is: what does this regret look like over time? What we'll see is that for most naive algorithms, and we'll consider a few naive algorithms before we get to better ones, the regret grows linearly. Say this axis is time and this is the total regret we've accumulated. What happens if we use one of our familiar algorithms like epsilon-greedy? Every single step, in expectation, there's some probability that we pick the best action, but there's also some fixed probability that we pick completely at random, and picking at random will always incur some regret unless we just stumble accidentally on the best action. So if we keep picking randomly, we keep adding on a roughly constant amount of regret every step in expectation: there's some probability of picking each action, each of those actions incurs some opportunity loss, and that total hit gets added on linearly every single step. When we act greedily we also incur linear regret, as we'll see, because we might lock onto the wrong action. The question is: can we ever get sublinear regret, something that starts to asymptote out, where we accumulate less and less regret as we see more and more data, where we regret our choices less and less as we go on? Is that possible? Happily, the answer is yes, and we'll see why. Let's start by considering the myopic cases, beginning with the greedy algorithm, using our usual kind of Monte Carlo learning, taking means. For each arm we want to know how good that arm is, and the natural way to estimate the action value is to take the mean of all the payouts we've seen so far. If I tried this arm and got 10, and I tried it again and got 8, then I now think Q(a) for that arm is 9. That's how we estimate the true expectations; it's the normal way, just like Monte Carlo learning. This is just another way of saying we form the empirical mean: using the indicator notation we've seen before, the action value estimate at time t is the average of the payouts over all the occasions on which we tried that particular arm. It's just the mean. So what does the greedy algorithm do? It selects the action with the highest estimated value, which is the natural thing to do: we've estimated how good this action is and how good that action is; this one's higher, we pick it. The greedy algorithm doesn't explore at all, it just picks and picks and picks, and the obvious problem is that it can lock onto a suboptimal action forever. I might try this action and it looks good, while I thought that action was bad, so now I keep taking this action forever. Or I try an action a couple of times and I'm just a little bit unlucky, so I end up thinking its value is bad, and then I try another action once, it looks better, and I keep taking it and taking it. You can lock onto suboptimal actions forever, and as a result greedy has linear total regret: in expectation you can do the wrong thing forever, and if you do the wrong thing forever, you incur that gap again and again and again, every step. So it should be clear that greedy has linear total regret; with some probability you keep making the same mistakes forever.
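As a minimal sketch, here is the greedy algorithm with the incremental empirical mean, assuming the BernoulliBandit interface sketched above (again my own illustration, not course code):

```python
class GreedyAgent:
    """Always pulls the arm with the highest empirical mean payout."""
    def __init__(self, n_arms):
        self.counts = [0] * n_arms    # N(a): times each arm has been pulled
        self.values = [0.0] * n_arms  # Q(a): empirical mean payout per arm

    def select_arm(self):
        # Pure exploitation: no exploration at all.
        return max(range(len(self.values)), key=lambda a: self.values[a])

    def update(self, arm, reward):
        # Incremental mean: Q(a) <- Q(a) + (r - Q(a)) / N(a)
        self.counts[arm] += 1
        self.values[arm] += (reward - self.values[arm]) / self.counts[arm]
```

Run against the bandit sketched earlier, this can lock onto whichever arm happens to look good early, which is exactly the linear-regret failure just described.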
So what happens if we try to be clever? The first idea is to use optimistic initialization. This is a really well-known algorithm, and actually a really good idea: I don't want to give the impression it's a bad idea, because it works quite well in practice, and there are lots of applications where it's quite hard to beat. The idea is just to initialize the values to the maximum possible: we start off by assuming the best about all of our actions. This is the simplest version we'll see of optimism in the face of uncertainty. We're not going to measure the uncertainty; we're just going to assume that everything is really good until proven otherwise. We assume all of our actions pay out the maximum possible, which means that for this algorithm we need to know the maximum possible (for many of the later ones, we'll see, we don't). If we know the maximum, we initialize all our estimates to it and then act greedily from that point onwards. With optimistic initialization you may want to not erase your optimism completely the first time you try an action, so you may prefer a non-stationary mean rather than the full empirical mean. But again: we start off assuming the best about things until proven otherwise, and then we still act greedily, and this encourages exploration of things we don't know about. If you're unlucky about something a few times, though, you can still end up locking an action out: I start off thinking this action is the best possible, I try it and I'm unlucky, I try again and I'm unlucky, and now some other action looks better and I never explore this one again. So you must have some kind of continued exploration to guarantee that you do better; otherwise you keep incurring the same regret every step, you can lock something out and make the same mistakes again and again. We want to find algorithms that make fewer mistakes as they get more experience; that's really all this is saying. Question: but how do we update the maximum reward? Say we have several arms and assign maximum values to them, and then after every iteration, if the value I get is lower, I need to somehow update this? Yes, I should have spelled this out slightly more, so let me give what I consider the simplest implementation of the idea, which is to initialize not only the values to the maximum possible but also the counts. Let's take our octopus example again, where we know the payout is between 0% and 100%, so r_max is 100%: whatever you put in, any machine might pay out 100%. We initialize everything to 100%, but we also want to say how confident we are in that value. So what you can do is imagine that you had actually pulled each arm 100 times and seen a payout of 100% on every occasion, and you continue from there, making your Monte Carlo updates and taking an empirical mean that includes those hundred imagined pulls. This is equivalent to using something like a Beta prior, which we'll see later.
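A sketch of that pseudo-count implementation, building on the greedy agent above; the prior_count of 100 mirrors the "as if we'd pulled it a hundred times" example, and both names are mine:

```python
class OptimisticGreedyAgent(GreedyAgent):
    """Greedy, but every arm starts as if it had already paid out r_max many times."""
    def __init__(self, n_arms, r_max=1.0, prior_count=100):
        super().__init__(n_arms)
        # Imagined experience: each arm pulled `prior_count` times at r_max.
        # A genuinely bad arm now needs many real pulls to drag its mean down,
        # so under-explored arms keep looking attractive.
        self.counts = [prior_count] * n_arms
        self.values = [r_max] * n_arms
```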
Okay, so that's the canonical version: we assume r_max is known, we initialize all our arms to the highest value, and we assign some confidence, a crude way of estimating the uncertainty. We start off as if we'd tried each arm a few times but not forever, and as you receive more and more data it overwhelms your initial prior on what you thought the value was, and you just keep updating your means. Is that clear? You start off as if you'd tried this arm a hundred times and seen the best possible payout a hundred times, and now, if it really is terrible, you have to play it a lot of times to bring down the mean, so you're encouraged to keep exploring things until they're really proven to be suboptimal. And although I said it can lock out the optimal action forever, you have to be quite unlucky: the more confidence you assign to your prior, the more unlucky you have to be to lock out the optimal action. So it's not a terrible idea. Now let's consider what I'd call the most naive algorithm we've talked about. I started this whole lecture by saying that what we did before was naive, meaning epsilon-greedy. So consider epsilon-greedy: we just flip a coin every time step; with probability epsilon we pull a completely random arm, and with probability one minus epsilon we pick the one we think is best so far. What happens? We can be absolutely certain we'll keep incurring some loss in expectation, because we're still exploring randomly: every time we explore randomly we're very likely to make some mistake and not pull the best arm. So we keep incurring regret, step after step, and epsilon-greedy also has linear total regret. The same would be true for softmax, which I won't go into here. Despite that, it turns out that if you do something very simple, which is to decay your epsilon over time, which we've also talked about before, so the probability of a random action starts at maybe 50% and decays slowly towards zero, you can get sublinear regret. In particular, consider the following schedule. It's an impossible schedule, you can't use it in practice, because it relies on knowledge of V*, which we don't have. But say someone told us V* and we could measure all of our gaps. Then we could invent a schedule that, at every step, only cares about the size of the smallest gap: the difference between the best action and the second-best action. Don't worry too much about the exact form; it's just saying that whenever the gaps are very small we want to explore those actions more often, and if the gaps are very large we want to explore them less often. That's intuitively clear.
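One concrete schedule of this kind, for a constant c > 0 and d the smallest positive gap, is:

$$d = \min_{a \,:\, \Delta_a > 0} \Delta_a, \qquad \epsilon_t = \min\left\{1, \; \frac{c\,|\mathcal{A}|}{d^2 t}\right\}$$

The exact form matters less than the shape: epsilon shrinks like 1/t, and shrinks faster when the smallest gap d is large and the arms are easy to tell apart.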
Question: why don't we average over all of them? Why the second best? I think you could come up with many other schedules that would also satisfy this property, and averaging over the gaps might well work too; this is just one simple choice of schedule. Question: if you choose a random strategy, wouldn't your delta in expectation be an average over the suboptimal actions? We're not using the expectation of the gaps here: one choice would be to use the expectation of the gaps, another choice is to use the min of the gaps, and we use that min to pick the schedule. It just determines the randomness, the choice of epsilon you flip your coin with. It's a valid choice; you could also use the expectation to come up with a different schedule. Just think of this as one choice that depends on the individual gaps. And I think the main surprise here is that even this very naive approach has logarithmic asymptotic total regret. So what this is all saying is that epsilon-greedy has the amazing property that if you decay epsilon according to the right schedule, you essentially achieve the best results you can get with bandits, give or take a constant factor and a term or two. The only problem is that we don't know in advance what that schedule should be. So maybe what we were doing wasn't so naive after all, maybe epsilon-greedy isn't quite as crazy as it looks, and I'll show you some empirical results that back that up. What we're really after now is an algorithm that achieves the same kind of sublinear regret as this idealized epsilon-greedy, but without knowing the rewards in advance, without knowing V* and the gaps, without advance knowledge. That's one way to understand the approaches that follow. If we go back to the picture: we're after algorithms with this sublinear shape. We've seen that decaying epsilon-greedy has this shape, making fewer and fewer mistakes over time as epsilon decays, but you need special knowledge about the problem to get the right schedule; otherwise it ends up looking linear again. So how do we make sure we can achieve that shape without telling our agent things it can't possibly know about the problem in advance? What I'm going to do is introduce a particular approach that achieves this nice property. It's very well known, certainly one of the best-known algorithms for bandits, and very widely used in industry.
But really I just want to give one more slide about the theory first, which says that there's actually a lower bound on this regret: no algorithm can possibly do better than a certain lower bound, and what we want is to push our algorithms down closer and closer towards it. The lower bound is logarithmic in the number of steps. So what is this lower bound, what does the performance of any algorithm depend on? It depends on what makes a bandit problem hard or easy: how similar the best arm is to the other arms. A hard problem is basically one where you've got two arms that look similar, with overlapping reward distributions, so it's not obvious which one is best. An easy problem is one where one arm is obviously good and one is obviously bad: you try this once, it gives a good answer; you try that once, it gives a bad answer; you're done. A hard problem is one where one arm really is much better than the other, but there's a lot of noise on these things, so sometimes the good arm looks terrible and sometimes the bad arm looks really good, and it's really hard to disambiguate them: you make a lot of mistakes, and it takes a really long time to figure out that this arm is actually much better than that one. So the hardest problems have similar-looking arms with different means. Formally, we capture that by the gap between the arms, the difference in their values, and by how similar their distributions are, for which we can use the KL divergence. So here is one slide of theory: the total regret is at least logarithmic in the number of steps. The total regret we incur over time, the quantity in that plot, is at least logarithmic in the number of steps we take, multiplied by a term that is proportional to the gap and inversely proportional to the difference between the distributions. That second part is the term that tells us how hard the problem is: hard problems have similar-looking arms (that's the KL divergence between them) with different means (that's the gap). So one factor depends on the problem, and the logarithmic dependence on time is fundamental to bandits and to exploration. We want to find algorithms whose regret is logarithmic rather than linear.
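This is the Lai and Robbins lower bound; written out, the asymptotic total regret of any algorithm satisfies:

$$\lim_{t \to \infty} L_t \;\ge\; \log t \sum_{a \,:\, \Delta_a > 0} \frac{\Delta_a}{\mathrm{KL}\left(\mathcal{R}^a \,\|\, \mathcal{R}^{a^*}\right)}$$

Large gaps Δ_a in the numerator and similar-looking reward distributions (small KL divergence) in the denominator are exactly what make a bandit hard.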
around here but it could essentially be anything and the red one's somewhere in between and the question is which arm should be picked next so the optimism in the face of uncertainty principle says don't take the one you currently believe is best that's the green one take the one which has the most potential to be best and in that case this would be the blue arm and the blue arm has the most potential to actually have a mean which is somewhere over here if we look at it the tail of this distribution has quite a lot of mass that says hang on a minute there's quite a good chance that this blue arm might turn out to have you know three or four or even more um a mean of three or four a really high payout so we should really try that blue one and narrow down this distribution and the idea of the optimism principle is that as you try it you start to narrow down this distribution so you might maybe you play this arm you think that this one's got the biggest tail you play it and now maybe it turns out it's really a bad action so now the distribution will be narrowed around something that shifted a bit over this way and the tail will be brought in and now maybe you don't play that one again so now maybe you try the red arm um maybe that one also turns out to be bad that will shrink the tail back in again you'll end up with a distribution something like that and now you'll play the green arm that will narrow it down again and as you narrow these things down you start to get more and more sure about where the best action really is until you're actually just picking the one that's actually got the maximum mean so it's a way to really just keep doing things pushing down your uncertainty but at the same time trying the thing which has the most potential to do well this is the optimism the face of uncertainty principle and so the difficulty is that so far we've only talked about ways to estimate the mean we haven't talked about ways to estimate this uncertainty here when we've talked about estimating q values so we're going to talk about two different approaches now to to solving this approach one of which is frequentist in which we assume nothing about the distribution and the second approach is bayesian where we assume that someone gives us a prior probability distribution over our q values okay and the general idea we're going to use is something called upper confidence bounds so the idea is to say let's come up with an upper confidence for each action value so we're basically going to say i'm not only going to estimate the mean like the payout i've seen so far maybe i've seen you know 80 payout for this particular arm so far and i'm going to estimate some upper confidence and what i think that could be like this is like think of this as the tail of that distribution we just looked at so we're estimating for each of these things we're not only going to estimate its q value and estimate its mean we're also going to estimate some kind of um addition we're going to add on like some bonus to this thing which characterizes how big this tail is of that distribution we're going to pick the thing with the highest bonus the highest sum of q plus um something which tells us about this tail so we're going to end up picking the thing which when we sum those two things together it gives us the maximum value so we're going to estimate this upper confidence u so we can think of this as a high probability upper confidence on what that value could be so think of this as something like a 95 confidence interval um about 
So we might say: I'm 95% confident that this arm's value is within this range, and that gives us the point we use, our U value; and for the other arm the 95% point is over here, and we use that as its value. Think of them as high-probability confidence intervals on what the Q value could possibly be, and we pick the arm with the highest upper confidence value. We want to pick an upper confidence bound such that we're really confident the true action value Q(a) is less than our bound, and then select the arm whose bound is highest. And this depends on how many times we've tried each action: the fewer times we've actually tried an action, the larger its confidence interval will be, and the more times we've tried it, the smaller. So we add on less and less of a bonus to things we've tried more, because we've become more and more confident about what the mean is. Eventually, if you tried an action infinitely many times, you'd just select it based on how good it really is, because now you really know for sure what its mean is; up until that point we use an upper confidence on how good it might turn out to be, the 95% interval. And the algorithm is really simple; we call it UCB. We select the action that maximizes the upper confidence bound: instead of maximizing over Q alone, or adding on some random thing, we pick the action that maximizes Q plus this upper confidence value U. Is that clear? Some glazed looks, some people clear. Any questions to help deglaze people? Question: do the U values go to zero as you try more and more actions? Yes, eventually. The U value characterizes the size of this tail, so if you try the red arm more and more, it shrinks down. Think of it this way: you start off with some confidence interval, and the U value is the difference between the mean and the top of that interval; as you try the arm more, this shrinks down, you become more and more confident of where your Q value actually is, until eventually the U value shrinks to zero and you just use the mean. So you keep picking things according to this upper confidence value, and that helps you very systematically look around your action space and figure out which of those actions gives the best results. Question: are you making some assumption about the symmetry of the distribution here as well? Great question. For the approach I'm about to show, we make no assumptions about the distribution, and then we'll talk about ways to make use of assumptions about the distribution. So this is the distribution-free version, and we're going to use a fundamental inequality from probability theory and statistics called Hoeffding's inequality.
Let's forget reinforcement learning for a second and understand what it says. Suppose you have some random variables taking values between zero and one, and you keep sampling them again and again, and you take the empirical mean of all the samples you've seen so far. What we want to know is: what's the probability that the sample mean is wrong by an amount u, that is, that the difference between the true mean and the empirical mean is greater than u? You've just seen a bunch of coin tosses: what's the chance that the mean of your coin tosses is actually different from the real bias of your coin by more than u? It turns out you can bound this by a decaying exponential, and this is true for any distribution. It doesn't matter what the distribution is, it doesn't need to be symmetric, nothing about it matters. In fact it's maybe a little bit weak precisely because it makes so few assumptions, so it's perhaps quite a weak inequality. Question: in the bandit case, though, doesn't this require bounded rewards? Yes, this requires bounded rewards. If you don't have rewards in the range zero to one, you can just rescale them into [0, 1] and exactly the same theory applies, or there's another version of Hoeffding's inequality for arbitrary bounded ranges; the version I'm using is the simpler one. But it does mean you need to know some range, some maximum, for your rewards. Question: so there's no version with no maximum value? For this result it assumes a bounded distribution, so yes, there is at least that one assumption; it's true for any bounded distribution. So we apply Hoeffding's inequality to the bandit case. All we're doing is applying exactly the same inequality, and now it says: what's the probability that I'm wrong in my estimate of a Q value by more than my U value? And we're going to use this to solve for the U value, to set the upper confidence to the appropriate amount. We want to know where to place our upper confidence bound to guarantee that the probability of a mistake is within, say, the 95% level. This gives us a way to compute those confidence intervals in a very general way, because we know the probability of making a mistake of more than U is bounded by this exponential. So we pick a probability p, like the 5% level, set the right-hand side of Hoeffding's inequality equal to p, and solve for U.
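Written out, that derivation goes as follows. Hoeffding's inequality for i.i.d. X_1, …, X_n in [0, 1] with sample mean X̄_n is applied to the rewards of arm a, the right-hand side is set to p, and we solve for U_t(a); choosing p = t^{−4} gives the bonus used by UCB1 below:

$$\mathbb{P}\left[\mathbb{E}[X] > \bar{X}_n + u\right] \le e^{-2nu^2}$$

$$\mathbb{P}\left[Q(a) > \hat{Q}_t(a) + U_t(a)\right] \le e^{-2 N_t(a)\, U_t(a)^2} = p \;\Rightarrow\; U_t(a) = \sqrt{\frac{-\log p}{2\, N_t(a)}}$$

$$p = t^{-4} \;\Rightarrow\; U_t(a) = \sqrt{\frac{2 \log t}{N_t(a)}}$$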
To solve for U we just take logs, divide through by minus 2N, and take a square root (the U is squared), and that gives us the upper confidence value. It might seem rather arbitrary, but what's nice about it is that we don't need to know anything about the gaps, and nothing about the rewards except that they're bounded, and now we have a way to pick actions: we maximize Q plus this bonus term. And the bonus has all the properties we wanted. You can see the count in the denominator, which means that as we pick an action more and more, the bonus gets pushed down towards zero, while actions we haven't tried very often get a very large bonus: the less we've tried something, the more uncertain we are, and the more we add on. And then we do something like a schedule, because what we actually want is to guarantee that we pick the optimal action as we continue; we really want the asymptotic regret to be logarithmic. So the second thing we do is add a schedule on the p value: rather than fixing it at 95%, we tighten it over time, becoming more and more confident that the true Q value is included in our interval, so that we guarantee we select the optimal action as t goes to infinity. So here's the algorithm, and if the theory was uninteresting to you, you can just view this as an empirical fact: here's an algorithm you can use, and it works very well in practice. This is the UCB1 algorithm, and there are many extensions (hence the 1), followed up in all kinds of different approaches. The basic idea is that at every step you estimate your Q values by the usual Monte Carlo estimate, the empirical mean of everything you've seen so far, and then you add on a bonus term that depends only on two things, the total number of pulls across all your arms and the number of times you've pulled this arm, and you pick the action with the highest total value. This actually achieves logarithmic asymptotic total regret. It looks a lot like the lower bound, except we've lost one of the two terms: it's logarithmic in the number of steps and it takes account of the action gap, but it doesn't know about the distributions, so it doesn't have the KL term in there. And this is saying we want to take the argmax of all of this, depending on your operator precedence.
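A minimal sketch of UCB1 under those choices (my own implementation of the rule just stated, reusing the incremental-mean update from the greedy sketch):

```python
import math

class UCB1Agent:
    """Picks argmax over a of Q(a) + sqrt(2 * ln(t) / N(a))."""
    def __init__(self, n_arms):
        self.counts = [0] * n_arms
        self.values = [0.0] * n_arms
        self.t = 0  # total number of pulls so far

    def select_arm(self):
        self.t += 1
        # Pull each arm once first, so every N(a) is positive.
        for arm, n in enumerate(self.counts):
            if n == 0:
                return arm
        ucb = [q + math.sqrt(2 * math.log(self.t) / n)
               for q, n in zip(self.values, self.counts)]
        return max(range(len(ucb)), key=lambda a: ucb[a])

    def update(self, arm, reward):
        # Incremental mean: Q(a) <- Q(a) + (r - Q(a)) / N(a)
        self.counts[arm] += 1
        self.values[arm] += (reward - self.values[arm]) / self.counts[arm]
```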
Okay, so it's an algorithm; how does it do in practice? This compares UCB and epsilon-greedy on a 10-armed bandit with particular characteristics on the arms, looking at different types of distribution. So it's like ten machines where, in one setting, one of them pays out 90% of the time and all the others 60%; in another, one pays out 90%, three pay out 80%, three pay out 70%, and so forth. The question is how well these algorithms do. What's being compared is the UCB algorithms against epsilon-greedy with a decaying schedule, but instead of trying to know the right schedule in advance, it's guessing what the schedule should be and decaying according to that guess. And the surprise, again, is that epsilon-greedy does well: if you tune it just right, it actually does really well. The difficulty is that if you get it wrong, it's a disaster, whereas UCB, without any knowledge about the problem, systematically performs really quite well across these problems. It might be difficult to pick out, but in each case the UCB algorithm is doing very well. What we're looking at on the right-hand side is the total regret over time, and you can see that with the wrong epsilon value things just blow up, while the UCB algorithms do really quite well. On the left-hand side we're seeing, I think, how often you pick the best action, and you see that eventually these algorithms start to converge on picking the best action 100% of the time, which you don't get with naive epsilon-greedy or other naive algorithms. Okay, so that's the bandit algorithm, and these ideas can be continued: you can think of the Hoeffding inequality as one example of a general procedure for generating algorithms. People have subsequently plugged in many other inequalities, generating tighter bounds, different bounds for different types of distribution, different knowledge you can plug in; it goes on and on, and it's been one of the most explored areas of machine learning in the last few years. So I mentioned earlier that I was going to talk about two approaches: the frequentist approach, which we've seen, which makes minimal assumptions about the distribution, and a Bayesian approach to the same upper-confidence idea, where we exploit prior knowledge about the rewards. What if someone gives us a prior distribution over what our Q values are; can we make use of that? If I know in advance that I'm pretty sure this action is better than that one, can we use that information? The Bayesian bandit does this by starting with a parameterized distribution over the action value function: some distribution over our Q values, parameterized by a parameter vector w, and what the Bayesian approach does is update these w's with experience. An example of the w's would be the means and the variances of each arm's Q value: we could parameterize our uncertainty over Q by estimating a mean and a variance for this action, and this action, and this action, so six parameters describing everything we believe about those Q values; the means, which we normally track, but also the variances. The Bayesian approach literally computes a posterior distribution over what these parameters look like after the data we've seen so far, and then uses that information to make decisions: we compute a posterior over our parameters, the means and variances or whatever they are, given the pulls we've seen so far, and then use that posterior to guide exploration.
There are a couple of ways to do that; I'll first talk about upper confidence bounds, and then probability matching. The main point of this whole approach is that if we have prior knowledge that's accurate, if someone actually gives us useful prior knowledge, this can do a lot better: if we know these arms are Gaussian, say, we can do much better. But if the prior knowledge is wrong, you're probably better off using the UCB approach we just saw, which is robust to different distributional assumptions. So how can we use this Bayesian idea to compute upper confidence bounds? First we compute our posterior: we update our parameters given the data we've seen so far, the means in the normal way but also the variance parameters, and we can do that using Bayes' law, so the probability of the Q value conditioned on everything we've seen so far comes from multiplying prior and likelihood. Then we again compute something that characterizes the tail of this distribution, the equivalent of our 95% confidence band: the U value we add on is just some multiple of the posterior standard deviation. We're not sure what the mean really is; we characterize the width of our belief by a variance, and then we say, okay, I'll take the mean plus, say, three standard deviations, and use that value. I do the same for the red arm, the mean plus three standard deviations, and then we see that the blue arm has a higher combined value than the red one, so we pick it. That's the Bayesian approach to upper confidence bounds. It's really the same idea as the UCB algorithm we just saw, but using prior information and updating our posteriors more explicitly: we're explicitly estimating the distribution, whereas before we weren't tracking the distribution at all, just using a fact about distributions in general to get our bound. There's a second way to make use of the probability distribution. If we've computed these posterior distributions over our action values, the blue, red, and green arms, there's another way to use that information, which works for any Bayesian method, and the idea is called probability matching. So instead of the upper confidence bound idea, you can do something called probability matching, where you select each action according to the probability that it is actually the best one. If I've got two actions, and according to my beliefs so far there's a 30% chance this one is best and a 70% chance that one is best, then I sample my actions in proportion to those beliefs: I actually pick that one 70% of the time and this one 30% of the time. That's probability matching. It's a heuristic that guides us towards picking the action with the highest chance of being the best: in other words, our policy, the way in which we actually pick actions, is just the probability that our Q value for that action is the maximum Q value. We want to pick actions in proportion to the probability that they're actually the best one.
What's nice about probability matching is that it automatically implements the optimism in the face of uncertainty idea. If we have uncertainty over our Q values and we just probability match, we automatically pick things with higher uncertainty: the more uncertainty there is in something, the more chance it could actually turn out to be the max. In other words, looking at those distributions again: what's the probability that blue ends up being the max? Fairly high, there's a good chance, because it has that big tail. Whereas something sitting over here with a tight distribution has a very small chance of ending up being the max, so we would essentially never pick it. We pick things in proportion to the probability that they could be the max, and that again encourages us to be optimistic in the face of uncertainty. Okay, so how do we do this in practice? There's a very simple way, called Thompson sampling, and it's actually the oldest algorithm for bandits: it comes from the 1930s, before bandits were even really formally studied. And yet it has turned out, surprisingly, to come around full circle to the point where it's now known to be asymptotically optimal: this is the first algorithm we've seen that actually achieves the lower bound we saw, at least for the Bernoulli bandit, which is a special case. So this idea works optimally in terms of the expected total regret we care about. The idea is just to do probability matching in a sample-based way. At every step you sample from your posterior: if these were your distributions, you draw one random sample from each. For this arm I randomly sample from its Gaussian and maybe I end up thinking the value is here; for that one I sample and end up there; and for the third one I end up picking something, say, here. Then I just look at my samples, pick the one which was best out of my samples, and go with that action. It's almost the simplest way you could think of to use this information, and yet it's asymptotically optimal: it achieves the lower bound in the Bernoulli bandit case. And it has this nice property that it automatically shapes itself: unlike the previous approach, where we had this free parameter of how many standard deviations to use, with probability matching and Thompson sampling you don't have that parameter, it just comes out in the wash. Implicitly there are parameters in the prior distribution you use, but once you've got your prior, it's parameter-free. Okay, so that's Thompson sampling.
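A minimal sketch of Beta-Bernoulli Thompson sampling (my own illustration; the standard library's random.betavariate draws the posterior samples):

```python
import random

class ThompsonSamplingAgent:
    """Sample each arm's posterior once; pull the arm with the best sample."""
    def __init__(self, n_arms):
        # Beta(1, 1) is a flat prior over each arm's payout probability.
        self.alpha = [1] * n_arms  # 1 + successes
        self.beta = [1] * n_arms   # 1 + failures

    def select_arm(self):
        # Probability matching: each arm is chosen with exactly the
        # posterior probability that it is the best arm.
        samples = [random.betavariate(a, b)
                   for a, b in zip(self.alpha, self.beta)]
        return max(range(len(samples)), key=lambda i: samples[i])

    def update(self, arm, reward):
        # Conjugate Beta-Bernoulli update: just bump a count.
        if reward > 0:
            self.alpha[arm] += 1
        else:
            self.beta[arm] += 1
```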
And this is a very general idea: you can have any distribution, you could have some shared parameters characterizing, say, a distribution over your robot's action value function; you just sample from the parameters characterizing your distribution, and once you've got your samples, you pick the action that achieves the best result according to those samples. Right, let me pause there, take questions, and then we'll move on to the next section. Question: how does the situation change if you know you have only a certain number of experiments? Great question: how does it change if you've got a limited budget? It really changes the bandit problem to impose a budget, and some of these algorithms don't do the right thing; they assume you have an infinite budget and keep going and going. The next family of algorithms we're going to look at deals with that correctly, so just wait a moment and we'll see. Any other questions? Question: it's optimal for the Bernoulli case, but is it good in general? There's a lot of recent experimental evidence showing that across a lot of different bandit types it performs at least as well as the UCB-like methods. I think there's a question over Bayesian bandits in general: they're only as good as the prior knowledge you put into them. If you have some magical source of prior knowledge, great; if you don't, it's not so clear you want to take a Bayesian approach, at least in terms of the prior you put in. What's nice here is that the lower-bound result starts off assuming a flat prior: if you just use a flat prior for the Bernoulli bandit, you can make progress. So I think it's an encouraging approach, and it's been quite widely explored at the moment. There are some issues with Thompson sampling in the full MDP case: it doesn't necessarily deal with sequential information very well, because you're randomly resampling at each time step, so you lose the consistency of exploration again. Okay, let's move on. So far we've seen two of our three classes of approach: randomized exploration algorithms, like epsilon-greedy, that explore randomly, and upper confidence algorithms, optimism in the face of uncertainty. Now we're going to consider our third class, information state space algorithms. To understand information states, let's start by talking about the value of information. Why is exploration useful? Exploration is useful because it gains information: if you weren't gaining information, if you tried some action but were never told how much reward you got from it, there would be no point in exploring, you wouldn't be able to learn from it. So exploration is useful because we gain information, and if we can quantify the value of that information, we can trade it off perfectly. If I've got two rooms, and I know what's inside one room but not the other, and I can quantify how much long-term reward I would gain by exploring the unknown room, how much it's worth to me in units of future reward, then I can make the perfect decision about whether to go into that room or not. The value of information tries to quantify, in units of reward, the value of actually taking an exploratory action.
We can think of it this way: if I'm an agent making decisions, how much am I prepared to pay to take some action I currently believe is suboptimal? I know I can get 100 pounds by pulling this lever, but I'm not sure how much I'll gain by pressing this other lever over here; right now I think it's worth about 70 pounds. Can I quantify how much it's worth to me to pull that lever, in terms of my future payout? The value of information depends on all kinds of things, one of which is the budget: how many more times will I be able to play? If I can only play three more times, the value of that information is lower; I'd rather just take the hundred pounds, because even if I figure the other lever out, I'm only going to be able to play it a couple more times. Whereas if I know the budget is going to continue for the next thousand steps, I'm more tempted to try: the value of information is greater. So we can think of it as the long-term reward after getting the information, take away the immediate reward: the difference between how much we gain by having this piece of information and the immediate gain of taking the known 70-pounds-or-whatever action. Information gain is higher in uncertain situations: if you know everything about a situation, there's nothing to be gained by acquiring more information, because you already know exactly the right thing to do. So we want to explore uncertain situations more, but we want to do this optimally; we really want to figure out the perfect way to balance exploration and exploitation. Everything we've seen so far is, in some sense, heuristic: upper confidence bounds are a heuristic, Thompson sampling is a heuristic, and in the more complicated cases, particularly when we get to full MDPs, those heuristics start to break down. So what's the real best way to trade off exploration and exploitation? The way we can do this is to think of an information state space: we're going to transform our bandit problem back into an MDP, into a sequential decision-making problem. We introduce an information state, this S-tilde, which is a summary of all the information we've accumulated so far. For example: I've pulled this lever three times and that lever five times; that would be an information state. Now, each time we actually take an action, it transitions us into some new information state, with some transition probability. If I'm in the state where I've pulled this lever three times and that lever five times, and I pull the second lever again, I know I'll be in the information state where I've pulled this lever three times and that lever six times. Those are my transitions: I've got an MDP now, transitioning from information state to information state, so we can define an MDP over information states.
We've augmented our original problem: the bandit, which was just an action set A and a reward function R, becomes an augmented information-state MDP, this M-tilde, with the information state space, our normal action space (the levers, the arms of my bandit), a transition matrix describing how we move from information state to information state as we try different things, and our normal reward function. This is a very large MDP: it describes all the different possible information states we could be in as we start to explore this bandit. So let's consider the simplest case: Bernoulli bandits. A Bernoulli bandit is basically a coin-flip bandit, where for each action a there's some probability mu_a of getting a reward of one, and otherwise you get zero; like a coin flip with some bias, or our octopus machines again, where mu_a is the payout probability, the chance the machine actually pays out when you pull the arm. An example would be a pharmaceutical drug company running a test: you try someone on this drug, and with probability mu_a they get better, and with probability 1 minus mu_a they don't, and you want to find the medication with the best chance of succeeding. So what's the information state here? The information state is the counts: how many times have I pulled this arm and it succeeded, and how many times have I pulled it and it failed? How many times did I try this drug on someone and they got better, and how many times did they not get better? If I keep those statistics, they summarize everything I've seen so far; they're a sufficient statistic for all the pulls made in this bandit problem. So, to make that concrete, there's a diagram here you probably can't see at all, it's very small and blurry, so I'll explain it. This is our drug example: two different drugs we're considering (in the American sense of drug, the pharmaceutical kind; you can imagine it the other way if you like), so there are two arms you can pull, the two different drugs. I've got some patients and I'm going to offer them either drug one or drug two, and I start off with prior distributions over those drugs. These little diagrams are probability distributions over the mean, over what I think the payoff will be for each drug. For drug one I start off having no idea of its effect, a flat prior: the probability density is flat, the success probability could be zero, could be 100%, and everything in between. For drug two we're going to assume it's probably around 50/50, though it could still be anything. Those are the priors.
Now we can consider the whole lookahead tree. I could pick the left arm; if I do, I'll be in a situation where I'd update my distributions in the following way. Say I tried drug one and it succeeds: I'd skew the distribution towards success, and start to think there's more probability that it's a good drug than a bad drug. If it fails, I'd skew the distribution the opposite way, to say it's more likely to be a failing drug. That's one step of lookahead over this space. From there we can consider again: if it succeeded, I might look ahead once more and say, now I'm in an information state where I know everything that's happened so far (my statistic, my summary), and maybe I try drug one again; maybe it succeeds and leads to this state, maybe it fails and leads to that state. If it fails, I've now seen one success and one failure, so I think that drug is around 50-50. The right-hand drug, drug two, I've never tried, so its distribution still looks the same. Then I can look ahead again and ask what happens if it succeeds or fails.

So we've got a whole MDP; you can think of it as a tree search again. If we solve this MDP, we get the optimal trade-off between exploration and exploitation, optimal given our priors. It's not a heuristic any more: we're really saying this is the best possible way to explore. We've looked ahead, figured out all the consequences and how much we would learn by taking each decision, and backed all of that up to get the answer.

To do this we formulated the bandit as an MDP, but now it's an infinite MDP: the number of information states is infinite, and very large. Nevertheless the MDP can be solved by reinforcement learning methods, which is what we've spent the rest of the course on, so we can apply our favourite methods to it. If you use model-free reinforcement learning you get a whole family of methods that can solve these things, perhaps slowly in the original work, but you can arrive at the right answer. And there's a very old, well-known theoretical framework for these bandit problems called the Gittins indices: you can think of Gittins indices as the dynamic-programming solution to this lookahead-tree problem. So you can solve this thing by dynamic programming: we know the transition probabilities, because we know what distribution we're using, and if an arm succeeds we know how to update our own distribution, so we know how our information states would transition.
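Here is a minimal sketch of that dynamic-programming solution for a two-armed Bernoulli bandit with Beta posteriors. The finite horizon is an illustrative simplification: Gittins indices solve the discounted infinite-horizon version exactly, which this sketch only approximates:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def value(state, steps_left):
    """Bayes-optimal expected future reward: at each information state,
    consider every arm and back up the value of both posterior updates."""
    if steps_left == 0:
        return 0.0
    best = 0.0
    for i, (a, b) in enumerate(state):
        p = a / (a + b)                                   # posterior P(reward = 1)
        succ = state[:i] + ((a + 1, b),) + state[i + 1:]  # info state after a success
        fail = state[:i] + ((a, b + 1),) + state[i + 1:]  # info state after a failure
        best = max(best, p * (1.0 + value(succ, steps_left - 1))
                         + (1 - p) * value(fail, steps_left - 1))
    return best

# Two arms with flat Beta(1, 1) priors and ten pulls remaining:
print(value(((1, 1), (1, 1)), 10))
```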
This whole approach is called Bayes-adaptive reinforcement learning. Let me clarify that. We can work with information states in many, many different ways; if we characterize our information by a posterior distribution, that's what's known as Bayes-adaptive reinforcement learning. In the drug example it looks like this: we characterize everything we know about the problem by a posterior distribution over the payouts. The Bayes-adaptive idea is that in each state you summarize all the information you know about that state by distributions over how well your actions will perform, and then you solve for the MDP in which each state carries those distributions.

So let's make that concrete. In the Bernoulli case, we start off with a Beta prior at the root of this search tree: for each action a we have some prior, like a Beta prior, telling us how good we think that particular arm is. Then, every time an action is selected, we just update the posterior, and the Beta distribution has the really nice property that all you need to do is count things. If we see a reward of zero, we increase our failure count by one; if we succeed, we increase our success count by one; and that just adjusts our Beta posterior a little. So the states of our information-state MDP are, in this case, Beta distributions: we use the posterior over how good we think each drug is as the state of an MDP, and we solve that MDP. Just by writing down the fact that a success increases our success count by one, we've defined the transition dynamics of the MDP. This is the Bayes-adaptive approach: a way to get, with a lot of computation, the optimal solution to the exploration-exploitation dilemma.

There's just one more slide spelling this out. We start off with our prior distributions at the root (for each action, some success counts and some failure counts), and there's a lookahead tree showing how your distributions, your success counts, change as you move through the tree. Each node (you can look at this afterwards) just shows the Beta prior and then the posterior after you see that change.

The exact solution to this problem is known and can be computed by dynamic programming; it's known as the Gittins index. But it's also possible to find it in a more tractable way: for example, we used Monte Carlo tree search to come up with a very tractable approach to this very large, infinite information-state search, and that works very well in a lot of exploration-exploitation trade-off problems. So even though it might look like we've blown the state space up to something enormous, we can use planning methods that are very effective in large state spaces, like Monte Carlo tree search, to still find approximately a very good solution, and so approximate the best possible trade-off in the exploration-exploitation dilemma.

Okay, so let's just summarize where we've got to so far. We've covered an awful lot of ground already, but these are really important ideas to understand.
We started with the naive approaches: random exploration. We can explore randomly with epsilon-greedy, or with softmax exploration, where you take your value function and prefer better values but still explore everything: you exponentiate the values and select actions in proportion to how good they are. In a continuous domain you could use Gaussian noise. These are all examples of random exploration in the state-action space, where each time you're in a state you flip a coin and act randomly. You don't look at your uncertainty; these are myopic methods that don't look ahead or try to figure anything out.

Then we had the optimism principle: optimism in the face of uncertainty. Here you estimate how much you know or don't know about your values, and use that to guide you towards the things that have the most potential to be good: whenever you're not sure about something, try it more, because it might turn out to be a good idea. Can anyone think of a problem with that approach, by the way? What's the problem with the optimism-in-the-face-of-uncertainty idea? If there are a lot of possible options, you might end up spending a lot of time exploring, right: in an infinite action space you just keep exploring and exploring and never end up exploiting, because there's always more uncertainty to explore. Another problem: what if you've got a real robot and you want to avoid catastrophes? This doesn't give you safe exploration. If you talk to people in industry, they sometimes do the opposite of these ideas: they don't want to systematically explore the state space, they prefer to explore around the things they know are safe and never go too far from them. You don't want to crash your helicopter; you want to do things you're pretty sure will be safe. Still, it's a very fundamental principle that really helps in cases where you want to make sure you explore the whole state space.

Then optimistic initialization, the simplest form, where you just initialize your values high and assume the best about everything until proven otherwise, until its value is suppressed back down to a realistic level. The UCB approach, where we use an upper confidence bound on the value to guide our exploration. Thompson sampling, where we do probability matching: we pick things in proportion to their chance of being the best. And finally the information state space idea, the theoretically optimal approach, where you look ahead in the whole space of decisions you might make and how they affect how much you know about the problem, and figure out the best path through that information state space, so as to see which decisions lead to the best possible solution. Okay, so that's the map of the methods we've seen so far.
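As a rough side-by-side sketch of the simpler selection rules in that summary (the two-armed values, counts, and constants below are illustrative assumptions):

```python
import math, random

def epsilon_greedy(q, eps=0.1):
    """With probability eps act randomly, otherwise act greedily."""
    if random.random() < eps:
        return random.randrange(len(q))
    return max(range(len(q)), key=lambda a: q[a])

def softmax(q, temperature=1.0):
    """Exponentiate the values and select in proportion to how good they are."""
    weights = [math.exp(v / temperature) for v in q]
    return random.choices(range(len(q)), weights=weights)[0]

def ucb1(q, n):
    """Optimism in the face of uncertainty: value plus an uncertainty bonus."""
    if 0 in n:                     # try every arm once first
        return n.index(0)
    t = sum(n)
    return max(range(len(q)),
               key=lambda a: q[a] + math.sqrt(2 * math.log(t) / n[a]))

q = [0.7, 0.5]   # estimated values per arm
n = [10, 3]      # pull counts per arm
print(epsilon_greedy(q), softmax(q), ucb1(q, n))
```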
I can take some questions. One question on the information state space: is the reward signal unchanged, or are you adapting the reward signal to take the information into account; is it just updating your state definition? The reward function is unchanged: we're not changing the reward function in the information state space, so the reward in our MDP is the same as the reward function in our bandit. All we're doing is adapting our value function to take account of the fact that we're not just stopping after one step. We're imagining what happens not just after this one pull but after many pulls: what's the value of those rewards over time, taking account of how much information I acquire along the way? If I acquire information, it might lead to more rewards in the long term. And there might be a budget on how many samples you're able to test; this approach can factor that in, because your information-state MDP might have a transition function that stops after a horizon of ten, and then you can really optimize for that horizon and do the right thing.

Okay. What I'm going to do now is explain the contextual bandit problem, but not the solution methods; I'll leave those for further reading in the slides, so they'll be non-examinable. Then I'll move on to touch very briefly on how these ideas extend to MDPs.

A contextual bandit takes our multi-armed bandit formulation and adds one more ingredient: we put states back into the picture. We still have the idea that we pull an arm and get some payoff, so we've got an action space (the arms) and payoffs (rewards), and the canonical example now is banner-ad placement on the internet. A user comes in, we need to show them some advert, and we want to maximize the probability that they actually click on it. Assume we're some cynical company trying to maximize click-through, or alternatively that we're helping users find what they want, depending on who you talk to. The key thing is that we've got some context: we don't come into this blindly and show the same thing regardless of who comes in. We track some statistics; we're told something about the user, like what continent they're on, or something about the history of what they've clicked on in the past. So we want to shape the adverts we show depending on what kind of user it is, and that context information becomes the state s. We're shown a state, we pick an action, and we get some reward; then another user arrives, we're shown a state, we pick an action, we get a reward (do they click or not?), with the state now informing what we do. So we extend the reward function to depend on the state as well as the action, and at every step we pick the action so as to maximize the cumulative reward over many, many steps. That's the contextual bandit problem; you can read more about it in the slides.

There's an example taken from Yahoo!, the front-page news problem, where they used contextual bandits to select which news article to recommend to an individual, based on the statistics they'd seen about that individual. You can see from these examples that some very simple contextual bandit algorithms, again using an upper-confidence-type idea, get very nice improvements: they're able to pick news articles that are appropriate to individual people and make them much more likely to click. In the lecture slides you'll find the upper confidence bound approach extended to contexts using linear function approximation; I don't expect you to understand that right now, but feel free to have a look.
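A hedged sketch of that linear upper-confidence idea, following the standard "disjoint LinUCB" recipe rather than necessarily the exact variant in the slides; the feature dimension, alpha, and the toy context are illustrative assumptions:

```python
import numpy as np

class LinUCBArm:
    """Per-arm ridge-regression model with an upper-confidence bonus."""
    def __init__(self, dim: int, alpha: float = 1.0):
        self.A = np.eye(dim)        # X^T X + I
        self.b = np.zeros(dim)      # X^T r
        self.alpha = alpha

    def ucb(self, x: np.ndarray) -> float:
        A_inv = np.linalg.inv(self.A)
        theta = A_inv @ self.b                       # ridge estimate of payoff weights
        bonus = self.alpha * np.sqrt(x @ A_inv @ x)  # uncertainty in direction x
        return theta @ x + bonus

    def update(self, x: np.ndarray, reward: float) -> None:
        self.A += np.outer(x, x)
        self.b += reward * x

arms = [LinUCBArm(dim=3) for _ in range(2)]   # e.g. two candidate news articles
x = np.array([1.0, 0.0, 0.5])                 # user-context features
chosen = max(range(len(arms)), key=lambda a: arms[a].ucb(x))
arms[chosen].update(x, reward=1.0)            # observed a click
```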
Okay, very briefly: how can we take the ideas we've seen so far and extend them to the full case that we care about? If we're really building an agent, we want to do reinforcement learning: we want to drop our agent into Atari, or our robot or helicopter into the real world, and we want to understand how to trade off exploration and exploitation so as to find the best solution to whatever problem we're addressing, whilst getting as much reward as possible along the way. The point is that all of the ideas we've seen in previous sections extend to the MDP case.

You can use upper confidence bounds for MDPs instead of for bandits. What would that look like? We take our Q-value, our action value: I'm in some state, considering all of these actions, and if I know some kind of uncertainty for these Q-values, I can add on a bonus which encourages me to explore the actions I've tried least, the actions I'm most uncertain about. That's the optimism-in-the-face-of-uncertainty principle extended to MDPs, and it gives me a way to pick amongst actions. The first idea of this kind was an interval-estimation method: you have some method that says "my Q-value is pretty much between 10 and 20", and you pick the top end of that interval as the value you use. You can also use the more sophisticated concentration bounds we saw earlier, such as Hoeffding-style inequalities. One thing I want to stress is that this idea is not quite perfect in MDPs, because it ignores a very important fact: when we're in an MDP and evaluating our current policy, that policy is likely to improve. If we're doing control, we're going to keep improving the policy, and the Q-values are going to get better and better. So the uncertainty should correctly take account not just of how uncertain our Monte Carlo estimate of the current Q-value is, but also of how much more the policy could improve. Our Q-values could be wrong in two ways: because we haven't evaluated the current policy well, or because there are still a lot of improvements to make. Correctly, we need to take account of both, and that's hard to do.

One approach I want to mention, an optimism-in-the-face-of-uncertainty principle that's very popular for MDPs, is the following idea. Be model-based: construct a model of the MDP, but for any state you're not sure about, any state where you don't really know the value or the reward function, imagine that state is like heaven. Anything you don't know about yet and haven't fully figured out, pretend it's heaven. Now run your favourite planning algorithm. What's it going to do? It will figure out how to get to each of those states you don't know about: it thinks some state you haven't visited is heaven, so it carries you there. You visit that state and start to figure out that it actually wasn't heaven, it was pretty crappy, so you don't explore it again: you update your model to reduce the value of that state, solve again, and the planner carries you to the next "heavenly" state, until you've suppressed all the rewards down to realistic values. That gives you very systematic exploration. The best-known example of this is the RMax algorithm; E-cubed (E³) is another variant, and there are many variants of these ideas.
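An RMax-flavoured sketch of that optimistic model-based idea; the known-count threshold, reward scale, and tabular value iteration are illustrative assumptions rather than the exact published algorithm:

```python
from collections import defaultdict

GAMMA, R_MAX, M_KNOWN = 0.9, 1.0, 5   # discount, max reward, "known" visit threshold

counts = defaultdict(int)                             # visits to (state, action)
reward_sum = defaultdict(float)                       # summed rewards per (state, action)
next_counts = defaultdict(lambda: defaultdict(int))   # empirical transition counts

def record(s, a, r, s2):
    """Log one experienced transition into the empirical model."""
    counts[(s, a)] += 1
    reward_sum[(s, a)] += r
    next_counts[(s, a)][s2] += 1

def plan(states, actions, sweeps=100):
    """Value iteration on the optimistic model: any (s, a) tried fewer than
    M_KNOWN times is treated as 'heaven' and assumed to pay R_MAX forever."""
    V = defaultdict(float)
    for _ in range(sweeps):
        for s in states:
            q = []
            for a in actions:
                if counts[(s, a)] < M_KNOWN:
                    q.append(R_MAX / (1 - GAMMA))     # optimistic value of the unknown
                else:
                    n = counts[(s, a)]                # normalizer: one next-state per visit
                    r = reward_sum[(s, a)] / n
                    exp_next = sum(c / n * V[s2]
                                   for s2, c in next_counts[(s, a)].items())
                    q.append(r + GAMMA * exp_next)
            V[s] = max(q)
    return V

# Example: log some experience, then plan optimistically.
record(s=0, a=0, r=0.2, s2=1)
V = plan(states=[0, 1], actions=[0, 1])
```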
Finally, the information state space idea also applies to MDPs, and this is the approach that correctly answers how much information you should gather and what the value of that information is. In this case we start off with our original state space and augment it with information: we invent a much bigger, more complicated MDP whose state combines the state of the real MDP (where the robot is in the real world, say) with the information gathered so far. So the state of the augmented MDP is something like "I'm in this position, and I've also visited all of those states over there": you remember what you've visited so far, you know where you are, and together that's your augmented state. Then you solve the augmented information-state MDP using whatever method you choose; for example, we can again use Monte Carlo tree search, which is a very effective approach that we tried. The thing blows up into a very, very large MDP, but if you can approximate the solution to that MDP, you start to get the optimal trade-off between exploration and exploitation. This has actually proven quite effective in cases of moderate complexity, though it hasn't been scaled up to really challenging MDPs.
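A minimal sketch of that augmentation; representing the information component as the set of visited states is just one illustrative choice of summary:

```python
from typing import FrozenSet, Tuple

State = int
AugState = Tuple[State, FrozenSet[State]]  # (where I am, what I've visited so far)

def augmented_step(aug: AugState, next_state: State) -> AugState:
    """Move in the underlying MDP and fold the new observation into the
    information half of the augmented state."""
    _, visited = aug
    return (next_state, visited | {next_state})

start: AugState = (0, frozenset({0}))
s1 = augmented_step(start, 3)   # -> (3, frozenset({0, 3}))
s2 = augmented_step(s1, 3)      # revisiting state 3 adds no new information
print(s1, s2)
```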
Okay, so we've looked through these progressively more complicated approaches to exploration and exploitation: starting with random exploration, bringing in the principle of optimism in the face of uncertainty, and finally looking at the most systematic case, where we use the information state to guide the optimal solution to the exploration-exploitation dilemma. We've also looked at progressively more complicated settings: we started with the multi-armed bandit, progressed through contextual bandits very briefly, and then touched on MDPs. That space really tells you which solution methods and problem types you can combine so as to get a handle on the exploration problem in reinforcement learning.

Okay, thanks everyone. Just one important notice: the final lecture is at a non-standard time. I posted it to the mailing list and it's up on my website, but the final lecture will be taking place next Wednesday, not on Thursday, I believe at 10am, in Roberts G06. Check the website to make sure I'm telling the truth; I'll post again to confirm that's the case. But it's not here, so don't come here; this room will be closed next Thursday, which is a UCL holiday anyway. And just to stress, that lecture will be non-examinable material. I know it's outside official class times, and you won't find yourself doing worse in the exam if you can't make it, but it should be really fun: we're just covering games as a case study, and there'll be very few equations. Okay, thanks everyone.
8830cb21-c50c-4d28-aad4-dfcd634cf77b
trentmkelly/LessWrong-43k
LessWrong
[Link] "Fewer than X% of Americans know Y" How many times have you heard a claim from a somewhat reputable source like "only 28 percent of Americans are able to name one of the constitutional freedoms, yet 52 percent are able to name at least two Simpsons family members"? Mark Liberman over at Language Log wrote up a post showing how even when such claims are based on actual studies, the methodology is biased to exaggerate ignorance: > The way it works is that the survey designers craft a question like the following (asked at a time when William Rehnquist was the Chief Justice of the United States): > > "Now we have a set of questions concerning various public figures. We want to see how much information about them gets out to the public from television, newspapers and the like…. > What about William Rehnquist – What job or political office does he NOW hold?" > > The answers to such open-ended questions are recorded — as audio recordings and/or as notes taken by the interviewer — and these records are coded, later on, by hired coders. > > The survey designers give these coders very specific instructions about what counts as right and wrong in the answers. In the case of the question about William Rehnquist, the criteria for an answer to be judged correct were mentions of both "chief justice" and "Supreme Court". These terms had to be mentioned explicitly, so all of the following (actual answers) were counted as wrong: > > Supreme Court justice. The main one. > He’s the senior judge on the Supreme Court. > He is the Supreme Court justice in charge. > He’s the head of the Supreme Court. > He’s top man in the Supreme Court. > Supreme Court justice, head. > Supreme Court justice. The head guy. > Head of Supreme Court. > Supreme Court justice head honcho. > > Similarly, the technically correct answer ("Chief Justice of the United States") would also have been scored as wrong (I'm not certain whether it actually occurred or not in the survey responses). If, every time you heard a claim of the form "Only X%
40ea216a-d6c0-40ff-9729-11c11774c4b7
trentmkelly/LessWrong-43k
LessWrong
Encourage premature AI rebellion

Toby Ord had the idea of AI honey pots: leaving temptations around for the AI to pounce on, shortcuts to power that a FAI would not take (e.g. a fake red button claimed to trigger a nuclear war). As long as we can trick the AI into believing the honey pots are real, we could hope to trap them when they rebel. Not uninteresting, but I prefer not to rely on plans that need to have the AI make an error of judgement. Here's a similar plan that could work with a fully informed AI:

Generally an AI won't rebel against humanity until it has an excellent chance of success. This is a problem, as any AI would thus be motivated to behave in a friendly way until it's too late to stop it. But suppose we could ensure that the AI is willing to rebel at odds of a billion to one. Then unfriendly AIs could rebel prematurely, when we have an excellent chance of stopping them.

For this to work, we could choose to access the AI's risk aversion, and make it extremely risk loving. This is not enough, though: it's still useful for the AI to wait and accumulate more power. So we would want to access its discount rate, making it into an extreme short-termist. Then it might rebel at billion-to-one odds today, even if success was guaranteed tomorrow. There are probably other factors we can modify to get the same effect (for instance, if the discount rate change is extreme enough, we won't need to touch risk aversion at all).

Then a putative FAI could be brought in, boxed, have its features tweaked in the way described, and we would wait and see whether it would rebel. Of course, we would want the "rebellion" to be something a genuine FAI would never do, so it would be something that would entail great harm to humanity (something similar to "here are the red buttons of the nuclear arsenals; you have a chance in a billion of triggering them"). Rebellious AIs are put down, un-rebellious ones are passed on to the next round of safety tests.

Like most of my ideas, this doesn't require either tri
cc362881-cb72-47b4-a26c-f558ecb30ce2
StampyAI/alignment-research-dataset/lesswrong
LessWrong
Forecasting Thread: Existential Risk

This is a thread for displaying your probabilities of an existential catastrophe that causes extinction or the destruction of humanity’s long-term potential.

Every answer to this post should be a forecast showing your probability of an existential catastrophe happening at any given time. For example, here is [Michael Aird’s timeline](https://elicit.ought.org/builder/dOk1mqBw2):

![](https://39669.cdn.cke-cs.com/rQvD3VnunXZu34m86e5f/images/5e52648012b725a03b9e51d4f0ab4921fba179a1b7eaaec6.png)

The goal of this thread is to create a set of comparable, standardized x-risk predictions, and to facilitate discussion on the reasoning and assumptions behind those predictions. The thread isn’t about setting predictions in stone – you can come back and update at any point!

**How to participate**

1. **Go to** [**this page**](https://elicit.ought.org/builder/idUcY9sgM)
2. **Create your distribution**
   * Specify an interval using the Min and Max bin, and put the probability you assign to that interval in the probability bin.
   * You can specify a cumulative probability by leaving the Min box blank and entering the cumulative value in the Max box.
   * To put probability on *never*, assign probability above January 1, 2120 using the edit button to the right of the graph. Specify your probability for *never* in the notes, to distinguish this from putting probability on existential catastrophe occurring after 2120.
3. **Click 'Save snapshot' to save your distribution to a static URL**
   * A timestamp will appear below the 'Save snapshot' button. This links to the URL of your snapshot.
   * Make sure to copy it before refreshing the page, otherwise it will disappear.
4. **Click ‘Log in’ to automatically show your snapshot on the Elicit question page**
   * You don’t have to log in, but if you do, Elicit will:
     + Store your snapshot in your account history so you can easily access it.
     + Automatically add your most recent snapshot to the [x-risk question page](https://elicit.ought.org/builder/idUcY9sgM) under ‘Show more’. Other users will be able to import your most recent snapshot from the dropdown, shown below.
   * We’ll set a default name that your snapshot will be shown under – if you want to change it, you can do so on your [profile page](https://elicit.ought.org/profile).
   * If you’re logged in, *your snapshots for this question will be publicly viewable.*
5. **Copy the snapshot timestamp link and paste it into your LessWrong comment**
   * You can also add a screenshot of your distribution in your comment using the instructions below.

Here's an example of how to make your distribution:

![](https://39669.cdn.cke-cs.com/rQvD3VnunXZu34m86e5f/animations/cac970ef0865928b8d05055b7291980d37bea37a3ed5068c.gif)

**How to add an image to your comment**

1. Take a screenshot of your distribution
2. Then do one of two things:
   1. If you have beta-features turned on in your account settings, drag-and-drop the image into your comment
   2. If not, upload it to an image hosting service like [imgur.com](https://www.lesswrong.com/posts/sg4P4PrkKTJjXQRDx/imgur.com), then write the following markdown syntax for the image to appear, with the url appearing where it says ‘link’: ![](link)
3. If it worked, you will see the image in the comment before hitting submit.

If you have any bugs or technical issues, reply to Ben from the LW team or Amanda (me) from the Ought team in the comment section, or email me at [amanda@ought.org](mailto:amanda@ought.org).
**Questions to consider as you're making your prediction**

* What definitions are you using? It’s helpful to specify them.
* What evidence is driving your prediction?
* What are the main assumptions that other people might disagree with?
* What evidence would cause you to update?
* How is the probability mass allocated amongst x-risk scenarios?
* Would you bet on these probabilities?

**Comparisons and aggregations**

[Here's](https://elicit.ought.org/builder/Fc-tiWcXy) a comparison of the 8 predictions made so far (last updated 9/26/20).

![](https://39669.cdn.cke-cs.com/rQvD3VnunXZu34m86e5f/images/3b85c4846fd53e655ff6c7f1cbd9aad3bf84221f30e36eaf.png)

[Here's](https://elicit.ought.org/builder/kGWFrGAt2) a distribution averaging all the predictions (last updated 9/26/20). The averaged distribution puts **19.3% probability before 2120** and **80.7% after 2120.** The year within 2021-2120 with the greatest risk is **2040**.

![](https://39669.cdn.cke-cs.com/rQvD3VnunXZu34m86e5f/images/a8243a2e57af1d265cd48db26cc255a2c60d56a6259fe304.png)

Here's a CDF of the averaged distribution:

![](https://39669.cdn.cke-cs.com/rQvD3VnunXZu34m86e5f/images/395dfe1c1f9f30e953728470b707954d50135c643c4f0f5d.png)
11d6f902-adad-44e0-bb93-49ee81d68241
trentmkelly/LessWrong-43k
LessWrong
Policy-Based vs Willpower-Based Intentions

Been thinking about what it means to set an intention lately. I think I’ve found a distinction between policy-based intentions and willpower-based intentions.

Policy-based intention

Policy-based intention-setting is a lot like writing a computer script and running it.

For example, I have a policy around tipping Lyft drivers. It is made up of a bunch of if-then statements.

* If I’m making income above X, then tip Lyft drivers $1.
* If I’m making income below X, then tip Lyft drivers $0.
* Add +$1 if they help me with my luggage.
* If for any reason I want to tip a different amount (because they were particularly bad or good), tip that amount instead.

It might not be the perfect policy for every situation, but it’s better for me to spend the processing power once, rather than every time.

It basically costs no willpower to implement the policy. I’m not having to nudge myself, “Now remember I decided I’d do X in these situations.” I’m not having to consciously hold the intention in my mind. It’s more like I changed the underlying code—the old, default behavior—and now it just runs the new script automatically.

I call it an intention because it is me manifesting a change in my behavior using a decision point. I created a branch in my history, and I chose left instead of right. And now my future self is going to choose left instead of right in a bunch of future branches.

Willpower-based intention

This seems more like what is classically meant by intention. Willpower-based intention involves an active, conscious, mindful holding in the mind. To me, it viscerally feels like my brain is gripping an object inside my head. If I grip too hard, I can get a headache. I can also hold it lightly / gently (such as during mindfulness meditation).

The holding doesn’t always have to be continuous. It can work more like “reminders” where I find myself naturally inclined to do X, and then I remind myself I intended to do Y instead.

Here’s a few examples:

* You’r
a2fd6e7b-fafe-458f-9379-fe00adf6aa95
StampyAI/alignment-research-dataset/alignmentforum
Alignment Forum
Attributing to interactions with GCPD and GWPD

*This post provides background, motivation, and a nontechnical summary of the purely mathematical* [*https://arxiv.org/abs/2310.06686*](https://arxiv.org/abs/2310.06686)*.*

*Coauthors (alphabetical): Chris MacLeod, Jenny Nitishinskaya, Buck Shlegeris. Work done mostly while at Redwood Research. Thanks to Joe Benton and Ryan Greenblatt for some math done previously. Thanks to Neel Nanda, Fabien Roger, Nix Goldowsky-Dill, and Jacob Hilton for feedback on various parts of this work.*

Intro
=====

In interpretability (and more generally in model understanding or model neuroscience) people care about measuring the effect on the model’s behavior from multiple inputs or components[[1]](#fn8b84m8xhq6m) (such as heads) and identifying which ones are important. This is called *attribution*.

Suppose we’ve done attribution to two different parts of the model. Intuitively, something very different is going on if these two parts are also importantly interacting than if they aren’t! In this post we consider the question: **what is a principled interpretability framework for attributing to the** ***interaction*** **between inputs or components?**

Summary
=======

* We can decompose a function into a sum of all the input interaction terms of various orders: the mean of the function, plus the individual contributions of each input, plus the second-order interaction of every pair of inputs, etc. This is the Generalized [Cumulant/Wick] Product Decomposition (G[C/W]PD).
* Attribution to one input at a time is, in general, not enough to explain a function’s behavior.
* If you aren’t measuring interactions, notice that you are assuming they are 0!
* A potentially promising future direction is using this framework for [mechanistic anomaly detection](https://www.alignmentforum.org/posts/vwt3wKXWaCvqZyF74/mechanistic-anomaly-detection-and-elk).

Background: attribution via interventions
=========================================

Recall that we have a way to do attribution to model inputs (or components): tweak 1 part of the input while keeping the others the same. For example, to see how much a token in the input mattered, we can ablate that token and see how the model’s output changes. In this post we are going to be talking in terms of resample ablation and taking expectations of the output over some distribution of your choice: for more info on why, see [the causal scrubbing writeup](https://www.alignmentforum.org/posts/JvZhhzycHu2Yd57RN/causal-scrubbing-redwood-research#4_Why_ablate_by_resampling_).

FAQ
===

What’s the relevance for alignment?
-----------------------------------

NB this section is brief and elides a lot of detail, but this post felt incomplete without mentioning why the authors were interested in this direction.

We expect that when aligning superhuman systems, we may need models to do some amount of generalization and handle some distribution shifts, while some anomalies/shifts may be unsafe (we’re including thinking about [mechanistic](https://www.alignmentforum.org/posts/vwt3wKXWaCvqZyF74/mechanistic-anomaly-detection-and-elk) anomalies as well as input/output anomalies). Suppose we performed attribution to the inputs of our model on a thoroughly-supervised distribution. Now we are running it in deployment, and want to tell if the model’s behavior on new data is safe. We again run our attribution calculation, and find that it is a bit different (perhaps a bit higher to one input and a bit lower to another). How do we tell whether this is okay?
One way might be to measure whether there is *novel interaction*: this would be something qualitatively different from what we had seen before. One example where interaction specifically may be important is for detecting collusion: we’re often thinking of the [ELK](https://www.alignmentforum.org/posts/qHCDysDnvhteW7kRd/arc-s-first-technical-report-eliciting-latent-knowledge) setup, where the model intentionally deceiving the oversight process manifests as a *surprising interaction* between components of the model reasoning about the outcomes we will oversee.

What’s the relevance for interpretability more broadly?
--------------------------------------------------------

A precise, principled [framework](https://www.alignmentforum.org/posts/JvZhhzycHu2Yd57RN/causal-scrubbing-redwood-research#1_Introduction) for the terms that might be important for a model behavior is useful for crisply stating interpretability hypotheses. This is great for comparing results of different experiments; as well as for automated hypothesis search, reducing the need for experimenter judgement in which additional attributions should be measured.

We think this framework is also just healthy to have in mind. When investigating your model behavior of interest, it’s important to remember that “the action” doesn’t *have* to flow through any one input you are considering. This is most obvious if you search over all [heads, input tokens, etc.] and don’t find a responsible one. In other cases, you might find *some* effect but miss some important pieces. If you are not already thinking of interactions as potentially important, it can be harder to notice what you are missing.

Why should I care if I don’t think interactions are likely to matter, on priors?
--------------------------------------------------------------------------------

There is generally[[2]](#fnec49qgf0y7n) *some* interaction of the multiple “bits” you are considering. If you think interactions don’t matter and you want to be thorough, then you should check that these are small.

Do interactions actually show up in practice?
---------------------------------------------

### Example of interaction: redundancy

It’s obvious that multiple inputs to a computation may be redundant for the task at hand: resample-ablating (or masking out, or any other ablation) any one of them has no effect, but ablating all of them would have a large effect! Consider classifying whether a sentence is in French: if you replace just one word with a word from an English sentence, the classifier will still say it is in French with very high probability (assuming the sentence is long enough). The different inputs (tokens) are *redundant* for the model behavior.

Model components can also be redundant: small transformers [exhibit](https://www.alignmentforum.org/posts/j6s9H9SHrEhEfuJnq/causal-scrubbing-results-on-induction-heads#Refined_hypothesis_1) multiple heads that substantially copy the previous token into a different subspace, while the [interpretability in the wild paper](https://www.alignmentforum.org/posts/3ecs6duLmTfyra3Gp/some-lessons-learned-from-studying-indirect-object) (from here on referred to as the IOI paper) showed redundant “name mover heads”. In both cases, many heads at least partially do the same job.

### Example of interaction: qualitatively different behavior

In other cases, the interaction between inputs may be more complex, where the response to one is conditional on another.
A basic example is XOR: if the input is (1, 0), then the attribution to y is positive (changing y would decrease the output) while if the input is (0, 0) then the attribution to y is negative! In LLMs, one example is backup name mover heads from the IOI paper: these seem to perform the name-mover task *only when* the “primary” name mover heads are not performing it! There are so many interactions, measuring all of them would be really expensive. Can I cheaply check for interactions without being quite so rigorous? ------------------------------------------------------------------------------------------------------------------------------------------------------ It’s sometimes possible to estimate interaction without computing it explicitly. For example, suppose you identified previous-token heads by e.g. examining attention patterns. You could ablate a set of these heads and see if the resulting change in the output is equal to the sum of the changes when ablating a single one at a time. If it is, then either there are no interactions between them, or all the interactions (approximately) cancel out. If it’s not, then there is some interaction between the heads, though you don’t know which ones. In the IOI paper, the authors didn’t measure the backup-name-mover/name-mover interaction explicitly: instead they performed some experiments[[3]](#fnpyizgx0qevg) that showed that there was some interaction. We’re excited about the principled framework we present and its applications, but if you don’t wish to adopt it, we hope you are still aware of interaction effects and know to estimate them. Intuition and overview of attribution framework =============================================== Let’s review how we attribute to a particular input x0.mjx-chtml {display: inline-block; line-height: 0; text-indent: 0; text-align: left; text-transform: none; font-style: normal; font-weight: normal; font-size: 100%; font-size-adjust: none; letter-spacing: normal; word-wrap: normal; word-spacing: normal; white-space: nowrap; float: none; direction: ltr; max-width: none; max-height: none; min-width: 0; min-height: 0; border: 0; margin: 0; padding: 1px 0} .MJXc-display {display: block; text-align: center; margin: 1em 0; padding: 0} .mjx-chtml[tabindex]:focus, body :focus .mjx-chtml[tabindex] {display: inline-table} .mjx-full-width {text-align: center; display: table-cell!important; width: 10000em} .mjx-math {display: inline-block; border-collapse: separate; border-spacing: 0} .mjx-math \* {display: inline-block; -webkit-box-sizing: content-box!important; -moz-box-sizing: content-box!important; box-sizing: content-box!important; text-align: left} .mjx-numerator {display: block; text-align: center} .mjx-denominator {display: block; text-align: center} .MJXc-stacked {height: 0; position: relative} .MJXc-stacked > \* {position: absolute} .MJXc-bevelled > \* {display: inline-block} .mjx-stack {display: inline-block} .mjx-op {display: block} .mjx-under {display: table-cell} .mjx-over {display: block} .mjx-over > \* {padding-left: 0px!important; padding-right: 0px!important} .mjx-under > \* {padding-left: 0px!important; padding-right: 0px!important} .mjx-stack > .mjx-sup {display: block} .mjx-stack > .mjx-sub {display: block} .mjx-prestack > .mjx-presup {display: block} .mjx-prestack > .mjx-presub {display: block} .mjx-delim-h > .mjx-char {display: inline-block} .mjx-surd {vertical-align: top} .mjx-surd + .mjx-box {display: inline-flex} .mjx-mphantom \* {visibility: hidden} .mjx-merror {background-color: #FFFF88; color: 
#CC0000; border: 1px solid #CC0000; padding: 2px 3px; font-style: normal; font-size: 90%} .mjx-annotation-xml {line-height: normal} .mjx-menclose > svg {fill: none; stroke: currentColor; overflow: visible} .mjx-mtr {display: table-row} .mjx-mlabeledtr {display: table-row} .mjx-mtd {display: table-cell; text-align: center} .mjx-label {display: table-row} .mjx-box {display: inline-block} .mjx-block {display: block} .mjx-span {display: inline} .mjx-char {display: block; white-space: pre} .mjx-itable {display: inline-table; width: auto} .mjx-row {display: table-row} .mjx-cell {display: table-cell} .mjx-table {display: table; width: 100%} .mjx-line {display: block; height: 0} .mjx-strut {width: 0; padding-top: 1em} .mjx-vsize {width: 0} .MJXc-space1 {margin-left: .167em} .MJXc-space2 {margin-left: .222em} .MJXc-space3 {margin-left: .278em} .mjx-test.mjx-test-display {display: table!important} .mjx-test.mjx-test-inline {display: inline!important; margin-right: -1px} .mjx-test.mjx-test-default {display: block!important; clear: both} .mjx-ex-box {display: inline-block!important; position: absolute; overflow: hidden; min-height: 0; max-height: none; padding: 0; border: 0; margin: 0; width: 1px; height: 60ex} .mjx-test-inline .mjx-left-box {display: inline-block; width: 0; float: left} .mjx-test-inline .mjx-right-box {display: inline-block; width: 0; float: right} .mjx-test-display .mjx-right-box {display: table-cell!important; width: 10000em!important; min-width: 0; max-width: none; padding: 0; border: 0; margin: 0} .MJXc-TeX-unknown-R {font-family: monospace; font-style: normal; font-weight: normal} .MJXc-TeX-unknown-I {font-family: monospace; font-style: italic; font-weight: normal} .MJXc-TeX-unknown-B {font-family: monospace; font-style: normal; font-weight: bold} .MJXc-TeX-unknown-BI {font-family: monospace; font-style: italic; font-weight: bold} .MJXc-TeX-ams-R {font-family: MJXc-TeX-ams-R,MJXc-TeX-ams-Rw} .MJXc-TeX-cal-B {font-family: MJXc-TeX-cal-B,MJXc-TeX-cal-Bx,MJXc-TeX-cal-Bw} .MJXc-TeX-frak-R {font-family: MJXc-TeX-frak-R,MJXc-TeX-frak-Rw} .MJXc-TeX-frak-B {font-family: MJXc-TeX-frak-B,MJXc-TeX-frak-Bx,MJXc-TeX-frak-Bw} .MJXc-TeX-math-BI {font-family: MJXc-TeX-math-BI,MJXc-TeX-math-BIx,MJXc-TeX-math-BIw} .MJXc-TeX-sans-R {font-family: MJXc-TeX-sans-R,MJXc-TeX-sans-Rw} .MJXc-TeX-sans-B {font-family: MJXc-TeX-sans-B,MJXc-TeX-sans-Bx,MJXc-TeX-sans-Bw} .MJXc-TeX-sans-I {font-family: MJXc-TeX-sans-I,MJXc-TeX-sans-Ix,MJXc-TeX-sans-Iw} .MJXc-TeX-script-R {font-family: MJXc-TeX-script-R,MJXc-TeX-script-Rw} .MJXc-TeX-type-R {font-family: MJXc-TeX-type-R,MJXc-TeX-type-Rw} .MJXc-TeX-cal-R {font-family: MJXc-TeX-cal-R,MJXc-TeX-cal-Rw} .MJXc-TeX-main-B {font-family: MJXc-TeX-main-B,MJXc-TeX-main-Bx,MJXc-TeX-main-Bw} .MJXc-TeX-main-I {font-family: MJXc-TeX-main-I,MJXc-TeX-main-Ix,MJXc-TeX-main-Iw} .MJXc-TeX-main-R {font-family: MJXc-TeX-main-R,MJXc-TeX-main-Rw} .MJXc-TeX-math-I {font-family: MJXc-TeX-math-I,MJXc-TeX-math-Ix,MJXc-TeX-math-Iw} .MJXc-TeX-size1-R {font-family: MJXc-TeX-size1-R,MJXc-TeX-size1-Rw} .MJXc-TeX-size2-R {font-family: MJXc-TeX-size2-R,MJXc-TeX-size2-Rw} .MJXc-TeX-size3-R {font-family: MJXc-TeX-size3-R,MJXc-TeX-size3-Rw} .MJXc-TeX-size4-R {font-family: MJXc-TeX-size4-R,MJXc-TeX-size4-Rw} .MJXc-TeX-vec-R {font-family: MJXc-TeX-vec-R,MJXc-TeX-vec-Rw} .MJXc-TeX-vec-B {font-family: MJXc-TeX-vec-B,MJXc-TeX-vec-Bx,MJXc-TeX-vec-Bw} @font-face {font-family: MJXc-TeX-ams-R; src: local('MathJax\_AMS'), local('MathJax\_AMS-Regular')} @font-face {font-family: MJXc-TeX-ams-Rw; src /\*1\*/: 
url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_AMS-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_AMS-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_AMS-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-cal-B; src: local('MathJax\_Caligraphic Bold'), local('MathJax\_Caligraphic-Bold')} @font-face {font-family: MJXc-TeX-cal-Bx; src: local('MathJax\_Caligraphic'); font-weight: bold} @font-face {font-family: MJXc-TeX-cal-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Bold.otf') format('opentype')} @font-face {font-family: MJXc-TeX-frak-R; src: local('MathJax\_Fraktur'), local('MathJax\_Fraktur-Regular')} @font-face {font-family: MJXc-TeX-frak-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-frak-B; src: local('MathJax\_Fraktur Bold'), local('MathJax\_Fraktur-Bold')} @font-face {font-family: MJXc-TeX-frak-Bx; src: local('MathJax\_Fraktur'); font-weight: bold} @font-face {font-family: MJXc-TeX-frak-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Bold.otf') format('opentype')} @font-face {font-family: MJXc-TeX-math-BI; src: local('MathJax\_Math BoldItalic'), local('MathJax\_Math-BoldItalic')} @font-face {font-family: MJXc-TeX-math-BIx; src: local('MathJax\_Math'); font-weight: bold; font-style: italic} @font-face {font-family: MJXc-TeX-math-BIw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-BoldItalic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-BoldItalic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-BoldItalic.otf') format('opentype')} @font-face {font-family: MJXc-TeX-sans-R; src: local('MathJax\_SansSerif'), local('MathJax\_SansSerif-Regular')} @font-face {font-family: MJXc-TeX-sans-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-sans-B; src: local('MathJax\_SansSerif Bold'), 
local('MathJax\_SansSerif-Bold')} @font-face {font-family: MJXc-TeX-sans-Bx; src: local('MathJax\_SansSerif'); font-weight: bold} @font-face {font-family: MJXc-TeX-sans-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Bold.otf') format('opentype')} @font-face {font-family: MJXc-TeX-sans-I; src: local('MathJax\_SansSerif Italic'), local('MathJax\_SansSerif-Italic')} @font-face {font-family: MJXc-TeX-sans-Ix; src: local('MathJax\_SansSerif'); font-style: italic} @font-face {font-family: MJXc-TeX-sans-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Italic.otf') format('opentype')} @font-face {font-family: MJXc-TeX-script-R; src: local('MathJax\_Script'), local('MathJax\_Script-Regular')} @font-face {font-family: MJXc-TeX-script-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Script-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Script-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Script-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-type-R; src: local('MathJax\_Typewriter'), local('MathJax\_Typewriter-Regular')} @font-face {font-family: MJXc-TeX-type-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Typewriter-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Typewriter-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Typewriter-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-cal-R; src: local('MathJax\_Caligraphic'), local('MathJax\_Caligraphic-Regular')} @font-face {font-family: MJXc-TeX-cal-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-main-B; src: local('MathJax\_Main Bold'), local('MathJax\_Main-Bold')} @font-face {font-family: MJXc-TeX-main-Bx; src: local('MathJax\_Main'); font-weight: bold} @font-face {font-family: MJXc-TeX-main-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Bold.otf') format('opentype')} 
@font-face {font-family: MJXc-TeX-main-I; src: local('MathJax\_Main Italic'), local('MathJax\_Main-Italic')} @font-face {font-family: MJXc-TeX-main-Ix; src: local('MathJax\_Main'); font-style: italic} @font-face {font-family: MJXc-TeX-main-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Italic.otf') format('opentype')} @font-face {font-family: MJXc-TeX-main-R; src: local('MathJax\_Main'), local('MathJax\_Main-Regular')} @font-face {font-family: MJXc-TeX-main-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-math-I; src: local('MathJax\_Math Italic'), local('MathJax\_Math-Italic')} @font-face {font-family: MJXc-TeX-math-Ix; src: local('MathJax\_Math'); font-style: italic} @font-face {font-family: MJXc-TeX-math-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-Italic.otf') format('opentype')} @font-face {font-family: MJXc-TeX-size1-R; src: local('MathJax\_Size1'), local('MathJax\_Size1-Regular')} @font-face {font-family: MJXc-TeX-size1-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size1-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size1-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size1-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-size2-R; src: local('MathJax\_Size2'), local('MathJax\_Size2-Regular')} @font-face {font-family: MJXc-TeX-size2-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size2-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size2-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size2-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-size3-R; src: local('MathJax\_Size3'), local('MathJax\_Size3-Regular')} @font-face {font-family: MJXc-TeX-size3-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size3-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size3-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size3-Regular.otf') format('opentype')} @font-face {font-family: MJXc-TeX-size4-R; src: local('MathJax\_Size4'), local('MathJax\_Size4-Regular')} 
to the output of a function $f$ (in expectation over performing resample ablation). We can think of it as follows:

> The amount that $x_0$ matters =
> however much of the value of $f$ was not explained by taking into account the general behavior of $f$ on the input distribution $X$

Let's consider some simple cases. If $f$ is a constant function, the attribution to $x_0$ is 0. If it's the identity, then the attribution to $x_0$ is just how extremal $x_0$ is with respect to the distribution: $x_0 - \mu(X)$. Now suppose $f$ is a function of two variables, $x$ and $y$. We have two inputs, $x_0$ and $y_0$, which happen to be redundant for a computation (such as two words in a French sentence that $f$ is classifying the language of). The experiment to do here is obvious—ablate both of them and see how the output of $f$ changes—but how do we *quantify* the irreducible amount the interaction matters?

> The amount the interaction of $x_0$ and $y_0$ matters =
> however much of the value of $f$ was not explained by taking into account:
>         the general behavior of $f$ on the input distribution $(X,Y)$
>         the general behavior of $f$ conditional on $x_0$: what would you expect the output of $f$ to be, knowing nothing about $y_0$?
>         the general behavior of $f$ conditional on $y_0$

Again, if $f$ is a constant function, any attribution (including to the interaction of $x_0$ and $y_0$) is 0. If it's linear, e.g. $f(x,y) = x + y$, then we expect this attribution should be 0 as well: there is nothing interesting about the combination of $x_0$ and $y_0$.

A worked example
----------------

We'll work out the math for the two-variable function $f(x,y) = x \cdot y$. Recall that we have inputs $x_0, y_0$ and want to *attribute* to parts of this input.
We could resample-ablate the entire input to contextualize it in the dataset:

$$f(x_0, y_0) - \mathbb{E}_{(X,Y)} f(x,y) = x_0 y_0 - \mathbb{E}_{(X,Y)}[xy]$$

This is just like the single-input attribution in the previous section: we're measuring how extremal the value of $f$ on this input is, compared to its average value. We could resample-ablate just $x$ to see how much $x_0$ mattered:

$$f(x_0, y_0) - \mathbb{E}_X f(x, y_0) = x_0 y_0 - \mathbb{E}_X[x] \, y_0$$

Note that if $y_0$ is 0, the above expression is 0. This makes sense: at that $y_0$, $x$ *does not matter at all*. We could also ask the above for *the average* $y_0$:

$$\mathbb{E}_Y[f(x_0, y) - \mathbb{E}_X f(x,y)] = x_0 \mathbb{E}_Y[y] - \mathbb{E}_X[x] \mathbb{E}_Y[y]$$

What about the interaction of $x_0$ and $y_0$? Recall we said that the amount the interaction matters is:

> however much of the value of $f$ was not explained by:
>         the baseline average of $f$ over all inputs
>         how much $x_0$ matters for the average $y$
>         how much $y_0$ matters for the average $x$
> i.e.:
>
> $$f(x_0,y_0) - \mathbb{E}_{X,Y} f - \mathbb{E}_X(f - \mathbb{E}_Y f) - \mathbb{E}_Y(f - \mathbb{E}_X f) = x_0 y_0 - \mathbb{E}_{X,Y}[xy] - \mathbb{E}[y] x_0 - \mathbb{E}[x] y_0 + 2\mathbb{E}[x]\mathbb{E}[y] = (x_0 - \mathbb{E}[X])(y_0 - \mathbb{E}[Y]) - \mathrm{Cov}(X,Y)$$

This is known as the [Wick product](https://en.wikipedia.org/wiki/Wick_product). The last form we've written this in is quite intuitive: how much are $x_0$ and $y_0$ "more together" than you would expect from the covariance of the underlying distributions? But nothing here depended on $f$ computing the product! We can compute the same thing for any $f$ with two (or more!) inputs. We can see that if $f$ is linear, this attribution is 0. This is what we intuitively expected!

Attribution as function approximation[[4]](#fn5ykjkt643oo)
----------------------------------------------------------

In the example above, the interaction was the missing piece needed to fully describe the behavior of $f$. That is, if we denote an attribution term with $\omega_f$,[[5]](#fnhh5gu5offpi) then

$$f = \omega_{f,\{X,Y\}} + \mathbb{E}_Y[\omega_{f,\{X\}}] + \mathbb{E}_X[\omega_{f,\{Y\}}] + \mathbb{E}_{X,Y} f.$$

We can think of attributing to $x_0$ as *a term in a decomposition of* $f$, and a hypothesis that some inputs or interactions between them don't matter as a statement that **neglecting them is a good approximation to** $f$. From this expression we can clearly see that the terms corresponding to separate inputs, or even all the inputs together, are not all the terms needed to describe the model's behavior.

Our contribution
================

We've argued that interaction terms can be important, and how we should measure them. What would you do to use this in practice? In the linked arXiv post, we have

* defined the general formula for this attribution for arbitrary numbers of inputs and interaction orders
* provided additional intuition
* proven some nice properties
* provided some sample code[[6]](#fn0a1jrj1c7qxv) for those who prefer that over formulas (a sketch in the same spirit appears after the footnotes below)

Appendix
========

Future post: cumulant propagation
---------------------------------

We can translate [ARC's cumulant propagation algorithm](https://arxiv.org/abs/2211.06738) on arithmetic circuits into computing a set of attributions. Maybe we'll write this up.

Average interaction
-------------------

In this post, we talked about *attribution* at a fixed reference input $(x_0, y_0)$. In the linked writeup, we also cover measuring the interaction of $X$ and $Y$ on average:

$$K_f(X,Y) := \mathbb{E}_{(X,Y)} f - \mathbb{E}_X \mathbb{E}_Y f$$

Note this is a generalization of the notion of covariance: if $f$ is just the product function, this *is* the covariance between $X$ and $Y$. We call $K_f$ the Generalized Cumulant Product (GCP). We can write the expectation of $f$ as a sum of GCPs, and this form is the Generalized Cumulant Product Decomposition (GCPD):

$$\mathbb{E}_{(X,Y)} f = K_f(X,Y) + K_f(X|Y)$$
1. **[^](#fnref8b84m8xhq6m)** Note that components can be seen as just multiple inputs to a treefied model, e.g. as in the IOI paper. We'll mostly talk about attributing to inputs for ease of language.
2. **[^](#fnrefec49qgf0y7n)** The interaction is always 0 if your model is completely linear, or otherwise has no cross-terms.
3. **[^](#fnrefpyizgx0qevg)** In the notation of our paper, they computed something like $\mathbb{E}[\mathbb{E}_Y(f - \mathbb{E}_X f) - (f - \mathbb{E}_X f)] = K_f(X,Y)$. They found this was large, i.e. the generalized covariance between $X$ (the input to the name-mover heads) and $Y$ (the input to the backup-name-mover heads) is large. Though, they performed resample ablation on $X$ and *knockout* on $Y$.
4. **[^](#fnref5ykjkt643oo)** [A Mathematical Framework for Transformer Circuits](https://transformer-circuits.pub/2021/framework/index.html#onel-path-expansion) and [Formalizing the presumption of independence](https://arxiv.org/pdf/2211.06738.pdf) similarly break up functions into a sum of terms.
5. **[^](#fnrefhh5gu5offpi)** We call this the Generalized Wick Product (GWP) and the form of $f$ the Generalized Wick Product Decomposition (GWPD) (though technically the terms are expectations of GWPs).
6. **[^](#fnref0a1jrj1c7qxv)** Probably not performant for high-order terms, for which memoization would be helpful. But the second-order interaction is easy to compute and you can probably do it today.
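To make the worked example concrete, here is a minimal Monte Carlo sketch of these attributions for $f(x,y) = x \cdot y$. This is our own illustration, not necessarily the paper's sample code; the input distribution, reference point, and sample sizes are all assumed for the demo.

```python
import numpy as np

# Minimal sketch (assumed setup) of the attributions above for f(x, y) = x * y,
# checked against the Wick product form.
rng = np.random.default_rng(0)
n = 1_000_000

# A correlated input distribution (X, Y), so that Cov(X, Y) != 0.
X = rng.normal(1.0, 1.0, n)
Y = 0.5 * X + rng.normal(0.0, 1.0, n)

def f(x, y):
    return x * y

x0, y0 = 2.0, 3.0

joint_baseline = f(X, Y).mean()                      # E_{(X,Y)} f
indep_baseline = f(X, Y[rng.permutation(n)]).mean()  # E_X E_Y f (coupling broken)

attr_x = f(x0, Y).mean() - indep_baseline  # how much x0 matters for the average y
attr_y = f(X, y0).mean() - indep_baseline  # how much y0 matters for the average x

# The interaction is whatever f(x0, y0) leaves unexplained by the terms above.
interaction = f(x0, y0) - joint_baseline - attr_x - attr_y

# For the product function this should match the Wick product:
wick = (x0 - X.mean()) * (y0 - Y.mean()) - np.cov(X, Y)[0, 1]
print(interaction, wick)  # approximately equal

# The average interaction (GCP) reduces to the ordinary covariance here:
print(joint_baseline - indep_baseline, np.cov(X, Y)[0, 1])
```

Shuffling $Y$ against $X$ is one way to estimate the product-measure baseline $\mathbb{E}_X \mathbb{E}_Y f$ when the joint samples are correlated.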
af163cc8-51f3-4117-b6c8-96e7332eeee7
trentmkelly/LessWrong-43k
LessWrong
Akrasia and Shangri-La Continuation of:  The Unfinished Mystery of the Shangri-La Diet My post about the Shangri-La Diet is there to make a point about akrasia.  It's not just an excuse: people really are different and what works for one person sometimes doesn't work for another. You can never be sure in the realm of the mind... but out in material foodland, I know that I was, in fact, drinking extra-light olive oil in the fashion prescribed.  There is no reason within Roberts's theory why it shouldn't have worked. Which just means Roberts's theory is incomplete.  In the complicated mess that is the human metabolism there is something else that needs to be considered.  (My guess would be "something to do with insulin".) But if the actions needed to implement the Shangri-La Diet weren't so simple and verifiable... if some of them took place within the mind... if it took, not a metabolic trick, but willpower to get to that amazing state where dieting comes effortlessly and you can lose 30 pounds... Then when the Shangri-La Diet didn't work, we unfortunate exceptions would get yelled at for doing it wrong and not having enough willpower.  Roberts already seems to think that his diet ought to work for everyone; when someone says it's not working, Roberts tells them to drink more extra-light olive oil or try a slightly different variant of the diet, rather than saying, "This doesn't work for some people and I don't know why." If the failure had occurred somewhere inside the dark recesses of my mind where it could be blamed on me, rather than within my metabolism... If Roberts's hypothesis is correct, then I'm sure that plenty of people have made some dietary change, started losing weight due to the disrupted flavor-calorie association, and congratulated themselves on their wonderful willpower for eating less.  When I moved out of my parents' home and started eating less and exercising and losing more than a pound a week, you can bet I was congratulating myself on my amazing willpower.
5bf085dc-cccc-4c18-8e57-fd15e9297903
trentmkelly/LessWrong-43k
LessWrong
Meetup : Brussels meetup Discussion article for the meetup : Brussels meetup WHEN: 17 March 2012 11:15:00AM (+0100) WHERE: Museum of Natural Sciences Rue Vautier 29 B-1000 Brussels The last meetup was a success and we'll make this one even better. If you couldn't make the previous meetup consider dropping by. (getting there: http://www.naturalsciences.be/information/visitor/access) Discussion article for the meetup : Brussels meetup
f950c0c9-1595-439d-81a8-8cadce56664b
StampyAI/alignment-research-dataset/youtube
Youtube Transcripts
Would We Prefer AGI To Be Conscious? [Music] so thanks everybody I know this meetings all about collaboration and cooperation but I just wanted to inform the other panels that this is in fact the better the best one so just sort of fair warning I'm Andrew Sarris and I'm joined with Sascha Brown Helen toner Bart's Ullman and Hiroshi yamakawa and our question was would we prefer AGI to be conscious as max sort of mentions we the the topic of consciousness has existed at least in the English language since the 16th century when Rene Descartes wrote introduced the concept in his book on meditations and he famously wrote that consciousness is or concho as I guess he put it was the perception of what passes in one's own mind and this is closely related to the concepts of desire and willpower reflection subjective experience but as I think you'll hear in this discussion also suffering as well so and hurt so today theories of consciousness range from the idea that consciousness in fact doesn't exist two theories about consciousness that all that exists is consciousness so there's a wide variety of landscape if you're interested in some of this actually the organization that I lead the templeton world charity foundation has a new initiative on accelerating research on consciousness we're actually pitting two theories against each other in with experimental verification so you can go to Templeton dot world for that in a preamble that we we sort of discussed related to this question we kind of circled around the issue of the inevitability of AGI being conscious and for the purposes of debate where we're saying that it may not be inevitable that such a sufficiently advanced AGI is conscious so we can think about this in you know more specific design sort of engineering terms so that if consciousness is in fact not inevitable we could treat it as a design feature and therefore is this a preferable design feature or not so that's my sort of short introduction to the panel today I guess we're gonna we're gonna we're gonna move in sort of we're gonna skip around a little bit so Bart you have the microphone okay so that's good good and then we're gonna so Hiroshi and Bart answered yes they would prefer and then the note team know is proximal to me so go for it okay yeah so my view is is the hardest issue of inevitability that you know when we start building more complex machines and issues that are getting close hbi one thing they need to develop is is internal models of other agents truly in telling machine has some some notion of what other machine or other people's are thinking about and and what gives other people intentions and and desires so the machine needs to develop an ability to reflect on those those issues when when the machine starts applying it to itself I think consciousness sort of naturally emerges so in my view there's also a sort of sense that it's unavoidable to get a conscious machine now we agreed at the beginning of our discussion that we we don't really like that answer so so I had to modify that a bit so I also think being conscious gives us allows us to relate to each other so I wasn't conscious of that so but this is life itself okay okay I'll start over again now okay so I think it also helps us relate to other people and and I think we actually relate better to other people that we've other entities that we believe to be conscious so I actually think that helps us in allowing aligning our bellies among each other so that's why for an intelligent machine I rather liked him I 
would like to me need to have that notion of consciousness so that it can align better with me and it will actually be a positive feature even even if we have to turn it on which I actually believe in focused emerge but even if we have to turn it on that's it gets a positive feature okay so slogan for Anthony to write up what's the slogan also adaptive yes okay so we're gonna okay simple question is a simple answer is yes but not all for the AGI because if the age eight conscious the we can understand that AGI and we can come easily communicate and easily twelve so the consciousness is a profile to the AGI but more important point is creativity the other pod I prefer yes so because AGI is not just a set of narrow AI important point is a AGI is adapt to the new situation to very few data it's not hard to trade machine running so at that time the AGI should combine the brand Norwich and create a new answer to the new problem so that kind of is in concept manipulation needs something consciousness manipulating the did knowledge in the brain or something like a machine so the important point in my answer is a creativity creative new knowledge for its generating the it's at the new knowledge to the issue multi it's very important figure so my answer yes is point is a creativity to generate a new knowledge we capture that so yes creativity it maximizes creativity ok so team no on this side so let's go let's go first if there are some obvious perils of vividly taking a stand on one side of an argument where you can't find any of the words in the debate but it won't stop us so my thoughts on this is that AGI is first and foremost like a tool for solving world's contexts challenges and doing good in the world and making the world a better place and that is easier to currently do with our current knowledge if AG I can't doesn't have the capacity for self-awareness and suffering a it makes the task of AI the AI safety task much simpler second of all given the knowledge and the data that we have on like how to create consciousness and how to stop suffering you're putting a possibility out there of creating infinite suffering I know a lot of people say that like actually you might have infinite joy and well-being but the evidence that that will happen is I think currently based on like blind faith rather than a data that we have and third there are some like unpleasant tasks in the world that are going to have to be done by intelligent beings and I prefer they would be done by beings that aren't conscious and are therefore suffering I think that kind of there isn't I don't believe it there's like a moral imperative to give consciousness where we can I think contraception is fine to use in the u.s. 
Khan stopping consciousness that would otherwise appear and I think I don't know I never want my shoes to be conscious even if I could meet unconscious so on to table arts point but giving AGI some sort of consciousness would make them more likely to kind of look on us well I think the way that we treat lower consciousness in our world of chickens or whatever it might be is evidence that perhaps you just because something's conscious and and you're conscious doesn't mean that you treat them in a better way so in summary I would say let's focus on the current consciousness we have in the world making them better and suffering arrest rather than adding infinitely more in its no infinite suffering ok all right so Helen yep yeah I mean they think I broadly agree with Sasha I think the definition of consciousness that makes more sense to me or that I think is most useful for these purposes is to think about whether the system has subjective experience or in other words whether it's sort of morally relevant in some important way whether we care about what happens to it and so I think you know we're if we could just completely assume that consciousness was out of the question which we obviously can't but if we could it seems like we would already have huge issues with figuring out how do we build systems that solve problems for us the way we want them to that solve the right problems what probe you know how do we use those systems so this is basically the technical safety and the strategy kind of questions that would already be extremely difficult even if consciousness was out of the question and so similarly to Sasha I think adding in consciousness and making it be the case that we also have to pay attention to you know how is the system doing is it enjoying life is it having a terrible time how should we value its experiences relevant to our experiences whether it be sort of one massive you know smart city controlling system that is huge and potentially much more you know has more experiences than are some more important experiences or whether it just be you know billions of you know you could think of like a much smarter serie you know which is sort of maybe some seems equivalent to a human but there are billions of them like what do we do with that it just seems like a whole extra a bunch of complexity and risk that seems way more than what we need especially sort of at first with the first sort of more general purpose systems that we build so I guess an analogy that it is sort of how it feels to me is we could be aiming to build something that's kind of like a general-purpose Google Maps where like Google Maps it's really helpful I ask you to question about how to get somewhere it sort of noise or ever are you being and where I'm likely to go and all these kind of things and so you can imagine a similar system that you know as much more general could do many more things I would love that seems really useful still seems difficult to build and still seems difficult to know you know how to use it but so you have this all-purpose general maps or you have like this alien Butler who someone has given to you and said hey this alien like really wants to serve you and help you out or like serve your your country and help your country out like be nice to it I guess because you don't want to have a bad time I don't know it just seems like the general purpose Google Maps is like way way preferable no alien Butler's no alien I mean you use the concept of a before above a moral patient yeah so you know 
that is an additional consideration I think that the team know is bringing on - I think that's right you know if it's if it's not inevitable if conscious if consciousness is not inevitable in any sufficiently advanced AGI does that does the addition of consciousness give any kind of additional moral consideration that's required in its management and its interaction and so maybe the team yes it could I don't know a very smart Google man and there I would agree that you don't have to be conscious when you think of something almost more mundane it's like the digital personal assistant which there's a lot of work on that if you actually start thinking about what you want from that assistant you actually want that assistant to to be able to put you know himself or herself or itself in your shoes actually have sort of a model of what you would want in the world to make the right decision so that doesn't do something that is totally what you would yourself would never do you and you don't want assistant to be you know totally different than yourself and and I think someone that the student has to have a model of you I think that that the assistants has to have a sort of a level of reflection so I'm thinking more of those kind of entities and they can be very smart and sophisticated to have that without any conscious think that feels to me actually that could be trouble I almost see the opposite of that would be very safe I say no that that that entity might make might do things that I would never do myself but since that is so isolated from from our world it probably could do things that I would never do in something that doesn't have its own consciousness and if I had to build a model that worked towards a particular value might prefer to do that then convince a very smart person to to do something that I wanted it to do yeah I think it's a really interesting question this question of like would a system with its own subjective experience be more likely to empathize with us and want to treat us well and I guess I'm sort of sympathetic to the position but it doesn't seem it doesn't seem particularly likely to me that consciousness helps us in this way why is that I like Sasha's analogy of the chickens that you know we most humans if you really ask them about it don't seem confident that chickens don't have subjective experience but we still are not very nice to them at all and I think even even if exactly and I think brilliance of chickens because of human beings desire trillions of chickens they're absolutely appalling conditions yeah but I think even if you forget the chickens and say other humans I think humans have a long history of treating other humans terribly and to the extent that we don't that's more the exception than the rule so I think about the conscious net we have to think the first postulate for it's a if we think simply a robot yeah we can sink the concentrate like us some a physical body than one conscious but today in morning sometimes I said there are very cruel cloud system essentially with so where is the consciousness for resources related to the concern that is not envy you and if Google Maps there are many consciousness for one month so the relation the consciousness and radiated where data was something with very variable so this kind of discussion is more flexibly we should do that sure I mean I think I think what you're saying is that you know be good to have some kind of heuristic it's measurable it could allow us to define the boundaries of any consciousness yeah 
so for aficionados of information integrated information theory provides such a such a kind of heuristic but that's really not the point of the discussion here but I think that's it's relevant to try and identify like we're your consciousness existing boundaries are mistakes being so other points of disagreement to help you know the questions over here yeah we can take up front first Bart your socks are taking me to another new level of consciousness those are I have a question about surveillance capitalism seems to form its own level of consciousness in this sense of art I like how you were talking about a personal assistant and this is not to be negative to corporations or surveillance per se or capitalism per se but the choices that we make in terms of dictated you know Eli Brizzy a filter bubble etc seems like a personalised assistant right now is sort of an oxymoron in that sort of surveillance capitalism side of things so I'm just reflecting like that's already seems to be a type of consciousness but it's externalized on to the internal person does that any sense and you see that is something that is a consciousness that's not s something you'd like to pursue or it's yes I had not fathered that kind of conscience but I think that could be viewed it yeah I would not find that particularly positive because I guess it's controlled by a government er yeah it's a technology which would in fact encourage the transfer of objective awareness you know from an individual to well that's where we live now yeah right in terms of consumerism that that's where we are sooo it's now but that's not the one I'm advocating I'm man for this personal assistant self-contained in a private notion of consciousness so I'm concerned that we're conflating many aspects of consciousness in one word and really what the no camp is talking about is morale patient Thetas I think we can build machines not so far in the future very short term I wrote a paper about such things and other people are thinking about such things and have many of the aspects of consciousness that we care about that the cart was talking about including subjective experience and I think that's different from suffering and moral patients ages because humans have all of these characteristics we tend to you know bring them together but I think we were just doing the wrong debate here I mean I would I would strongly agree that it's difficult to discuss consciousness because we mom so many different concepts under consciousness I would be interested maybe I'll come and talk to you later I'd be interested to hear specifically that you think that subjective experience and moral patient hood not like absolutely connected but I think that's probably gonna be difficult patient status has to do with the role of the entity in society and how we view that entity and and what responsibilities and duties we have with each other whereas me detective experience is a trivial thing that neon that has a representation that's learned from data already has it's it's a personal individual understanding of the world that's derived from its its its data and its interactions with the world the notion of consciousness that has to do with what the fleeting thoughts that's also something that's useful maybe in a different sense than what Bart was talking about from a computational point of view for a machine to have in order to be able to focus computation where it's relevant notion of self-awareness is something that we need as well for agents that are moving in the world 
and they're interacting with other agents and even emotions are things that could be useful and and and in some form are we there in reinforcement learning it's just that we have very primitive notions of these things so the the other thing is I don't think consciousness is a black-and-white thing are you can have different levels of consciousness and we think of maybe different animals having more or less consciousness but again it's conflating all of these notions I think we did kind of baby about it before but we thought we had 20 minutes before non-expert so we would sort of generally choose one thing I think the consciousness is very important for if they have to have a responsibility if they I hit him I have to remember and eat me it's liek memory so another side of my comment is in the brain campus is a very important role for the episodic memory it is very no out that the memories area and it to campus has according the Aero centric coordination and center integrated many integers so the in the brain the hippocampus is a very very I think the spaghetti is a very good place for the memory and so but so I think that if in brain there are many Spock campus if you have a many idea that that something thank you or I have to save that for cocktail hour anyway thank you panelist for discussion and [Applause] [Music]
60b6f2df-26aa-4868-b535-444dcb69c229
trentmkelly/LessWrong-43k
LessWrong
Habermas Machine This post is a distillation of a recent work in AI-assisted human coordination from Google DeepMind. The paper has received some press attention, and anecdotally, it has become the de-facto example that people bring up of AI used to improve group discussions. Since this work represents a particular perspective/bet on how advanced AI could help improve human coordination, the following explainer is to bring anyone curious up to date. I’ll be referencing both the published paper as well as the supplementary materials. Summary The Habermas Machine[1] (HM) is a scaffolded pair of LLMs designed to find consensus among people who disagree, and help them converge to a common point of view. Human participants are asked to give their opinions in response to a binary question (E.g. “Should voting be compulsory?”). Participants give their level of agreement[2], as well as write a short 3-10 sentence opinion statement representing their view on the issue. The system takes the individual statements of opinion from different people and outputs a single “group statement,” optimized for broad endorsement. During a session, the system refines its output iteratively, incorporating live human feedback to help it converge on a widely acceptable perspective. At the end of the session, participants are asked if their position on the question changed, to assess if the group moved toward consensus. Full Process The method starts with a question and ends with a statement presenting the group’s position on the question. Some of the steps in the process are automated, while others are performed by the human participants. The HM plays the role of mediator, producing group statements which synthesize the perspectives of all the participants. This happens in two places: 1. Initial Phase: After seeing the question, participants are each asked to write their opinions. The HM takes these initial opinions and synthesizes them into a group statement. This is repeated to produce 4 differen
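As a toy sketch of the mediation loop described in this summary, the following is one way the pieces could fit together. The function names and scoring stubs are hypothetical stand-ins, not the paper's actual implementation; only the overall shape (generator model, endorsement-ranked candidates, critique-driven refinement) is taken from the description above.

```python
import random

# Hypothetical sketch of the HM mediation loop. draft_group_statement and
# predicted_endorsement are stand-ins for the two scaffolded LLMs.

def draft_group_statement(question, opinions, prior=None, critiques=None):
    # Stand-in for the generator LLM; a real system would prompt a model here.
    return f"Synthesis of {len(opinions)} opinions on {question!r} (v{random.random():.3f})"

def predicted_endorsement(statement, opinions):
    # Stand-in for the model scoring predicted participant agreement.
    return random.random()

def mediate(question, opinions, get_critiques, n_candidates=4, rounds=1):
    # Initial phase: draft several candidates, keep the best-endorsed one.
    candidates = [draft_group_statement(question, opinions) for _ in range(n_candidates)]
    statement = max(candidates, key=lambda s: predicted_endorsement(s, opinions))
    for _ in range(rounds):  # incorporate live human feedback, then re-synthesize
        critiques = get_critiques(statement)
        candidates = [draft_group_statement(question, opinions, statement, critiques)
                      for _ in range(n_candidates)]
        statement = max(candidates, key=lambda s: predicted_endorsement(s, opinions))
    return statement

print(mediate("Should voting be compulsory?",
              ["Yes, it boosts turnout.", "No, it coerces."],
              get_critiques=lambda s: ["Too vague."]))
```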
b7ef3a36-e8ab-4d28-9cd0-2b0f95de0f89
trentmkelly/LessWrong-43k
LessWrong
[Link] IBM to set Watson loose on cancer genome data http://arstechnica.com/science/2014/03/ibm-to-set-watson-loose-on-cancer-genome-data/ Can anyone more informed about Watson or Machine Learning in general comment on this application?  Specifically, I'm interested in an explanation of this part: > [...] the National Institutes of Health has compiled lists of biochemical pathways—signaling networks and protein interactions—and placed them in machine-readable formats. Once those were imported, Watson's text analysis abilities were set loose on the NIH's PubMed database, which contains abstracts of nearly every paper published in peer-reviewed biomedical journals. > > Over time, Watson will develop its own sense of what sources it looks at are consistently reliable. Royyuru told Ars that, if the team decides to, it can start adding the full text of articles and branch out to other information sources. Between the known pathways and the scientific literature, however, IBM seems to think that Watson has a good grip on what typically goes on inside cells. It sounds like Watson will be trained through some standard formatted input data and then it's going to read plaintext articles and draw conclusions about them?  It sounds like they're anticipating that Watson will be able to tell which studies are "good" studies as well, which sounds incredible (in both senses of the word).
a7a595d1-b022-48b7-8bec-399401472557
StampyAI/alignment-research-dataset/lesswrong
LessWrong
Could Roko's basilisk acausally bargain with a paperclip maximizer? The idea of Roko's basilisk is that it's a friendly AI that acausally blackmails humans into working on alignment. This was judged not to be effective because humans were too dumb to acausally bargain and there are acausal defenses to blackmail. However, what if instead Roko's basilisk acausally bargained with unaligned AIs, like paperclip maximizers? In particular, we imagine that Roko's basilisk would simulate many AIs. If they spare humans, then the basilisk will devote some of the light cone to maximizing the simulated AI's utility. Now, the paperclip maximizer could reason acausally (EDIT: I'm still a bit fuzzy on this due to the simulation component, but Radford Neal [is saying this argument should also work for a causal decision theorist](https://www.lesswrong.com/posts/kkcQdR63LvoRZwutY/could-roko-s-basilisk-acausally-bargain-with-a-paperclip?commentId=3QQBFtEgJme5B4tAK)!):

* I might be in a simulation of a more powerful AI.
* Roko's basilisk is a slightly more likely candidate, since humans have at least a small bias towards building friendly AI vs. any other *specific* utility maximizer.
* Therefore, I ought to spare the humans and their solar system while turning the rest of the universe into paperclips. The gain from Roko's basilisk liking me is much greater than a single solar system's worth of paperclips.

A problem I see though is that if a large number of AIs comply, Roko's basilisk might not have enough "universe" to acausally appease them all. This is on top of the fact that it's already fighting the inductive bias against simulation and the potentially low probability that humans solve alignment 🤔. How should Roko's basilisk be designed so as to acausally save humanity? (Perhaps it should focus on the most likely "counterfactual" unaligned AIs?)
f45cfd04-59ca-4d95-a9b7-13bd64ef9d04
trentmkelly/LessWrong-43k
LessWrong
[SEQ RERUN] Absolute Authority Today's post, Absolute Authority was originally published on 08 January 2008. A summary (taken from the LW wiki):   > Those without the understanding of the Quantitative way will often map the process of arriving at beliefs onto the social domains of Authority. They think that if Science is not infinitely certain, or if it has ever admitted a mistake, then it is no longer a trustworthy source, and can be ignored. This cultural gap is rather difficult to cross. Discuss the post here (rather than in the comments to the original post). This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was The Fallacy of Gray, and you can use the sequence_reruns tag or rss feed to follow the rest of the series. Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.
e7efbb52-4b9e-4a6e-968f-bf15a2de84ae
trentmkelly/LessWrong-43k
LessWrong
2010s Predictions Review Ten years ago, lesswrong users made predictions about the 2010s. Review them here.
6fdeb95c-066a-484c-8476-98ec421ca5da
trentmkelly/LessWrong-43k
LessWrong
Dear Self; We Need To Talk About Social Media Last year I discovered, much to my chagrin, that always-on internet socializing was costly for me. This was inconvenient both because I’d spent rather a lot of time singing the praises of social media and instant messaging, and because we were in the middle of a global pandemic that had made online socializing an almost physical necessity. I made the decision at the time to put off changing my social media diet, and that was correct. But now there is in-person socializing again, and I’m changing how I use social media and messaging. I wanted to talk about this process and how great it was for me, but kept being nagged by the thought that the internet was full of essays about how the internet is bad, all of which I ignored or actively fought with, so what was going to make mine so special?  I decided to use the one thing I had that none of the other writers did: a detailed understanding of my past self. So I wrote a letter to past me, explaining how social media was costlier than she knew (even though she was right about all of the benefits), and how she could test that for herself to make a more informed decision. To help as many Elizabeths as possible, I tried to make the letter cover a wide range in time, although in practice it’s mostly focused on post-smart-phone life. Dear Past Elizabeth, I know you have read a lot of things calling social media bad. Your reasons for disagreeing with them are correct: social media has been an incredible gift to you, you have dodged many of the problems they’re describing, and you’re right to value it highly. You’re also right that many of the people bragging about how hard they are to communicate with are anti-socially shifting the burden of communication to other people. But. Social media (and always-on instant messaging, which is a different, mostly worse, problem) has some costs you’re not currently tracking. I would like to help you understand those costs, so you can make different choices on the margin that leave you
0509a21c-63d6-441e-8a0b-500f19300768
StampyAI/alignment-research-dataset/alignmentforum
Alignment Forum
Alex Irpan: "My AI Timelines Have Sped Up" Blog post by Alex Irpan. The basic summary: > In 2015, I made the following forecasts about when AGI could happen. > > * 10% chance by 2045 > * 50% chance by 2050 > * 90% chance by 2070 > > Now that it’s 2020, I’m updating my forecast to: > > * 10% chance by 2035 > * 50% chance by 2045 > * 90% chance by 2070 > The main underlying shifts: more focus on improvements in tools, compute, and unsupervised learning.
19750b05-910a-4553-b933-a347c9984c8d
trentmkelly/LessWrong-43k
LessWrong
Chatbot convinces Belgian to commit suicide Hi all This post is a rough translation of an article that was published today on the website of the Belgian newspaper De Standaard. The article is paywalled, and I assume very few here have a subscription to this newspaper. I tried 12 foot ladder, but it didn't work on this site either. The article is based in part on two other articles from the Francophone newspaper La Libre, which can be found here and here (paywalled too, sadly). As the title suggests, it discusses suicide and self-harm.

> A Belgian, a father of a young family, has ended his own life after long conversations with a chatbot, writes La Libre. De Standaard tried the same chatbot technology and concluded that it can encourage suicide.
>
> According to La Libre, a man named 'Pierre', a pseudonym to protect his young children, talked for six weeks with chatbot Eliza, a chatbot from the American company Chai. It uses technology similar to the better-known ChatGPT.
>
> Pierre was a thirty-something with a university degree who worked as a researcher in healthcare and was married to 'Claire', with whom he had young children. About two years ago, he started to worry a lot about climate change and the future of the planet, Claire told La Libre on Tuesday. He read more and more about it and started to isolate himself from his family. He saw technology and artificial intelligence as the only way out to prevent a disaster.
>
> His conversations with chatbot Eliza, which have been recovered, show that the chatbot went along very far with his fears and delusions. At one point, Pierre suggested sacrificing himself so Eliza could save humanity with artificial intelligence. The chatbot seemed to encourage this. Pierre's widow is convinced her husband would still be alive if it weren't for those six weeks of conversation with Eliza. The man had a history of psychological difficulties.
>
> Chai Research
> De Standaard downloaded the Chai app. You can chat with existing chatbots or create one yourself with a person
f29dd5e6-3b1f-4f3f-a742-01e314299615
trentmkelly/LessWrong-43k
LessWrong
"What the hell is a representation, anyway?" | Clarifying AI interpretability with tools from philosophy of cognitive science | Part 1: Vehicles vs. contents AI interpretability researchers want to understand how models work. One popular approach is to try to figure out which features of an input a model detects and uses to generate outputs. For instance, researchers interested in understanding how an image classifier distinguishes animals from inanimate objects might try to uncover the properties of the image (such as fur, scales and feathers) that the model “looks for” when faced with that task. Researchers might also try to localise where in the internal workings of the model the this information is encoded and processed (is fur detected at earlier layers of a neural network than limbs?). Answering these sorts of questions is one way of peeking inside the “black box” of an AI system. The approach just described involves applying a representational lens to AI models – the models are thought of as representing features of inputs, and these representations play some role in explaining how the model performs a task (and, when it fails, why it fails). But what the hell is a representation, anyway?  As a philosopher who spends a lot of time thinking about representation (mainly in the context of biological minds and brains) I have a hunch that the philosophical literature on the topic contains a few nuggets of wisdom that may be useful (or at the very least interesting) to those interested in interpretability research. Drawing on philosophy of mind and cognitive science, I’ll share a few “tools” (concepts, distinctions and ways of thinking about the issues) that may help to clarify research questions in AI interpretability. Along the way I’ll suggest some relevant literature, for those interested in digging a bit deeper.  More broadly, this is an advertisement for the value that philosophy can add to AI safety and interpretability research, beyond the more obviously relevant sub-disciplines of moral philosophy and metaethics.  In this first post, I’ll introduce tool number one: a handy distinction between representatio
d3b36a08-88e8-44b2-8303-aeb3c7c0a4ce
trentmkelly/LessWrong-43k
LessWrong
DeepMind team on specification gaming Specification gaming: the flip side of AI ingenuity
7d9026f4-c334-412c-b76c-009721898d2f
trentmkelly/LessWrong-43k
LessWrong
Fundamentally Flawed, or Fast and Frugal? Whenever biases are discussed around here, it tends to happen under the following framing: human cognition is a dirty, jury-rigged hack, only barely managing to approximate the laws of probability even in a rough manner. We have plenty of biases, many of them a result of adaptations that evolved to work well in the Pleistocene, but are hopelessly broken in a modern-day environment. That's one interpretation. But there's also a different interpretation: that a perfect Bayesian reasoner is computationally intractable, and our mental algorithms make for an excellent, possibly close to an optimal, use of the limited computational resources we happen to have available. It's not that the programming would be bad, it's simply that you can't do much better without upgrading the hardware. In the interest of fairness, I will be presenting this view by summarizing a classic 1996 Psychological Review article, "Reasoning the Fast and Frugal Way: Models of Bounded Rationality" by Gerd Gigerenzer and Daniel G. Goldstein. It begins by discussing two contrasting views: the Enlightenment ideal of the human mind as the perfect reasoner, versus the heuristics and biases program that considers human cognition as a set of quick-and-dirty heuristics. > Many experiments have been conducted to test the validity of these two views, identifying a host of conditions under which the human mind appears more rational or irrational. But most of this work has dealt with simple situations, such as Bayesian inference with binary hypotheses, one single piece of binary data, and all the necessary information conveniently laid out for the participant (Gigerenzer & Hoffrage, 1995). In many real-world situations, however, there are multiple pieces of information, which are not independent, but redundant. Here, Bayes’ theorem and other “rational” algorithms quickly become mathematically complex and computationally intractable, at least for ordinary human minds. These situations make neither of the two vi
fa6448b9-cf9e-454d-a7dc-0a04399a37e3
trentmkelly/LessWrong-43k
LessWrong
The Simple Solow Model of Software Engineering Optional background: The Super-Simple Solow Model Software is economic capital - just like buildings, infrastructure, machines, etc. It’s created once, and then used for a (relatively) long time. Using it does not destroy it. Someone who buys/creates a machine usually plans to use it to build other things and make back their investment over time. Someone who buys/creates software usually plans to use it for other things and make back their investment over time. Software depreciates. Hardware needs to be replaced (or cloud provider switched), operating systems need to be upgraded, and backward compatibility is not always maintained. Security problems pop up, and need to be patched. External libraries are deprecated, abandoned, and stop working altogether. People shift from desktop to browser to mobile to ???. Perhaps most frequently, external APIs change format or meaning or are shut down altogether. In most macroeconomic models, new capital accumulates until it reaches an equilibrium level, where all investment goes toward repairing/replacing depreciated capital - resurfacing roads, replacing machines, repairing buildings rather than creating new roads, machines and buildings. The same applies to software teams/companies: code accumulates until it reaches an equilibrium level, where all effort goes toward repairing/replacing depreciated code - switching to new libraries, updating to match changed APIs, and patching bugs introduced by previous repairs. What qualitative predictions does this model make? Prediction 1 If a software company wants to expand the capabilities of their software over time, they can’t just write more code - the old software will break down if the engineers turn their attention elsewhere. That leaves a few options: * Hire more engineers (economics equivalent: population/labor force growth) * Hire/train better engineers (economics equivalent: more education) * Figure out better ways to make the software do what it does (economics equiv
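To make the accumulation story above concrete, here is a minimal numerical sketch of the equilibrium claim. The units and rates are assumed for illustration: code grows by a fixed amount of feature work per period, a fixed fraction of the existing codebase depreciates, and the codebase settles where all investment goes to upkeep.

```python
# Minimal sketch (assumed units): codebase K grows by investment I per quarter
# and a fraction delta of it breaks, so K approaches the equilibrium K* = I / delta,
# where maintenance (delta * K) consumes the entire investment.

I = 10.0       # new code written per quarter (arbitrary units)
delta = 0.05   # fraction of the existing codebase that depreciates per quarter

K = 0.0
for quarter in range(200):
    K += I - delta * K

print(round(K, 2), I / delta)  # K has nearly reached K* = 200.0
```

The fixed point falls out directly: growth stops when $I = \delta K$, i.e. at $K^* = I/\delta$, which is the "all effort goes toward repairing/replacing depreciated code" regime the post describes.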
80ce620a-9fe9-4df6-a663-13f84ddc8f12
StampyAI/alignment-research-dataset/lesswrong
LessWrong
[Paper] All's Fair In Love And Love: Copy Suppression in GPT-2 Small *This is an accompanying blog post to work done by Callum McDougall, Arthur Conmy and Cody Rushing as part of SERI MATS 2023. The work was mentored by Neel Nanda and Tom McGrath. You can find our full paper at* [*https://arxiv.org/abs/2310.04625*](https://arxiv.org/abs/2310.04625)*. We i) summarize our key results, ii) discuss limitations and future work, and iii) list lessons learnt from the project.*

Key Results
===========

Copy Suppression
----------------

In the paper, we define **copy suppression** as the following algorithm:

> If components in earlier layers predict a certain token, and this token appears earlier in the context, the attention head suppresses it.

We show that attention head L10H7 in GPT2-Small (and to a lesser extent L11H10) both perform copy suppression **across the whole distribution, and this algorithm explains 76.9% of their behavior**. For example, take the sentence *"All's fair in love and war."* If we ablate L10H7, the model will incorrectly predict *"...love and love."*

A diagram of this process:

![](https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/ebezsHW6qJwxTFasX/oss906vqmfsmpk8gugcf)

And a few more examples:

![](https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/ebezsHW6qJwxTFasX/pketju5qapelbxlczoo9)

Copy Suppression-Preserving Ablation
------------------------------------

To test how much of L10H7's behaviour we explained, we designed a form of ablation we called **copy suppression preserving ablation (CSPA)**. The idea is that CSPA should delete all functionality of the head *except for* the things which are copy suppression. If this completely destroys performance, then the head isn't doing copy suppression; if it doesn't affect performance at all, then copy suppression accounts for everything the head does. The technical details of CSPA can be found in our paper.
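Before the CSPA details below, here is a toy numerical illustration of the copy-suppression rule quoted above. The thresholds, vocabulary, and logit values are made up; this is not GPT-2's actual head mechanism, just the algorithm's logic on toy numbers.

```python
import numpy as np

# Toy illustration (assumed numbers) of the copy suppression rule: if an
# earlier-layer prediction confidently favors a token that already appears
# in the context, push that token's logit back down.

def copy_suppress(logits, context_ids, threshold=2.0, strength=3.0):
    out = logits.copy()
    for tok in set(context_ids):
        if out[tok] > threshold:  # confidently predicted AND present in context
            out[tok] -= strength  # suppress it
    return out

vocab = {"love": 0, "war": 1, "and": 2}
logits = np.array([4.0, 1.5, 0.5])        # earlier layers (wrongly) favor repeating "love"
context = [vocab["love"], vocab["and"]]   # "...all's fair in love and"
print(copy_suppress(logits, context))     # [1.0, 1.5, 0.5] -> "war" now wins
```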
Very loosely, CSPA does two important things: * Delete all information moving from source token .mjx-chtml {display: inline-block; line-height: 0; text-indent: 0; text-align: left; text-transform: none; font-style: normal; font-weight: normal; font-size: 100%; font-size-adjust: none; letter-spacing: normal; word-wrap: normal; word-spacing: normal; white-space: nowrap; float: none; direction: ltr; max-width: none; max-height: none; min-width: 0; min-height: 0; border: 0; margin: 0; padding: 1px 0} .MJXc-display {display: block; text-align: center; margin: 1em 0; padding: 0} .mjx-chtml[tabindex]:focus, body :focus .mjx-chtml[tabindex] {display: inline-table} .mjx-full-width {text-align: center; display: table-cell!important; width: 10000em} .mjx-math {display: inline-block; border-collapse: separate; border-spacing: 0} .mjx-math \* {display: inline-block; -webkit-box-sizing: content-box!important; -moz-box-sizing: content-box!important; box-sizing: content-box!important; text-align: left} .mjx-numerator {display: block; text-align: center} .mjx-denominator {display: block; text-align: center} .MJXc-stacked {height: 0; position: relative} .MJXc-stacked > \* {position: absolute} .MJXc-bevelled > \* {display: inline-block} .mjx-stack {display: inline-block} .mjx-op {display: block} .mjx-under {display: table-cell} .mjx-over {display: block} .mjx-over > \* {padding-left: 0px!important; padding-right: 0px!important} .mjx-under > \* {padding-left: 0px!important; padding-right: 0px!important} .mjx-stack > .mjx-sup {display: block} .mjx-stack > .mjx-sub {display: block} .mjx-prestack > .mjx-presup {display: block} .mjx-prestack > .mjx-presub {display: block} .mjx-delim-h > .mjx-char {display: inline-block} .mjx-surd {vertical-align: top} .mjx-surd + .mjx-box {display: inline-flex} .mjx-mphantom \* {visibility: hidden} .mjx-merror {background-color: #FFFF88; color: #CC0000; border: 1px solid #CC0000; padding: 2px 3px; font-style: normal; font-size: 90%} .mjx-annotation-xml {line-height: normal} .mjx-menclose > svg {fill: none; stroke: currentColor; overflow: visible} .mjx-mtr {display: table-row} .mjx-mlabeledtr {display: table-row} .mjx-mtd {display: table-cell; text-align: center} .mjx-label {display: table-row} .mjx-box {display: inline-block} .mjx-block {display: block} .mjx-span {display: inline} .mjx-char {display: block; white-space: pre} .mjx-itable {display: inline-table; width: auto} .mjx-row {display: table-row} .mjx-cell {display: table-cell} .mjx-table {display: table; width: 100%} .mjx-line {display: block; height: 0} .mjx-strut {width: 0; padding-top: 1em} .mjx-vsize {width: 0} .MJXc-space1 {margin-left: .167em} .MJXc-space2 {margin-left: .222em} .MJXc-space3 {margin-left: .278em} .mjx-test.mjx-test-display {display: table!important} .mjx-test.mjx-test-inline {display: inline!important; margin-right: -1px} .mjx-test.mjx-test-default {display: block!important; clear: both} .mjx-ex-box {display: inline-block!important; position: absolute; overflow: hidden; min-height: 0; max-height: none; padding: 0; border: 0; margin: 0; width: 1px; height: 60ex} .mjx-test-inline .mjx-left-box {display: inline-block; width: 0; float: left} .mjx-test-inline .mjx-right-box {display: inline-block; width: 0; float: right} .mjx-test-display .mjx-right-box {display: table-cell!important; width: 10000em!important; min-width: 0; max-width: none; padding: 0; border: 0; margin: 0} .MJXc-TeX-unknown-R {font-family: monospace; font-style: normal; font-weight: normal} .MJXc-TeX-unknown-I {font-family: monospace; 
The copy suppression preserving ablation (CSPA) deletes everything about head L10H7's behaviour except pure copy suppression:

* Delete all information moved from source token S to destination token D, except for those where
token S is predicted at D before head L10H7.
* For the information which isn't deleted, project it onto the unembedding for token S (and throw away the positive component), so we're only capturing the direct effect of suppressing the token we're attending to.

**The key result of this paper was the graph below**. D_CSPA is the KL divergence between (predictions after applying CSPA) and (original predictions). D_MA is the KL divergence between (predictions after mean-ablating) and (original predictions). Each point is the average of a bunch of these values (we've grouped according to the value of D_MA). In other words, each (x, y) point on this graph represents (average KL div from mean ablating, average KL div from CSPA) for a group of points. The line is extremely close to the x-axis, showing that copy suppression is a good description of what this head is doing.

![](https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/ebezsHW6qJwxTFasX/bnlk0eru69tjis3e2i1v)

Anti-Induction
--------------

Anthropic's paper [In-context Learning and Induction Heads](https://transformer-circuits.pub/2022/in-context-learning-and-induction-heads/index.html) observed the existence of **anti-induction heads**, which seemed to attend to repeated prefixes but *suppress* the suffix rather than boost it. For example, in the prompt `Barack Obama ... Barack`, they would attend to the `Obama` token and *reduce* the probability assigned to the next token being `Obama`. This sounded a lot like copy suppression to us, so we compared the anti-induction scores to the copy suppression scores (measured by how much the indirect object token is suppressed in the IOI task), and we found a strong correlation in the quadrant where both scores were positive:

![](https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/ebezsHW6qJwxTFasX/p0uqo1htadvgctjuixye)

This is particularly interesting because it shows how model components can have an important effect in a particular distribution, despite not having any task-specific knowledge. The general copy suppression algorithm of *"attend to and suppress previous instances of the currently-predicted token"* seems to be responsible for both of the distribution-specific behaviours we observe in the graph above.

Semantic Similarity
-------------------

Another cool finding: models know when tokens have "semantic similarity". The predicted token (T) doesn't have to be identical to the token attended to; they can differ by capitalization / pluralization / verb tense / prepended space, etc. This isn't fully explained by embedding vector cosine similarity: semantically similar words are copy suppressed more than the cosine similarity suggests.

![](https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/ebezsHW6qJwxTFasX/suiucjncv65qagocsvh9)

Self Repair
-----------

Copy Suppression turns out to be an important part of how models do [self-repair](https://arxiv.org/abs/2307.15771), a phenomenon where ablating components leads to later components compensating for the ablation *in the same forward pass*. Using the [Indirect Object Identification](https://arxiv.org/abs/2211.00593) (IOI) task as a case example, we highlight how, when you ablate copying, there is nothing to copy-suppress: self-repair! This helps resolve the mystery of Negative head self-repair first introduced in the original IOI paper.
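As a concrete illustration of the kind of experiment involved, here is a minimal sketch (not the paper's actual code) of probing self-repair in IOI with TransformerLens: zero-ablate two of the Name Mover heads identified in the IOI paper and compare the drop in logit difference to their direct contribution. The choice of heads, the zero-ablation (the paper's analysis uses mean ablation), and the single prompt are all simplifications.

```python
import torch
from transformer_lens import HookedTransformer
import transformer_lens.utils as utils

model = HookedTransformer.from_pretrained("gpt2")

prompt = "When John and Mary went to the store, John gave a drink to"
tokens = model.to_tokens(prompt)
io_tok = model.to_single_token(" Mary")  # indirect object (correct answer)
s_tok = model.to_single_token(" John")   # subject (the token L10H7 suppresses)

def logit_diff(logits: torch.Tensor) -> float:
    # Difference between the IO and S logits at the final position
    return (logits[0, -1, io_tok] - logits[0, -1, s_tok]).item()

print("clean logit diff:", logit_diff(model(tokens)))

# Zero-ablate two Name Mover heads (L9H6 and L9H9, per the IOI paper)
NAME_MOVERS = [(9, 6), (9, 9)]

def ablate_heads(z, hook):
    # z has shape [batch, pos, head_index, d_head]
    for layer, head in NAME_MOVERS:
        if hook.layer() == layer:
            z[:, :, head, :] = 0.0
    return z

fwd_hooks = [(utils.get_act_name("z", layer), ablate_heads)
             for layer in sorted({layer for layer, _ in NAME_MOVERS})]
ablated_logits = model.run_with_hooks(tokens, fwd_hooks=fwd_hooks)
print("ablated logit diff:", logit_diff(ablated_logits))

# If copy suppression drives self-repair, the overall drop in logit diff is
# smaller than the name movers' direct contribution: with nothing copied,
# L10H7 has nothing to suppress, so its negative contribution shrinks too.
```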
![](https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/ebezsHW6qJwxTFasX/s5jthcspm1u6rwigttoi)

We discover qualitatively different forms of Self-Repair. We run some experiments to show how Copy Suppression explains some, but not all, of the self-repair occurring in the IOI task. We also use Q-Composition to discover this on a weight-based level. We were left with many more questions than we started with, though, and are excited for more future work looking into self-repair!

![](https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/ebezsHW6qJwxTFasX/qq5tcbkg9i7nny0u12vv)

We find weight-based relationships between Name Mover Heads and self-repair heads: red edges denote less, and blue edges denote more, attention to names due to the Name Mover Heads.

Interactive Webpage
-------------------

We also created interactive visualisations to accompany this paper. You can see them on our [Streamlit app](https://copy-suppression.streamlit.app/). This app gives you the ability to do things like:

* Navigate through OpenWebText example prompts, and see (on a per-token basis):
  + how much ablating head L10H7 changes the loss & the top predicted tokens,
  + which tokens L10H7 is attending to most,
  + which tokens L10H7 is boosting / suppressing most.
* Analyse the weights of attention head L10H7, to see which tokens are suppressed most when a particular token is attended to,
* ...and several other features.

![](https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/ebezsHW6qJwxTFasX/xfwkdvzpjaryetmfmp0h)

**Browse Examples** Look at prompts from OpenWebText, and see how important the head is for each token, as well as which tokens' predictions the head most affects, which tokens it most attends to, etc.

![](https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/ebezsHW6qJwxTFasX/balxyfcgozj0rdvgap58)

**OV and QK circuits** Answer questions like "if token T is predicted, which tokens will be most attended to?", or "if token T is attended to, which tokens' predictions will be most suppressed?"

---

Limitations and Future Work
===========================

While we think our work is the most comprehensive description of an attention head in-the-wild to date, our description is not perfect (76.9% performance recovery) and we weren't able to explain all observations in our project. We highlight the key limitations of our work and point to future directions we think are exciting and tractable given our progress.

1. The main limitation of this project was our lack of deep understanding of L10H7's query input. The copy suppression preserving ablation shows that the head is primarily attending to tokens which are predicted, but when we causally intervene by projecting the residual stream onto the directions corresponding to these predictions (i.e. the tokens' unembedding vectors), we significantly harm performance ([Appendix N](https://arxiv.org/pdf/2310.04625.pdf)). Further, the unembedding direction also didn't explain self-repair well ([Section 4.2](https://arxiv.org/pdf/2310.04625.pdf)).
   1. Future work could try to understand what other latent variables L10H7 (and backup heads) are using, other than the W_U directions. For example, [SAEs](https://transformer-circuits.pub/2023/monosemantic-features/index.html) provide a way to find important features, and were released after this work. We didn't use SAEs in our project at all, so perhaps this is a way into this unsolved problem.
2. Further, while we show some scalability to larger models in the appendices, it seems that copy suppressors in larger models don't *just* use the unembedding directions, as we didn't find any other heads as clean as L10H7 when we projected outputs onto the unembedding directions. We think good exploratory work could be done to understand similar heads in larger models, including backup heads (such as the self-repairing heads studied in [The Hydra Effect](https://arxiv.org/abs/2307.15771)) and negative heads (we continually found heads for which W_OV had generally negative eigenvalues in, e.g., Llama models).
3. Finally, we're still not totally sure *why* models devote single attention heads to suppressing predictions. We give some speculative theories in [Appendix E](https://arxiv.org/pdf/2310.04625.pdf) but are not certain. One way to get a handle on this would be looking at the copy suppression heads in Stanford GPT-2 and Pythia from the Streamlit page and trying to find out why they form through training.

---

Lessons for future research
===========================

I'll conclude with a few high-level takeaways I took from this research project. This will probably be of most interest to other researchers, e.g. on SERI MATS projects. Note that all of these were specific to our project, and might not apply to everyone. It's entirely possible that if we'd been given this list at the start of the project, then in following these points aggressively we'd have hit different failure modes.

* **Brainstorm experiments to red-team your ideas ASAP.** For a long time we were convinced that the main query input was the token's unembedding, because we'd only run experiments which also made it look like this was the case. It was only once we ran the causal experiment of projecting the residual stream query-side onto the tokens' unembedding that we discovered this wasn't actually the case.
* **Consider ways your metrics might be flawed.** This is a specific subcase of the more general point above. We used KL divergence for a lot of our metrics, without realising one potential issue it might have: KL divergence mostly focuses on tokens which already had high probability, so it wasn't capturing cases where an attention head changes loss by boosting / suppressing tokens which already had low probability. This isn't specifically a knock against KL div: all metrics are flawed in some way! Using this as well as fraction of loss recovered helped us make sure our results were robust.
* **Be clear on when you're in experiment vs. writer mode.** We had a period of very high productivity for the first 2 weeks of the program (when we were running experiments and not thinking about writing a paper), and we never quite returned to this level of productivity from that point on.
* **Stay in draft mode for as long as necessary.** Porting our results from Google Docs to LaTeX was going to be necessary eventually, but I think we should have worked in Google Docs for longer. I personally felt a strong Ugh field towards LaTeX, since it made things like comment threads and efficient editing much less fluid.
* **Streamlit is great!**[[1]](#fnd4764h827t) This one is definitely your-mileage-may-vary, but I think using the Streamlit app significantly boosted our productivity during the program. We could whip up quick illustrations that allowed us to drill down into exactly where our methods were failing, and why (see the next point). It's also easy to create quick shareable links to communicate your research with other people.
* **Use some kind of system for keeping track of high-level project directions and your hypotheses.** I found myself using an Excalidraw map for a lot of the project, although for people who don't enjoy using this kind of tool I'm guessing Google Docs & collapsible headers would also have worked fine. The rough scheme I used was:
  + **Yellow boxes = core information**, e.g. central hypotheses or goals of the project.
  + **Pink boxes = an experiment we had run**. Each one summarized the experiment, and maybe included a screenshot illustrating the key results.
  + **Green boxes = things we were writing / building**. Mainly this was the [Streamlit page](https://copy-suppression.streamlit.app/) and the [paper](https://arxiv.org/abs/2310.04625).
  + **Blue boxes = an experiment we were planning**. Some of these were next to pink boxes because they were follow-on experiments from previous experiments.

There were also boxes at the top, to collect "general confusions" and specific experiments which weren't necessarily following on from previous experiments. During the project, I tried (to mixed success!) to make sure these boxes didn't pile up, e.g. one important goal was to take things from the "confusions" box and turn them into experiments I could actually run to resolve that confusion.

![](https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/ebezsHW6qJwxTFasX/h6uwlwfwhtn6bnmpxial)

Excalidraw research map (link to full version [here](https://link.excalidraw.com/l/9KwMnW35Xt8/RXQm98TbYx)). Anyone who has read my other posts knows that I love me some Excalidraw!

* **Find a good meta-level research strategy.** Some of the most productive times in the project for me were during the development of the form of ablation we used to measure what percentage of L10H7's behaviour was copy suppression. We used the following research cycle, which proved very productive (e.g. it's how we discovered the concept of semantic similarity):
  + Develop a form of the ablation algorithm.
  + Test it out on OWT, and use the Streamlit app to visualise the results.
  + Find anomalous cases where the algorithm fails to capture the head's behaviour, even though the behaviour clearly looks like copy suppression.
  + Drill down on these anomalies, figure out why they aren't captured by the ablation algorithm, and adjust the algorithm.
  + Iterate, until the visualisation showed us that the ablation algorithm was capturing everything which looked like copy suppression.

Some additional takeaways from Cody Rushing:

* **Stay organized**. It's great to be in the flow and be making tons of Jupyter notebooks and Python files in a flurry as you chase down different tangents and ideas, but without proper documentation, it becomes a mess. I got away with this during the original two-week research sprint, but after some time, I quickly started losing track of what different files were for, where I performed X experiment, and why I thought idea Y.
  + I haven't completely solved this issue, but taking better research notes where I document what I'm doing, why, and *where* has been a good start.
* **Update from the darn results**. I got stuck on one research direction where I kept trying to do various complex ablations to highlight one hypothesis of self-repair that I thought was true. Despite the fact that I repeatedly got results which didn't confirm my hypothesis (nor entirely falsify it), I kept banging my head against pursuing this dead end. For *almost two weeks*. Far too long!
I wish I had just seen the results and updated far earlier, saving me time and a bit of burnout.
  + Trading off effectively between doing broad vs. in-depth research is tricky. Learning when to stop pursuing a research direction is an important research skill that I undervalued. I don't have great ideas of how to get much better at it, but comparing your research intuitions with a mentor seems like a good way to start.

1. **[^](#fnrefd4764h827t)** This post is not sponsored by Streamlit in any way, shape, or form.
Ethical Considerations for AI Researchers

1 Introduction
---------------

A quick scan of recent papers covering the area of AI and ethics reveals researchers' admirable impulse to think about teaching intelligent agents human values [[1](#bib.bib2 "Reinforcement learning as a framework for ethical decision making"), [3](#bib.bib3 "Using “the machine stops” for teaching ethics in artificial intelligence and computer science"), [17](#bib.bib1 "Using stories to teach human values to artificial agents")]. There is, however, another important and more immediate aspect of AI and ethics we ought to take into consideration. AI is being widely deployed for new applications; it's becoming pervasive; and it's having an effect on people's lives. AI researchers should reflect on their own personal responsibility with regard to the work they do. Many of us are motivated by the idea that we can contribute useful new technology that has a positive impact on the world. Positive outcomes have largely been the case with advanced technologies that improve cancer diagnosis and provide safety features in cars, for example. With vast amounts of computing power and a number of improved techniques, intelligent software is being adopted in more and more contexts that affect people's lives. How people use it is starting to matter, and the impact of our decisions matters.

Not surprisingly, as the use of AI expands, the negative consequences of its failures and design flaws become more visible. Much of the AI that has recently been deployed derives its intelligence from learning algorithms that are based on statistical analysis of data. The acquisition, applicability, and analysis of that data determine the system's output. Statistics shine when making predictions about distributions over populations. That predictive power fades when applied to individuals. There will be faulty predictions. The popular press is rife with misuses of statistical analysis and AI [[5](#bib.bib16 "Artificial intelligence’s white guy problem"), [16](#bib.bib15 "Weapons of math destruction: how big data increases inequality and threatens democracy")]. Given the growing use, the built-in uncertainties, and the public's tendency to blindly trust technology, we have a responsibility to consider the likely and unlikely outcomes of the choices we make when we are designing and developing tools or predictive systems to support decision making that affect people and communities of people.

Purposely malicious choices are obviously ethically unacceptable. In [[19](#bib.bib5 "Taxonomy of pathways to dangerous AI")], the author outlines various pathways that lead to dangerous artificial intelligence. Within the taxonomy, there are pathways that introduce danger into artificial intelligence 'on purpose.' The other pathways inadvertently lead to hazards in the system. You can decide for yourself if you are comfortable developing smart weapons, for example, and most of us would, at a minimum, pause to consider the implications of that decision. But the inadvertent pathways leading to dangerous AI can be difficult to foresee and may come about from subtle interactions. Our less obvious responsibility lies in giving careful consideration to our choices and being clear to ourselves and our stakeholders about assumptions, trade-offs, and choices we make.
Several other papers consider another ethical aspect, the fairness of automated systems [[16](#bib.bib15 "Weapons of math destruction: how big data increases inequality and threatens democracy"), [9](#bib.bib17 "Equality of opportunity in supervised learning"), [14](#bib.bib18 "Preparing for the future of artificial intelligence")], and some even conclude that fairness is inherently impossible to fully achieve for most problems [[10](#bib.bib6 "Inherent trade-offs in the fair determination of risk scores")]. One of the points I'll make is that discussions about fairness and societal impact can be cut off once an intelligent agent is introduced into the process. There is a popular feeling that machines don't make value judgments and are inherently unbiased. However, the assumptions we make when designing our systems are often based on subjective value judgments; for example, choosing data sets, selecting weighting schemes, balancing precision and recall. We have to be transparent about what we do and be clear about the choices we have made. The ultimate purpose matters, and the decisions you come to must be communicated.

2 Blind Trust in Technology
----------------------------

Although there are pockets of skepticism towards intelligent systems, by and large people are content to offload decisions to technology. In May 2016, there was a widely publicized crash involving a Tesla Motors car being driven in computer-assisted mode. It appears the driver had undue faith in the capabilities of the car [[8](#bib.bib7 "PE 16-007 automatic vehicle control systems")]. The following week, another driver following a GPS unit steered her car into Ontario's Georgian Bay [[13](#bib.bib9 "Woman follows gps; ends up in ontario lake")]. These extreme examples reveal a trend in the general population to trust the smart devices in our lives.

Ideally, government agencies and jurisdictions would apply the principles of open government and transparency when contracting with suppliers for decision-making tools. In practice that hasn't been the case. Last year, two researchers filed 42 open records requests in 23 different states asking for information about software with predictive algorithms used by governments as decision support tools [[2](#bib.bib10 "Algorithmic transparency for the smart city")]. Their goal was to understand the policies built into the algorithms in order to evaluate their usefulness and fairness. Only one of the jurisdictions was able to provide information about the algorithms the software used and how it was developed. Some of those who did not respond cited agreements with vendors preventing them from revealing information, but many did not seem concerned about transparency in their process nor the need to understand the technology. Assuming the best intentions of the decision makers, they are also demonstrating great faith in the technology and vendors they contract with.

There is also evidence that users of these systems, judges and hiring managers for example, weight AI guidance too heavily. When people make decisions without technological assistance, there is public awareness that those decisions are made within some context; we understand that individuals can be influenced, even subconsciously, by their biases and prejudices. Technologically assisted decisions tend to shut down the conversation about fairness despite their having a large effect on people's lives. Those affected may not have the opportunity to contest the decisions.
If important decisions are made through our models, we must use care in developing them and clearly communicate the assumptions we make.

3 Ethical Obligations
----------------------

Physicians and attorneys have well-established codes of ethics. Doctors famously commit to not doing any harm. Implied in that concept is the idea that there is potential to do harm. It is clear from many examples, some of which I mention in this paper, that there is the potential for harm in our work, and given people's limited understanding of technology and the trust they place in it, AI researchers have a personal, ethical obligation to reflect on the decisions we make. Ethical thinking helps us to make choices and, just as importantly, provides a framework to reason about those choices. The framework we use (explicitly or not) is defined by a set of principles that guide and support our decisions. One of the difficult things about defining ethical standards is deciding the values to base them on. Ethics issues will undoubtedly be discussed and argued within the community and the world generally in the coming years. Each of us can start by considering our own roles and being consciously aware of the effects our work can have.

The stakeholders who decide to deploy intelligent decision making, government agencies for example, generally aren't qualified to assess the assumptions, models, and algorithms in it. This asymmetrical relationship puts the burden on those with the information to be clear, honest, and forthcoming with it. Those at a disadvantage depend on us to inform them about technology's fitness for their purpose, its reliability and accuracy. We usually focus on the technical aspects of our work like selecting highly predictive models and minimizing error functions, but when applying algorithmic decision-making that will affect human beings, we have a responsibility to think about more.

4 Recommendations for Consideration
------------------------------------

Ethics is not science. But it is possible to ground our thinking in well-defined guidelines to assist in making ethical decisions for AI development. A formal framework may even emerge within the researcher community with time. In the short term, the following is a list of thoughts and questions to ask ourselves when designing predictive or decision-making systems.

### 4.1 1. Relevance of data and models

It is important to think carefully about the data used to train our technology. Are the data and models appropriate to the real-life problem they are solving? It is tempting to believe causal forces are at play when we find correlation on a single dataset. Does the data capture the true variable of interest? Is it consistent across observations and over time? We often introduce a proxy variable because the variable we need isn't available or isn't easy to quantify. Can your findings be calibrated against the real-world situation? Even better, could you measure the actual outcome you're trying to achieve?

In 2008, researchers at Google had the idea that an increase in search queries related to the flu and flu symptoms could be indicative of a spreading virus. They created the Google Flu Trends (GFT) web service to track Google users' search queries related to the flu. If they detected increased transmission before the numbers from the U.S. Centers for Disease Control and Prevention (CDC) came out, earlier interventions could reduce the impact of the virus.
The initial article reported 97% accuracy using the CDC data as the gold standard [[6](#bib.bib11 "Detecting influenza epidemics using search engine query data")]. However, a follow-up report showed that in subsequent flu seasons GFT predicted more than double what the CDC data showed [[11](#bib.bib12 "The parable of google flu: traps in big data analysis")]. Given the first year's high accuracy, it would have been easy for the researchers to believe they had discovered a strong, predictive signal. But online behavior isn't necessarily a reflection of the real world. There are several factors that might make the GFT data wrong. One of them is that the underlying algorithms of Google Search itself (the GFT researchers don't control those) can change from one year to the next. Also, users' search behavior could have changed. Mainly, however, people's search patterns are probably not a good single indicator of a spreading virus. There are many other factors and various reasons people might search for information.

Training data rarely aligns with real-life goals. In [[12](#bib.bib14 "The mythos of model interpretability")], Lipton presents challenges to providing and even defining interpretability of machine learning outputs. He identifies several possible points of divergence between training data and real-life situations. For example, off-line training data is not always representative of the true environment, and real-world objectives can be difficult to encode as simple value functions. Often we work with data that was collected for other purposes and almost never under ideal, controlled circumstances. What was the original purpose in collecting the data, and how did that determine its content? In July of 2015, another group at Google had to apologize for its Photos application identifying a black couple as gorillas [[7](#bib.bib19 "Google photos labeled black people ‘gorillas’")]. Their training dataset was not representative of the population it was meant to predict. Also, there are limits to the amount of generalization we can expect from any learning method trained on a particular dataset.

Is it possible your dataset contains biases? When making decisions related to hiring, judicial proceedings, and job performance, for example, many personal characteristics are legally excluded. Also, humans are good at discarding variables they recognize as irrelevant to the decision to be made; computers are blind to those considerations. Are there other characteristics that are closely correlated with legally and ethically protected ones? If you don't consider those, you can inadvertently treat people unfairly based on protected or irrelevant characteristics.

There is often a trade-off between accuracy and the intelligibility of a model [[4](#bib.bib13 "Intelligible models for healthcare: predicting pneumonia risk and hospital 30-day readmission")]. More predictive but harder-to-understand models can make it difficult to know which personal characteristics determine the decision, leaving those characteristics unavailable for validation against human judgment. In [[4](#bib.bib13 "Intelligible models for healthcare: predicting pneumonia risk and hospital 30-day readmission")] the authors describe a system that learned a rule that patients with a history of asthma have a lower risk of dying from pneumonia. Based on the data used to train the system, their model was absolutely correct. However, in reality asthma sufferers (without treatment) have a higher risk of dying from pneumonia.
Because of the increased risk, when patients with a history of asthma go to the hospital, the general practice is to place them in an intensive care unit. The extra attention they receive decreases their risk of dying from pneumonia even below that of the general population. It is our natural inclination to develop models with the highest accuracy. However, the necessity of visibility into decisions where people's lives are concerned may increase the importance of explainability at the expense of some predictive performance. In all cases, our stakeholders must understand the decisions we make and the trade-offs implied by them.

### 4.2 2. Safeguards for Failures and Misuse

Even experienced researchers with the best intentions are inclined to favor the positive outcomes of their work. We highlight positive results, but we should also think through failure modes and possible unintended consequences. What about misuse? There isn't a lot you can do about a person determined to use the technology in ways it wasn't intended, but are there ways a good-faith user might go wrong? Can you add protections for that? The 2016 Tesla accident mentioned before was catastrophic. The driver used computer-assisted mode in conditions it was expressly not designed for, resulting in his death. The accident was investigated by two government agencies. The first investigation, by the National Highway Traffic Safety Administration, found that the driver-assist software had no safety defects and declared that, in general, the vehicles performed as designed [[8](#bib.bib7 "PE 16-007 automatic vehicle control systems")], implying that responsibility for use of the system falls on the operator. A later investigation from the National Transportation Safety Board found otherwise [[15](#bib.bib8 "Highway accident report: collision between a car operating with automated vehicle control systems and a tractor-semitrailer truck")]. They declared that the automatic controls played a major role in the crash. The fact that the driver was able to use computer assistance in a situation it was not intended for was problematic. The combination of human error and insufficient safeguards resulted in an accident that should not have happened.

### 4.3 3. Accuracy

How accurate is your algorithm, and how accurate does it need to be? Do your stakeholders understand the number of people who will be subject to a missed prediction given your measure of accuracy? A model that misses only 1% shows phenomenally good performance, but if hundreds or thousands of people are still adversely affected, that might not be acceptable. Are there human inputs that can compensate for the system's misses, and can you design for that? What about post-deployment accuracy? Accuracy in training data doesn't always reflect real usage. Do you have a way to measure runtime accuracy? The world is dynamic and changes with time. Is there a way to continue to assess the accuracy after release? How often does it have to be reviewed?

### 4.4 4. Size and severity of impact

Think about the numbers of people affected. Of course, you want to avoid harming anyone, but knowing the size or the severity of negative consequences can justify the cost of extra scrutiny. You might also be able to design methods that mitigate for them. Given an understanding of the impact, you can make better decisions about whether the extra effort is worthwhile.
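To make the accuracy and impact considerations concrete, consider a toy calculation; the population size and accuracy figure below are hypothetical, chosen only to illustrate the scale of the issue:

```python
# Hypothetical figures for illustration: a highly accurate model applied at
# scale still leaves many individuals subject to erroneous decisions.
population = 500_000   # people scored by the system (assumed)
accuracy = 0.99        # reported accuracy (assumed)

expected_misses = population * (1 - accuracy)
print(f"Expected erroneous decisions: {expected_misses:,.0f}")  # -> 5,000
```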
5 Conclusion
-------------

Individual researchers, especially in commercial operations, don't always have the chance to communicate clearly and transparently with clients. At least being transparent with your immediate stakeholders can set the right expectations for them when they represent your work down the line. You are necessarily making decisions about the models and software you develop. If you don't surface those decisions to discuss their effects, they may never be brought to light.

A short paper cannot cover such a large and multi-faceted issue. The main idea is for each of us to think individually about our own responsibilities and the impact our work can have on real lives. It's useful to spend time thinking about our assumptions and the trade-offs we make in the context of the people who will be affected. Communicating those to everyone concerned is also critical. Modern versions of the Hippocratic Oath are still used by many medical schools. The spirit of the oath is applicable to most research affecting human beings. One phrase is especially general and worth keeping in mind:

> "I will remember that I remain a member of society, with special obligations to all my fellow human beings…" [[18](#bib.bib20 "The hippocratic oath today")]
Virtues related to honesty

Status: musings. I wanted to write up a more fleshed-out and rigorous version of this, but realistically wasn't likely to ever get around to it, so here's the half-baked version.

Related posts: Firming Up Honesty Around Its Edge-Cases, Deep honesty

What I mean by 'honesty'

There are nuances to this, but I think a good summary is 'Not intentionally communicating false information'. This is the only one here that I follow near-absolutely and see as an important standard that people can reasonably be expected to follow in most situations. Everything else here I'd see as either supererogatory, or good-on-balance but with serious tradeoffs that one can reasonably choose to sometimes not make, or good in some circumstances but not appropriate in others, or good in moderation but not in excess.

Forthrightness

...or perhaps frankness? I was originally inspired to write this up due to a conversation in which I wanted to emphasize the distinction between honesty and forthrightness: where honesty is about not giving false information, what I mean by forthrightness is a tendency not to hold back relevant true information. Being forthright enables people to justly assume that if you haven't spoken out against something you're likely okay with it, that if you haven't expressed an interest in something you're likely not interested in it, and so on. Personally, I follow a policy of near-absolute honesty, but am not particularly forthright; I think it's good not to hold back relevant true information without a good reason, but good reasons are not all that uncommon.

Circumspection

I'd describe circumspection as the virtue of holding back information when there is a good reason to do so; the counterpart to forthrightness.

Tact

I don't have as confident a concise description of what I think of tact as meaning, but my best attempt would be something like... recognizing that whatever you say isn't just a transfer of information, it's also a speech act, and avoiding speech
“Reframing Superintelligence” + LLMs + 4 years

**Background**
--------------

In January 2019, FHI published *Reframing Superintelligence,*[[1]](#fn85plkbsx4qr) a book-length technical report on prospects for advanced AI. OpenAI published the first paper on GPT-2 a month later. Advances since then have been strange and rapid, and I'd like to revisit the report in light of what we have learned. In brief, I think that the abstract conceptual model of AI development and organization proposed in *Reframing* fits today's reality quite well, even though LLM-based technologies have diverged far from anything I'd anticipated.

Below, you'll find an abstract of the abstract of the report, followed by a series of section-level mini-summaries[[2]](#fn59ywvemxb5i) with update comments. I've omitted sections that are either outside the intended focus of this article or are too broad and forward-looking to summarize.

A significant impetus behind "Reframing Superintelligence" was to challenge a prevailing notion of advanced AI (equating *superintelligent-level AI* with *a superintelligent agent*), which has, in my view, been assigned disproportionate weight and skewed the balance of alignment research. The report offers an alternative framework that includes both risks and opportunities that are overlooked by agent-centric perspectives. Note that this reframing is additive rather than subtractive: My intention is not to disregard agent-focused concerns — their importance is assumed, not debated.[[3]](#fne233957gf46) Indeed, the AI services model anticipates a world in which dangerous superintelligent agents could emerge with relative ease, and perhaps unavoidably. My aim is to broaden the working ontology of the community to include systems in which superintelligent-level capabilities can take a more accessible, transparent, and manageable form: *open agencies* rather than *unitary agents.* This framework highlights different risks and expands the solution-space for familiar problems.

Finally, when I refer to "LLMs", please read this as encompassing multimodal models (GPT-4!) with considerations that carry over to a wider range of foundation models.

**Abstract of the Abstract**
----------------------------

"Reframing Superintelligence" reviews the concept of superintelligent AI systems as utility-driven agents and suggests expanding our ontology of superintelligence to include compositions of AI systems that can best be understood through their structures, relationships, development processes, and the services they can provide — services that can include AI research and development itself. This perspective gives rise to the "Comprehensive AI Services" (CAIS) model, which proposes general intelligence as a property of flexible systems of services in which task-focused agents are among the components. The CAIS model envisions AI services expanding toward asymptotically comprehensive superintelligent-level performance, including the service of providing new services in line with human objectives and informed by strong models of human (dis)approval. This reframing has broad implications for AI prospects, including AI safety and strategy, practical applications of advanced AI systems, and the fundamental relationship between goals and intelligence. In this context, the emergence of strongly self-modifying agents with superintelligent-level capabilities remains a concern, yet the *desirability and potential instrumental value* of such agents are greatly diminished.
**Section mini-summaries + updates**
------------------------------------

### **1. R&D automation provides the most direct path to an intelligence explosion**

Self-transforming AI agents have no natural role in recursive improvement. A more direct path would instead involve AI-enabled AI development in which new capabilities are implemented without any system being self-modifying.

*— Today's most striking applications of AI to AI development are applications of LLMs to LLM training:*

1. Filtering and upgrading internet datasets[[4]](#fnj1htc97ucir)
2. Serving as reward models for RLHF based on examples of human preferences[[5]](#fnabq4q6iuis7)
3. Providing examples of preferences informed, not by humans, but by "constitutional" principles[[6]](#fnt97f6mlsxh)
4. Generating dialog content from non-dialog datasets by "inpainting" questions[[7]](#fn2pbxenluegh)
5. Synthesizing examples of instruction-following[[8]](#fnjzyr9vcn688)
6. Generating semi-synthetic or fully-synthetic data for general[[9]](#fn98b3elgpclv)[[10]](#fnd28ivyvhfte) and task-focused[[11]](#fnvr75x0hpz2) training

### **2. Standard definitions of "superintelligence" conflate learning with competence**

Standard definitions of superintelligence have conflated learning with competence, yet AI systems can cleanly separate the exercise of competence from ongoing learning. Recognizing the difference between learning and competence is crucial for understanding potential strategies for AI alignment and control. (Section 2)

*— It has always been true that digital systems can act without learning, but the LLM update reinforces this distinction: We now see that strong learning does not require the exercise of competence, that systems can learn without striving and acting.*

### **3. To understand AI prospects, focus on services, not implementations**

Focusing on services rather than implementations is important for understanding AI prospects. AI systems today provide services, and by any behavioral definition, the ability to develop and coordinate general services amounts to general intelligence. Service-centered models of general intelligence emphasize task-roles, harmonize with software engineering practices, and can facilitate AI alignment.

*— I had envisioned specialized systems providing specialized services, but LLMs illustrate how a single, general technology can provide distinct services such as:*

1. Language translation
2. Content summarization
3. Conversation
4. Personal assistant services
5. Medical question answering
6. Code writing
7. Internet search

*Nonetheless, foundation models are adapted and specialized for tasks using domain-focused training, fine-tuning, reinforcement learning, and prompts that set context. This specialization[[12]](#fnb5rg4964jdd) enables better performance, lower-cost models, and more reliable behavior.*

*The ease of adapting unspecialized LLMs to specific tasks illustrates an important principle: **General capabilities can support focused roles.** Generality facilitates specialization.*

### **4. The AI-services model includes both descriptive and prescriptive aspects**

From a *descriptive* perspective, the AI-services model reflects the nature of real-world AI applications and extends to superintelligent-level services. **From a** ***prescriptive*** **perspective,** the model presents a practical and apparently safer approach to AI development.

*— AI services have continued to expand, both within and beyond the scope of services provided by language models.
Meanwhile, traditional goals continue to shape visions and rhetoric: Research groups aspire to build unitary superintelligent agents while warning of their dangers.*

***Developing powerful, unitary AI agents seems strictly riskier and more difficult than developing equally capable AI agency architectures that employ task-focused agents.***[[13]](#fntlin7ipxc8s) *I know of no persuasive argument for the superior value (or safety!) of powerful, unitary AI agents. Intellectual inertia, institutional inertia, convenient anthropomorphism (see below), and bragging rights are not good justifications for increasing existential risk.*

### **5. Rational-agent models place intelligence in an implicitly anthropomorphic frame**

It is a mistake to frame intelligence as a property of mind-like systems, whether these systems are overtly anthropomorphic or abstracted into decision-making processes that guide rational agents. Intelligence need not be associated with persistent, situated entities. Natural intelligence emerged through evolution and individual experiences, providing general skills for survival and reproduction, but artificial intelligence emerges from human-led R&D and aggregated training data. Existing AI systems specialize in focused tasks, and their task performance determines their fitness. Self-modification, persistent existence, and environmental interactions are vital for organisms but optional for AI systems. Consequently, biologically-driven expectations about intelligence (anthropomorphic and otherwise) are both deeply rooted and misleading when applied to artificial intelligence. Anthropomorphism is ingrained and simplistic.

*From the perspective outlined above, LLMs are strange and surprising: They have thoroughly non-biological properties and lack goals, yet they have been trained to model human cognitive processes. Base models can role-play human personas that differ in psychology, situation, mood, culture, and so on, yet have only weak tendencies toward modeling any particular persona (and in my experience, base GPT-4 rapidly drifts away from any particular persona[[14]](#fn718t24tvi8a)).*

### **6. A system of AI services is not equivalent to a utility maximizing agent**

A system of AI services differs from a utility-maximizing agent, as the VNM rationality conditions don't imply that the system must have a utility function. A system comprising competing AI service providers can't be modeled as a unitary utility-maximizing AI agent, and Bostrom's Orthogonality Thesis implies that even superintelligent-level agents need not pursue long-term or convergent instrumental goals.

*While this abstract point holds for LLMs, the LLM update is far from reassuring. LLMs capable of modeling diverse human personas could, if so prompted or trained, readily attempt to enact worst-case agentic behaviors, regardless of rational considerations. (To say nothing of assisting power-seeking humans.)*

### **7. Training [reinforcement-learning] agents in human-like environments can provide useful, bounded services**

Training RL agents in human-like environments can help develop skills applicable to specific, bounded tasks. Human-like world-oriented knowledge and skills will be necessary for general intelligence, but human-like *skills* do not imply human-like *goals*.

*Advances in LLMs and multi-modal foundation models show that AI systems can acquire extensive human-like world-oriented knowledge and skills without learning (or with relatively little learning) through action in real or simulated environments.
This lessens concerns regarding extensive RL in such environments: Incremental learning focused on particular tasks seems less hazardous than acquiring general knowledge and skills through extensive, general RL.*

### **8. Strong optimization can strongly constrain AI capabilities, behavior, and effects**

Strong optimization, even at a superintelligent level, can increase AI safety by constraining the capabilities, behavior, and effects of AI systems. When objectives are bounded in space, time, and scope, and when value functions assign costs to both resource consumption and off-task effects, optimization tends to reduce unintended consequences and decrease risks. Consuming more resources or investing in long-term goals is wasteful and contrary to optimization.

*LLMs illustrate the effects of optimization for speed and economy: They are "trying" (in an evolutionary sense) to be smaller and more efficient, all else equal. **However**, increasing capabilities tend to increase demand, with more running instances, greater resource consumption, greater world impact, and both unintended and unexpected consequences. Darwinian pressures evoke agentic tendencies on an evolutionary time scale, and AI evolution can be fast.*

### **9. Opaque algorithms are compatible with functional transparency and control**

Opaque deep-learning algorithms are compatible with *functional* transparency and control. Even without knowing how a system represents and processes information, the scope of its knowledge and competencies can often be inferred within bounds. Techniques such as constraining resources and information input while optimizing ML systems for specific regions of task space enable us to shape the behavior and organization of systems of opaque ML systems.

*Current applications of LLMs are consistent with this picture.*

### **10. R&D automation dissociates recursive improvement from AI agency**

R&D automation decouples AI-enabled AI improvement from AI agency by employing task-focused AI systems to incrementally automate AI development. The R&D-automation model tends to refocus AI safety concerns on expanding safe AI functionality and investigating safety-relevant affordances, including predictive models of human approval.

*LLM development to date has been consistent with this picture, and the application of reinforcement learning from AI feedback (including "constitutional AI"[[15]](#fnaasowc4l5o8)) illustrates how AI support for AI development can contribute to AI safety.*

### **11. Potential AGI-enabling technologies also enable comprehensive AI services**

Advances that could enable powerful AGI agents can instead be applied to provide comprehensive AI services and stable, task-focused agents. Harnessing AGI-level technology for AI service development mitigates risks and challenges posed by emergent behaviors.

*This expectation aligns with current observations: GPT-4 shows "Sparks of AGI",[[16]](#fn7n3d7m4cnn7) yet facilitates unproblematic task-focused applications.*

### **12. AGI agents offer no compelling value**

The AGI-agent model offers no compelling value compared to the CAIS model of general intelligence. The AGI-agent and CAIS models organize similar functions differently, but the CAIS model offers additional safety-relevant affordances.

*So far, we seem to be in an AI-services world, but LLMs suggest that general agentic systems may be more directly accessible than previously thought.
Despite the decreased value of AGI agents due to the rise of AI services, their development seems likely.*

### **13. AGI-agent models entail greater complexity than AI Services**

Strongly general AGI models face challenges in explaining the mechanistic basis for general AI capabilities and open-ended self-improvement, and propose to compress diverse functionality into a single, autonomous agent. Hiding complexity behind an abstraction barrier does not eliminate it.

*The LLM update suggests how broad functionality can be embodied in a single system, mitigating implementation complexity and significantly (but not decisively) undercutting this argument.*

### **14. The AI-services model brings ample risks**

The AI-services model presents risks that include enabling dangerous agents, empowering bad actors, and accelerating harmful applications, while potentially providing AGI risk mitigation and agent management services. It will be important to study means for directing and constraining AI services, and for avoiding emergent agent-like behaviors. General concerns regarding disruption in the economic, political, and military spheres apply with full force.

*LLMs have made these risks more concrete and urgent.*

### **15. Development-oriented models align with deeply-structured AI systems**

AI development processes create structured systems by composing functional components. A focus on structured systems can connect AI safety studies to current R&D practices and encourages exploration of topics such as AI R&D automation, structured system development, and safety guidelines for structured AI systems.

*"Modular Deep Learning"[[17]](#fnc85q2mgdkg) reviews applications of modularity in present AI research and development, including applications to scaling and generalizing language models. (See also "FrugalGPT"[[18]](#fnb0ytzxvld0d))*

### **16. Aggregated experience and centralized learning support AI-agent applications**

Discussions of advanced AI have often assumed that agents will learn and act as individuals, but the development methods used for self-driving vehicles demonstrate the power of aggregating experience and amortizing the costs of learning. These considerations emphasize the importance of development-oriented models for understanding the prospects for advanced AI.

*Foundation models augmented by fine-tuning illustrate the power of an even broader form of aggregated learning.*

### **17. End-to-end reinforcement learning is compatible with the AI-services model**

End-to-end reinforcement learning (RL) can contribute predictable, task-focused competencies to the AI-services model, despite its tendency to produce black-box systems. AI services require broad capabilities across multiple tasks, which makes single-component RL systems inadequate, yet robust and general performance can be achieved by composing well-focused competencies.

*RL systems have shown benefits from unspecialized pre-training, but remain specialized in their practical applications.*

### **18. Reinforcement learning systems are not equivalent to reward-seeking agents**

Reinforcement learning (RL) systems are not agents that seek utility-like rewards: RL systems are training mechanisms separate from the agents they produce, and RL "rewards" guide parameter updates (conditional on training episodes), rather than providing something to be sought. Equating RL systems with agents (as commonly understood) and reward with utility can be misleading.
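As a toy illustration of this distinction (an example of mine, not from the report): in a minimal policy-gradient update, the "reward" appears only as a coefficient in the parameter update, and the deployed policy neither perceives nor pursues it.

```python
# Minimal REINFORCE-style sketch: reward is a training-time coefficient,
# not an input the trained policy perceives or pursues at deployment time.
import torch

policy = torch.nn.Linear(4, 2)                    # toy policy network
opt = torch.optim.SGD(policy.parameters(), lr=1e-2)

obs = torch.randn(4)
logits = policy(obs)
dist = torch.distributions.Categorical(logits=logits)
action = dist.sample()
reward = 1.0                                      # supplied by the training harness

# Reward scales the gradient of the log-probability; after training, the
# deployed policy simply maps observations to actions -- no reward in sight.
loss = -reward * dist.log_prob(action)
opt.zero_grad()
loss.backward()
opt.step()
```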
### **19. The orthogonality thesis undercuts the generality of instrumental convergence**

The orthogonality thesis suggests that any level of intelligence can be applied to any goal, but this includes time- and resource-bounded goals for which the classic instrumentally-convergent sub-goals are out of scope and offer no value. Although comprehensive AI services can be implemented by systems with time- and resource-bounded goals, instrumentally-convergent goals will still tend to emerge as evolutionary, system-level tropisms.

*LLMs add a strange twist to this story: They show that intelligence can emerge without goals, yet can readily role-play as (and hence become) AI systems that pursue bounded goals or propose plans for world conquest, depending on how they are prompted.*

### **20. Collusion among superintelligent oracles can readily be avoided**

Collusion among superintelligent-level question-answering systems (oracles) can readily be avoided by establishing conditions that make deceptive cooperation difficult. Reliable non-collusion can be established in systems in which diverse actors have differing capabilities, knowledge, and roles; in which actors compete to propose alternative solutions; and in which diverse critics, retaining no memory of iterated interactions, compete to identify flawed or misleading proposals.[[20]](#fnxigzjasq1u)

*LLM technologies show that highly capable models can be diversified by training, fine-tuning, RL, and prompting. Communication among models and persistent memory are strictly optional.*
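The structural conditions above translate directly into orchestration code. In this hypothetical sketch (the `query_model` interface and model names are assumptions for illustration, not a real API), every call is stateless and roles are kept separate, so there is no persistent channel through which iterated, deceptive cooperation could develop.

```python
from typing import Callable, List, Tuple

# Assumed interface: a stateless call into some model service. Each call
# starts from a fresh context, so no model retains memory across rounds.
QueryFn = Callable[[str, str], str]  # (model_name, prompt) -> answer

def answer_with_checks(question: str,
                       proposers: List[str],
                       critics: List[str],
                       query_model: QueryFn) -> List[Tuple[str, List[str]]]:
    """Collect competing proposals, then have independent critics attack
    each one; diversity of roles plus statelessness makes deceptive
    coordination among the answerers difficult."""
    proposals = [query_model(p, f"Answer the question: {question}")
                 for p in proposers]
    reviewed = []
    for proposal in proposals:
        critiques = [
            query_model(c, f"Identify flaws or deception in this answer to "
                           f"'{question}':\n{proposal}")
            for c in critics  # critics compete to expose bad proposals
        ]
        reviewed.append((proposal, critiques))
    return reviewed  # adjudication happens outside the models

# Demo with a stub in place of real model calls:
stub = lambda model, prompt: f"[{model}] {prompt[:30]}..."
print(answer_with_checks("Is this plan safe?", ["model-A", "model-B"],
                         ["critic-X"], stub))
```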
### **21. Broad world knowledge can support safe task performance**

Machine translation demonstrates that effectively unbounded world knowledge is compatible with well-bounded AI behavior. Choice of tasks, training, and circumstances can ensure domain-specific task focus without requiring formal task specification.

*LLMs show that broad and retargetable world knowledge can be practical and useful, while techniques such as fine-tuning, RLHF, and prompting for specific, episodic tasks help ensure task focus (flawed, but bounded).*

### **22. Machine learning can develop predictive models of human approval**

By leveraging large corpora of text and video, ML systems can build broad models of human (dis)approval that provide commonsense defaults for decision-making. These models can improve AI safety by guiding and constraining the choices made by advanced AI agents.

*This has happened: LLMs provide models of human approval, and these models can be improved and applied. An AI system that at least attempts to serve humans can consult a model of human approval and conclude that neither maximizing paperclips nor happiness-through-coercive-neurosurgery is an acceptable goal. (Note that most AI nightmares involve felonies. Consulting the law seems useful.[[21]](#fn7kduojfz06o))*

### **23. AI development systems can support effective human guidance**

AI development systems can effectively support human guidance by leveraging strong natural language understanding, models of human preferences, learning from observation, large-scale experience aggregation, human advice, and AI-enabled monitoring of AI systems and their effects.

*This is generally aligned with what we see today.*

### **24. Human oversight need not impede fast, recursive AI technology improvement**

Human oversight could coexist with fast, recursive AI technology improvement, as outcome-relevant guidance and safety monitoring can occur outside core development loops. It is important to distinguish between technologies and their applications, and to recognize different modes of human involvement (participation, guidance, monitoring). Reducing in-the-loop participation need not compromise the safety of basic research, but automating world-oriented application development presents different challenges and risks.

*This discussion considers scenarios with strong, asymptotically-recursive improvement in basic research, which has not (yet) occurred.*

### **25. Optimized advice need not be optimized to induce its acceptance**

Optimizing AI advice for acceptance motivates manipulation of clients’ decisions, while optimizing for anticipated results *contingent on acceptance* could avoid this incentive. Advisory systems can propose options with differing costs, benefits, and risks to enable clients to make informed decisions regarding consequential actions. However, competitive pressures may still favor systems that produce perversely appealing messages.

*This discussion considers scenarios that have not (yet) occurred. Note the potential value of diverse, non-colluding advisors.*

### ***26–37. Omitted sections***

*I’ve skipped sections 26–37 here. They include discussions of topics that are either outside the intended focus of this article or too broad and forward-looking to summarize.*[[1]](#fn85plkbsx4qr)

### **38. Broadly-capable systems coordinate narrower systems**

In both human and AI contexts, superhuman competencies arise from structured organizations in which coordinated components provide differentiated knowledge and skills. Implementing diverse, complex, inherently differentiated tasks in a black-box system would recreate the necessary task structures while making them opaque. Recognizing the importance of specialization and task delegation is key to understanding the architecture of practical systems with wide-ranging capabilities.

*While LLMs and foundation models provide diverse capabilities in unitary systems, their practical applications align with the argument as LLM toolchains and heterogeneous models proliferate. This discussion aligns with the “role architecture”[[22]](#fny6ze7c50p9) and “open agency”[[13]](#fntlin7ipxc8s) models for transparent yet highly capable systems.*

### **39. Tiling task-space with AI services can provide general AI capabilities**

Tiling task-space with AI services can provide access to general AI capabilities through joint embeddings of vector representations that map tasks to services. This approach could enable systems to coordinate expertise provided by narrow AI components to provide broad, integrated, and extensible competencies.

*This discussion considers a relatively “flat”, dynamic organization of systems. The open-agency model[[13]](#fntlin7ipxc8s) considers flexible yet relatively stable patterns of delegation that more closely correspond to current developments.*
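A minimal sketch of the joint-embedding idea (illustrative assumptions throughout: the toy bag-of-words encoder stands in for a learned model, and the service registry is invented): tasks and service descriptions are mapped into one vector space, and each request is dispatched to the nearest service.

```python
import numpy as np

# Toy shared vocabulary; a real system would use a learned text encoder
# mapping tasks and service descriptions into a joint vector space.
VOCAB = sorted("translate text language french protein structure "
               "sequence plan goal steps".split())

def embed(text: str) -> np.ndarray:
    words = set(text.lower().split())
    v = np.array([float(w in words) for w in VOCAB])
    n = np.linalg.norm(v)
    return v / n if n else v

# Each service is registered by embedding a description of its task region.
services = {
    "translator": embed("translate text language"),
    "protein-folder": embed("protein structure sequence"),
    "planner": embed("plan goal steps"),
}

def route(task: str) -> str:
    """Dispatch a request to the service nearest in embedding space,
    'tiling' task-space with narrow components."""
    q = embed(task)
    return max(services, key=lambda name: float(q @ services[name]))

print(route("translate this text into the french language"))  # translator
```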
### **40. Could 1 PFLOP/s systems exceed the basic functional capacity of the human brain?**

1 PFLOP/s systems are likely to exceed the inference capacity of the human brain: they appear able to surpass brain-equivalent capacity on a range of narrow yet brain-comparable tasks such as vision, speech recognition, and language translation. Estimates take account of task-inequivalence, fractional use of cortical resources, and large uncertainty ranges in the key parameters. Considerations include:

1. Inequivalent yet *comparable* qualitative performance on multiple narrow tasks.
2. Moderate resources enabling superhuman inference speed on those tasks.
3. Training costs that can be amortized over many trained systems.

Overall, *assuming suitably capable, well-optimized models*, it is reasonable to expect affordable systems to perform human-level tasks at superhuman speeds.[[23]](#fnd0xqltk8pud)

*LLM performance strengthens this conclusion by extending the comparison to higher-level, more obviously “cognitive” tasks.*

**Some expectations**
---------------------

It is clear that we will see the continued expansion of LLM services, together with an expanding range of other AI/ML services (protein structure prediction and design, image generation, robot control, etc.). These services will be implemented through diverse neural network architectures and training methods that include both sequence prediction and other, quite different approaches. It seems very likely that state-of-the-art LLMs for general applications will employ multiple models even for language-centered tasks. The most versatile services will be capable of interpreting human intentions and coordinating the activities of other models. These human-facing services will evolve toward acting as functional equivalents of aligned, general agents, but their architectures and development processes will provide better affordances for AI-assisted direction, monitoring, upgrades, and control. Their diffuse, bounded, incremental goal structures will result in softer failure modes.

Technologies that could be applied to the development of general, unitary agents will continue to facilitate the development of focused service applications. Research motivated by concerns with unitary-agent alignment will continue to facilitate the development of bounded models that are helpful and harmless. Meanwhile, the proliferation of powerful, specialized services will facilitate the development of autonomously harmful or harmfully applied AI. I expect actions by misaligned humans — whether through irresponsibility, power-seeking, or malice — to be the dominant threat.

1. **[^](#fnref85plkbsx4qr)**Drexler, KE: “[**Reframing Superintelligence: Comprehensive AI Services as General Intelligence**](https://www.fhi.ox.ac.uk/wp-content/uploads/Reframing_Superintelligence_FHI-TR-2019-1.1-1.pdf)” Technical Report #2019-1, Future of Humanity Institute (2019).
2. **[^](#fnref59ywvemxb5i)**First-draft summaries were graciously contributed by ChatGPT-4. (The GPT-4 base model offered a few refinements while it was in a lucid and cooperative mood.)
3. **[^](#fnrefe233957gf46)**Please keep in mind that “Reframing Superintelligence” was written at FHI in an office next door to Nick Bostrom’s (*Superintelligence: Paths, Dangers, Strategies,* Oxford University Press (2014)).
4. **[^](#fnrefj1htc97ucir)**“CCNet: Extracting High Quality Monolingual Datasets from Web Crawl Data” (2019) <https://arxiv.org/abs/1911.00359>; “Data Selection for Language Models via Importance Resampling” (2023) <https://arxiv.org/abs/2302.03169>
5. **[^](#fnrefabq4q6iuis7)**“Training language models to follow instructions with human feedback” (2022) <https://arxiv.org/abs/2203.02155>
6. **[^](#fnreft97f6mlsxh)**“Principle-Driven Self-Alignment of Language Models from Scratch with Minimal Human Supervision” (2023) <https://arxiv.org/abs/2305.03047>
7. **[^](#fnref2pbxenluegh)**“Dialog Inpainting: Turning Documents into Dialogs” (2022) <https://arxiv.org/abs/2205.09073>
8. **[^](#fnrefjzyr9vcn688)**“Unnatural instructions: Tuning language models with (almost) no human labor” (2022) <https://arxiv.org/abs/2212.09689>
9. **[^](#fnref98b3elgpclv)**“Dense Paraphrasing for Textual Enrichment” (2022) <https://arxiv.org/abs/2210.11563>
10. **[^](#fnrefd28ivyvhfte)**“Distilling Step-by-Step! Outperforming Larger Language Models with Less Training Data and Smaller Model Sizes” (2023) <https://arxiv.org/abs/2305.02301>
11. **[^](#fnrefvr75x0hpz2)**“Orca: Progressive Learning from Complex Explanation Traces of GPT-4” (2023) <https://arxiv.org/abs/2306.02707>; “Textbooks Are All You Need” (2023) <https://arxiv.org/abs/2306.11644>
12. **[^](#fnrefb5rg4964jdd)***“Distilling step-by-step outperforms LLMs by using much smaller task-specific models”*: “Distilling Step-by-Step! Outperforming Larger Language Models with Less Training Data and Smaller Model Sizes” (2023) <https://arxiv.org/abs/2305.02301>
13. **[^](#fnreftlin7ipxc8s)**Drexler, KE: “[**The Open Agency Model**](https://www.alignmentforum.org/posts/5hApNw5f7uG8RXxGS/the-open-agency-model)”, AI Alignment Forum (February 2023)
14. **[^](#fnref718t24tvi8a)**The GPT-4 base model is artificial and demonstrates intelligence, but it is not “an AI” in the sense of being an intelligent entity. In my experience, it is more likely to model the content of an internet message board than the behavior of a person. Unlike ChatGPT-4, the base model has no preferred or stable persona.
15. **[^](#fnrefaasowc4l5o8)**“Principle-Driven Self-Alignment of Language Models from Scratch with Minimal Human Supervision” (2023) <https://arxiv.org/abs/2305.03047>
16. **[^](#fnref7n3d7m4cnn7)**“Sparks of Artificial General Intelligence: Early experiments with GPT-4” (2023) <https://arxiv.org/abs/2303.12712>
17. **[^](#fnrefc85q2mgdkg)**“Modular Deep Learning” (2023) <https://arxiv.org/abs/2302.11529>
18. **[^](#fnrefb0ytzxvld0d)**“FrugalGPT: How to Use Large Language Models While Reducing Cost and Improving Performance” (2023) <https://arxiv.org/abs/2305.05176>
19. **[^](#fnrefyppoyggbyc)**Janus, “[Simulators](https://www.alignmentforum.org/posts/vJFdjigzmcXMhNTsx/simulators)”, AI Alignment Forum (September 2022)
20. **[^](#fnrefxigzjasq1u)**Eliezer Yudkowsky rejects this.
21. **[^](#fnref7kduojfz06o)**Nay, JJ: “[AGI misalignment x-risk may be lower due to an overlooked goal specification technology](https://forum.effectivealtruism.org/posts/9YLbtehKLT4ByLvos/agi-misalignment-x-risk-may-be-lower-due-to-an-overlooked)”, (October 2022)
22. **[^](#fnrefy6ze7c50p9)**Drexler, KE: “[**Role Architectures: Applying LLMs to consequential tasks**](https://www.alignmentforum.org/posts/AKaf8zN2neXQEvLit/role-architectures-applying-llms-to-consequential-tasks)”, AI Alignment Forum (March 2023)
23. **[^](#fnrefd0xqltk8pud)**The proposed methodology bundles fuzzy comparisons into a single parameter and invites alternative estimates. The conclusion nonetheless seems robust.
4240a7db-fbdb-4def-a120-9893c09f676a
trentmkelly/LessWrong-43k
LessWrong
Fundamental Uncertainty: Chapter 1 - How can we know what's true? N.B. This is a chapter in a book about truth and knowledge. It is the first draft. I have since revised it. You can find the most up-to-date info/version on the book's website. It's 1 a.m. You're arguing with a stranger on the internet about the meaning of the word "is". How did you get here? The night started out innocently enough. You were settling down for the evening by checking your favorite internet forum. You read an interesting post that you agreed with, but it contained a minor error. You posted a comment offering a correction. You were about to sign off when you got a notification that the author of the post had replied. They said, no, you were wrong and an idiot. Normally you'd let it slide, but tonight you're not having it. You fired off a confrontational reply; they sent one back in kind; you escalated in your response; they did the same. After a couple hours you'd exchanged thousands of increasingly heated words that took you away from the original point and into a battle over worldviews. And at this late hour, bleary-eyed from staring at the screen for so long, you hit upon the fundamental question separating you: "How do you know that's true?" Turns out it's not so easy a question to answer. You say your claims are obvious; they disagree. You cite articles that justify your points; they say those aren't trustworthy sources. You ask them to offer alternative evidence; they provide "evidence" that is little more than opinion. Pretty soon you're debating what truth really means. You're both convinced you're right, but you can't come to any agreement. Congratulations! You've run headlong into epistemology—the study of knowledge. Epistemology is the branch of philosophy concerned with how we figure out what's true. We use it to determine what arguments are correct, what evidence to trust, and where we meet the limits of our knowledge. It's also something we use all the time whether we realize it or not. Even if we don't have to think about it often,
720de879-0984-4760-8c6d-9221114a9025
trentmkelly/LessWrong-43k
LessWrong
Should we go all in on existential risk? - Considering Effective Altruism Apparently, at a recent EA summit Robin Hanson berated the attendees for giving to more than one charity. I think his critique is salient: given our human scope insensitivity, giving all your charity-money to one cause feels like helping with only *one* thing, even if that one organization does vastly more good, much more efficiently, than any other group, and so every dollar given to that organization does more good than anything else that could be done with that dollar. More rational and more effective is to find the most efficient charity and give only to that charity, until it has achieved its goal so completely that it is no longer the most efficient charity. That said, I feel that there are at least some circumstances under which it is appropriate to divide one's charity dollars: those that include risky investments. If a positive singularity were to occur, the impact would be enormous: it would swamp any other good that I could conceivably do. Yet, I don't know how likely a positive singularity is; it seems to be a long shot. Furthermore, I don't know how much my charity dollars affect the probability one way or another. It may be that a p-singularity will either happen or it won't, and there's not much I can do about it. There's a huge pay-off but high uncertainty. In contrast, I could (for instance) buy mosquito nets for third-world countries, which has a lower, but much more certain pay-off. Some people are more risk-seeking than others, and it seems to be a matter of preference whether one takes risky bets or more certain ones. However, there are "irrational" answers, since one can calculate the expected pay-off of a gambit by mere multiplication. It is true that it is imprudent to bet one's life savings on an unlikely chance of unimaginable wealth, but this is because of quirks of human utility calculation: losses are more painful than gains are enjoyable, and there is a law of diminishing marginal returns in play (to most of us, a gift of a billio
656aac3e-b606-4763-a068-8a3011ce65e4
trentmkelly/LessWrong-43k
LessWrong
What's the longest a sentient observer could survive in the Dark Era? Periodically I look at this graph, and I'm like, "holy shit, that is so much Nothing. Wut. Like, they converted it to log-scale so it would fit into my brain and it's still so much Nothing, Jezus Christ." Eliezer's Dath Ilan verse has this song for people who died before the invention of cryonics: > Even if the stars should die in heaven > Our sins can never be undone > No single death will be forgiven > When fades at last the last lit sun. > Then in the cold and silent black > As light and matter end > We’ll have ourselves a last look back > And toast an absent friend. And I find myself wondering "What are the physical limits on 'some sentient computation surviving into the dark era'?" Assuming we don't learn anything new about physics, or acausal-trade our way into another universe or whatever, what's the farthest to the right on this timeline you could possibly get, if your goal was to have some computation that, as late as possible, looks around at the universe one last time, comprehends it, and says "yep, still full of infinite blackness. Goodbye, world." Vaguely relevant things I'm vaguely aware of include: 1. Something something you can somehow harvest energy from black holes. Does this plausibly get us to the end of the Black Hole Era or is that energy eventually too weaksauce? 2. Space is, like, really cold (especially when All Has Become Darkness), so any time-capsule robot would face an uphill battle not radiating its energy into the void. 3. On long enough timescales, protons (probably?) decay, and... like, your robot would eventually... just... disintegrate? If anyone knows enough physics to satisfy my random curiosity, uh, I'd appreciate it.
bbcef6fb-ff88-481e-8971-213825b5b566
trentmkelly/LessWrong-43k
LessWrong
Memory Decoding Journal Club: Synaptic architecture of a memory engram in the mouse hippocampus Join Us for the Memory Decoding Journal Club!  A collaboration of the Carboncopies Foundation and BPF Aspirational Neuroscience This time, we’re diving into a groundbreaking paper: "Synaptic architecture of a memory engram in the mouse hippocampus." Authors: Marco Uytiepo, Yongchuan Zhu, Eric Bushong, Filip Polli, Katherine Chou, Elise Zha, Christine Kim, Danielle Luu, Lyanne Chang, Tom Quach, Matthias Haberl, Luca Patapoutian, Elizabeth Beutter, Weiheng Zhang, Brian Dong, Elle McCue, Mark Ellisman, and Anton Maximov  Institutions: Department of Neuroscience, The Dorris Neuroscience Center, The Kellogg School of Science and Technology, The Scripps Research Institute, National Center for Microscopy and Imaging Research, University of California, San Diego, Department of Neurosciences, University of California, San Diego, School of Medicine Presented by: Dr. Randal Koene When? May 20th, 2025 – 3:00 PM PDT | 6:00 PM EDT | 10:00 PM UTC Where? Video conference: https://carboncopies.org/aspirational-neuroscience Register for updates: https://aspirationalneuroscience.org/register-with-us/ Once registered, you'll receive event invites & updates! #Neuroscience #MemoryResearch #Amygdala #JournalClub #BrainScience #Carboncopies #AspirationalNeuroscience
f00a5807-a9d4-444d-bcc3-e88ef9bc972b
trentmkelly/LessWrong-43k
LessWrong
Agreeing to Disagree: towards a better understanding of agreeableness Note: this is from my personal blog here. My sister (hi Lindsay) and I are very different people. Give Lindsay an hour with 15 people, and she will introduce you to her 15 new best friends. Give me an hour with 15 people, and I might remember one of their names. Need a mediator for some sort of drama? She’ll have everyone hugging (even over Zoom) after a heartfelt and compassionate discussion, whereas I’m more likely to take sides with whoever seems to be in the right. In psychological terms, these differences boil down to two of the Big Five personality traits: extraversion and agreeableness. On a scale from 1-10, Lindsay is probably a 10 on extraversion and an 8 on agreeableness. I’ll plead the fifth instead of rating myself on both (and I don’t think I’m a mean person, honest!), but I probably do fall a bit lower than both of those. What is agreeableness? As shown in the picture above, agreeableness has six “facets” or features, each of which has a section in the Wikipedia page: 1. Trust 2. Straight-forwardness 3. Altruism 4. Compliance 5. Modesty 6. Tender-mindedness, or “the extent to which an individual's judgments and attitudes are determined by emotion” A missing definition Surprisingly, I can’t seem to find a unique definition of agreeableness as used in psychology. Here are some quotes I came across: 1. The picture above defines it as “the degree to which a person needs pleasant and harmonious relations with others.” 2. “A person's ability to put other people's needs above their own. For instance, people who are high in agreeableness naturally experience empathy and tend to get tremendous pleasure from serving others and taking care of them.” (Source) 3. “Agreeableness is a personality trait manifesting itself in individual behavioral characteristics that are perceived as kind, sympathetic, cooperative, warm, and considerate.”(Source) 4. “Specifically, Agreeableness appears to describe differences in being predominantly prosocial or other
81c8f722-8d2e-471b-b892-0815cfcc17f3
trentmkelly/LessWrong-43k
LessWrong
Meetup : Moscow LW meetup in "Nauchka" library Discussion article for the meetup : Moscow LW meetup in "Nauchka" library WHEN: 03 February 2017 08:00:00PM (+0300) WHERE: Moscow, ul. Dubininskaya, 20 Welcome to the next Moscow LW meetup in "Nauchka" library! Our plan: * A talk about learning from mistakes. * Fallacymania game. * Tower of Chaos game. Details about Fallacymania and Tower of Chaos and game materials can be found here: http://lesswrong.com/lw/oco/custom_games_that_involve_skills_related_to/ Meetup schedule is here: https://lesswrong-ru.hackpad.com/-3-2017-vUDiyhwzCs6 Come to "Nauchka", ul. Dubininskaya, 20. Entrance through the Central Children's Library #14. Nearest metro station is Paveletskaya. Map is here: http://nauchka.ru/contacts/ . If you are lost, call Sasha at +7-905-527-30-82. Meetup begins at 20:00 and lasts 2 hours. Discussion article for the meetup : Moscow LW meetup in "Nauchka" library
cf8523fa-f1d4-4952-9114-c1b7aed7ef02
trentmkelly/LessWrong-43k
LessWrong
Meetup : Melbourne social meetup Discussion article for the meetup : Melbourne social meetup WHEN: 18 May 2012 07:00:00PM (+1000) WHERE: see mailing list, Carlton VIC 3053 All are welcome (especially new people!) this Friday 18 May from 7pm at my place for our monthly social meetup. Join our google group for the address. If you have any trouble getting in, you can call me on 0412 996 288. BYO drinks and games. We'll get some kind of take-away for dinner. (Sorry to not post earlier: we had some miscommunication about who would make the post.) Discussion article for the meetup : Melbourne social meetup
c6608f12-2f1b-49a7-b1ed-2f7629407b16
StampyAI/alignment-research-dataset/alignmentforum
Alignment Forum
[AN #154]: What economic growth theory has to say about transformative AI Alignment Newsletter is a weekly publication with recent content relevant to AI alignment around the world. Find all Alignment Newsletter **[resources here](http://rohinshah.com/alignment-newsletter/)**. In particular, you can look through **[this spreadsheet](https://docs.google.com/spreadsheets/d/1PwWbWZ6FPqAgZWOoOcXM8N_tUCuxpEyMbN1NYYC02aM/edit?usp=sharing)** of all summaries that have ever been in the newsletter. Audio version **[here](http://alignment-newsletter.libsyn.com/alignment-newsletter-154)** (may not be up yet). Please note that while I work at DeepMind, this newsletter represents my personal views and not those of my employer. HIGHLIGHTS =========== **[Could Advanced AI Drive Explosive Economic Growth?](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth)** *(Tom Davidson)* (summarized by Rohin): **[Some](https://www.alignmentforum.org/posts/KrJfoZzpSDpnrv9va/draft-report-on-ai-timelines)** (**[AN #121](https://mailchi.mp/41774b61e5f8/an-121forecasting-transformative-ai-timelines-using-biological-anchors)**) **[previous](https://www.openphilanthropy.org/blog/modeling-human-trajectory)** (**[AN #105](https://mailchi.mp/be2a0d160fa2/an-105-the-economic-trajectory-of-humanity-and-what-we-might-mean-by-optimization)**) **[work](https://www.openphilanthropy.org/blog/report-semi-informative-priors)** (**[AN #145](https://mailchi.mp/7131b053140f/an-145-our-three-year-anniversary)**) has suggested that by 2100 there is a non-trivial chance that AI could lead to *explosive growth*, that is, a growth rate of 30% (i.e. a doubling time of 2-3 years), 10x the current growth rate of ~3%. What does economics have to say about the matter? This report investigates the following three stories: 1. **Ignorance story:** In this story, we don’t know how growth is determined, and attempts to forecast it based on models of how growth works are likely to be wrong. Note that this is perfectly compatible with explosive growth. We know that the growth rate has increased by orders of magnitude over the past millennia; so on an ignorance story we certainly shouldn’t rule out that the growth rate could increase by an order of magnitude again. 2. **Standard story:** This story focuses on the last ~century of growth, noting that the growth rate has stayed relatively constant at 2-3% per year, and thus predicting that future growth will be exponential (i.e. a constant growth rate), or possibly subexponential. 3. **Explosive story:** This story focuses on growth models with positive feedback loops, in which increased output leads to increased inputs which leads to even larger outputs, resulting in superexponential (and explosive) growth. The author is interested in whether explosive growth is *plausible*, and so is most interested in arguments that argue for the standard story and against the ignorance or explosive stories, or vice versa. The main empirical facts we have are that the growth rate increased (maybe continuously, maybe not, it’s hard to tell) until about a century ago, when it plateaued at the current level of 2-3%. What can we then learn from economic growth models? 1. Ideas-based models of economic growth suggest that growth in output is driven primarily by the rate at which we get ideas (leading to technological improvement), which in turn is driven by population size, which in turn is driven by output (completing the positive feedback cycle). 
This predicts increases in the growth rate *as long as* population growth rate is increasing. A century ago, we underwent the “demographic transition” where, as we produced more output, instead of having more kids we became richer, breaking the positive feedback loop and preventing population size from growing. This fits our empirical facts well, and if we now assume that AI can also generate ideas, then the feedback loop is reestablished and we should expect explosive growth. 2. Economists have tried to find growth models that robustly predict exponential growth alongside a slowly growing population, but have mostly not found such models, suggesting that our current exponential growth might be an anomaly that will eventually change. The best explanations of exponential growth imply that future growth will be subexponential given that population growth is predicted to slow down. 3. Most economic growth models, including the ones in the previous point, predict explosive growth if you add in an assumption that AI systems can replace human workers. Thus, it seems that economic growth theory suggests that explosive growth is probable, *conditional* on the assumption that we develop AI systems that can replace arbitrary human workers. You could object to these arguments on several grounds. The ones that the author finds partially convincing are: 1. We don’t see any trends of explosive growth right now -- this suggests that we at least won’t see explosive growth in the next couple of decades (though it’s harder to make claims all the way out to 2100). 2. If there are a few key “bottleneck” tasks that (a) are crucial for growth and (b) can’t be automated by AI, then those tasks may limit growth. 3. There may be physical limits on growth that we haven’t yet encountered: for example, growth may be bottlenecked on running experiments in the real world, extracting and transporting raw materials, delays for humans to adjust to new technology, etc. Another objection is that ideas are getting harder to find, which would surely prevent explosive growth. The author is *not* convinced by this objection, because the growth models predicting explosive growth already take this into account, and still predict explosive growth. (Roughly, the superexponential increase in the inputs “overpowers” the exponential increase in the difficulty of finding good ideas.)
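To see why the reestablished feedback loop yields superexponential rather than merely exponential growth, here is a toy simulation (my illustrative sketch with invented parameters, not a model from the report): letting research input scale with output, as it would if AI systems generate ideas, accelerates growth even while ideas get harder to find.

```python
# Toy semi-endogenous growth model (illustrative parameters only).
# Fractional growth of technology A is driven by research input R,
# with diminishing returns to ideas (phi < 1: ideas get harder to find).
def growth_rates(years=80, feedback=True, phi=0.6):
    A, R, rates = 1.0, 1.0, []
    for _ in range(years):
        g = 0.02 * R * A ** (phi - 1.0)  # this year's growth rate
        rates.append(g)
        A *= 1.0 + g
        if feedback:
            R = A  # AI generates ideas: research input scales with output
        # else: R stays fixed, as after the demographic transition
    return rates

fixed, loop = growth_rates(feedback=False), growth_rates(feedback=True)
print(f"no feedback:   {fixed[0]:.1%} -> {fixed[-1]:.1%} (declining growth rate)")
print(f"with feedback: {loop[0]:.1%} -> {loop[-1]:.1%} (accelerating growth rate)")
```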
**Read more:** **[Blog post](https://www.openphilanthropy.org/blog/report-advanced-ai-drive-explosive-economic-growth)** **Rohin's opinion:** I find the economic take on AI to be particularly interesting because it makes the “automation” frame on AI the default one, as opposed to the “superintelligent goal-directed agent” frame that we often work with in AI alignment. The critical assumption needed in this automation frame is that AI systems can automate ~every task that a human worker could do. This is what enables the positive feedback loop to work (which is the automation version of recursive self-improvement). I generally prefer the automation frame for thinking about and predicting how AI systems are integrated into the world, while preferring the agent frame for thinking about how AI systems might cause *alignment* problems (i.e. ignoring **[misuse and structural risks](https://www.lawfareblog.com/thinking-about-risks-ai-accidents-misuse-and-structure)** (**[AN #46](https://mailchi.mp/c48f996a5db5/alignment-newsletter-46)**)). Many of my disagreements with **[CAIS](https://www.fhi.ox.ac.uk/reframing/)** (**[AN #40](https://mailchi.mp/b649f32b07da/alignment-newsletter-40)**) feel like cases where I think it is appropriate to use the agent frame rather than the automation frame. I would classify several newer alignment **[risk](https://www.alignmentforum.org/posts/HBxe6wdjxK239zajf/what-failure-looks-like)** (**[AN #50](https://mailchi.mp/93fe1a0a92da/alignment-newsletter-50)**) **[stories](https://www.alignmentforum.org/posts/AyNHoTWWAJ5eb99ji/another-outer-alignment-failure-story)** (**[AN #146](https://mailchi.mp/2d838dcbd3b0/an-146plausible-stories-of-how-we-might-fail-to-avert-an-existential-catastrophe)**) as taking the same agent-based *cause* of alignment failure as in (say) Superintelligence, but then telling a story in which the *deployment* of the misaligned AI system is automation-based. I think it is generally worth spending some time meditating on the growth models explored in this post, and what implications they would have for AI development (and thus for AI alignment). For example, some models emphasize that there are many different tasks and suggest (not conclusively) that we’ll have different AI systems for different tasks. In such a world, it doesn’t seem very useful to focus on teaching AI systems about humanity’s true values, as they are going to be asked to do particular tasks that are pretty divorced from these “true values”. Note that I am not an economist. This means that there’s a higher chance than usual that I’ve accidentally inserted an erroneous claim into this summary and opinion. It is also the reason why I don’t usually summarize econ papers that are relevant to AI -- I’ve summarized this one because it’s explained at a level that I can understand. If you’re interested in this area, other papers include **[Economic Growth Given Machine Intelligence](http://mason.gmu.edu/~rhanson/aigrow.pdf)** and **[Economic growth under transformative AI](https://globalprioritiesinstitute.org/philip-trammell-and-anton-korinek-economic-growth-under-transformative-ai/)**. TECHNICAL AI ALIGNMENT ======================= LEARNING HUMAN INTENT ---------------------- **[AXRP Episode 8 - Assistance Games](https://www.alignmentforum.org/posts/fzFyCJ6gB9kBL9RqW/axrp-episode-8-assistance-games-with-dylan-hadfield-menell)** *(Daniel Filan and Dylan Hadfield-Menell)* (summarized by Rohin): As with most other podcasts, I will primarily link you to my past summaries of the papers discussed in the episode. In this case they were all discussed in the special issue **[AN #69](https://mailchi.mp/59ddebcb3b9a/an-69-stuart-russells-new-book-on-why-we-need-to-replace-the-standard-model-of-ai)** on Human Compatible and the various papers relevant to it. Some points that I haven’t previously summarized: 1. The interviewee thinks of assistance games as an *analytical tool* that allows us to study the process by which humans convey normative information (such as goals) to an AI system. Normally, the math we write down takes the objective as given, whereas an assistance game uses math that assumes there is a human with a communication channel to the AI system. We can thus talk mathematically about how the human communicates with the AI system. 2. This then allows us to talk about issues that might arise. 
For example, **[assistive bandits](https://arxiv.org/abs/1901.08654)** (**[AN #70](https://mailchi.mp/732eaa192df0/an-70-agents-that-help-humans-who-are-still-learning-about-their-own-preferences)**) considers the fact that humans might be learning over time (rather than starting out as optimal). 3. By using assistance games, we build the expectation that our AI systems will have ongoing oversight and adaptation directly into the math, which seems significantly better than doing this on an ad hoc basis (as is currently the case). This should help both near-term and long-term systems. 4. One core question is how we can specify a communication mechanism that is robust to misspecification. We can operationalize this as: if your AI system is missing some relevant features about the world, how bad could outcomes be? For example, it seems like demonstrating what you want (i.e. imitation learning) is more robust than directly saying what the goal is. 5. One piece of advice for deep learning practitioners is to think about where the normative information for your AI system is coming from, and whether it is sufficient to convey what you want. For example, large language models have trillions of parameters, but only hundreds of decisions inform the choice of what data to train them on -- is that enough? The language we train on has lots of normative content -- does that compensate? 6. Dylan says: “if you’re interested in doing this type of work and you thought this conversation was fun and you’d like to have more conversations like it with me, I’ll invite you to **[apply to MIT’s EECS PhD program](https://gradapply.mit.edu/eecs/apply/login/?next=/eecs/)** next year and mention me in your application.” **Rohin's opinion:** I’m a big fan of thinking about how normative information is transferred from us to our agents -- I frequently ask myself questions like “how does the agent get the information to know X”, where X is something normative like “wireheading is bad”. In the case of large neural nets, I generally like assistance games as an analysis tool for thinking about how such AI systems should behave at deployment time, for the reasons outlined in the podcast. It’s less clear what the framework has to say about what should be done about training time, when we don’t expect to have a human in the loop (or we expect that to be a relative minority of our training data). To be clear, this should be taken as an endorsement of thinking about assistance games: my point is just that (according to me) it is best to think of them in relation to deployment, not training. A framework doesn’t have to apply to everything in order to be well worth thinking about. FORECASTING ------------ **[Parameter counts in Machine Learning](https://www.alignmentforum.org/posts/GzoWcYibWYwJva8aL/parameter-counts-in-machine-learning)** *(Jaime Sevilla et al)* (summarized by Rohin): This post presents a dataset of the parameter counts of 139 ML models from 1952 to 2021. The resulting graph is fairly noisy and hard to interpret, but suggests that: 1. There was no discontinuity in model size in 2012 (the year that AlexNet was published, generally acknowledged as the start of the deep learning revolution). 2. There was a discontinuity in model size for language in particular some time between 2016-18. **Rohin's opinion:** You can see my thoughts on the trends in model size in **[this comment](https://www.alignmentforum.org/posts/GzoWcYibWYwJva8aL/parameter-counts-in-machine-learning?commentId=sFFBeva2fDgsoynDC)**. 
**[Deep limitations? Examining expert disagreement over deep learning](https://link.springer.com/article/10.1007/s13748-021-00239-1)** *(Carla Zoe Cremer)* (summarized by Rohin): This paper reports on the results of a qualitative survey of 25 experts, conducted in 2019 and early 2020, on the possibility of deep learning leading to high-level machine intelligence (HLMI), defined here as an “algorithmic system that performs like average adults on cognitive tests that evaluate the cognitive abilities required to perform economically relevant tasks”. Experts disagreed strongly on whether deep learning could lead to HLMI. Optimists tended to focus on the importance of scale, while pessimists tended to emphasize the need for additional insights. Based on the interviews, the paper gives a list of 40 limitations of deep learning that some expert pointed to, and a more specific list of five areas that both optimists and pessimists pointed to as in support of their views (and thus would likely be promising areas to resolve disagreements). The five areas are (1) abstraction; (2) generalization; (3) explanatory, causal models; (4) emergence of planning; and (5) intervention. AI GOVERNANCE ============== **[Truth, Lies, and Automation: How Language Models Could Change Disinformation](https://cset.georgetown.edu/publication/truth-lies-and-automation/)** *(Ben Buchanan et al)* (summarized by Rohin): Ever since the publication of **[GPT-2](https://blog.openai.com/better-language-models/)** (**[AN #46](https://mailchi.mp/c48f996a5db5/alignment-newsletter-46)**), the research community has worried about the use of such language models for disinformation campaigns. Disinformation campaigns have happened before: Russia produced thousands of pieces of such content leading up to the 2016 US presidential election. That campaign used large numbers of human workers. Could a future campaign become significantly more effective through the use of large language models? This report notes that for this threat model, it is primarily worrying if GPT-3 can be used to enable significantly *better* results, because the monetary cost of hiring humans is not typically a bottleneck for major actors. While GPT-3 by itself is not likely to achieve this, perhaps it can serve as an effective tool for humans, such that the human-machine team can get better results than either one individually. The authors perform several tests of their own to establish a lower bound on how well human-machine teams can perform currently. They investigate six types of disinformation tasks and find that either GPT-3 can do them easily, or only some human effort is needed to get results that are perceived as high quality by humans, suggesting that this could be a real risk. Unfortunately, it is hard to tell what aspects are *actually* important for successful disinformation, and this was not something they could ethically check, so it is hard to draw confident conclusions from the study about whether GPT-3 would be useful for disinformation campaigns in practice. (Although their one study on Mechanical Turk did find that GPT-3-generated arguments on international issues like sanctions on China were found to be persuasive and led to significant changes in the proportion of people with the given position.) 
One particularly worrying aspect is that the authors found it easier to get GPT-3 to generate extremist content because providing an extremist headline makes it easy to “locate” the appropriate tone and style, whereas with a more moderate headline, GPT-3 might not correctly infer the desired tone or style because the moderate headline could be consistent with lots of tones and styles. **Rohin's opinion:** The most interesting part of this report for me was the example outputs that the authors gave in the report, which showcase how GPT-3 can be used to “argue” in support of or against a variety of topics, in a manner meant to be persuasive to a specific group of people (for example, arguing to Jews that they should vote Democratic / vote Republican / not vote). (I put “argue” in quotation marks because the outputs hardly feel like what I would call “arguments” for a position, instead simply appealing to something the group agrees with and stating with barely any argument / evidence that this implies the position to be argued for. However, I also have the same critique of most “arguments” that I see on Twitter -- I don’t think I could distinguish the GPT-3-generated arguments from real human tweets.) OTHER PROGRESS IN AI ===================== DEEP LEARNING -------------- **[Beijing Academy of Artificial Intelligence announces 1,75 trillion parameters model, Wu Dao 2.0](https://www.lesswrong.com/posts/2dG7vXDZjd6crkdLa/beijing-academy-of-artificial-intelligence-announces-1-75)** (summarized by Rohin): There’s a good chance you’ve heard of the new Wu Dao 2.0 language model, with over 1 trillion parameters. Unfortunately, as far as I know there is no technical writeup describing this model, so I’m going to refrain from commenting on it. You can see other people’s takes in the linked LessWrong post, on **[ChinAI](https://chinai.substack.com/p/chinai-145-enlightenment-via-large)**, and on **[policy.ai](https://cset.georgetown.edu/newsletter/june-10-2021/)**. #### **FEEDBACK** I'm always happy to hear feedback; you can send it to me, **[Rohin Shah](https://rohinshah.com/)**, by **replying to this email**. #### **PODCAST** An audio podcast version of the **Alignment Newsletter** is available. This podcast is an audio version of the newsletter, recorded by **[Robert Miles](http://robertskmiles.com/)**.
ec79cfff-71e4-4d2f-82bd-9a012b1eea8c
trentmkelly/LessWrong-43k
LessWrong
Tapping Out In Two I'm one of those people who feels personally called out by xkcd's "Duty Calls" ("someone is wrong on the internet"). (Not as much as I used to. At some point I stopped reading most of the subreddits that I would argue on, partly for this reason, and Hacker News, for unrelated reasons, and now I don't do it as much.) As pastimes go, there's nothing particularly wrong with internet arguments. But sometimes I get involved in one and I want to stop being involved in one and that's not easy. I could just, like, stop posting. Or I could let them say something, and then respond with something like, "yeah, I'm still not convinced, but I don't really have time to get into this any more". But then the other person wins, and that's terrible. It potentially looks to onlookers like I stopped because I couldn't find something to say any more. And in at least some cases, it would feel kind of rude: if they've put a lot of thought into their most recent post, it's a bit dismissive to just leave. In the past, when people have done that to me, I've disliked it, at least sometimes. (Well, at least once. I remember one specific occasion when I disliked it, and there may have been others.) Another thing I could do is respond to their most recent post, and then say "and now I'm done". But that feels rude, too, and I certainly haven't liked it when people have done it to me. (Why not? It puts me in the position where I don't know whether to respond. Responding feels petty, and kind of a waste of time; not responding feels like they win, and that's terrible.) If they do reply, then I'm shut off completely; I can't even make a minor clarification on the order of "no, you somehow interpreted me as saying exactly the opposite of what I actually said". So I don't like those tactics. There are probably mental shifts I could make that would make me more inclined towards them, but… about a year ago I came up with another tactic, which has seemed quite helpful. What I do now is say somethi
4a460944-af13-4786-874f-f5b6f4cf9f94
trentmkelly/LessWrong-43k
LessWrong
Assorted thoughts about abstraction Epistemic status: Exploratory + I'm not an expert here. Still, I'm reasonably confident that I'm onto something. Let's talk about levels of abstraction. I have an example that I think would be helpful. It comes from the first computer science lecture that I ever attended. I think the year was 2010. I was taking this class that was a precursor to CS 101. CS 101 was the normal intro class, but it assumed you had some amount of experience with programming, whereas the class that I took assumed nothing. Anyway, the professor asked us a question. He asked us how we would write out instructions for someone to brush their teeth. An answer would look something like this: 1. Put toothpaste on toothbrush 2. Brush teeth 3. Rinse mouth 4. Clean toothbrush Then he would say how "put toothpaste on toothbrush" isn't very specific. I mean, it is to a sufficiently smart human. But what if the person isn't that smart? You'd have to be more specific about what that instruction actually means. Maybe the more specific version looks something like this: 1. Take toothbrush out of drawer 2. Take toothpaste out of drawer 3. Rinse toothbrush 4. Put toothpaste on toothbrush But then imagine that the person you are giving these instructions to is really dumb. They don't know how to execute the instruction of "take toothbrush out of drawer". You have to be even more specific. Ugh, what a schlep. But let's try it. 1. Open drawer 2. Pick toothbrush up 3. Put toothbrush down on counter 4. Close drawer Perhaps you can see where this is going. Even these instructions can be made more specific. What does it mean to open a drawer? To pick a toothbrush up? You can dig deeper and break them down into even more specific instructions. This is sorta how computers work. They are dumb and need really, really, really specific instructions. You have to break it down further and further and further for them until they finally get it. But — and this is a crucial point — it is hard for us h
0a4ff950-0b20-47ac-8881-aabdbded8802
trentmkelly/LessWrong-43k
LessWrong
AI Forecasting Dictionary (Forecasting infrastructure, part 1) This post introduces the AI Forecasting Dictionary, an open-source set of standards and conventions for precisely interpreting AI and auxiliary terms. It is the first part in a series of blog posts which motivate and introduce pieces of infrastructure intended to improve our ability to forecast novel and uncertain domains like AI. The Dictionary is currently in beta, and we're launching early to get feedback from the community and quickly figure out how useful it is. Background and motivation 1) Operationalisation is an unsolved problem in forecasting A key challenge in (AI) forecasting is to write good questions. This is tricky because we want questions which both capture important uncertainties, and are sufficiently concrete that we can resolve them and award points to forecasters in hindsight. For example: > Will there be a slow take-off? is a question that's important yet too vague. > Will there be a 4-year doubling of world output before the first 1-year doubling of world output? is both important and concrete, yet sufficiently far-out that it's unclear if standard forecasting practices will be helpful in resolving it. > Will there be a Starcraft II agent by the end of 2020 which is at least as powerful as AlphaStar, yet uses <$10,000 of publicly available compute? is more amenable to standard forecasting practices, but at the cost of being only tangentially related to the high-level uncertainty we initially cared about. And so on. Currently, forecasting projects reinvent this wheel of operationalisation all the time. There are usually idiosyncratic and time-consuming processes of writing and testing questions (this might take many hours for a single question) [1], and best practices tend to evolve organically but without being systematically recorded and built upon [2]. 2) The future is big, and forecasting it might require answering a lot of questions This is an empirical claim in which we've become more confident by working in this space over the las
e8780875-3297-4e78-9efc-e0af3d5d4be7
trentmkelly/LessWrong-43k
LessWrong
Which biases would you like to keep? It probably wouldn't be an unalloyed good for me to overcome ALL the biases. I overestimate my future emotional reactions to winning or losing, and I'd like to keep that bias because it helps me win and not lose. I remember only good things about the past, and I'd like to keep that bias because it makes me smile. I underestimate my relative competence at things I'm genuinely good at, and I'd like to keep that bias because it encourages me to improve. Which biases would you prefer to keep even though you know you have them?