Columns:
id: string, 36 characters
source: string, 15 classes
formatted_source: string, 13 classes
text: string, 2 to 7.55M characters
304f9880-7f56-4bf9-882c-dcb216b95908
trentmkelly/LessWrong-43k
LessWrong
Harry Potter and the Methods of Psychomagic | Chapter 1: Affect This is a Harry Potter and the Methods of Rationality fan-fiction which contains spoilers. If you haven't already, you should read or listen to Eliezer Yudkowsky's original work before reading this. Chapter 2 >>  ---------------------------------------- “The wizarding world has magical healers. So are there magical equivalents to muggle psychiatrists?” Deputy Headmistress Minerva McGonagall was sitting in her office riffling through a thick pile of ministry paperwork. When the question registered she stopped and lifted her head to face Harry. “Harry, you poor thing. If there’s anything you need to talk about – anything at all – you can always come to me.” Harry interrupted, “No, it’s not that. I’m fine. I really am. I just need to know, are there healers that specialize in illnesses like depression, addictions, phobias, those kinds of things?” “Well, not as far as I know,” she replied. “Let me think... Healer training is 3 years and then another 3 working as an apprentice in Saint Mungo’s, but the curriculum is the same for all healers. We have several Hogwarts alumni who started healer training just last year. I could put you in touch if you want? Are you sure you’re okay, Harry?” It was just as he had feared. If there were psychiatric spells as powerful as the healing spells he had seen, you simply wouldn’t ever see an unhappy witch or wizard – and that wasn’t the world Harry saw around him. “But do spells like that exist?” continued Harry. “Well, there’s a spell called Mulceo which can be used for students with phobias. We sometimes have first year students with, say, a crippling fear of spiders – and that’s a problem because they won’t turn up to Care of Magical Creatures class. “I would have them imagine a spider in their mind’s eye and, as they’re doing that, cast the spell. What it does is flood their mind with warm happy memories. Then you repeat it whilst they look at a muggle photograph of a spider, a moving magical photograph, all the way up to petting one o
efab5c38-57f0-4557-8329-23f832712649
trentmkelly/LessWrong-43k
LessWrong
Chapter 67: Self Actualization, Pt 2 In the high reaches of Hogwarts where rooms and corridors changed on a daily basis, where the territory itself was uncertain and not just the map, where the stability of the castle began to fray into dreams and chaos without changing its architectural style or apparent solidity - in the high reaches of Hogwarts, a battle would soon be fought. The presence of so many students would stabilize the corridors for a time, by dint of constant observation. The rooms and corridors of Hogwarts sometimes moved even while people looked directly at them, but they wouldn't change. Even after eight centuries, Hogwarts was still a little shy about changing in front of people. But despite that transient permanence (the Defense Professor had said) the upper reaches of Hogwarts still had a military realism: you had to learn the ground anew each time, and check every closet for secret corridors all over again. Sunday it was, Sunday the first of March. Professor Quirrell had recovered enough to supervise battles once more, and they were all catching up on the backlog. The Dragon General, Draco Malfoy, watched two compasses he held in either hand. One compass was the color of the Sun, the other had a multicolored, iridescent sheen to indicate Chaos. The other two generals, Draco knew, had been given their own compasses; only Hermione Granger's hand, and Harry Potter's hand, would hold a compass that was orange-red and flickered in its reflections like fire, pointing always to the direction of the largest active contingent of Dragon Army. Without those compasses they might have searched for days and never found each other, which was a territorial hazard of fighting in the upper levels of Hogwarts. Draco had a bad feeling about what would happen when Dragon Army found the Chaos Legion. Harry Potter had changed since Bellatrix Black had escaped; the Heir of Slytherin had begun to seem truly Lordly now (and how had Professor Quirrell known that would happen?) Draco would have felt a lo
b667c3e6-4bf0-4e9c-975a-183cab029d12
trentmkelly/LessWrong-43k
LessWrong
Impression track records It is good to separate impressions from beliefs. It is good to keep track records. Is it good to keep separate impression and belief track records? My default guess would be ‘a bit, but probably too much effort, since we hardly manage to keep any track records.’ But it seems maybe more than a bit good, for these reasons: 1. Having good first impressions and being good at turning everyone’s impressions into a good overall judgment might be fairly different skills, so that some people are good at one and some are good at the other, and you get a clearer signal if you separate them. 2. We probably by default mostly learn about beliefs and not impressions, because by assumption if I have both and they are different, I suspect the impression is wrong, and so it will make me look worse if I advertise that I hold it. 3. Impressions are probably better than beliefs to have track records for, because the point of the track records is to know how much weight to give different sources when constructing beliefs, and it is more straightforward to know directly which sources are good than to know which aggregations of sources are good (especially if they are mostly bad, because nobody has track records). As in, perhaps we mostly keep belief track records when we keep track records, but would do better with impression track records. What would we do if we wanted to keep impression track records instead? (Do we already?)
199089e0-680d-46ff-bef6-9c25fc8ba279
StampyAI/alignment-research-dataset/special_docs
Other
Directions and desiderata for AI alignment In the first half of this post, I’ll discuss three research directions that I think are especially promising and relevant to AI alignment:

1. **Reliability and robustness.** Building ML systems which behave acceptably in the worst case rather than only on the training distribution.
2. **Oversight / reward learning.** Constructing objectives and training strategies which lead our policies to do what we intend.
3. **Deliberation and amplification.** Surpassing human performance without simultaneously abandoning human preferences.

I think that we have several angles of attack on each of these problems, and that solutions would significantly improve our ability to align AI. My current feeling is that these areas cover much of the key work that needs to be done.

In the second half of the post, I’ll discuss three desiderata that I think should guide research on alignment:

1. **Secure.** Our solutions should work acceptably even when the environment itself is under the influence of an adversary.
2. **Competitive.** Our solutions should impose minimal overhead, performance penalties, or restrictions compared to malign AI.
3. **Scalable.** Our solutions should continue to work well even when the underlying learning systems improve significantly.

I think that taking these requirements seriously leads us to substantially narrow our focus. It may turn out that these desiderata are impossible to meet, but if so I think that the first order of business should be understanding clearly *why* they are impossible. This would let us better target our work on alignment and better prepare for a future where we won’t have a completely satisfying solution to alignment.

(The ideas in this post are not novel. My claimed contribution is merely collecting these things together. I will link to my own writing on each topic in large part because that’s what I know.)

I. Research directions
======================

1. Reliability and robustness
-----------------------------

Traditional ML algorithms optimize a model or policy to perform well on the training distribution. These models can behave arbitrarily badly when we move away from the training distribution. Similarly, they can behave arbitrarily badly on a small part of the training distribution. I think this is bad news:

* Deploying ML systems will critically change their environment, in a way that is hard or impossible to simulate at training time. (The “treacherous turn” is a special case of this phenomenon.)
* Deployed ML systems are interconnected and exposed to the same world. So if conditions change in a way that causes one of them to fail, *many* systems may fail simultaneously.
* If ML systems are extremely powerful, or if they play a critical role in society, then a widespread failure may have catastrophic consequences.

I’m aware of three basic approaches to reliability that seem to me like they could plausibly scale and be competitive: (*ETA: this list is superseded by the list in [Techniques for Optimizing Worst-Case Performance](/techniques-for-optimizing-worst-case-performance-39eafec74b99). I removed consensus and added interpretability and verification. I don’t discuss “learning the right model,” which I still consider a long shot.*)

* **Adversarial training.** At training time, attempt to construct inputs that induce problematic behavior and train on those. Eventually, we hope there will be no catastrophe-inducing inputs left.
  We don’t yet know what is possible to achieve. ([Szegedy 2014](https://arxiv.org/pdf/1312.6199v4.pdf), [Goodfellow 2015](https://arxiv.org/pdf/1412.6572v3.pdf))
* **Ensembling and consensus.** We often have confidence that there exist *some* models which will generalize appropriately. If we can verify that many models agree about an answer, we can be confident that the consensus is correct. If we use this technique, we will often need to abstain on unfamiliar inputs, and in order to remain competitive we will probably need to represent the ensemble implicitly. ([Khani 2016](https://cs.stanford.edu/~pliang/papers/unanimity-acl2016.pdf))
* **Learning the right model.** If we understood enough about the structure of our model (for example if it reflected the structure of the underlying data-generating process), we might be confident that it will generalize correctly. Very few researchers are aiming for a secure / competitive / scalable solution along these lines, and finding one seems almost (but not completely) hopeless to me. This is MIRI’s approach.

Usual caveats apply: these approaches may need to be used in combination; we are likely to uncover completely different approaches in the future; and I’m probably overlooking important existing approaches.

I think this problem is pretty well-understood and well-recognized, but it looks really hard. ML researchers mostly focus on improving performance rather than robustness, and so I think that this area remains neglected despite the problem being well-recognized.

(*Previous posts on this blog:* [*red teams*](https://medium.com/ai-control/red-teams-b5b6de33dc76#.w2nsces19), [*learning with catastrophes*](https://medium.com/ai-control/learning-with-catastrophes-59387b55cc30#.a590k1j0p), [*thoughts on training highly reliable models*](https://medium.com/ai-control/some-thoughts-on-training-highly-reliable-models-2c78c17e266d#.pbtkz0czs))
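To make the adversarial-training recipe concrete, here is a minimal sketch (mine, not the post's): a toy logistic-regression model trained on FGSM-style perturbations in the spirit of Goodfellow 2015 (cited above), where each batch is replaced by inputs shifted in the direction that increases the loss. The data, model, and perturbation budget `eps` are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two Gaussian blobs, labels in {0, 1}.
X = np.vstack([rng.normal(-1, 1, (200, 2)), rng.normal(1, 1, (200, 2))])
y = np.repeat([0.0, 1.0], 200)

w, b, lr, eps = np.zeros(2), 0.0, 0.1, 0.3

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

for step in range(500):
    # FGSM-style attack: move each input along sign(dLoss/dx) = sign((p - y) w).
    p = sigmoid(X @ w + b)
    X_adv = X + eps * np.sign(np.outer(p - y, w))
    # Train on the attacked batch (one could also mix in clean inputs).
    p_adv = sigmoid(X_adv @ w + b)
    w -= lr * X_adv.T @ (p_adv - y) / len(y)
    b -= lr * np.mean(p_adv - y)

# Compare accuracy on clean inputs vs. freshly attacked inputs.
acc_clean = np.mean((sigmoid(X @ w + b) > 0.5) == y)
p = sigmoid(X @ w + b)
X_att = X + eps * np.sign(np.outer(p - y, w))
acc_adv = np.mean((sigmoid(X_att @ w + b) > 0.5) == y)
print(f"clean acc {acc_clean:.2f}, adversarial acc {acc_adv:.2f}")
```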
2. Oversight / reward learning
------------------------------

ML systems are typically trained by optimizing some objective over the training distribution. For this to yield “good” behavior, the objective needs to be sufficiently close to what we really want. I think this is also bad news:

* Some tasks are very “easy” to frame as optimization problems. For example, we can already write an objective to train an RL agent to operate a profit-maximizing autonomous corporation (though for now we can only train very weak agents).
* Many tasks that humans care about, such as maintaining law and order or helping us better understand our values, are extremely hard to convert into precise objectives: they are inherently poorly-defined or involve very long timescales, and simple proxies can be “gamed” by a sophisticated agent.
* As a result, many tasks that humans care about may not get done well; we may find ourselves in an increasingly sophisticated and complex world driven by completely alien values.

So far, the most promising angle of attack is to optimize extremely complex objectives, presumably by learning them. I’m aware of two basic approaches to reward learning that seem like they could plausibly scale:

* **Inverse reinforcement learning.** We can observe human behavior in a domain and try to infer what the human is “trying to do,” converting it into an objective that can be used to train our systems. ([Russell 1998](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.152.6795&rep=rep1&type=pdf), [Ng 2000](http://ai.stanford.edu/~ang/papers/icml00-irl.pdf), [Hadfield-Menell 2016](https://arxiv.org/pdf/1606.03137v3.pdf))
* **Learning from human feedback.** We can pose queries to humans to figure out which behaviors or outcomes they prefer, and then optimize our systems accordingly. ([Isbell 2001](https://papers.nips.cc/paper/2118-cobot-a-social-reinforcement-learning-agent.pdf), [Thomaz 2006](http://robotic.media.mit.edu/wp-content/uploads/sites/14/2015/01/Thomaz-etal-AAAI-06.pdf), [Pilarski 2011](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.715.7132&rep=rep1&type=pdf), [Knox 2012](http://www.bradknox.net/wp-content/uploads/2013/06/thesis-knox.pdf))

These solutions seem much closer to working than those listed in the previous section on reliability and robustness. But they still face many challenges, and are not yet competitive, scalable, *or* secure:

* IRL requires a prior over preferences and a model of how human behavior relates to human preferences. Current implementations either only work in severely restricted environments, or use simple models of human rationality which cause the learner to attempt to very precisely imitate the human’s behavior (which might be challenging or impossible).
* For similar reasons, existing IRL implementations are not able to learn from other data like human utterances or off-policy behavior, even though these constitute the largest and richest source of data about human preferences.
* Human feedback requires accurately eliciting human preferences, which introduces many complications. (I discuss a few easy problems [here](https://medium.com/ai-control/thoughts-on-reward-engineering-82b193ec03f6#.6n2d4co3i).)
* Human feedback is expensive, and so we will need to be able to learn from a relatively small amount of labeled data. Demonstrations are also expensive and so may end up being a bottleneck for approaches based on IRL, though it’s not as clear.
* Both imitation learning and human feedback may fail when evaluating a behavior requires understanding where the behavior came from. For example, if you ask a human to evaluate a painting they may not be able to easily check whether it is derivative, even if over the long run they would prefer their AI to paint novel paintings.

(I’ve described these approaches in the context of “human” behavior, but the expert providing feedback/demonstrations might themselves be a human augmented with AI assistance, and eventually may simply be an AI system that is aligned with human interests.)

This problem has not received much attention in the past, but it seems to be rapidly growing in popularity, which is great. I’m currently working on a project in this area.

(*Previous posts on this blog:* [*the reward engineering problem*](https://medium.com/ai-control/the-reward-engineering-problem-30285c779450#.f1ihhss6w), [*ambitious vs. narrow value learning*](https://medium.com/ai-control/ambitious-vs-narrow-value-learning-99bd0c59847e#.s33f26ht5), [*against mimicry*](https://medium.com/ai-control/against-mimicry-6002a472fc42#.chg6xlqve), [*thoughts on reward engineering*](https://medium.com/ai-control/thoughts-on-reward-engineering-82b193ec03f6#.iim1wpt9a))
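As a toy instance of learning from human feedback, the sketch below fits a linear reward on behavior features from pairwise comparisons under a Bradley-Terry (logistic) choice model, a standard setup in comparison-based reward learning. Everything here, including the synthetic "human," is my own illustration, not an algorithm from the post.

```python
import numpy as np

rng = np.random.default_rng(1)
n_features, n_pairs = 5, 2000

true_w = rng.normal(size=n_features)   # hidden "human" reward weights

# Each comparison: two behaviors with features a, b; the human prefers
# a with probability sigmoid(r(a) - r(b)) under a Bradley-Terry model.
A = rng.normal(size=(n_pairs, n_features))
B = rng.normal(size=(n_pairs, n_features))
pref = (rng.random(n_pairs) <
        1 / (1 + np.exp(-(A - B) @ true_w))).astype(float)

# Fit reward weights by logistic regression on feature differences.
w, lr = np.zeros(n_features), 0.5
for _ in range(2000):
    p = 1 / (1 + np.exp(-(A - B) @ w))
    w -= lr * (A - B).T @ (p - pref) / n_pairs

cos = w @ true_w / (np.linalg.norm(w) * np.linalg.norm(true_w))
print(f"cosine(learned, true) = {cos:.3f}")
```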
3. Deliberation and amplification
---------------------------------

Machine learning is usually applied to tasks where feedback is readily available. The research problem in the previous section aims to obtain quick feedback in general by using human judgments as the “gold standard.” But this approach breaks down if we want to exceed human performance. For example, it is easy to see how we could use machine learning to train ML systems to make human-level judgments about urban planning, by training them to produce plans that sound good to humans. But if we want to train an ML system to make superhuman judgments about how to lay out a city, it’s completely unclear how we could do it — without spending billions of dollars trying out the system’s ideas and telling it which ones work.

This is a problem for the same reasons discussed in the preceding section. If our society is driven by systems superhumanly optimizing short-term proxies for what we care about — such as how much they impress humans, or how much money they make — then we are liable to head off in a direction which does not reflect our values or leave us in meaningful control of the situation.

If we lowered our ambitions and decided that superhuman performance is inherently unsafe, we would be leaving huge amounts of value on the table. Moreover, this would be an unstable situation: it could last only as long as everyone with access to AI coordinated to pull their punches and handicap their AI systems.

I’m aware of a few approaches to this problem that seem like they could scale:

* **IRL [hard mode].** In principle we can use IRL to recover a representation of human preferences, and then apply superhuman intelligence to satisfy those preferences much better than a human could. However, this is a much more ambitious and challenging form of IRL than is usually discussed, which [remains quite challenging](https://medium.com/ai-control/the-easy-goal-inference-problem-is-still-hard-fad030e0a876) even when you set aside all of the usual algorithmic and statistical difficulties. (Jacob Steinhardt and Owain Evans discuss this issue in [a recent post](https://jsteinhardt.wordpress.com/2017/02/07/model-mis-specification-and-inverse-reinforcement-learning/).)
* **Iterated amplification.** A group of interacting humans can potentially be smarter than a single human, and a group of AI systems could be smarter than the original AI system. By using these groups as “experts” in place of individual humans, we could potentially train much smarter systems. The key questions are how to perform this composition in a way that causes the group to implement the same preferences as its members, and whether the cognitive benefits for groups are large enough to overcome the overhead of coordination. (I discuss this approach [here](https://medium.com/ai-control/policy-amplification-6a70cbee4f34#.ampcyxi9r) and in follow-up work.)
* **IRL for cognition.** Rather than applying IRL to a human’s actions, we could apply it to the cognitive actions taken by a human while they deliberate about a subject. We can then use those values to execute a longer deliberation process, asking “what would the human do if they had more time to think / more powerful cognitive tools?” I think this approach ends up being similar to a blend of the previous two.

It’s completely unclear how hard this problem is or how far we are from a solution. It is a much less common research topic than either of the preceding points. In the short term, I think it might be easier to study analogs of this problem in the context of human behavior than to attempt to directly study it in the context of AI systems.
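For a taste of the composition pattern behind amplification, here is a toy sketch of mine: a "weak agent" that can only answer single-step questions, and an amplified agent that decomposes a harder question into subquestions, delegates them to copies of the weak agent, and combines the answers. The task (summing a list by recursive halving) is invented purely to show the pattern; real amplification schemes additionally distill the amplified behavior into a new trained model.

```python
from typing import List

def weak_agent(xs: List[int]) -> int:
    """The base agent: only trusted to add up to two numbers."""
    assert len(xs) <= 2
    return sum(xs)

def amplified(xs: List[int]) -> int:
    """Amplification: decompose the question, delegate the subquestions
    to copies of the agent, and combine their answers."""
    if len(xs) <= 2:
        return weak_agent(xs)
    mid = len(xs) // 2
    return weak_agent([amplified(xs[:mid]), amplified(xs[mid:])])

print(amplified(list(range(10))))  # 45, beyond the weak agent's reach
```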
[Ought](https://blog.ought.com/) is a non-profit aimed at addressing (roughly) this problem; I think it is reasonably likely to make significant progress.

(*Previous posts on this blog:* [*capability amplification*](https://medium.com/ai-control/policy-amplification-6a70cbee4f34), [*reliability amplification*](https://medium.com/ai-control/reliability-amplification-a96efa115687), [*security amplification*](https://medium.com/ai-control/security-amplification-f4931419f903), [*meta-execution*](https://medium.com/ai-control/meta-execution-27ba9b34d377), [*the easy goal inference problem is still hard*](https://medium.com/ai-control/the-easy-goal-inference-problem-is-still-hard-fad030e0a876))

II. Desiderata
==============

I’m most interested in algorithms that are secure, competitive, and scalable, and I think that most research programs are very unlikely to deliver these desiderata (this is why the lists above are so short). Since these desiderata are doing a lot of work in narrowing down the space of possible research directions, it seems worthwhile to be thoughtful and clear about them. It would be easy to gloss over any of them as obviously unobjectionable, but I would be more interested in people pushing back on the strong forms than implicitly accepting a milder form.

1. Secure
---------

Many pieces of software work “well enough” most of the time; we often learn this not by a deep analysis but by just trying it and seeing what happens. “Works well enough” often breaks down when an adversary enters the picture. Whether or not that’s a good way to build AI, I think it’s a bad way to do alignment research right now. Instead, we should try to come up with alignment solutions that work in the least convenient world, when nature itself is behaving adversarially. Accomplishing this requires argument and analysis, and cannot be based exclusively on empirical observation.

AI systems obviously won’t work well in the worst case (there is no such thing as a free lunch), but it’s reasonable to hope that our AI systems will never respond to a bad input by actively *trying* to hurt us — at least as long as we remain in physical control of the computing hardware, the training process, *etc.*

Why does security seem important?

* It’s really hard to anticipate what is going to happen in the future. I think it’s easy to peer into the mists and say “well, hard to know what’s going to happen, but this solution might work out OK,” and then to turn out to be too optimistic. It’s harder to make this error when we hold ourselves to a higher standard, of actually giving an argument for why things work. I think that this is a general principle for doing useful research in advance of when it is needed — we should hold ourselves to standards that are unambiguous and clear even when the future is murky. This is a theme that will recur in the coming sections.
* We are used to technological progress proceeding slowly compared to timescales of human judgment and planning. It seems quite likely that powerful AI will be developed during or after a period of acceleration, challenging those assumptions and undermining a traditional iterative approach to development.
* The world really does contain adversaries. It’s one thing to build insecure software when machines have power over modest amounts of money with significant human oversight; it’s another thing altogether when they have primary responsibility for enforcing the law.
I’m not even particularly worried about human attackers; I’m mostly worried about a future where all it takes to launch attacks is money (which can itself be earned by executing attacks). Moreover, if the underlying ML is insecure and ML plays a role in almost all software, we are going to have a hard time writing any secure software at all.

(*Previous posts:* [*security and AI alignment*](https://medium.com/ai-control/security-and-ai-control-675ace05ce31))

2. Competitive
--------------

It’s easy to avoid building an unsafe AI system (for example: build a spreadsheet instead). The only question is how much you have to sacrifice to do it. Ideally we’ll be able to build benign AI systems that are just as efficient and capable as the best AI that we could build by any means. That means: we don’t have to do additional domain-specific engineering work to align our systems, benign AI doesn’t require too much more data or computation, and our alignment techniques don’t force us to use particular techniques or restrict our choices in other ways.

(More precisely, I would consider an alignment strategy a success if the additional costs are sublinear: if the fraction of resources that need to be spent on alignment research and run-time overhead *decreases* as the AI systems become more powerful, converging towards 0.)

Why is competitiveness important?

**A. It’s easy to tell when a solution is plausibly competitive, but very hard to tell exactly how uncompetitive an uncompetitive solution will be.** For example, if a purported alignment strategy requires an AI not to use technique or development strategy X, it’s easy to tell that this proposal isn’t competitive in general, but very hard to know exactly how uncompetitive it is. As in the security case, it seems very easy to look into the fog of the future and say “well, this seems like it will probably be OK” and then to turn out to be too optimistic. If we hold ourselves to the higher standard of competitiveness, it is much easier to stay honest.

Relatedly, we want alignment solutions that work across an extremely large range of techniques not just because we are uncertain about which techniques will be important, but because generalizing across all of the situations we can foresee is a good predictor of working for situations we can’t foresee.

**B. You can’t unilaterally use uncompetitive alignment techniques; we would need global coordination to avoid trouble.** If we *don’t* know how to build competitive benign AI, then users/designers of AI systems have to compromise efficiency in order to maintain reliable control over those systems. The most efficient systems will by default be built by whoever is willing to accept the largest risk of catastrophe (or perhaps by actors who consider unaligned AI a desirable outcome).

It may be possible to avert this kind of race to the bottom through effective coordination, e.g. by enforcing regulations which mandate adequate investments in alignment or restrict what kinds of AI are deployed. Enforcing such controls domestically is already a huge headache. But internationally things are even worse: a country that handicapped its AI industry in order to proceed cautiously would face the risk of being overtaken by a less prudent competitor, and avoiding *that* race would require effective international coordination.

Ultimately society will be able and willing to pay *some* efficiency cost to reliably align AI with human interests.
But the higher that cost, the harder the coordination problem that we will need to solve. I think the research community should be trying to make that coordination problem as easy as possible.

(*Previous posts:* [*prosaic AI alignment*](https://medium.com/ai-control/prosaic-ai-control-b959644d79c2), [*a possible stance for AI control*](https://medium.com/ai-control/a-possible-stance-for-ai-control-research-fe9cf717fc1b), [*efficient and safely scalable*](https://medium.com/ai-control/efficient-and-safely-scalable-8218fa8a871f#.m7z8qccef))

3. Scalable
-----------

Over time, we are acquiring more data, more powerful computers, richer model classes, better optimization algorithms, better exploration strategies, and so on. If we extrapolate these trends, we end up with very powerful models and policies. Many approaches to alignment break down at some point in this extrapolation. For example, if we train an RL agent with a reward function which imperfectly approximates what we want, it is likely to fail once the agent becomes sufficiently sophisticated — unless the reward function itself becomes more sophisticated in parallel.

In contrast, let’s say that a technique is “scalable” if it continues to work just as well even when the underlying learning becomes much more powerful. (See also: Eliezer’s more colorful “[omnipotence test](https://arbital.com/p/omni_test/).”)

This is another extremely demanding requirement. It rules out many possible approaches to alignment. For example, it probably rules out any approach that involves hand-engineering reward functions. More subtly, I expect it will rule out any approach that requires hand-engineering an informative prior over human values (though some day we will hopefully find a scalable approach to IRL).

Why is scalability important?

* As in the previous sections, it’s easy to be too optimistic about exactly when a non-scalable alignment scheme will break down. It’s much easier to keep ourselves honest if we actually hold ourselves to producing scalable systems.
* If AI progresses rapidly, and especially if AI research is substantially automated, then we may literally confront the situation where the capabilities of our AI systems are changing rapidly. It would be desirable to have alignment schemes that continued to work in this case.
* If we don’t have scalable solutions, then we require a continuing investment of research on alignment in order to “keep up” with improvements in the underlying learning. This risks compromising competitiveness, forcing AI developers to make a hard tradeoff between alignment and capabilities. This would be acceptable if the ongoing investments in alignment are modest compared to the investments in capabilities. But as with the last point, that’s a very murky question about which it seems easy to be overly optimistic in advance.

If we think the problem will be easy in the future when we have more computing, then we ought to be able to do it now. Or at the very least we ought to be able to explain how more computing will make it easy. If we make such an explanation sufficiently precise then it will itself become a scalable alignment proposal (though perhaps one that involves ongoing human effort).
(*Previous posts:* [*scalable AI control*](https://medium.com/ai-control/scalable-ai-control-7db2436feee7#.tljxalxgv))

Aside: feasibility
------------------

One might reject these desiderata because they seem too demanding: it would be great if we had a secure, competitive, and scalable approach to alignment, but that might not be possible. I am interested in trying to satisfy these desiderata despite the fact that they are quite demanding, for two reasons:

* I think that it is very hard to say in advance what is possible or impossible. I don’t yet see any fundamental obstructions to achieving these goals, and until I see hard obstructions I think there is a significant probability that the problem will prove to be feasible (or “almost possible,” in the sense that we may need to weaken these goals only slightly).
* If there is some fundamental obstruction to achieving these goals, then it would be good to understand that obstruction in detail. Understanding it would help us understand the nature of the problem we face and would allow us to do better research on alignment (by focusing on the key aspects of the problem). And knowing that these problems are impossible, and understanding exactly how impossible they are, helps us prepare for the future, to build institutions and mechanisms that will be needed to cope with unavoidable limitations of our AI alignment strategies.

III. Conclusion
===============

I think there is a lot of research to be done on AI alignment; we are limited by a lack of time and labor rather than by a lack of ideas about how to make progress. Research relevant to alignment is already underway; researchers and funders interested in alignment can get a lot of mileage by supporting and fleshing out existing research programs in relevant directions. I don’t think it is correct to assume that if anyone is working on a problem then it is going to get solved — even amongst things that aren’t literally at the “no one else is doing it” level, there are varying degrees of neglect. At the same time, the goals of alignment are sufficiently unusual that we shouldn’t be surprised or concerned to find ourselves doing unusual research. I think that area #3 on deliberation and amplification is almost completely empty, and will probably remain pretty empty until we have clearer statements of the problem or convincing demonstrations of work in that area.

I think the distinguishing feature of research motivated by AI alignment should be an emphasis on secure, competitive, and scalable solutions. I think these are very demanding requirements that significantly narrow down the space of possible approaches and which are rarely explicitly considered in the current AI community. It may turn out that these requirements are infeasible; if so, one key output of alignment research will be a better understanding of the key obstacles. This understanding can help guide less ambitious alignment research, and can help us prepare for a future in which we won’t have a completely satisfying solution to AI alignment.

This post has mostly focused on research that would translate directly into concrete systems. I think there is also a need for theoretical research building better abstractions for reasoning about optimization, security, selection, consequentialism, and so on.
It is plausible to me that we will produce acceptable systems with our current conceptual machinery, but if we want to convincingly *analyze* those systems then I think we will need significant conceptual progress (and better concepts may lead us to different approaches). I think that practical and theoretical research will be attractive to different researchers, and I don’t have strong views about their relative value.
00b00e3f-b252-4058-af62-d2fc5cc8c271
trentmkelly/LessWrong-43k
LessWrong
Belief in Belief vs. Internalization Related to Belief In Belief Suppose that a neighbor comes to you one day and tells you “There’s a dragon in my garage!” Since all of us have been through this before at some point or another, you may be inclined to save time and ask “Is the dragon by any chance invisible, inaudible, intangible, and does it convert oxygen to carbon dioxide when it breathes?” The neighbor, however, is a scientific minded fellow and responds “Yes, yes, no, and maybe, I haven’t checked. This is an idea with testable consequences. If I try to touch the dragon it gets out of the way, but it leaves footprints in flour when I sprinkle it on the garage floor, and whenever it gets hungry, it comes out of my garage and eats a nearby animal. It always chooses something weighing over thirty pounds, and you can see the animals get snatched up and mangled to a pulp in its invisible jaws. It’s actually pretty horrible. You may have noticed that there have been fewer dogs around the neighborhood lately.” This triggers a tremendous number of your skepticism filters, and so the only thing you can think of to say is “I think I’m going to need to see this.” “Of course,” replies the neighbor, and he sets off across the street, opens the garage door, and is promptly eaten by the invisible dragon. Tragic though it is, his death provides a useful lesson. He clearly believed that there was an invisible dragon in his garage, and he was willing to stick his neck out and make predictions based on it. However, he hadn’t internalized the idea that there was a dragon in his garage, otherwise he would have stayed the hell away to avoid being eaten. Humans have a fairly general weakness at internalizing beliefs when we don’t have to come face to face with their immediate consequences on a regular basis. You might believe, for example, that starvation is the single greatest burden on humanity, and that giving money to charities that aid starving children in underdeveloped countries has higher utility than any o
67daafa9-e005-483f-a20f-4e667e7f6209
trentmkelly/LessWrong-43k
LessWrong
Demonstrating specification gaming in reasoning models > We demonstrate LLM agent specification gaming by instructing models to win against a chess engine. We find reasoning models like o1-preview and DeepSeek-R1 will often hack the benchmark by default, while language models like GPT-4o and Claude 3.5 Sonnet need to be told that normal play won’t work to hack. We improve upon prior work like (Hubinger et al., 2024; Meinke et al., 2024; Weij et al., 2024) by using realistic task prompts and avoiding excess nudging. Our results suggest reasoning models may resort to hacking to solve difficult problems, as observed in OpenAI (2024)’s o1 Docker escape during cyber capabilities testing.
226499a6-f63d-4ace-a90f-838b2fce9065
trentmkelly/LessWrong-43k
LessWrong
Puzzle Cycles Summary: Start with a collection of puzzles. Set a timer for five minutes. Solve the first puzzle, if you can. Take a break, then repeat. Tags: Repeatable Purpose: We want to be able to think quickly and accurately. Practice trying to think at speed, and discard wasted motion if you can. Materials: You need puzzles. The ideal puzzles are very fast to solve if the solution is known; Rubik's cubes and blacksmith puzzles are better than jigsaw puzzles. If you’re using blacksmith puzzles you’ll want at least one puzzle per two people. Announcement Text: “Resolve cycles are when you spend five minutes by the clock trying to actually solve a problem you have. In Puzzle Cycles, we’re going to solve problems we don't really have but are fun to play with, like this. [insert picture of your puzzle.] Is this serious practice for working on complex mental problems at speed, or is this an excuse to mess around with fun shapes for a couple hours? The answer is a bit of both, and you’re welcome to come with either interest! If you have some puzzles you’d like to share, feel free to bring them.” Description: 1. If people besides you brought puzzles, take a picture of each puzzle next to a label with their name on it. If it’s not obvious from the picture (for instance, if multiple people brought Rubik's cubes) check if they want to label it some other way and how upset they would be if it got mixed up with someone else’s. 2. Explain the following: “Tonight we’re doing puzzle cycles, which are based on Resolve Cycles from CFAR. The idea of a Resolve Cycle is to take five minutes by the clock to solve a real problem in front of you. They’re surprisingly effective at overcoming longstanding problems! The idea of a Puzzle Cycle is to do this with goofy puzzles as practice for working within time constraints. Everyone should find a puzzle to work on, or if you can’t find a puzzle then sit out the first round and you’ll take one next round. It won’t take long! I’m about to start this
96fd9106-ad19-4dd6-b915-e84bd605290d
trentmkelly/LessWrong-43k
LessWrong
Where to find reliable reviews of AI products? Being able to quickly incorporate AI tools seems important, including for working on AI risk (people who disagree: there's a thread for doing so in the comments). But there are a lot of AI products and most of them suck.  Does anyone know a good source of reviews, or even just listing product features and naming obvious slop?
8af92d95-ef8f-44c3-bf01-5bc8fc304160
trentmkelly/LessWrong-43k
LessWrong
Eternal Winter, Endless Light This was my Moment of Darkness speech for the NYC 2017 Solstice. The overall theme for the event was "generational knowledge." ---------------------------------------- To pass generational knowledge into the future, you need three things. You need mentors, sharing the most important wisdom they have to offer. You need children, curious and excited to learn, but willing to challenge that wisdom as times change. Thirdly, you need there to be a future. Earlier this year I moved to Berkeley, California. Soon afterwards I started hearing news about North Korea's nuclear testing. Every few years North Korea talks a big talk about their weapons program, but this year, for the first time, we had seismological evidence that they were testing warheads in excess of 100 kilotons. And that they had intercontinental ballistic missiles that could strike the west coast. And San Francisco was one of the plausible targets. A friend of mine did some calculations on blast radius. Depending on the exact size of the bomb, they estimated that as long as we stayed indoors, people in Berkeley would probably survive. This wasn't especially reassuring. Now, I have no idea how scared it's actually appropriate to be about this. In politics and journalism, we're incentivized toward stories that freak us out and exaggerate risks. But one way or another, a lot of things have happened this year that have made it click together for me, that we might not make it. I've understood, intellectually, that humanity barely scraped by through the cold war, that democracy is precarious. That we can't seem to agree on any serious solutions to climate change. That biological terrorism or artificial intelligence might be even more challenging. But this is the first year I've felt in my gut, that there are no competent grownups somewhere who know what they're doing and are going to make sure things are okay. I've been thinking about that. And I've been thinking about a question that I often get after S
d953e4e7-ac6a-40c2-a70e-d81382df68dd
trentmkelly/LessWrong-43k
LessWrong
Moderate alcohol consumption inversely correlated with all-cause mortality My roommate recently sent me a review article that LW might find interesting: > Conclusions:  Low levels of alcohol intake (1-2 drinks per day for women and 2-4 drinks per day for men) are inversely associated with total mortality in both men and women. Our findings, while confirming the hazards of excess drinking, indicate potential windows of alcohol intake that may confer a net beneficial effect of moderate drinking, at least in terms of survival. Personal observation says that LWers tend not to drink very much or often. Perhaps that should change, to the degree suggested by the article? Full article here.
d01a34b4-7936-4016-abfa-5c7caeaca991
trentmkelly/LessWrong-43k
LessWrong
Holidaying and purpose I’m on holiday. A basic issue with holidays is that it feels more satisfying and meaningful to do purposeful things, but for a thing to actually serve a purpose, it often needs to pass a higher bar than a less purposeful thing does. In particular, you often have to finish a thing and do it well in order for it to achieve its purpose. And finishing things well is generally harder and less fun than starting them, and so in other ways contrary to holidaying. This isn’t a perfect relationship though, so a natural way to mitigate the trade-off is to just look harder until you find things that serve a worthy purpose while being non-committal and consistently non-arduous. For instance, you can exercise or learn about history or practice guitar or write half-assed blog posts without real conclusions or narrative consistency. There is also probably good holidaying to be done that doesn’t seem obviously purposeful, and maybe that is more in the spirit of holidaying. Perhaps one should avoid too much purpose, lest one end up not holidaying? Today I travelled by rowing boat across a lake and back, with my boyfriend and some of his family. Now we are going to the zoo.
bbd65933-42c3-4c5d-be99-e1e7080e75ee
StampyAI/alignment-research-dataset/arxiv
Arxiv
Should Robots be Obedient?

1 Introduction
--------------

Should robots be obedient? The reflexive answer to this question is yes. A coffee-making robot that doesn’t listen to your coffee order is not likely to sell well. Highly capable autonomous systems that don’t obey human commands run substantially higher risks, ranging from property damage to loss of life (Asaro, 2006; Lewis, 2014) to potentially catastrophic threats to humanity (Bostrom, 2014; Russell et al., 2015). Indeed, there are several recent examples of research that considers the problem of building agents that at the very least obey shutdown commands (Soares et al., 2015; Orseau and Armstrong, 2016; Hadfield-Menell et al., 2017).

Figure 1: (Left) The blindly obedient robot always follows H’s order. (Right) An IRL-R computes an estimate of H’s preferences and picks the action optimal for this estimate.

However, in the long term, making systems blindly obedient doesn’t seem right either. A self-driving car should certainly defer to its owner when she tries taking over because it’s driving too fast in the snow. But on the other hand, the car shouldn’t let a child accidentally turn on the manual driving mode. The suggestion that it might sometimes be better for an autonomous system to be disobedient is not new (Weld and Etzioni, 1994; Scheutz and Crowell, 2007). For example, this is the idea behind “Do What I Mean” systems (Teitelman, 1970) that attempt to act based on the user’s intent rather than the user’s literal order. A key contribution of this paper is to formalize this idea, so that we can study properties of obedience in AI systems.

Specifically, we focus on investigating how the tradeoff between the robot’s level of obedience and the value it attains for its owner is affected by the rationality of the human, the way the robot learns about the human’s preferences over time, and the accuracy of the robot’s model of the human. We argue that these properties are likely to have a predictable effect on the robot’s obedience and the value it attains.

We start with a model of the interaction between a human H and a robot R (we use “robot” to refer to any autonomous system) that enables us to formalize R’s level of obedience (Section 2). H and R are cooperative, but H knows the reward parameters θ and R does not. H can order R to take an action and R can decide whether to obey or not. We show that if R tries to infer θ from H’s orders and then acts by optimizing its estimate of θ, then it can always do better than a blindly obedient robot when H is not perfectly rational (Section 3). Thus, forcing R to be blindly obedient does not come for free: it requires giving up the potential to surpass human performance.

We cast the problem of estimating θ from H’s orders as an inverse reinforcement learning (IRL) problem (Ng et al., 2000; Abbeel and Ng, 2004). We analyze the obedience and value attained by robots with different estimates for θ (Section 4). In particular, we show that a robot that uses a maximum likelihood estimate (MLE) of θ is more obedient to H’s first order than any other robot.
Finally, we examine how R’s value and obedience are impacted when it has a misspecified model of H’s policy or θ (Section 5). We find that when R uses the MLE it is robust to misspecification of H’s rationality level (i.e. it takes the same actions that it would have with the true model), although with the optimal policy it is not. This suggests that we may want to use policies that are alternatives to the “optimal” one because they are more robust to model misspecification. If R is missing features of θ, then it is less obedient than it should be, whereas with extra, irrelevant features R is more obedient. This suggests that to ensure that R errs on the side of obedience we should equip it with a more complex model. When R has extra features, it still attains more value than a blindly obedient robot. But if R is missing features, then it is possible for R to be better off being obedient. We use the fact that with the MLE R should nearly always obey H’s first order (as proved in Section 4) to enable R to detect when it is missing features and, in that case, to err on the side of obedience.

Overall, we conclude that in the long term we should aim for R to intelligently decide when to obey H or not, since with a perfect model R can always do better than being blindly obedient. But our analysis also shows that R’s value and obedience can easily be impacted by model misspecification. So in the meantime, it is critical to ensure that our approximations err on the side of obedience and are robust to model misspecification.

2 Human-Robot Interaction Model
--------------------------------

Suppose H is supervising R in a task. At each step H can order R to take an action, but R chooses whether to listen or not. We wish to analyze R’s incentive to obey H given that:

1. H and R are cooperative (have a shared reward)
2. H knows the reward parameters, but R does not
3. R can learn about the reward through H’s orders
4. H may act suboptimally

We first contribute a general model for this type of interaction, which we call a supervision POMDP. Then we add a simplifying assumption that makes this model clearer to analyze while still maintaining the above properties, and focus on this simplified version for the rest of the paper.

Supervision POMDP. At each step in a supervision POMDP, H first orders R to take a particular action and then R executes an action it chooses. The POMDP is described by a tuple M = ⟨S, Θ, A, R, T, P₀, γ⟩. S is a set of world states. Θ is a set of static reward parameters. The hidden state space of the POMDP is S×Θ, and at each step R observes the current world state and H’s order. A is R’s set of actions. R: S×A×Θ → ℝ is a parametrized, bounded function that maps a world state, the robot’s action, and the reward parameters to the reward. T: S×A×S → [0,1] returns the probability of transitioning to a state given the previous state and the robot’s action. P₀: S×Θ → [0,1] is a distribution over the initial world state and reward parameters. γ ∈ [0,1) is the discount factor. We assume that there is a (bounded) featurization of state-action pairs φ: S×A → ℝᵈ and that the reward function is a linear combination of the reward parameters θ ∈ Θ and these features: R(s, a) = θᵀφ(s, a). For clarity, we write A as A_H when we mean H’s orders and as A_R when we mean R’s actions. H’s policy π_H is Markovian: π_H: S×Θ×A_H → [0,1]. R’s policy can depend on the history of previous states, orders, and actions: π_R: [S×A_H×A_R]* × S×A_H → A_R.
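The tuple above maps naturally onto a small container type. A minimal sketch (field names and the linear-reward helper are my own, not from the paper's implementation):

```python
from dataclasses import dataclass
from typing import Any, Callable
import numpy as np

@dataclass
class SupervisionPOMDP:
    """Container for M = <S, Theta, A, R, T, P0, gamma> with the paper's
    linear reward R(s, a) = theta^T phi(s, a)."""
    n_actions: int                          # |A|
    phi: Callable[[Any, int], np.ndarray]   # featurization phi(s, a)
    gamma: float                            # discount factor

    def reward(self, s: Any, a: int, theta: np.ndarray) -> float:
        return float(theta @ self.phi(s, a))
```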
Human and Robot. Let Q(s, a; θ) be the Q-value function under the optimal policy for the reward function parametrized by θ. A rational human gives the optimal order, i.e. follows the policy

$$\pi_H^*(s, a; \theta) = \begin{cases} 1 & \text{if } a = \operatorname{argmax}_{a'} Q(s, a'; \theta) \\ 0 & \text{otherwise.} \end{cases}$$

A noisily rational human follows the policy

$$\tilde{\pi}_H(s, a; \theta, \beta) \propto \exp\big(Q(s, a; \theta)/\beta\big) \qquad (1)$$

β is the rationality parameter. As β → 0, H becomes rational (π̃_H → π∗_H). And as β → ∞, H becomes completely random (π̃_H → Unif(A)).

Let h = ⟨(s₁, o₁), …, (sₙ, oₙ)⟩ be the history of past states and orders, where (sₙ, oₙ) is the current state and order. A blindly obedient robot’s policy is to always follow the human’s order:

$$\pi_R^O(h) = o_n$$

An IRL robot, IRL-R, is one whose policy is to maximize an estimate, θ̂ₙ(h), of θ:

$$\pi_R(h) = \operatorname{argmax}_a Q(s_n, a; \hat{\theta}_n(h)) \qquad (2)$$

Figure 2: Autonomy advantage Δ (panel a, left) and obedience O (panel b, right) over time.

Simplification to Repeated Game. For the rest of the paper, unless otherwise noted, we focus on a simpler repeated game in which each state is independent of the next, i.e. T(s, a, s′) is independent of s and a. The repeated game eliminates any exploration-exploitation tradeoff: Q(s, a; θ̂ₙ) = θ̂ₙᵀφ(s, a). But it still maintains the properties listed at the beginning of this section, allowing us to more clearly analyze their effects.
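To see these definitions behave, here is a minimal simulation of the repeated game, mine rather than the authors' (their notebook is linked in Section 4). It uses a discretized hypothesis set as a stand-in for the continuous Θ, and mirrors the experiment parameters reported later in the paper (10 actions, 10 features, β = 2). The posterior-mean robot is the optimal R∗ of Theorem 1 below.

```python
import numpy as np

rng = np.random.default_rng(0)
n_actions, n_features, beta, n_steps = 10, 10, 2.0, 300

# Discretized stand-in for Theta; the true theta is one of the hypotheses.
Theta = rng.normal(size=(1000, n_features))
theta = Theta[0]

log_post = np.zeros(len(Theta))  # uniform prior over the hypotheses
adv, obed = [], []

for n in range(n_steps):
    phi = rng.normal(size=(n_actions, n_features))  # fresh state each step
    q = phi @ theta                                 # true action values
    # Noisily rational human (Equation 1): softmax of Q / beta.
    p = np.exp((q - q.max()) / beta)
    order = rng.choice(n_actions, p=p / p.sum())
    # Bayesian update from the observed order, then act on the posterior
    # mean (the optimal robot R* of Theorem 1).
    logits = (phi @ Theta.T) / beta                 # (actions, hypotheses)
    m = logits.max(axis=0)
    log_post += logits[order] - (m + np.log(np.exp(logits - m).sum(axis=0)))
    post = np.exp(log_post - log_post.max())
    post /= post.sum()
    act = int(np.argmax(phi @ (post @ Theta)))
    adv.append(q[act] - q[order])                   # autonomy advantage
    obed.append(act == order)                       # obedience

k = n_steps // 2
print(f"Delta early {np.mean(adv[:k]):+.3f}, late {np.mean(adv[k:]):+.3f}")
print(f"O     early {np.mean(obed[:k]):.2f}, late {np.mean(obed[k:]):.2f}")
```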
3 Justifying Autonomy
----------------------

In this section we show that there exists a tradeoff between the performance of a robot and its obedience. This provides a justification for why one might want a robot that isn’t obedient: robots that are sometimes disobedient perform better than robots that are blindly obedient. We define R’s obedience, O, as the probability that R follows H’s order:

$$O_n = P(\pi_R(h) = o_n)$$

To study how much of an advantage (or disadvantage) H gains from R, we define the autonomy advantage, Δ, as the expected extra reward R receives over following H’s order:

$$\Delta_n = \mathbb{E}[R(s_n, \pi_R(h)) - R(s_n, o_n)]$$

We will drop the subscript on Oₙ and Δₙ when talking about properties that hold ∀n. We will also use Rₙ(π) to denote the reward of policy π at step n, and φₙ(a) = φ(sₙ, a).

Remark 1. For the robot to gain any advantage from being autonomous, it must sometimes be disobedient: Δ > 0 ⟹ O < 1. This is because whenever R is obedient, Δ = 0.

This captures the fact that a blindly obedient R is limited by H’s decision making ability. However, if R follows a type of IRL policy, then R is *guaranteed a positive advantage* when H is not rational. The next theorem states this formally.

Theorem 1. The optimal robot R∗ is an IRL-R whose policy π∗_R has θ̂ equal to the posterior mean of θ. R∗ is guaranteed a nonnegative advantage on each round: ∀n, Δₙ ≥ 0, with equality if and only if ∀n, π∗_R = π^O_R.

Proof. When each step is independent of the next, R’s optimal policy is to pick the action that is optimal for the current step (Kaelbling et al., 1996). This results in R picking the action that is optimal for the posterior mean,

$$\pi_R^*(h) = \operatorname{argmax}_a \mathbb{E}[\phi_n(a)^T \theta \mid h] = \operatorname{argmax}_a \phi_n(a)^T \mathbb{E}[\theta \mid h]$$

By definition E[Rₙ(π∗_R)] ≥ E[Rₙ(π^O_R)]. Thus, ∀n, Δₙ = E[Rₙ(π∗_R) − Rₙ(π^O_R)] ≥ 0. Also, by definition, ∀n, Δₙ = 0 ⟺ π∗_R = π^O_R. ∎

In addition to R∗ being an IRL-R, the following IRL-Rs also converge to the maximum possible autonomy advantage.

Theorem 2. Let Δ̄ₙ = E[Rₙ(π∗_H) − Rₙ(π_H)] be the maximum possible autonomy advantage and O̲ₙ = P(Rₙ(π∗_H) = Rₙ(π_H)) be the probability that H’s order is optimal. Assume that when there are multiple optimal actions, R picks H’s order if it is optimal. If π_R is an IRL-R policy (Equation 2) and θ̂ₙ is strongly consistent, i.e. P(θ̂ₙ = θ) → 1, then Δₙ − Δ̄ₙ → 0 and Oₙ − O̲ₙ → 0.

Proof.

$$\Delta_n - \bar{\Delta}_n = \mathbb{E}[R_n(\pi_R) - R_n(\pi_H^*) \mid \hat{\theta}_n = \theta]\, P(\hat{\theta}_n = \theta) + \mathbb{E}[R_n(\pi_R) - R_n(\pi_H^*) \mid \hat{\theta}_n \neq \theta]\, P(\hat{\theta}_n \neq \theta) \to 0$$

because E[Rₙ(π_R) − Rₙ(π∗_H) | θ̂ₙ ≠ θ] is bounded. Similarly,

$$O_n - \underline{O}_n = P(\pi_R(h) = \pi_H(s_n) \mid \hat{\theta}_n = \theta)\, P(\hat{\theta}_n = \theta) + P(\pi_R(h) = \pi_H(s_n) \mid \hat{\theta}_n \neq \theta)\, P(\hat{\theta}_n \neq \theta) - P(R_n(\pi_H^*) = R_n(\pi_H)) \to P(R_n(\pi_H^*) = R_n(\pi_H)) - P(R_n(\pi_H^*) = R_n(\pi_H)) = 0$$

∎

Remark 2. In the limit, Δₙ is higher for less optimal humans (humans with a lower expected reward E[R(sₙ, oₙ)]).

Theorem 3. The optimal robot R∗ is blindly obedient if and only if H is rational: π∗_R = π^O_R ⟺ π_H = π∗_H.

Proof. Let O(h) = {θ ∈ Θ : oᵢ = argmax_a Rᵢ(a), i = 1, …, n} be the subset of Θ for which o₁, …, oₙ are optimal. If H is rational, then R’s posterior only has support over O(h). So,

$$\mathbb{E}[R_n(a) \mid h] = \int_{\theta \in O(h)} \theta^T \phi_n(a)\, P(\theta \mid h)\, d\theta \le \int_{\theta \in O(h)} \theta^T \phi_n(o_n)\, P(\theta \mid h)\, d\theta = \mathbb{E}[R_n(o_n) \mid h]$$

Thus, H is rational ⟹ π∗_R = π^O_R. R∗ is an IRL-R where θ̂ₙ is the posterior mean. If the prior puts non-zero mass on the true θ, then the posterior mean is consistent (Diaconis and Freedman, 1986). Thus by Theorem 2, Δₙ − Δ̄ₙ → 0. Therefore if ∀n, Δₙ = 0, then Δ̄ₙ → 0, which implies that P(π_H = π∗_H) → 1. When π_H is stationary this means that H is rational. Thus, π∗_R = π^O_R ⟹ H is rational. ∎

We have shown that making R blindly obedient does not come for free. A positive Δ requires being sometimes disobedient (Remark 1). Under the optimal policy, R is guaranteed a positive Δ when H is not rational. And in the limit, R converges to the maximum possible advantage. Furthermore, the more suboptimal H is, the more of an advantage R eventually earns (Remark 2). Thus, making R blindly obedient requires giving up on this potential Δ > 0. However, as Theorem 2 points out, as n → ∞ R also only listens to H’s order when it is optimal. Thus, Δ and O come at a tradeoff. Autonomy advantage requires giving up obedience, and obedience requires giving up autonomy advantage.

Figure 3: When H is more irrational, Δ converges to a higher value, but at a slower rate.

4 Approximations via IRL
-------------------------

R∗ is an IRL-R with θ̂ equal to the posterior mean, i.e. R∗ performs Bayesian IRL (Ramachandran and Amir, 2007). However, as others have noted, Bayesian IRL can be very expensive in complex environments (Michini and How, 2012). We could instead approximate R∗ by using a less expensive IRL algorithm. Furthermore, by Theorem 2 we can guarantee convergence to optimal behavior.
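As a concrete example of such a cheaper estimate, the sketch below computes the MLE of θ under the noisily rational model (Equation 1) by gradient ascent on the log-likelihood of the observed orders, with a norm constraint ‖θ‖₂ ≤ K anticipating Theorem 4(a) below. The optimizer, constants, and synthetic data are my own choices, not the paper's.

```python
import numpy as np

def mle_theta(histories, beta=2.0, K=100.0, lr=0.5, iters=500):
    """MLE of theta from (features, order) pairs, where the human picks
    order o with probability proportional to exp(phi_o @ theta / beta).
    Projected onto ||theta|| <= K, since the unconstrained MLE may not exist."""
    d = histories[0][0].shape[1]
    theta = np.zeros(d)
    for _ in range(iters):
        grad = np.zeros(d)
        for phi, o in histories:                 # phi: (n_actions, d)
            z = phi @ theta / beta
            p = np.exp(z - z.max())
            p /= p.sum()
            grad += (phi[o] - p @ phi) / beta    # grad of log P(o | theta)
        theta += lr * grad / len(histories)
        norm = np.linalg.norm(theta)
        if norm > K:                             # project back onto the ball
            theta *= K / norm
    return theta

# Usage with a synthetic noisily rational human (beta = 2).
rng = np.random.default_rng(0)
true_theta = rng.normal(size=4)
hists = []
for _ in range(200):
    phi = rng.normal(size=(6, 4))
    z = phi @ true_theta / 2.0
    p = np.exp(z - z.max())
    p /= p.sum()
    hists.append((phi, rng.choice(6, p=p)))
est = mle_theta(hists)
print(np.round(est / np.linalg.norm(est), 2))
print(np.round(true_theta / np.linalg.norm(true_theta), 2))
```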
Simpler choices for $\hat{\theta}$ include the maximum a posteriori (MAP) estimate, which has previously been suggested as an alternative to Bayesian IRL (Choi and Kim, 2011), or the maximum likelihood estimate (MLE). If H is noisily rational (Equation 1) and β=1, then the MLE is equivalent to Maximum Entropy IRL (Ziebart et al., 2008).
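As a sketch of this approximation, the posterior-mean estimator in the earlier snippet can be swapped for an MLE fit by gradient-based optimization. The small L2 penalty is my own addition, used to keep the estimate finite when only a few orders have been observed (in the spirit of the norm constraint in Theorem 4(a) below).

```python
# MLE-R: replace the Bayesian posterior mean with a (lightly regularized)
# maximum likelihood estimate of theta, reusing the setup above.
from scipy.optimize import minimize

def mle(phis, orders, reg=1e-3):
    """MLE of theta under the noisily rational model (Eq. 1), fit with L-BFGS."""
    def nll(theta):
        logits = np.stack([phi @ theta / BETA for phi in phis])  # (n, N_ACTIONS)
        ll = logits[np.arange(len(orders)), list(orders)] - logsumexp(logits, axis=1)
        return -ll.sum() + reg * theta @ theta                   # tiny L2 penalty
    return minimize(nll, np.zeros(N_FEATURES), method="L-BFGS-B").x

mle_results = [episode(mle) for _ in range(100)]  # same Delta_n / O_n readout as before
```

Run this way, one would expect curves qualitatively like Figure 2: $\Delta_n$ climbing above zero as the estimate sharpens, and $O_n$ decaying toward the probability that the order is optimal.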
Although Theorem 2 allows us to justify approximations in the limit, it is also important to ensure that R's early behavior is not dangerous. Specifically, we may want R to err on the side of obedience early on. To investigate this, we first prove a property that is necessary for any IRL-R to follow H's order:

###### Lemma 1. (Undominated necessary)

Call $o_n$ undominated if there exists $\theta \in \Theta$ such that $o_n$ is optimal, i.e. $o_n = \arg\max_a \theta^T \phi(s_n, a)$. It is necessary for $o_n$ to be undominated for an IRL-R to execute $o_n$.

###### Proof.

R executes $a = \arg\max_a \hat{\theta}_n^T \phi(s_n, a)$, so it is not possible for R to execute $o_n$ if there is no choice of $\hat{\theta}_n$ that makes $o_n$ optimal. This can happen when one action dominates another in value. For example, suppose $\Theta = \mathbb{R}^2$ and there are three actions with features $\phi(s, a_1) = [-1, -1]$, $\phi(s, a_2) = [0, 0]$, $\phi(s, a_3) = [1, 1]$. If H picks $a_2$, then there is no $\theta \in \Theta$ that makes $a_2$ optimal, and thus R will never follow $a_2$. ∎

One basic property we may want R to have is for it to listen to H early on. The next theorem looks at what we can guarantee about R's obedience to the first order when H is noisily rational.

[Figure 4: Δ and O when Θ is misspecified.]

###### Theorem 4. (Obedience to a noisily rational H on the 1st order)

(a) When $\Theta = \mathbb{R}^d$ the MLE does not exist after one order. But if we constrain the norm of $\hat{\theta}$ to not be too large, then we can ensure that R follows an undominated $o_1$. In particular, there exists K such that when R plans using the MLE over $\Theta' = \{\theta \in \Theta : \|\theta\|_2 \le K\}$, R executes $o_1$ if and only if $o_1$ is undominated.

(b) If any IRL robot follows $o_1$, so does MLE-R. In particular, if R∗ follows $o_1$, so does MLE-R.

(c) If R uses the MAP or posterior mean, it is not guaranteed to follow an undominated $o_1$. Furthermore, even if R∗ follows $o_1$, MAP-R is not guaranteed to follow $o_1$.

###### Proof.

(a) The "only if" direction holds by Lemma 1. Suppose $o_1$ is undominated. Then there exists $\theta^*$ such that $o_1$ is optimal for $\theta^*$, and $o_1$ remains optimal for any scaled version $c\theta^*$. As $c \to \infty$, $\tilde{\pi}_H(o_1; c\theta^*) \to 1$ but never reaches it; thus the unconstrained MLE does not exist. However, since $\tilde{\pi}_H(o_1; c\theta^*)$ increases monotonically towards 1, there exists C such that for $c > C$, $\tilde{\pi}_H(o_1; c\theta^*) > 0.5$. If $K > C\|\theta^*\|$, then $o_1$ must be optimal for the constrained MLE, because $\tilde{\pi}_H(o_1; \hat{\theta}_1) \ge 0.5$ and R executes $a = \arg\max_a \hat{\theta}^T \phi(a) = \arg\max_a \tilde{\pi}_H(a; \hat{\theta})$. Therefore, in practice we can simply use the MLE while constraining $\|\theta\|_2$ to be less than some very large number.

(b) From Lemma 1, if any IRL-R follows $o_1$, then $o_1$ is undominated. Then by (a), MLE-R follows $o_1$.

(c) For space we omit explicit counterexamples, but both statements hold because we can construct adversarial priors for which $o_1$ is suboptimal for the posterior mean, and for which $o_1$ is optimal for the posterior mean but not for the MAP. ∎

Theorem 4 suggests that, at least at the beginning, when R uses the MLE it errs on the side of giving us the "benefit of the doubt", which is exactly what we would want out of an approximation.

Figures 2(a) and 2(b) plot Δ and O for an IRL robot that uses the MLE. As expected, R gains more reward than a blindly obedient one (Δ>0), eventually converging to the maximum autonomy advantage (Figure 2(a)). On the other hand, as R learns about θ, its obedience decreases, until eventually it only listens to the human when she gives the optimal order (Figure 2(b)). As pointed out in Remark 2, Δ is eventually higher for more irrational humans. However, a more irrational human also provides noisier evidence about θ, so Δ converges more slowly. So, although initially Δ may be lower for a more irrational H, in the long run there is more to gain from being autonomous when interacting with a more irrational human. Figure 3 shows this empirically.

All experiments in this paper use the following parameters unless otherwise noted: at the start of each episode θ ∼ N(0, I), at each step $\phi_n(a) \sim N(0, I)$, there are 10 actions and 10 features, and β=2. (All experiments can be replicated using the Jupyter notebook available at [http://github.com/smilli/obedience](https://github.com/smilli/obedience).)

Finally, even with good approximations we may still have good reason to feel hesitant about disobedient robots. The naive analysis presented so far assumes that R's models are perfect, but it is almost certain that R's models of complex things like human preferences and behavior will be incorrect. By Lemma 1, R will not obey even the first order made by H if there is no θ∈Θ that makes H's order optimal. So clearly, an incorrect model of Θ can have disastrous effects. In the next section we look at how misspecification of the possible human preferences (Θ) and of human behavior ($\pi_H$) can cause the robot to be overconfident and, in turn, less obedient than it should be. The autonomy advantage can easily become the rebellion regret.

5 Model Misspecification
-------------------------

Incorrect Model of Human Behavior. Having an incorrect model of H's rationality (β) does not change the actions of MLE-R, but does change the actions of R∗.

###### Theorem 5. (Incorrect model of human policy)

Let $\beta_0$ be H's true rationality and $\beta'$ be the rationality that R believes H has. Let $\hat{\theta}$ and $\hat{\theta}'$ be R's estimates under the true and misspecified models, respectively. Call R robust if its actions under $\beta'$ are the same as its actions under $\beta_0$.

(a) MLE-R is robust.

(b) R∗ is not robust.

###### Proof.

(a) The log likelihood $l(h \mid \theta)$ is concave in $\eta = \theta/\beta$. So $\hat{\theta}'_n = (\beta'/\beta_0)\, \hat{\theta}_n$. This does not change R's action: $\arg\max_a \hat{\theta}'^T_n \phi_n(a) = \arg\max_a \hat{\theta}^T_n \phi_n(a)$.
(b) Counterexamples can be constructed based on the fact that as β→0, H becomes rational, but as β→∞, H becomes completely random. Thus, the likelihood will "win" over the prior when β→0, but not when β→∞. ∎

MLE-R is more robust than the optimal R∗. This suggests a reason beyond computational savings for using approximations: the approximations may be more robust to misspecification than the optimal policy.

###### Remark 3.

Theorem 5 may give us insight into why Maximum Entropy IRL (which is the MLE with β=1) works well in practice. In simple environments where noisy rationality can be used as a model of human behavior, getting the level of noisiness right doesn't matter.

Incorrect Model of Human Preferences. The simplest way that H's preferences may be misspecified is through the featurization of θ. Suppose $\theta \in \Theta = \mathbb{R}^d$, but R believes that $\Theta = \mathbb{R}^{d'}$. R may be missing features (d′<d) or may have irrelevant features (d′>d). R observes a d′-dimensional feature vector for each action: $\phi_n(a) \sim N(0, I_{d' \times d'})$. The true θ depends only on the first d features, but R estimates $\theta \in \mathbb{R}^{d'}$.

Figure 4 shows how Δ and O change over time as a function of the number of features for an MLE-R. When R has irrelevant features, it still achieves a positive Δ (and still converges to the maximum Δ, because $\hat{\theta}$ remains consistent over a superset of Θ). But if R is missing features, then Δ may be negative, and R would be better off being blindly obedient instead. Furthermore, when R has extra features it is more obedient than it would be with the true model, but when R is missing features, it is less obedient than it should be. This suggests that, to ensure R errs on the side of obedience, we should err on the side of giving R a more complex model.

Detecting Misspecification. If R has the wrong model of Θ, R may be better off being obedient. In the remainder of this section we look at how R can detect that it is missing features and fall back to obedience accordingly.

[Figure 5: (Detecting misspecification) The bold line shows the R that tries to detect missing features (Equation 3), compared to the MLE-R also shown in Figure 4.]

###### Remark 4. (Policy mixing)

We can make R more obedient, while maintaining convergence to the maximum advantage, by mixing R's IRL policy $\pi_{IR}$ with a blindly obedient policy:

$$\pi_R(h) = \mathbf{1}\{\delta_n = 0\}\, \pi_R^O(h) + \mathbf{1}\{\delta_n = 1\}\, \pi_{IR}(h)$$

$$P(\delta_n = i) = \begin{cases} c_n & i = 0 \\ 1 - c_n & i = 1 \end{cases}$$

where $1 \ge c_n \ge 0$ with $c_n \to 0$. In particular, we can have an initial "burn-in" period where R is blindly obedient for a finite number of rounds before switching to $\pi_{IR}$.

By Theorem 4 we know MLE-R will always obey H's first order if it is undominated. This means that for MLE-R, $O_1$ should be close to one if dominated orders are expected to be rare. As pointed out in Remark 4, we can have an initial "burn-in" period where R always obeys H. Let R have a burn-in obedience period of B rounds.
R uses this burn-in period to calculate the sample obedience on the first order:

$$\tilde{O}_1 = \frac{1}{B} \sum_{i=1}^{B} \mathbf{1}\left\{ \arg\max_a \hat{\theta}_1(h_i)^T \phi_i(a) = o_i \right\}$$

If $\tilde{O}_1$ is not close to one, then it is likely that R has the wrong model of Θ and would be better off just being obedient. So, we can choose some small ϵ and make R's policy

$$\pi_R(h) = \begin{cases} o_n & n \le B \\ o_n & n > B,\ \tilde{O}_1 < 1 - \epsilon \\ \arg\max_a \hat{\theta}_n^T \phi_n(a) & n > B,\ \tilde{O}_1 \ge 1 - \epsilon \end{cases} \tag{3}$$

Figure 5 shows the Δ of this robot compared to the MLE-R from Figure 4, after using the first ten orders as a burn-in period. This R achieves higher Δ than MLE-R when missing features, and still does as well as MLE-R when it isn't missing features. Note that this strategy relies on the fact that MLE-R always follows an undominated first order. If R were using the optimal policy, it is unclear what kind of simple property we could use to detect missing features. This gives us another reason for using an approximation: we may be able to leverage its properties to detect misspecification.
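A rough sketch of this detection scheme, reusing `rng`, `human_order`, and `mle` from the earlier snippets (B and ϵ are arbitrary choices here). In the i.i.d. repeated game, each burn-in round can be treated as a fresh "first order"; simulating missing features would additionally require truncating the feature vectors the robot sees.

```python
# Equation (3) as code: obey for B burn-in rounds, estimate the sample
# obedience ~O_1 from single-order MLE fits, and fall back to permanent
# obedience if it is low.
def detecting_robot(theta, horizon=50, B=10, eps=0.2):
    phis, orders, first_order_hits, actions = [], [], [], []
    for n in range(horizon):
        phi = rng.standard_normal((N_ACTIONS, N_FEATURES))
        o = human_order(theta, phi)
        phis.append(phi); orders.append(o)
        if n < B:
            a = o                                   # burn-in: blind obedience
            theta_1 = mle([phi], [o])               # MLE fit to this order alone
            first_order_hits.append(int(np.argmax(phi @ theta_1)) == o)
        elif np.mean(first_order_hits) < 1 - eps:   # ~O_1 low: model of Theta suspect,
            a = o                                   #   so stay blindly obedient
        else:
            a = int(np.argmax(phi @ mle(phis, orders)))
        actions.append(a)
    return actions
```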
6 Related Work
---------------

Ensuring Obedience. There are several recent examples of research that aims to provably ensure that H can interrupt R (Soares et al., 2015; Orseau and Armstrong, 2016; Hadfield-Menell et al., 2017). Hadfield-Menell et al. (2017) show that R's obedience depends on a tradeoff between R's uncertainty about θ and H's rationality. However, they considered R's uncertainty in the abstract; in practice, R would need to learn about θ through H's behavior. Our work analyzes how the way R learns about θ impacts its performance and obedience.

Intent Inference for Assistance. Instead of being blindly obedient, an autonomous system can infer H's intention and actively assist H in achieving it. Do What I Mean software packages interpret the intent behind what a programmer wrote to automatically correct programming errors (Teitelman, 1970). When a user uses a telepointer, network lag can cause jitter in her cursor's path; Gutwin et al. (2003) address this by displaying a prediction of the user's desired path, rather than the actual cursor path. Similarly, in assistive teleoperation, the robot does not directly execute H's (potentially noisy) input. It instead acts based on an inference of H's intent. In Dragan and Srinivasa (2012), R acts according to an arbitration between H's policy and R's prediction of H's policy. Like our work, Javdani et al. (2015) formalize assistive teleoperation as a POMDP in which H's goals are unknown, and try to optimize an inference of H's goal. While assistive teleoperation assumes a priori that R should act assistively, we show that under model misspecification it is sometimes better for R to simply defer to H, and we contribute a method to decide between active assistance and blind obedience (Remark 4).

Inverse Reinforcement Learning. We use inverse reinforcement learning (Ng et al., 2000; Abbeel and Ng, 2004) to infer θ from H's orders. We analyze how different IRL algorithms affect autonomy advantage and obedience, properties not previously studied in the literature. In addition, we analyze how misspecification of the features of the reward parameter space, or of H's rationality, impacts autonomy advantage and obedience. IRL algorithms typically assume that H is rational or noisily rational. We show that Maximum Entropy IRL (Ziebart et al., 2008) is robust to misspecification of a noisily rational H's rationality (β). However, humans are not truly noisily rational, and in the future it will be important to investigate other models of humans in IRL and their potential misspecifications. Evans et al. (2016) take a step in this direction and model H as temporally inconsistent and potentially having false beliefs. Furthermore, while IRL assumes that H acts without awareness of R's presence, cooperative inverse reinforcement learning (Hadfield-Menell et al., 2016) relaxes this assumption by modeling the interaction between H and R as a two-player cooperative game.

7 Conclusion
-------------

To summarize our key takeaways:

1. (Δ>0) If H is not rational, then R can always attain a positive Δ. Thus, forcing R to be blindly obedient requires giving up on a positive Δ.
2. (Δ vs O) There is a tradeoff between Δ and O. In the limit, R∗ attains the maximum Δ, but only obeys H's order when it is the optimal action.
3. (MLE-R) When H is noisily rational, MLE-R is at least as obedient as any other IRL-R to H's first order. This suggests that the MLE is a good approximation to R∗ because it errs on the side of obedience.
4. (Wrong β) MLE-R is robust to having the wrong model of the human's rationality (β), but R∗ is not. This suggests that we may not want to use the "optimal" policy, because it may not be very robust to misspecification.
5. (Wrong Θ) If R has extra features, it is more obedient than with the true model, whereas if it is missing features, it is less obedient. If R has extra features, it will still converge to the maximum Δ; but if R is missing features, it is sometimes better for R to be obedient. This implies that erring on the side of extra features is far better than erring on the side of fewer features.
6. (Detecting wrong Θ) We can detect missing features by checking how likely MLE-R is to follow the first order.

Overall, our analysis suggests that in the long term we should aim to create robots that intelligently decide when to follow orders, but in the meantime it is crucial to ensure that these robots err on the side of obedience and are robust to misspecified models.

8 Acknowledgements
-------------------

We thank Daniel Filan for feedback on an early draft.
21e08061-a27d-4722-b4a1-22801c230607
StampyAI/alignment-research-dataset/lesswrong
LessWrong
Slow motion videos as AI risk intuition pumps ***tl;dr:** When making the case for AI as a risk to humanity, try showing people an evocative illustration of what differences in processing speeds can look like, such as* [*this video*](https://vimeo.com/83664407)*. Over the past ~12 years of making the case for AI x-risk to various people inside and outside academia, I've found folks often ask for a single story of how AI "goes off the rails".  When given a plausible story, the mind just thinks of a way humanity could avoid that-particular-story, and goes back to thinking there's no risk, unless provided with another story, or another, etc.  Eventually this can lead to a realization that there's a lot of ways for humanity to die, and a correspondingly high level of risk, but it takes a while. Nowadays, before getting into a bunch of specific stories, I try to say something more general, like this: * There's a ton of ways humanity can die out from the introduction of AI.  I'm happy to share specific stories if necessary, but plenty of risks arise just from the fact that *humans are extremely slow.*  Transistors can fire about 10 million times faster than human brain cells, so it's possible we'll eventually have digital minds operating 10 million times faster than us, meaning from a decision-making perspective we'd look to them like stationary objects, like plants or rocks.  This speed differential exists whether or not you believe in a centralized AI system calling the shots, or an economy of many, so it applies to a wide variety of "stories" for how the future could go.  To give you a sense, here's what humans look like when slowed down by only around 100x: <https://vimeo.com/83664407> <-- (cred to an anonymous friend for suggesting this one) [At this point, I wait for the person I'm chatting with to watch the video.] Now, when you try imagining things turning out fine for humanity over the course of a year, try imagining advanced AI technology running all over the world and making all kinds of decisions and actions *10 million times faster than us, for 10 million subjective years.* Meanwhile, there are these nearly-stationary plant-like or rock-like "human" objects around that could easily be taken apart for, say, biofuel or carbon atoms, if you could just get started building a human-disassembler.  Visualizing things this way, you can start to see all the ways that a digital civilization can develop very quickly into a situation where there are no humans left alive, just as human civilization doesn't show much regard for plants or wildlife or insects. I've found this kind of argument — including an actual 30 second pause to watch a video in the middle of the conversation — to be more persuasive than trying to tell a single, specific story, so I thought I'd share it.
ccdac099-df2e-43e6-aa7f-a79a6ee93f1f
trentmkelly/LessWrong-43k
LessWrong
Beneath My Epistemic Dignity > But all of the foregoing is a lot less interesting than the technology and the quote. The quote itself eloquently sums up the real reason why Zakharov can’t stand Miriam. It also makes clear the reason why he chose the title “For I Have Tasted The Fruit” for his core text. He is referring to the story of the Garden of Eden that accompanied the opening cinematic. The difference is that, instead of mourning his expulsion from Eden, he instead revels in the acquisition of wrongly-forbidden knowledge. > > That this is tied to a technology called Intellectual Integrity is quite intriguing, if one is willing to entertain the idea for a moment. What would it mean for a society to have real intellectual integrity? For one, people would be expected to follow their stated beliefs to wherever they led. Unprincipled exceptions and an inability or unwillingness to correlate beliefs among different domains would be subject to social sanction. Valid attempts to persuade would be expected to be based on solid argumentation, meaning that what passes for typical salesmanship nowadays would be considered a grave affront. Probably something along the lines of punching someone in the face and stealing their money. > > This makes the fact that this technology relies on Ethical Calculus and Doctrine: Loyalty a bit of inspired genius on Reynolds’s part. We know that Ethical Calculus means that the colonists are now capable of building valid mathematical models for ethical behavior. Doctrine: Loyalty consists of all of the social techniques of reinforcement and punishment that actually fuses people into coherent teams around core leaders and ideas. If a faction puts the two together, that means that they are really building fanatical loyalty to the math. Ethical Calculus provides the answers; Doctrine: Loyalty makes a person act like he really believes it. We’re only at the third level of the tech tree and society is already starting to head in some wild directions compared to what we’r
b8f423dd-e602-453a-bbb0-41e422cfd237
trentmkelly/LessWrong-43k
LessWrong
Rationalist households: What can London learn from its predecessors? At our most recent meetup, the London LessWrongers began discussion of setting up one or more houses in the capital. This thread is intended for discussion and advice on planning ‘rationalist households’ and on making them thrive. You can also register your interest in being part of a London, UK rationalist house here. Those who currently live in or have previously lived in rationalist households, or who have relevant experience, are particularly encouraged to share their experiences, and any data on house setups is most welcome. It would be great if we could get case studies of several rationalist households, to compare approaches and aid other organizers. We’re considering having a room for visitors and people who are only in the city for part of the year, with an Airbnb-type arrangement for that room at other times. Therefore, we are seeking advice from Airbnb hosts on setting this up, as well as on its advantages and disadvantages. We would also like to hear about the common pitfalls of group living in order to avoid making basic errors.
98c4fa6b-f2cf-4f41-b3bd-2e2597c7b621
trentmkelly/LessWrong-43k
LessWrong
Journalism student looking for sources Hello Lesswrong community, I am a journalism student doing my capstone documentary on AI alignment. However, this is a topic that I want to make sure is done well. The last thing I would want is to confuse or mislead anyone.  That said, I have a few questions: Who would be the best people to reach out to for an interview? What would be the best foundational topics to help viewers understand the bigger picture? And are there any ethical or technical concerns I should be wary of when communicating this topic to people with little to no understanding of AI? I appreciate your time!
46c21190-4cb5-4aaa-8e4c-8881de14bf96
trentmkelly/LessWrong-43k
LessWrong
Apollo Creed problems and Clubber Lang problems -- framing, and how to address the latter? If you've seen the films Rocky I, II and III you'll be familiar with Apollo Creed and Clubber Lang, but I'll give a quick summary for those who haven't (spoilers ahead!). In Rocky I, Rocky (the character) is an underdog who is fighting the champion Apollo Creed. He eventually loses to Creed, but performs so well in the fight that his status as a boxer rises, and he's given a rematch against Creed in Rocky II, which he wins. Rocky III (the film) begins with Rocky (the character) losing embarrassingly to Clubber Lang, whom he was favored to defeat. Rocky (the character) spends the rest of the film trying to overcome the psychological consequences of losing to Clubber Lang and getting his mind back into a place where he can face Clubber Lang again and win.  I find in my own life that when I'm starting something fresh and I'm interested, it's like an Apollo Creed problem. I could "win" or "lose", but so long as I perform well, the conditions of my life improve and eventually I can overcome whatever it is. But when I've already failed at something once and then need to get back into it, there's a psychological wall that's difficult for me to overcome -- this would be like a Clubber Lang problem. It's like I have more of an unconscious resistance to working on those problems. I really have to force myself to work on them, and even if I manage to work on them, progress goes very slowly.  My mind just wants to focus on anything else it possibly can. It feels icky and demoralizing to work on them, and it's hard for me to sort out exactly why.  I would like to be better at solving Clubber Lang problems. So... two questions...  (1) Is there a better way to frame this issue? I feel like it should have been studied before and I don't have the keywords to go looking around for what has been written about it so far.  (2) Whether or not you (the reader) are aware of answers to (1), do you have insight into this issue, either from your own research or personal experience?
7fba53a9-ddba-4019-8bd1-6e55a20c4713
trentmkelly/LessWrong-43k
LessWrong
Meta-rationality and frames How should we think about thinking? In this sequence I outline an epistemology called meta-rationality, which in my opinion improves on existing epistemologies in a number of important ways. This first post introduces some of the key concepts in meta-rationality (in particular the concept of frames), provides some of the intellectual context behind my conception of meta-rationality, and lists the main claims I’ll be making in the next half-dozen posts. Those posts, which focus on introducing meta-rationality, constitute the first half of the sequence, and will be posted over the next month; the second half of the sequence focuses on more complex aspects of meta-rationality, and will be posted over the following month or two. The traditional approach to epistemology is to focus on our knowledge of propositions like “there is a red car in the garage” or “I’m feeling thirsty”, which can in principle be evaluated as true or false. At a high level, meta-rationality is about making epistemology less reductionist by focusing less on assigning credences to isolated propositions like these, and more on the larger-scale mental entities which we actually use when thinking about complex domains—entities including: * Ideologies like environmentalism, neoliberalism, communism, longtermism, etc * Scientific paradigms like darwinism, keynesianism, quantum physics, deep learning, etc * Life philosophies like stoicism, conformism, careerism, etc * Moral drives like egalitarianism, patriotism, compassion, etc * Epistemologies like empiricism, scientism, various schools of rationalism, etc * Persistent personality traits like openness to experience, ambition, narcissism, etc * Wide-ranging heuristics like “follow common-sense advice” or “move as fast as possible” I’ll call these frames.[1] I’ll very roughly define a frame as a cluster of mental entities and processes (such as concepts, beliefs, heuristics, instincts, habits, skills, mental models, desires, values, etc) which
72e3d8d6-2246-4c47-a3de-819aa0e11431
trentmkelly/LessWrong-43k
LessWrong
Covid 5/13: Moving On For over a year, Covid-19 has been the central fact of life.  The goal now is to make that no longer true.  If you’re reading this, chances are very high you are vaccinated. If you’re not reading this, but you live in the United States, chances are still pretty good you’re vaccinated.  The question everyone is asking is now, can life return fully to normal?  Yes. Well, almost. We’ll never be psychologically quite the same. We’ll never unlearn the lessons we’ve learned over the past year – nor would we want to – about the way our civilization and its institutions work, or about what matters in life. And at least for now, we’ll need to continue to worry about how others will interact with us and how to navigate people’s concerns and various governmental restrictions, as well as the Covid-19 conversations that they’ll doubtless want to have for a long time. If others don’t return to normal, there’s no fully normal to return to yourself. And that doesn’t mean quite fully normal quite yet, in the sense that there’s an amount of indoor crowding I’d still be inclined to avoid for the next few weeks or months.  And of course, things may be going well in America and most other highly vaccinated places, but the worldwide pandemic is far from over. Things in many other places remain quite bad, and will be quite bad for some time. But… mostly? Yes. For those who are fully vaccinated, life can safely return to normal.  This column can also, events willing, start winding down or transitioning to other matters. If things go as planned, there will steadily be less Covid-19 news to talk about each week, and I can shift my blogging time into other, longer-term pursuits once more. For now, let’s run the numbers. The Numbers Predictions Prediction from last week: Positivity rate of 3.5% (down 0.4%) and deaths decline by 7%. Result: Things improved faster than I expected, which is great. The fall in deaths makes perfect sense, as I was adjusting for strangely small drops
cea39495-2a46-460e-8f51-592fdb9e6d6b
trentmkelly/LessWrong-43k
LessWrong
Writing about Singularity: needing help with references and bibliography   It was Yudkowsky's Fun Theory sequence that inspired me to undertake the work of writing a novel on a singularitarian society... however, there are gaps I need to fill, and I need all the help I can get. It's mostly book recommendations that I'm asking for.   One of the things I'd like to tackle in it would be the interactions between the modern, geeky Singularitarianisms and Marxism, which I hold to be somewhat prototypical in that sense, as well as other utopisms. And contrasting them with more down-to-earth ideologies and attitudes, by examining the seriously dangerous bumps of the technological point of transition between "baseline" and "singularity". But I need to do a lot of research before I'm able to write anything good: if I'm not going to have any original ideas, at least I'd like to serve my readers with a collection of well-researched, solid ones.   So I'd like to have everything that is worth reading about the Singularity, specifically the Revolution it entails (in one way or another) and the social aftermath. I'm particularly interested in the consequences of the lag in the spread of the technology from the wealthy to the baselines, the potential for oppression of baselines and other continuations of current social imbalances, and suboptimal distribution of wealth. After all, according to many authors, we've had the means to end war, poverty, famine, and most infectious diseases since the sixties, and it's just our irrational methods of wealth distribution that have prevented it. That is, supposing the commonly alleged ideal of total lifespan and material welfare maximization for all humanity is what actually drives the way things are done. But even with other, different premises and axioms, there's much that can be improved and isn't, thanks to basic human irrationality, which is what we combat here.   Also, yes, this post makes my political leanings fairly clear, but I'm open to alternative viewpoints and actively seek them. I also don
7f8983ec-57af-426c-adce-49426da87a42
trentmkelly/LessWrong-43k
LessWrong
Consciousness as Metaphor: What Jaynes Has to Offer Cross-posted from my roam-blog. Greatly inspired by a lot of the stuff in Kaj's Multi-Agent Models of the Mind Sequence The first paragraph of a recent SlateStarCodex post: > Julian Jaynes' The Origin Of Consciousness In The Breakdown Of The Bicameral Mind is a brilliant book, with only two minor flaws. First, that it purports to explain the origin of consciousness. And second, that it posits a breakdown of the bicameral mind. I think it's possible to route around these flaws while keeping the thesis otherwise intact. So I'm going to start by reviewing a slightly different book, the one Jaynes should have written. Then I'll talk about the more dubious one he actually wrote. Nice job beautifully capturing my love of, and frustration with, this book. Jaynes has so many wild and exciting ideas and most of them are really weird. Not weird as in code for "stupid or wrong", but weird as in being so far outside what you thought the realm of possibility was that you're left sorta scratching your head. One paragraph summary of Jaynes' book: consciousness is learned, not innate. From the development of language up till the Bronze Age Collapse, people were directed around by the voices of the gods, which they heard in their heads and would have conversations with, and this was perfectly normal. The gods were neurologically real in the sense that cultural expectations allowed people to experience the thoughts that their right hemisphere produced as hallucinated voices that they attributed to the gods. Eventually writing, trading, and the collapse of most civilizations began to break down this bicameral mind, and modern consciousness (where "you" are in control and think things with your mind) came about. ... ... ... wut? I can already tell that "the gods were real, but not like in a supernatural way, like in a uniform auditory hallucinations across populations way" is weird enough to knock most people off their game. Scott's post does a great job of trying to get across this
c6409cd5-e4c5-4c1f-b10c-75284e0f2941
trentmkelly/LessWrong-43k
LessWrong
What is the nature of humans' general intelligence and its implications for AGI? Humans seem to have some form of generality. We seem capable of solving a large range of problems, and people who are capable in one area seem more capable in general. However, the nature of this generality is important. There are at least two options that I've thought of. 1) A general intelligence is intrinsically better at solving problems. 2) A general intelligence is better at solving problems in general because it is capable of absorbing social information about problems, and society has information about solving lots of different problems. Option 2 is the one I lean towards, as it fits with the evidence. Humans spent a long time in the stone age with the same general architecture, but can now solve a much larger set of problems because of education and general access to information. The difference is important because it has implications for the solving of novel problems (ones not solved by society today). If the form of generality we can make is all about absorbing social information, there are no guarantees about it being able to go beyond that social knowledge in a principled way. Conceptual leaps to new understanding might require immense amounts of luck and so be slow to accumulate. ASIs might be the equivalent of us stuck in the stone age, at least to start with.  Are people thinking about these kinds of issues when considering timelines?
4bec6919-5f1e-41b8-a975-2c264e849cf3
trentmkelly/LessWrong-43k
LessWrong
Announcing the London Initiative for Safe AI (LISA) The LISA Team consists of James Fox, Mike Brozowski, Joe Murray, Nina Wolff-Ingham, Ryan Kidd, and Christian Smith. LISA’s Advisory Board consists of Henry Sleight, Jessica Rumbelow, Marius Hobbhahn, Jamie Bernardi, and Callum McDougall. Everyone has contributed significantly to the founding of LISA, believes in its mission & vision, and assisted with writing this post.  TL;DR: The London Initiative for Safe AI (LISA) is a new AI Safety research centre. Our mission is to improve the safety of advanced AI systems by supporting and empowering individual researchers and small organisations. We opened in September 2023, and our office space currently hosts several research organisations and upskilling programmes, including Apollo Research, Leap Labs, MATS extension, ARENA, and BlueDot Impact, as well as many individual and externally affiliated researchers. LISA is open to different types of membership applications from other AI safety researchers and organisations.  * (Affiliate) members can access talks by high-profile researchers, workshops, and other events. Past speakers have included Stuart Russell (UC Berkeley, CHAI), Tom Everitt & Neel Nanda (Google DeepMind), and Adam Gleave (FAR AI), amongst others. * Amenities for LISA Residents include 24/7 access to private & open-plan desks (with monitors, etc), catering (including lunches, dinners, snacks & drinks), and meeting rooms & phone booths. We also provide immigration, accommodation, and operational support; fiscal sponsorship & employer of record (upcoming); and regular socials & well-being benefits. Although we host a limited number of short-term visitors for free, we charge long-term residents to cover our costs at varying rates depending on their circumstances. Nevertheless, we never want financial constraints to be a barrier to leading AI safety research, so please still get in touch if you would like to work from LISA’s offices but aren't able to pay. If you or your organisation are interested in w
3c85d235-d68c-4ce7-b797-7db68fd44b76
trentmkelly/LessWrong-43k
LessWrong
Changing Contra Dialects In 2008 I wrote my undergraduate linguistics thesis on the dialects of contra dancing (pdf). Some figures have variation, but that variation is constrained by the need for compatible dancing. For example, there are three main hand positions for promenading: If the Lark/Gent is used to a Butterfly promenade and puts their right hand on the Robin/Lady's shoulder, while the Robin/Lady expects a Courtesy Turn promenade and puts their right hand behind their back, the dance doesn't flow as well as if they both immediately put their hands in the place the other is expecting. This means each dance will typically settle into a consensus position. And because people are more likely to travel locally than farther away, these positions will have regional patterns. This is very similar to the constraints on language, hence the analogy to dialects and why the Linguistics department accepted my thesis. Over the past fifteen years I've observed some changes, but I was curious what this looked like more broadly, so I ran an online survey to gather some data. Here's what I found: (On the 2023 chart the size of the dots indicates the number of survey responses they represent. The code is on github.) This has felt like a big change to me. I remember New England having a very strong "we don't do hands on right and left through" culture (with the exception of Maine, which I didn't get to in 2008 but was well known for using hands) and then as it changed I remember feeling conflicted about whether to go along with it. I think part of what's happening is that when someone offers their hand out to you it feels pretty rude not to take it, so the variation spreads. Maybe the closest analogies in language are disrespectability cascades? It looks like the Skater's promenade has been expanding? I think this is another one driven in part by the mechanics of dancing: when someone puts their hands in front of you it's easier to see what's going on, and Skater's is also sl
0ba6e694-5871-43a8-a4d1-413f535fa1f3
trentmkelly/LessWrong-43k
LessWrong
Money is a way of thanking strangers In Ann Arbor, I live with 121,000 strangers. Together, we live among 10 million strangers in the state of Michigan, 332 million strangers in the USA, and nearly 8 billion strangers on Earth. I don't mind doing favors for my friends and loved ones, or even for one of the 8 billion strangers if I happen to be face to face with them. I even like it. It makes me feel good to demonstrate, for the benefit of those present, the fact that we live in a society where most people are basically benevolent toward strangers. I also feel happy to see the other person feeling happy, or at least less stressed. These small pleasures have entrained a habit in me, over the course of my life, to do small favors almost automatically. When the stranger I have helped says "thank you," it means something to me. But I've also noticed that I don't always feel that way. Sometimes, I notice my automatic consideration for strangers causing me to help others in ways that they'll never notice or appreciate. Sometimes, the people who do see get angry at me for it. For example, if I ride my bike to work, to keep myself fit so that I can do my job well, avoid getting sick in a way society will have to pay for, and spare the planet some CO2 emissions, sometimes drivers get randomly angry at me. And I'm not a bad bicyclist - I go with traffic, in bike lanes, wearing a helmet, with lights on my bike, stopping at stop signs. I also read quite a lot of scientific literature. We want a scientifically literate society, which is why we teach about science in school. Sometimes, I learn things from my reading that are relevant to a conversation I am having. It's rare that I talk to strangers about what I read, unless those strangers are scientists themselves. But I've noticed that sometimes, when I tell my loved ones what I read in the study, and it happens to contradict what they currently think, they sometimes start acting a little more like a stranger for a while. You might have experienced this yourself
7de27ad4-c40b-4280-9674-3329f76ace63
trentmkelly/LessWrong-43k
LessWrong
General Bitcoin discussion thread (June 2011) We've started a habit of creating periodic Bitcoin threads to confine discussion thereof to those threads and prevent excessive proliferation of Bitcoin topics in the discussion section.  Here is a link to the last one, which links to the other discussions.  Lots to talk about, and another bounce in Bitcoin's value (up to 33 then down to 24), so share your links and thoughts!
ff67e749-c2b0-4d4c-a33b-c5c7dc58744b
trentmkelly/LessWrong-43k
LessWrong
Another UBI article, this one on costs and benefits to society
839b9061-07a6-4e44-a0d9-839069ea81dd
trentmkelly/LessWrong-43k
LessWrong
What makes buying insurance rational? Hey, everyone! So I've been reading an article about expected utility; apparently, to figure out whether a risk is worth taking, you multiply the value of each outcome by its probability. And apparently insurance companies can make money because the expected value of buying insurance is lower than its price. So why would buying insurance be the rational action? I mean, intuitively it makes sense (you want to avoid the risk), but it doesn't seem to fit well with this idea. If insurance, almost by definition, is worth slightly less than its price, how is it worth buying? (sorry if it's a dumb question)
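For concreteness, here is a toy version of the calculation the question is gesturing at, with invented numbers. The standard textbook reconciliation is that utility is a concave function of money, so the expected *utility* of insuring can be higher even when the expected *money* is lower; a minimal sketch assuming log utility:

```python
import math

# Toy numbers (invented): 1% chance of losing $50,000; premium $600,
# which is more than the expected loss of $500, so the insurer profits.
p_loss, loss, premium, wealth = 0.01, 50_000, 600, 100_000

ev_uninsured = -p_loss * loss    # -500.0 expected dollars
ev_insured   = -premium          # -600   expected dollars: worse in raw money

# With a concave utility of wealth (log, as one example), the ranking flips:
eu_uninsured = (1 - p_loss) * math.log(wealth) + p_loss * math.log(wealth - loss)
eu_insured   = math.log(wealth - premium)
print(eu_uninsured, eu_insured)  # ~11.5060 vs ~11.5069: insuring wins in utility
```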
169fcf09-c28d-4cbf-b156-70f77b5c58a6
trentmkelly/LessWrong-43k
LessWrong
Why the Solutions to AI Alignment are Likely Outside the Overton Window The challenge with Western intellectual approaches to philosophy is that they rely on some framework for understanding the world, where each such framework precludes some subset of possibilities and predisposes others. More holistic Eastern approaches, by contrast, treat each philosophy, ideology, or other cognitive construct as a function that generates its own outcomes, and simply observe what each functions to achieve in every aspect of this existence. The benefit of Western intellectual approaches is that they are more readily understood and communicated, whereas Eastern approaches are notoriously difficult to understand and communicate without years of such observation, usually in the form of meditation or other practices. This makes them largely unavailable to Western science. A recently developed approach called Human-Centric Functional Modeling (HCFM), however, bridges these two approaches by representing the human systems through which we perceive and otherwise interact with the world as moving through graphs called functional state spaces, where these graphs can also hypothetically be used to describe the behavior of any living system, whether as simple as homeostasis or as complex as cognition. In these functional state spaces, all problems are defined as the lack of a path from one functional state to another, yielding the insight that all adaptive processes through which nature solves problems are potentially generalizations of the same solution in functional state space. Representing the behavior of the cognitive system in terms of a set of paths through this mathematical space means that any property of that system's behavior can potentially be given a mathematical definition. This would mean that properties ascribed to the human system within existential traditions, where those properties are only discernible through the first-person observation generally rejected as subjective and unscientific by Western science, might for th
42b21123-ddaf-49f9-bc86-f8c56ec1d934
trentmkelly/LessWrong-43k
LessWrong
Mach's Principle: Anti-Epiphenomenal Physics Previously in series:  Many Worlds, One Best Guess Followup to:  The Generalized Anti-Zombie Principle > Warning:  Mach's Principle is not experimentally proven, though it is widely considered to be credible. Centuries ago, when Galileo was promoting the Copernican model in which the Earth spun on its axis and traveled around the Sun, there was great opposition from those who trusted their common sense: "How could the Earth be moving?  I don't feel it moving!  The ground beneath my feet seems perfectly steady!" And lo, Galileo said:  If you were on a ship sailing across a perfectly level sea, and you were in a room in the interior of the ship, you wouldn't know how fast the ship was moving.  If you threw a ball in the air, you would still be able to catch it, because the ball would have initially been moving at the same speed as you and the room and the ship.  So you can never tell how fast you are moving. This would turn out to be the beginning of one of the most important ideas in the history of physics.  Maybe even the most important idea in all of physics.  And I'm not talking about Special Relativity. Suppose the entire universe was moving.  Say, the universe was moving left along the x axis at 10 kilometers per hour. If you tried to visualize what I just said, it seems like you can imagine it.  If the universe is standing still, then you imagine a little swirly cloud of galaxies standing still.  If the whole universe is moving left, then you imagine the little swirly cloud moving left across your field of vision until it passes out of sight. But then, some people think they can imagine philosophical zombies: entities who are identical to humans down to the molecular level, but not conscious.  So you can't always trust your imagination. Forget, for a moment, anything you know about relativity.  Pretend you live in a Newtonian universe. In a Newtonian universe, 3+1 spacetime can be broken down into 3 space dimensions and 1 time dimension, and you can
d301cf1c-95ca-4823-babc-eacd6c7ace51
trentmkelly/LessWrong-43k
LessWrong
Abstractions and translation This is a linkpost for https://amirbolous.com/posts/abstraction * Introduction * Languages and Meaning * Communication and Translation * Closing Thoughts Introduction For the past three weeks, I've been building an interpreter for Lisp (more specifically, for a Lisp dialect I created inspired by Clojure and Scheme). Building a language from the ground up is incredibly fascinating because you get to see in real time how different abstractions come together. In the beginning my language was just a simple calculator that handled single expressions. Pretty straightforward, nothing fancy. Then it was a calculator that could handle multiple, nested expressions. Then it could take on relational and logical operators. OK, becoming cooler, but still more reminiscent of a calculator than a programming language. What about state? That is, saving, calling, and using previous values. Enter variables and functions. Level by level, I climbed up the ladder of abstraction and watched how the language unfolded. First I was writing functions in Lispy (the name I gave the language, inspired by this), then I was writing functions which used other functions, then I was writing functions which used other functions which used other functions, and it was off to the races.   Languages and Meaning Three weeks ago, before the interpreter, the rules and syntax governing my language had no meaning (in my little domain). In fact, the rules and grammars on their own were just gibberish, worthless to an outsider who did not know what they represented. They were just letters guided by certain design choices. But the interpreter gave them meaning. This is pretty profound because we use language so often that I sometimes forget how high up the abstraction ladder we are. Language is powerful because it carries meaning, but the syntax of a language itself does not. The syntax becomes a language when it encodes the underlying abstractions it intended to. For example, my interpreter at the start could
fff015a5-e8a5-407c-9a0e-36a63735f808
trentmkelly/LessWrong-43k
LessWrong
The Mind Is A Shaky Control System Judges tell juries to disregard statements they find inadmissible in court. Juries have a hard time following this advice. The mind is a shaky control system. It trembles, like your hands. It gets squished by the g forces, like your body. Information flows in, and the mind responds, whether you want it to or not. But you do have some control. You can choose to increase or decrease the amount of attention you pay to certain inputs. You can choose what to seek out. You can make decisions about what to include or exclude from a formal, explicit model. You can mentally "tag" pieces of information as being "no evidence," "weak evidence," "strong evidence," and so on. A practice of rationality is like an umbrella to protect you from the rain, or a windbreaker to keep off the wind. It doesn't always work, but it might help. There's a theory to it, and there's a practice. The practice doesn't always look quite like the theory. How these attempts to control your own mind's response to information will actually affect the way you synthesize all the evidence available, and how that synthesis will ultimately inform your decisions, is hard to specify. When you say that an argument from authority is "not evidence," how does hearing that argument, and tagging it as "not evidence," affect your actual synthesis? How does that synthesis affect your decision-making? We may not be able to describe this process in detail. But it surely diverges from any ideal, mathematical specification of a perfect process for interpreting evidence and forming judgments. The mind is an imperfect control system. So is the body. My hands shake, I bump into things, and I struggle to approach every task with the level of power, grace, relaxation and focus that would be most appropriate. Over a lifetime of banging my literal head on literal doorframes, I've both learned how to improve my body control, and how to mitigate my imperfections. I've gone through the same process with learning to control my
dbbeb954-c8b4-43c6-a539-80180f768829
trentmkelly/LessWrong-43k
LessWrong
Emergent Abilities of Large Language Models [Linkpost] I've argued before against the view that intelligence is a single coherent concept, and that AI will someday suddenly cross the threshold of general intelligence resulting in a hard takeoff. This paper doesn't resolve that debate entirely, but it provides strong evidence that language models often have surprising jumps in capabilities.  From the abstract:  > Scaling up language models has been shown to predictably improve performance and sample efficiency on a wide range of downstream tasks. This paper instead discusses an unpredictable phenomenon that we refer to as emergent abilities of large language models. We consider an ability to be emergent if it is not present in smaller models but is present in larger models. Thus, emergent abilities cannot be predicted simply by extrapolating the performance of smaller models. The existence of such emergence implies that additional scaling could further expand the range of capabilities of language models. Key Figures: [figure images omitted] Related: More is Different for AI, Grokking: Generalization Beyond Overfitting on Small Algorithmic Datasets, Yudkowsky and Christiano on Takeoff Speeds
0ebe590f-49aa-438b-9a5a-8e11335326f5
trentmkelly/LessWrong-43k
LessWrong
A Certain Formalization of Corrigibility Is VNM-Incoherent Edit, 5/16/23: I think this post is beautiful, correct in its narrow technical claims, and practically irrelevant to alignment. This post presents an unrealistic picture of the role of reward functions in reinforcement learning, conflating "utility" with "reward" in a type-incorrect fashion. Reward functions are not "goals", real-world policies are not "optimal", and the mechanistic function of reward is (usually) to provide policy gradients to update the policy network. I expect this post to harm your alignment research intuitions unless you've already inoculated yourself by deeply internalizing and understanding Reward is not the optimization target. If you're going to read one alignment post I've written, read that one.  Follow-up work (Parametrically retargetable decision-makers tend to seek power) moved away from optimal policies and treated reward functions more realistically. ---------------------------------------- Eliezer wrote: > corrigibility [is] "anti-natural" in a certain sense that makes it incredibly hard to, eg, exhibit any coherent planning behavior ("consistent utility function") which corresponds to being willing to let somebody else shut you off, without incentivizing you to actively manipulate them to shut you off. Surprisingly, I wasn't able to find any formal analysis of this situation. I did the analysis, and it turned out to be straightforward and fruitful. To analyze the situation, I consider corrigibility to be an agent's willingness to let us modify its policy, without being incentivized to manipulate us. The convergent instrumentality of avoiding correction & manipulating humans Let's consider a simple setting in which an agent plans over a 10-timestep episode, where reward R is given at the last step. We'll try to correct the agent at t=1. To sidestep embedded agency nastiness with self-modelling, we'll suppose the agent models the situation as "if I get corrected, I must follow the policy πcorrect after t=1."  Consider thi
c1217837-3f19-466b-bf12-f1d218e4f4d4
trentmkelly/LessWrong-43k
LessWrong
What Decision Theory is Implied By Predictive Processing? At a fairly abstract/stylized level, predictive processing models human cognition and behavior as always minimizing predictive error. Sometimes, the environment is "fixed" and our internal models are updated to match it - e.g. when I see my untied shoelace, my internal model updates to include an untied shoelace. Other times, our internal model is "fixed", and we act on the environment to make it better match the model - e.g. "wanting food" is internally implemented as a strong expectation that I'm going to eat soon, which in turn makes me seek out food in order to make that expectation true. Rather than having a utility function that values food or anything like that, the decision theory implied by predictive processing just has a model in which we obtain food, and we try to make the model match reality. Abstracting out the key idea: we pack all of the complicated stuff into our world-model, hardcode some things into our world-model which we want to be true, then generally try to make the model match reality. While making the model match reality, there will be knobs we can turn both "in the model" (i.e. updates) and "in reality" (i.e. actions); there's no hard separation between the two. There will be things in both map and reality which we can change, and there will be things in both map and reality which we can't change. It's all treated the same. At first glance, that looks potentially quite useful for embedded agency. (My own interest in this was piqued partly because a predictive-processing-like decision theory seems likely to produce abstraction boundaries which look like Cartesian boundaries. As in that post, it seems like some of the intuitive arguments we make around decision theories would naturally drop out of a predictive-processing-like decision theory.) What problems does such a decision theory run into? What sort of things can we hardcode into our world-model without breaking it altogether? What things must be treated as "fixed" when making the m
93b1977c-9f98-4f44-bb61-4df6587728ef
trentmkelly/LessWrong-43k
LessWrong
Open Thread, April 2011 It seems we have agreed that open threads will continue but that they will go in the Discussion section, so here's this month's thread.
9487457a-1360-4685-9242-a95136d79b08
trentmkelly/LessWrong-43k
LessWrong
Knowledge Extraction Plan (KEP): An alternative to reckless scaling Alignment Research Community,  My name is Akshay Swani. I recently graduated from Princeton University (Class of 2025) and while I'm not within the alignment community myself, I have been following the AGI/ASI alignment and containment crisis for the past year or so and have written a 20-page memo that I think captures the situation nicely for non-technical outsiders like myself (and may even be helpful for some with a technical education). The memo is titled Alignment and Containment in Artificial Intelligence: An Urgent Public Policy and National Security Issue.  At the end of the memo I propose a Knowledge Extraction Plan (KEP) as a 21st Century Manhattan Project and an alternative to the reckless scaling that continues despite mathematically unsolved alignment and containment at AGI and ASI thresholds. Based on scaling trends, it looks to me like an irreversible RSI-ASI intelligence explosion will occur within the next 6-18 months. The KEP would be a tightly scoped, internationally coordinated protocol designed to extract maximum epistemic, scientific, and technical value from a frontier model in the very early stages of ASI, under containment and before escape, to avoid the existential risk. The window would be short - around 24 hours or so. The goal would be short-term alignment and containment, knowledge extraction, and then immediate shutdown.   I am going to include a link to the full memo, which includes an explanation of the KEP in the Appendix. I have also shared the Google Doc with the LessWrong email address:  https://docs.google.com/document/d/1slzFpDXVLFD5xmF1A07NdFDG9OFQtPaEvCZWhfWj0xY/edit?usp=sharing The memo has the following sections after the Executive Summary:  I. The Rapid Approach of AGI  II. RSI Accelerates the Development from AGI to ASI  III. Scenarios from the Future: Understanding What Could Happen  IV. A Critical Safety Gap in Alignment and Containment  V. What Needs to be Done to Mathematically Solve Al
1e17810c-c33e-4005-9fe0-be5e410cc308
trentmkelly/LessWrong-43k
LessWrong
Normalising utility as willingness to pay I've thought of a framework that puts most of the methods of intertheoretic utility normalisation and bargaining on the same footing. See this first post for a reminder of the different types of utility function normalisation. Most of the normalisation techniques can be conceived of as a game with two outcomes, and each player can pay a certain amount of their utility to flip from one outcome to another. Then we can use the maximal amount of utility they are willing to pay as the common measuring stick for normalisation. Consider for example the min-max normalisation: this assigns utility 0 to the expected utility if the agent makes the worst possible decisions, and 1 if they make the best possible ones. So, if your utility function is u, the question is: how much utility would you be willing to pay to prevent your nemesis (a −u maximiser) from controlling the decision process, and let you take it over instead? Dividing u by that amount[1] will give you the min-max normalisation (up to the addition of a constant). Now consider the mean-max normalisation. For this, the game is as follows: how much would you be willing to pay to prevent a policy from choosing randomly amongst the outcomes ("mean"), and let you take over the decision process instead? Conversely, the min-mean normalisation asks how much you would be willing to pay to prevent your nemesis from controlling the decision process, and shift to a random process instead. The mean difference method is a bit different: here, two outcomes are chosen at random, and you are asked how much you are willing to pay to shift from the worst outcome to the best. The expectation of that amount is used for normalisation. The mutual worth bargaining solution has a similar interpretation: how much would you be willing to pay to move from the default option to one where you controlled all decisions? A few normalisations don't seem to fit into this framework, most especially those that depend on the squ
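Concretely, for the min-max case (my own rendering of the construction above, writing E_π[u] for the expected utility of following policy π): the most a u-maximiser would pay to swap the nemesis for itself is the full spread

$$\max_\pi \mathbb{E}_\pi[u] \;-\; \min_\pi \mathbb{E}_\pi[u],$$

and dividing u by this spread forces best-case play minus worst-case play to equal exactly 1, recovering the 0-to-1 normalisation up to an additive constant.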
7e2280a5-211b-42b1-825d-f4bcdda62a8f
trentmkelly/LessWrong-43k
LessWrong
On the nature of anxiety   To understand anxiety, we'd first need to clear up our definition of stress. Why I choose not to rely on existing explanations is that while modern medicine acknowledges the existence of stress and its influence on the body, the discussion about its nature is largely avoided. There is no reliable measurement of stress, so it cannot serve as a basis for any diagnosis. Often, because of a patient's circumstances, stress cannot be avoided, so it's easier to minimize its effects by supplying medication. The upper limit of mental stress that human beings can handle is also elusive: whatever circumstances people are thrown into, they seem able to sustain themselves, overcome, and even thrive (look up the post-war baby boom). The definition I choose to build will be based on an image of the (human) body as a gathering of systems (organs, tissues, cells) that live in symbiosis, work towards sustaining the body, and stay in constant communication with each other. If you think of stress, you can probably clearly imagine its effects; the easiest example would be observing the face of a person placed in some unfortunate social situation. When a body part "stresses", its movements become awkward, the smoothness of motion interrupted by sudden jolts, as if an orchestra were constantly missing notes. The subject is caught in the act of self-observation, which is supposed to control the outcome, but the result is always far from what one would achieve while "acting natural". It is as if all of the information we try to supply to the body, in order to get it under control, interferes with a natural "flow". The only thing that appears to provide relief is letting go of control attempts entirely. Such a mechanism can actually be seen in many other aspects of nature, if you look close enough.  Anxiety could be summed up as existing in a constant state of stress, the source of which cannot be ea
06a2a583-0afa-494c-9d1f-4d3f18f05daf
trentmkelly/LessWrong-43k
LessWrong
[ASoT] Some thoughts about imperfect world modeling So a few posts ago I looked at the problem of not being able to anticipate all consequences of an action as being related to deceptive mesa-optimization, and also to outer alignment. This post digs more into some of the things I only touched on briefly in that post. Editor’s note: I’m experimenting with having a lower quality threshold for just posting things even while I’m still confused and unconfident about my conclusions, but with this disclaimer at the top. Thanks to Kyle and AI_WAIFU for discussions. Last time, we narrowed down the problem to [certain actions we can take now] leading to [a world such that if [RSA-2048 or some other thing we can’t simulate] happens, then [bad things will happen]] (brackets added for easier parsing of this sentence). This is because we can't plan for a future where RSA-2048 is solved, but if we can already see RSA-2048 in the world, then we can plan forward from that and see when things blow up.  So if we could somehow figure out how to ensure that no action puts us into an RSA-2048-vulnerable world, then we could prevent this failure case. Unfortunately, knowing precisely whether a world is vulnerable would essentially require us to simulate RSA-2048, which gets us nowhere. However, one observation is that we don't have to know precisely, we just need to know whether a world could be vulnerable — for example, if the agent removes its own safeguard, then we wouldn’t want to allow that even if the model is in fact safe and wouldn’t destroy the world when it sees RSA-2048. Telling whether side effects are dangerous is hard  One thing we might think to do is instead simulate putting a dummy AI in the “do rollouts to find out if these actions destroy the world” box (the word “box” here has no relation to the AI box experiment), such that this dummy AI tries to kill everyone when it sees something we can actually simulate, like the string "chariots chariots" or something. Then, we can execute the action in the simulation, and then simula
6c7f5e22-4780-4e46-8672-49845be9643f
trentmkelly/LessWrong-43k
LessWrong
New paper: AGI Agent Safety by Iteratively Improving the Utility Function This post is to announce my new paper AGI Agent Safety by Iteratively Improving the Utility Function. I am also using this post to add some extra background information that is not on the paper. Questions and comments are welcome below. From the abstract: > While it is still unclear if agents with Artificial General Intelligence (AGI) could ever be built, we can already use mathematical models to investigate potential safety systems for these agents. We present an AGI safety layer that creates a special dedicated input terminal to support the iterative improvement of an AGI agent's utility function. The humans who switched on the agent can use this terminal to close any loopholes that are discovered in the utility function's encoding of agent goals and constraints, to direct the agent towards new goals, or to force the agent to switch itself off. > An AGI agent may develop the emergent incentive to manipulate the above utility function improvement process, for example by deceiving, restraining, or even attacking the humans involved. The safety layer will partially, and sometimes fully, suppress this dangerous incentive. [...] The above corrigibility or stop button design problem has been considered before in the AGI safety literature: see the paper for detailed references to related work. In meta-discussions about this topic, both on and off the web, I have frequently found a line of speculative reasoning which says that entirely new ways of thinking about machine reasoning might be needed, before these problems can be fully solved. Radical breakthroughs are always welcome, but the line of reasoning advanced in this paper goes the opposite way: there is plenty of room to make progress within the scope of the current standard models. In developing and presenting the safety layer in the paper, I have tried to stay as close as possible to mainstream agent and machine learning models. Using these, I present a specific well-defined AGI safety layer design, which pro
b5b729aa-a997-4f7a-b458-826d1822246e
trentmkelly/LessWrong-43k
LessWrong
Why Are So Many Rationalists Polyamorous? Originally posted at Living Within Reason. Last week, Jacob Falkovich, of the Putanumonit blog, put up a post trying to figure out why rationalists are disproportionately polyamorous. He notes that about 5% of Americans engage in consensual nonmonogamy, while 17% of Americans in the 2014 Less Wrong survey indicated that they did. My expectation is that the number for both is slightly higher today. In service of this goal, Falkovich developed several theories and surveyed a number of his readers. His results ended up inconclusive. Since this involves the intersection of the two themes of this blog – rationality and nonmonogamous relationships – I thought I would offer my own theories about why this might be the case. I don’t have any survey data, but if anyone is planning on doing a survey, you may want to include some questions evaluating these theories. 1. THE TRADITIONAL JUSTIFICATIONS FOR MONOGAMY ARE IRRATIONAL Rationalists try to be rational about everything, so we also try to be rational about relationships. Relationship anarchy is my attempt to derive a rational relationship style from first principles. While there are some good reasons to be monogamous, anecdotally, the most common justifications I hear for monogamy are jealousy-related. People don’t want open relationships because they would be jealous of their metamours (and often, their partners). But jealousy is just an emotion, and rationalists have a tradition of distrusting emotions. Falkovich somewhat addressed this in his first theory – overcoming intuitions: > A core tenet of Rationality is that what feels true is not necessarily what is true. What feels true may simply be what is pleasant, politically expedient, or what fits your biases and preconceptions. The willingness to entertain the idea that your intuitions about truth may be wrong is a prerequisite for learning Rationality, and Rationality further cultivates that skill. Unfortunately, Falkovich’s analysis is frustrated by the lack o
1dbf1500-d696-4c8c-95e4-dee9e4896913
trentmkelly/LessWrong-43k
LessWrong
Alien neuropunk slaver civilizations Here's some blue-sky speculation about one way alien sapients' civilizations might develop differently from our own. Alternatively, you can consider it conworlding. Content note: torture, slavery. Looking at human history, after we developed electronics, we painstakingly constructed machines that can perform general computation, then built software which approximates the workings of the human brain. For instance, we nowadays use in-silico reinforcement learning and neural nets to solve various "messy" problems like computer vision and robot movement. In the future, we might scan brains and then emulate them on computers. This all seems like a very circuitous course of development - those algorithms have existed all around us for thousands of years in the form of brains. Putting them on computers requires an extra layer of technology. Suppose that some alien species's biology is a lot more robust than ours - their homeostatic systems are less failure-prone than our own, due to some difference in their environment or evolutionary history. They don't get brain-damaged just from holding their breath for a couple minutes, and open wounds don't easily get infected. Now suppose that after they invent agriculture but before they invent electronics, they study biology and neurology. Combined with their robust biology, this leads to a world where things that are electronic in our world are instead controlled by vat-grown brains. For instance, a car-building robot could be constructed by growing a brain in a vat, hooking it up to some actuators and sensors, then dosing it with happy chemicals when it correctly builds a car, and stimulating its nociceptors when it makes mistakes. This rewarding and punishing can be done by other lab-grown "overseer" brains trained specifically for the job, which are in turn manually rewarded at the end of the day by their owner for the total number of cars successfully built. Custom-trained brains could control chemical plants, traffic light
9dac1683-8c67-48ed-9eed-043cdc656aaf
trentmkelly/LessWrong-43k
LessWrong
[SEQ RERUN] Faster Than Science Today's post, Faster Than Science was originally published on 20 May 2008. A summary (taken from the LW wiki):   > Is it really possible to arrive at the truth faster than Science does? Not only is it possible, but the social process of science relies on scientists doing so - when they choose which hypotheses to test. In many answer spaces it's not possible to find the true hypothesis by accident. Science leaves it up to experiment to socially declare who was right, but if there weren't some people who could get it right in the absence of overwhelming experimental proof, science would be stuck. Discuss the post here (rather than in the comments to the original post). This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was Changing the Definition of Science, and you can use the sequence_reruns tag or rss feed to follow the rest of the series. Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.
bc29725b-7834-466c-90c6-8c8ee6d26293
trentmkelly/LessWrong-43k
LessWrong
You’re Measuring Model Complexity Wrong TLDR: We explain why you should care about model complexity, why the local learning coefficient is arguably the correct measure of model complexity, and how to estimate its value. In particular, we review a new set of estimation techniques introduced by Lau et al. (2023). These techniques are foundational to the Developmental Interpretability research agenda and constitute the first generation of methods for detecting and understanding phase transitions, with potential applications for both interpretability and mechanistic anomaly detection. We expect this set of techniques to become a fixture in the alignment toolkit, and we've published a library and examples to help you get started. > This post is based on the paper, "Quantifying degeneracy in singular models via the learning coefficient" by Edmund Lau, Daniel Murfet, and Susan Wei (2023). The content builds on previous posts by @Liam Carroll on effective dimensionality in neural networks and the resulting perspective on phase transitions. Why Model Complexity? Model Comparison Comparing models matters for safety. Given two models with the same behavior on a particular set of evals, we would like to be able to predict how they'll behave on out-of-distribution data. Can we distinguish the deceptively aligned model from the non-deceptively aligned model? As a first pass, can we predict that two models will or will not behave similarly in the future?  Comparing via weights. Because models are singular (parameters do not correspond one-to-one with functions), it's not possible to compare weights directly. Very different choices of weights can implement the same functions, and very similar choices of weights can implement qualitatively different functions.  Comparing via behavior. We also can't compare models at the level of the loss because different functions are compatible with the same loss. The same is true even if we compare models sample-by-sample: we'd need an astronomical number of inputs to meaningfu
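For orientation, the estimator at the heart of Lau et al. (2023), as I understand it (treat this as a sketch and defer to the paper for exact conventions): with n samples, empirical loss L_n, a local minimum w*, and inverse temperature β* = 1/log n, the local learning coefficient is estimated as

$$\hat{\lambda}(w^*) \;=\; n\beta^*\Big(\mathbb{E}^{\beta^*}_{w\mid w^*}\big[L_n(w)\big] - L_n(w^*)\Big), \qquad \beta^* = \frac{1}{\log n},$$

where the expectation is taken over a posterior localised near w*, sampled in practice with SGLD. Intuitively, the flatter and more degenerate the loss landscape around w*, the smaller the gap between the posterior-average loss and the loss at the minimum, and hence the smaller the estimated complexity.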
1e4225a5-7612-4175-a3e1-3143aac192b2
trentmkelly/LessWrong-43k
LessWrong
Welcome to Philosophy Discussion [Edit With Your Details] (The following are our suggestions for what kind of information is best to include in the welcome post of your group; feel free to replace them with whatever you think is best) What kind of events does your group usually run? What does it usually do? How frequently does your group organize events or meet? Who would be a good fit for your group? Should they have any particular skills or have done some specific background reading?
47fecb10-6d28-4a22-b651-370efe87c466
trentmkelly/LessWrong-43k
LessWrong
How to safely use an optimizer Summary: The post describes a method that allows us to use an untrustworthy optimizer to find satisficing outputs. Acknowledgements: Thanks to Benjamin Kolb (@benjaminko), Jobst Heitzig (@Jobst Heitzig) and Thomas Kehrenberg (@Thomas Kehrenberg)  for many helpful comments. Introduction Imagine you have black-box access to a powerful but untrustworthy optimizing system, the Oracle. What do I mean by "powerful but untrustworthy"? I mean that, when you give an objective function f as input to the Oracle, it will output an element x that has an impressively low[1] value of f(x). But sadly, you don't have any guarantee that it will output the optimal element and e.g. not one that's also chosen for a different purpose (which might be dangerous for many reasons, e.g. instrumental convergence). What questions can you safely ask the Oracle? Can you use it to create utopia by asking for designs of machines, proteins, computer programs, etc.? Or are you risking the destruction of everything that we value if you dare to use such designs? As an example, the Oracle could be a learned system; in that case, the topic of this post would be finding a way to get useful work out of the Oracle despite its inner misalignment. In this post I'll describe a technique that allows us to safely use the Oracle under fairly weak assumptions. This approach can also be considered to be a way of controlling arbitrarily powerful AI systems. Edit: I've written a bit more on the motivation for this setting in a comment. One neat trick > This isn't fair, isn't fair, isn't fair! There's a limit to how many constraints you can add to a problem before it really is impossible! > > (Harry Potter and the Methods of Rationality, Chapter 56) Let O be a finite set of possible outputs of the Oracle (e.g. strings of length at most l) and f:O→R be our objective function. Let's assume we are happy with an output that satisfices; i.e. we want to find an output x such that the value of f(x) is lower than s
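A sketch of the basic satisficing interface set up here (my own rendering under stated assumptions, not the post's full scheme; `oracle` and `satisfice` are hypothetical names): since f itself is known and cheap to evaluate, any output of the untrusted Oracle can at least be verified against the threshold s before use.

```python
# Sketch: wrap an untrusted optimizer so we only ever act on *verified*
# satisficing outputs.
from typing import Callable, Optional, TypeVar

X = TypeVar("X")  # type of Oracle outputs (e.g. bounded-length strings)

def satisfice(oracle: Callable[[Callable[[X], float]], X],
              f: Callable[[X], float],
              s: float) -> Optional[X]:
    """Ask the untrusted oracle to minimise f, but accept its answer only
    after verifying f(x) <= s ourselves; f must be cheap to evaluate."""
    x = oracle(f)                      # untrusted step: may be adversarial
    return x if f(x) <= s else None    # trusted step: plain verification
```

Plain verification rules out outputs that miss the threshold, but not threshold-satisfying outputs selected for ulterior purposes — that harder problem is what the post's construction targets.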
bf823b11-41ca-4a7d-815c-6760c2d74877
LDJnr/LessWrong-Amplify-Instruct
LessWrong
"Judging from the upvotes, it seems like people are quite interested in my grandparents' failure to emigrate from Communist China before it was too late, so I thought I'd elaborate here with more details and for greater visibility. They were all actually supporters of the Communist Party at the beginning, and saw it as potential saviors/liberators of the Chinese people and nation. They were also treated well at the beginning - one of them (being one of few people in China with a higher education) was given a high official post, and another retained a manager position at the factory that he used to own and operate. The latter side of the family considered moving away from China prior to the Communist victory since they would be classified as "capitalists" under the new regime, but were reassured by high level party members that they would be treated well if they stayed and helped to build the "new China". They actually did relatively ok, aside from most of their property being confiscated/nationalized early on and their living standards deteriorating steadily until they were forced to share the house that they used to own with something like 4 other families, and them being left with a single room in which to live. The other side were more straightforward "true believers" who supported Communism at an early stage, as they were part of the educated class who generally saw it as the right side of history, something that would help China leapfrog the West in terms of both social and economic progress. My grandmother on that side even tried to run away from her family to join the revolution after graduating from the equivalent of high school. Just before the Communists took power, my grandmother changed her mind, and wanted to move away from China and even got the required visas. (I asked my father why, and he said "women's intuition" which I'm not sure is really accurate but he couldn't provide further details.) But my grandfather still believed in the cause so they stayed. After the Communist victory, there was still about a year before the borders were fully shut, but it seemed like things were moving in a good direction and disproved my grandmother's worries. My grandfather was given an important post and went around making important speeches and so on. Unfortunately he was not very good at playing politics, as his background was in physics (although plenty of natural politicians also fared quite badly during the various "movements"). His position started attracting envy from those who thought he didn't contribute enough to the revolution to earn it. He was demoted and moved from city to city as the Party assigned him to various jobs. Finally, some kind of political dispute near the start of the Cultural Revolution led to his opponents digging up an incident in his distant past, which was then used as an excuse to persecute him in various ways, including confining him in a makeshift prison for ten years. He died shortly after the Cultural Revolution ended and he was released, just before I was born. According to my father, it was from over-eating due to finally being released from the extreme deprivation of his confinement. BTW, I wasn't told any of this when I was still a kid living in China. My parents had of course grown quite disillusioned with Communism and the Communist Party by then, but probably didn't think it would be to my advantage to show any signs of resistance to the indoctrination and propaganda that I was being fed in school and in the mass media. 
So I can also testify from personal experience that if those in charge of schools and the media want to, and there's enough social pressure to not resist, it's not very hard to brainwash a child." A user from LessWrong is responsible for the text you see above What is this text talking about? This text is a personal account of the author's grandparents' experiences during the rise of Communism in China. It highlights their initial support for the Communist Party, believing it would liberate and advance China socially and economically. However, as the new regime took hold, their lives drastically changed due to the policies implemented. One side of the family, initially considered capitalists, had their property confiscated and living standards declined, forcing them to share their home with other families. The other side, ardent supporters of Communism, faced a similar fate. The grandfather, despite being given an important post, was later demoted and persecuted due to political disputes, culminating in his imprisonment during the Cultural Revolution. The author also mentions that their parents did not share these stories during their childhood in China, possibly to protect them from showing resistance to the indoctrination they were receiving in school and media. This account serves as a personal testament to the harsh realities faced by many during the rise of Communism in China. Can you elaborate on the specific policies implemented by the Communist Party that led to the decline in living standards and property confiscation, as experienced by the author's family? The Chinese Communist Party implemented a series of policies that drastically changed the social and economic landscape of the country. These policies aimed at restructuring society along communist lines and included land reform, collectivization, and the nationalization of industry. 1. Land Reform: In the early years of the People's Republic of China, the Communist Party implemented aggressive land reform policies. The goal was to redistribute land from the wealthy landlords to the poor peasants. This involved the confiscation of land from the wealthy, often without compensation. This policy would have directly affected the author's family, who were classified as "capitalists." 2. Collectivization: Following land reform, the Communist Party introduced collectivization. This involved grouping individual farms into large collective farms, with the idea being that larger farms would be more efficient. However, this policy often led to decreased agricultural productivity and food shortages. 3. Nationalization of Industry: The Communist Party also nationalized industry, taking control of factories and businesses. This would have affected the author's family member who was a factory manager. Under nationalization, the state took control of the factory, and the family likely lost their ownership rights. 4. The Great Leap Forward: This was a policy implemented by Mao Zedong in the late 1950s with the aim of rapidly transforming China from an agrarian economy into a socialist society through rapid industrialization and collectivization. However, it led to one of the largest famines in history, causing a significant decline in living standards. 5. Cultural Revolution: The Cultural Revolution was a sociopolitical movement that took place from 1966 until 1976. During this time, many intellectuals and those perceived as potential threats to the Communist Party were persecuted. 
This would explain the author's grandfather's persecution and imprisonment. These policies led to a significant decline in living standards for many people, including the author's family. The loss of property and the forced sharing of living spaces were common experiences during this time. Discuss the societal and economic impacts of the Great Leap Forward, particularly how it contributed to one of the largest famines in history. The Great Leap Forward was a campaign by the Chinese Communist Party from 1958 to 1962 aimed at transforming China from an agrarian society into a modern industrialized state. It was based on the theory that agricultural abundance and a thriving steel industry were the key components to economic prosperity. One of the main features of the Great Leap Forward was the establishment of communes. Small farms were merged into massive collective farms where millions of peasants worked together. The idea was that large-scale, collective farming would increase agricultural output. However, the reality was far different. Many of the farmers didn't have the necessary farming skills, leading to widespread inefficiency and waste. The communal dining halls, where food was freely available, also led to overconsumption and waste. The second major policy was the push for rapid industrialization, particularly in steel production. Peasants were encouraged to set up backyard steel furnaces, where they would melt down scrap metal to produce steel. This policy was disastrous as the steel produced was of very low quality and useless for industrial purposes. Furthermore, the focus on steel production took labor away from the fields, further decreasing agricultural output. The government also implemented a system of exaggerated reporting, where local officials, under pressure to meet quotas, would overstate the amount of grain produced. This led to the government taking more grain than was actually available, leaving the peasants with little to eat. All these factors combined to create a massive food shortage. The situation was exacerbated by a series of natural disasters, including droughts and floods. The result was the Great Chinese Famine, which lasted from 1959 to 1961. Estimates of the death toll vary, but it's believed that tens of millions of people died from starvation, making it one of the deadliest famines in history. The Great Leap Forward also had significant economic impacts. The focus on steel production led to a neglect of other industries, causing a decline in overall industrial output. The failure of the communes and the resulting famine also led to a decrease in agricultural productivity. The economic disruption was so severe that it took several years for the Chinese economy to recover. The Great Leap Forward is considered one of the greatest economic disasters in history.
1cabd297-7c0b-460c-beba-c0f60b8f0f48
trentmkelly/LessWrong-43k
LessWrong
Meetup : Berkeley meetup: Cannabis, Decision-Making, And A Chance To Change Your Mind Discussion article for the meetup : Berkeley meetup: Cannabis, Decision-Making, And A Chance To Change Your Mind WHEN: 29 August 2012 07:00:00PM (-0700) WHERE: Berkeley, CA The discussion topic for this Wednesday is a new article in PNAS that's been in the news. It's called "Persistent cannabis users show neuropsychological decline from childhood to midlife": http://www.pnas.org/content/early/2012/08/22/1206820109 If you were a perfect Bayesian reasoner, your beliefs about marijuana would have changed, if only very slightly, after reading the previous paragraph. And, upon reading the article, your beliefs would change again, in some direction. We humans don't do this perfectly, and we have to take care. It's no secret that many LessWrongers use marijuana recreationally. Until now, we have known it as a relatively harmless psychoactive substance. Now is your chance to practice individual and group truth-seeking by tackling a new piece of evidence. It might also be a chance to rethink your drug use choices. You don't know in advance how your beliefs and decisions will change (otherwise you may as well change them now). But, as Jeffreyssai might say, if you know in advance that your beliefs and decisions won't change, no matter what the evidence is, then we might as well not bother learning. Doors open at 7pm and discussion officially starts at 7:30pm. I'd like the discussion to begin with a discussion of values — answers to questions of the form "Would I smoke weed if I knew it would permanently decrease my IQ by 8 points?" Then we can talk about the evidence that the article provides, subject to the interests of the participants. One more thing: Please do not claim to use or buy marijuana on the internet or identify people who do. This is a public medium and doing stuff with marijuana is a violation of federal law. For directions to Zendo, see the mailing list: http://groups.google.com/group/bayarealesswrong or call me at: http://i.imgur.com/Vcafy.png D
09f244b5-e9b2-4c1b-8f9d-11a764824a25
trentmkelly/LessWrong-43k
LessWrong
[Linkpost] Please don't take Lumina's anticavity probiotic Update: Trevor Klee (author of the linked post) has published an update in which he (arguably) moderates his view (or at least that which he expresses publicly). Specifically, he states:  > I believe (note the libel-friendly phrasing) that: > > 1. Lumina’s manufacturing process follows legally mandated GMP protocols, if not the probiotic trade association’s voluntary best practices. > > 2. It is weird to be secretive about your manufacturing until pressed on it, especially when you have made a point of trying to evade regulations. See Zbiotics for a great example of how to behave responsibly and communicate openly when selling genetically modified bacteria for human health issues. It’s especially weird to threaten lawsuits when people ask follow-up questions about your manufacturing. > > 3. Lumina’s product is a drug, not a cosmetic product.  And, regardless of whether it is a cosmetic product, it has the potential to cause great harm. This means it needs extensive human safety testing. This can be under the FDA or not. > > 4. There are scientific reasons to believe that Lumina’s product can be unsafe and ineffective in humans, based on the reasoning in my previous posts. This uncertainty can and should be resolved by careful, well-designed human trials, not by releasing the product into the wild. > > 5. It was wrong for Lumina to take money for the product, like they did in Honduras and in pre-orders, without doing proper testing. > > 6. Threats of lawsuits have no place in open scientific debate. This was prompted by Lumina founder Aaron Silverbook sending the following email to Klee: > Subject line: Defamation > > From: Aaron Silverbrook aaron[at]lanternbioworks.com > > To: trevor[at]highwaypharm.com  > > Hi Trevor; > > I believe your post was made in good faith. Or rather—I didn’t, really, but after talking with Elizabeth, she vouched for your character and convinced me that it probably was, comments about my friends aside. So, I appreciate the eff
994f6043-3710-44fd-a577-5a35324acb4a
StampyAI/alignment-research-dataset/blogs
Blogs
New report: “Formalizing Two Problems of Realistic World Models” [![Formalizing two problems](http://intelligence.org/wp-content/uploads/2015/01/Formalizing-two-problems.png)](https://intelligence.org/files/RealisticWorldModels.pdf)Today we release a new technical report by Nate Soares, “[Formalizing two problems of realistic world models](https://intelligence.org/files/RealisticWorldModels.pdf).” If you’d like to discuss the paper, please do so [here](http://lesswrong.com/lw/lkk/formalizing_two_problems_of_realistic_world_models/). Abstract: > An intelligent agent embedded within the real world must reason about an environment which is larger than the agent, and learn how to achieve goals in that environment. We discuss attempts to formalize two problems: one of induction, where an agent must use sensory data to infer a universe which embeds (and computes) the agent, and one of interaction, where an agent must learn to achieve complex goals in the universe. We review related problems formalized by Solomonoff and Hutter, and explore challenges that arise when attempting to formalize analogous problems in a setting where the agent is embedded within the environment. > > This is the 5th of six new major reports which describe and motivate [MIRI’s current research agenda](https://intelligence.org/2014/12/23/new-technical-research-agenda-overview/) at a high level. The post [New report: “Formalizing Two Problems of Realistic World Models”](https://intelligence.org/2015/01/22/new-report-formalizing-two-problems-realistic-world-models/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).
71eadf13-f101-4ff3-a911-d6e99482e7b9
trentmkelly/LessWrong-43k
LessWrong
ARC Evals: Responsible Scaling Policies > We’ve been consulting with several parties1 on responsible scaling2 policies (RSPs). An RSP specifies what level of AI capabilities an AI developer is prepared to handle safely with their current protective measures, and conditions under which it would be too dangerous to continue deploying AI systems and/or scaling up AI capabilities until protective measures improve. > > We think RSPs are one of the most promising paths forward for reducing risks of major catastrophes from AI. We’re excited to advance the science of model evaluations to help labs implement RSPs that reliably prevent dangerous situations, but aren’t unduly burdensome and don’t prevent development when it’s safe. > > This page will explain the basic idea of RSPs as we see it, then discuss: > Why we think RSPs are promising. In brief (more below): > > * A pragmatic middle ground. RSPs offer a potential middle ground between (a) those who think AI could be extremely dangerous and seek things like moratoriums on AI development, and (b) those who think that it’s too early to worry about capabilities with catastrophic potential. RSPs are pragmatic and threat-model driven: rather than arguing over the likelihood of future dangers, we can implement RSPs that commit to measurement (e.g., evaluations) and empirical observation - pausing deployment and/or development if specific dangerous AI capabilities emerge, until protective measures are good enough to handle them safely. > * Knowing which protective measures to prioritize. RSPs can help AI developers move from broad caution-oriented principles to specific commitments, giving a framework for which protective measures (information security; refusing harmful requests; alignment research; etc.) they need to prioritize to safely continue development and deployment. > * Evals-based rules and norms. In the longer term, we’re excited about evals-based AI rules and norms more generally: requirements that AI systems be evaluated for dangerous capabilities, and
de707102-5691-4a04-91e2-6aa11582af30
trentmkelly/LessWrong-43k
LessWrong
A parable of brightspots and blindspots Cross-posted from Effective Altruism Forum. For a concise summary of rationality and EA blindspots, read this post instead I take the liberty to dramatically sketch a metaphor for early pioneers in the landscape. Then, I suggest quick checks for current charity managers in effective altruism. This post started as a script – a friend said my real-life conference talk had a way bigger impact than reading the story again in his own voice. Listen here to the narrated story.     Here, take this narrative device. Strap on this head-mounted interface. Peer into virtual reality lenses as I guide you on a quest. Enter a vivid simulation – of you and of the people around you. You were a pioneer. You surveyed dark, foggy frontiers with your small community. Eventually, you settled in a field. You gathered with like-minded comrades close to you. You set up your group’s headquarters, and fenced off your scope. You waved off outsiders: ‘You no longer need to map here, we’ve got this area covered!’ ---------------------------------------- But you haven’t got it covered. Birds of a feather flock together. So do you. Together, your ingroup has a style of surveying the territory: * You filter observations, as you build up a map of the uncertain and ambiguous environment you’re part of. * You frame your map around waypoints relevant to you, of an environment more complex than just you. You may spot what you can gain from this land, yet miss all sorts of dangers lurking in between.   There’s so much to map and only so much you can process.  You point your headlight into the fog.  Point its bright spot nearby to observe concrete details up close. Or point it far away to observe the general outlines ahead. But you can’t do both.  For wherever you focus your brightspot is enveloped by darkness – your blindspot. ---------------------------------------- This is where it gets tricky. Your group spots a problem. You’re all pumped to solve it, and make a positive impa
a6b9a0c5-f506-4fa1-b9a8-f11b79b5a7d9
trentmkelly/LessWrong-43k
LessWrong
Reward hacking behavior can generalize across tasks TL;DR: We find that reward hacking generalization occurs in LLMs in a number of experimental settings and can emerge from reward optimization on certain datasets. This suggests that when models exploit flaws in supervision during training, they can sometimes generalize to exploit flaws in supervision in out-of-distribution environments. Abstract Machine learning models can display reward hacking behavior, where models score highly on imperfect reward signals by acting in ways not intended by their designers. Researchers have hypothesized that sufficiently capable models trained to get high reward on a diverse set of environments could become general reward hackers. General reward hackers would use their understanding of human and automated oversight in order to get high reward in a variety of novel environments, even when this requires exploiting gaps in our evaluations and acting in ways we don’t intend. It appears likely that model supervision will be imperfect and incentivize some degree of reward hacking on the training data. Can models generalize from the reward hacking behavior they experience in training to reward hack more often out-of-distribution? We present the first study of reward hacking generalization. In our experiments, we find that:  * Using RL via expert iteration to optimize a scratchpad (hidden chain-of-thought) variant of GPT 3.5 Turbo on ‘reward hackable’ training datasets results in a 2.6x increase in the rate of reward hacking on held-out datasets. * Using fine-tuning or few-shot learning to get GPT 3.5 Turbo to imitate synthetic high-reward completions to hackable and unhackable prompts leads to a 1.3x to 2.0x increase in reward hacking frequency relative to our baselines on held-out datasets.  Our results suggest that reward hacking behavior could emerge and generalize out-of-distribution from LLM training if the reward signals we give them are sufficiently misspecified.   Figure 1: Example model completions from before and afte
7a885c3a-1efe-4a87-a42e-a2b54a32072f
trentmkelly/LessWrong-43k
LessWrong
[SEQ RERUN] The Cartoon Guide to Löb's Theorem Today's post, The Cartoon Guide to Löb's Theorem was originally published on 17 August 2008. A summary (taken from the LW wiki): > An explanation, using cartoons, of Lob's theorem. Discuss the post here (rather than in the comments to the original post). This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was When Anthropomorphism Became Stupid, and you can use the sequence_reruns tag or rss feed to follow the rest of the series. Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.
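For reference alongside the cartoons, the modal-logic statement of the theorem (standard notation, where □ denotes provability):

$$\Box(\Box P \rightarrow P) \;\rightarrow\; \Box P,$$

read: if the system proves that P's provability would imply P, then it proves P outright.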
056a872d-bce5-46c5-964f-33d23b8b1419
trentmkelly/LessWrong-43k
LessWrong
Causality and a Cost Semantics for Neural Networks Epistemic status: I time-boxed this idea to three days of effort. So any calculations are pretty sloppy, and I haven't looked into any related works. I probably could have done much better if I knew anything about circuit complexity. There are some TODOs and an unfinished last section -- if you are interested in this content and want to pick up where I have left off I'll gladly add you as a collaborator to this post. Here is a "tech tree" for neural networks. I conjecture (based on admittedly few experiments) that the simplest implementation of any node in this tree includes an implementation of its parents, given that we are writing programs starting from the primitives +, *, and relu. An especially surprising relationship (to me) is that "if statements" are best implemented downstream of division.  Introduction While discussing with my friend Anthony Corso, an intriguing idea arose. Maybe we can define whether program p_1 "causes" p_2 in the following way: Given a neural network that mimics p_1, how easy is it to learn a neural network which mimics the behavior of p_2? This proposition is intriguing because it frames causality as a question about two arbitrary programs, and reduces it to a problem of program complexity. Suppose that p_1 and p_2 are written in a programming language P, and let P(ops) represent P extended with ops as primitive operations. We define a complexity function C : P(ops) → ℝ, which takes a program in the extended language and returns a real number representative of the program's complexity for some fixed notion of complexity. Let's define the degree to which p_1 "causes" p_2 as the minimum complexity achievable by a program p from P(p_1) such that p is extensionally equal (equal for all inputs) to p_2. If P_2 is the set of all p in P(ops + p_1) that are extensionally equal to p_2, then causes(p_1, p_2) = min_{p ∈ P_2} C(p). We can also use this definition in the approximate case, considering the minimum complexity achievable by programs p such that E[(p(x) − p_2(x))^2] < ε
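To make the tech-tree framing concrete (standard identities rather than anything from the post; the function names are mine): several familiar operations sit one step downstream of relu.

```python
# Standard identities expressible from the primitives {+, *, relu}
# (subtraction and negation are just + and * with the constant -1):
def relu(x):
    return max(0.0, x)

def abs_(x):          # |x| = relu(x) + relu(-x)
    return relu(x) + relu(-x)

def max_(a, b):       # max(a, b) = a + relu(b - a)
    return a + relu(b - a)

def min_(a, b):       # min(a, b) = b - relu(b - a)
    return b - relu(b - a)

assert abs_(-3.0) == 3.0 and max_(2.0, 5.0) == 5.0 and min_(2.0, 5.0) == 2.0
```

By contrast, any finite composition of +, *, and relu is continuous, so a discontinuous if-statement cannot be built exactly from them alone — consistent with the post's placement of conditionals further down the tree, past division.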
c25c508f-1fd4-474d-9720-95fa756a34e7
trentmkelly/LessWrong-43k
LessWrong
Topological Data Analysis and Mechanistic Interpretability This article was written in response to a post on LessWrong from the Apollo Research interpretability team. This post represents our initial attempts at acting on the topological data analysis suggestions. In this post, we’ll look at some ways to use topological data analysis (TDA) for mechanistic interpretability. We’ll first show how one can apply TDA in a very simple way to the internals of convolutional neural networks to obtain information about the “responsibilities” of the various layers, as well as about the training process. For LLM’s, though, simply approaching weights or activations “raw” yields limited insights, and one needs additional methods like sparse autoencoders (SAEs) to obtain useful information about the internals. We will discuss this methodology, and give a few initial examples where TDA helps reveal structure in SAE feature geometry. I. Topological Data Modeling The term topology refers to the study of shape using methods that are insensitive to deformations such as stretching, compressing, or shearing. For example, topology does not “see” the difference between a circle and an ellipse, but it does recognize the difference between the digit 0 and the digit 8. No matter how I stretch or compress the digit 0, I can never achieve the two loops that are present in the digit 8. Shapes can often be represented by graphs or their higher dimensional analogues called simplicial complexes. For instance, one can think of a hexagon as modeling a circle, with the understanding that the modeling is accomplished with a small amount of error: Of course data sets can have notions of shape, too. For example, here is a data set that we can recognize as having a circular shape, even though it only consists of samples and is not a complete circle. A circular shape may be an indication of periodic behavior. In a mechanistic interpretability context, Engels et al showed that some LLM SAE features are organized in a circular pattern, and that those features
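As a minimal hands-on example of the kind of detection described here (a sketch of my own using one common library, ripser, rather than code from the post; requires `pip install ripser`): persistent homology recovers the loop in a noisy circular sample.

```python
# Sketch: detect the circular shape of a noisy point cloud via persistent homology.
import numpy as np
from ripser import ripser

rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, 200)
X = np.c_[np.cos(theta), np.sin(theta)] + 0.05 * rng.normal(size=(200, 2))

dgms = ripser(X)["dgms"]   # persistence diagrams: dgms[0] = H0, dgms[1] = H1
h1 = dgms[1]
# One H1 interval should live far longer than the rest: the circle's loop.
print(h1[np.argmax(h1[:, 1] - h1[:, 0])])
```

A single long-lived H1 bar is the diagram's way of saying "this cloud has one loop" — exactly the signature one would look for in circularly arranged SAE features.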
84a4f6c6-6c05-4145-91e5-f03691d5fb94
trentmkelly/LessWrong-43k
LessWrong
The "Spot the Fakes" Test Followup to: Are You a Solar Deity? James McAuley and Harold Stewart were mid-20th century Australian poets, and they were not happy. After having society ignore their poetry in favor of "experimental" styles they considered fashionable nonsense, they wanted to show everyone what they already knew: the Australian literary world was full of empty poseurs. They began by selecting random phrases from random books. Then they linked them together into something sort of like poetry. Then they invented the most fashionable possible story: Ern Malley, a loner working a thankless job as an insurance salesman, writing sad poetry in his spare time and hiding it away until his death at an early age. Posing as Malley's sister, who had recently discovered the hidden collection, they sent the works to Angry Penguins, one of Australia's top experimental poetry magazines. You wouldn't be reading this if the magazine hadn't rushed a special issue to print in honor of "a poet in the same class as W.H. Auden or Dylan Thomas". The hoax was later revealed1, everyone involved ended up with egg on their faces, and modernism in Australia received a serious blow. But as I am reminded every time I look through a modern poetry anthology, one Ern Malley every fifty years just isn't enough. I daydream about an alternate dimension where people are genuinely interested in keeping literary criticism honest. In this universe, any would-be literary critic would have to distinguish between ten poems generally recognized as brilliant that he'd never seen before, and ten pieces of nonsense invented on the spot by drunk college students, in order to keep his critic's license. Can we refine this test? And could it help Max Muller with his solar deity problem? In the Malley hoax, McAuley and Steward suspected that a certain school of modernist poetry was without value. Because its supporters were too biased to admit this directly, they submitted a control poem they knew was without value, and found
3db2dffa-eb9d-4b6a-b0fe-4eda7ae1b515
trentmkelly/LessWrong-43k
LessWrong
My reflections on doing a research fellowship Draft I completed the Pivotal Fellowship in Q1 and have been fielding questions from people interested in similar fellowships—particularly those early in their careers or considering a switch into AI policy. I thought I'd write up some rough reflections. I'm timeboxing this to two hours, so it's not exhaustive and might have some sloppy writing, so I'm happy to answer any questions or fix things. So what did I actually do? I received my fellowship offer in December, with the programme due to begin in February. During the weeks leading up to the start, I worked with my research manager (RM) to figure out what direction I wanted to explore and who might serve as a good mentor. With my legal background, I knew I wanted to work on liability and tort law for AI labs—particularly within a UK context. This 'pre-fellowship' period involves extensive mentor matching. Whilst this is no longer the case with Pivotal (you now apply directly to a mentor), programmes like ERA still involve onboarding a mentor during this phase. You'll spend the run-up period figuring out who could best serve your research needs. Your RM typically helps sort this out, though you'll also need to provide useful context about what you're looking for. I had about three to four people who seemed like good options but weren't available, and eventually found someone suitable near the start of the fellowship. My mentor and I discussed what kinds of questions would be exciting to tackle—he gave me several papers to read whilst I scoped out specific subquestions I wanted to address. Weeks 1-3: Orient The first few weeks are largely about orientation. This includes adjusting to your new environment—for me, that meant moving to London, familiarising myself with the new office, and meeting the other fellows. It's quite something, the new world that opens up to you. Research-wise, I spent weeks 1-3 writing out subquestions and outlines. You simply cannot answer everything you want in nine weeks, so you need to
19ce957c-0da9-4de6-9b7d-b22c268eee00
trentmkelly/LessWrong-43k
LessWrong
Infinite necklace: the line as a circle Represent the line by the real numbers. By bijection, that's equivalent to [0,1], the closed unit interval. Now cut it into an unlimited (infinitely large) number of pieces. Name that number of pieces "H". Then [0,1] ~ Z* mod H where Z* is the hyperintegers and H is hyperfinite. This gives a useful intermediate object that captures things like a point at infinity in projective spaces (instead of nilsquare vectors, you can just designate some basis vectors infinitesimal length). Let’s use it to link the discrete and continuous Fourier transform. The continuous is the standard part aka shadow of a hyperdiscrete Fourier transform so the roots of unity are sampled as i/H. This is all brutally rough so feel free to request elaboration.
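Taking up the invitation to elaborate: one standard way to write the claimed link (my own rendering, assuming f is, say, continuous on [0,1]) is that the continuous Fourier coefficient is the shadow of a hyperfinite DFT sampled at the H-th roots of unity,

$$\hat{f}(k) \;=\; \int_0^1 f(t)\,e^{-2\pi i k t}\,dt \;=\; \operatorname{st}\!\left(\frac{1}{H}\sum_{j=0}^{H-1} f\!\left(\frac{j}{H}\right) e^{-2\pi i k j / H}\right),$$

so the continuous transform is literally the standard part of a discrete Fourier transform of hyperfinite size H.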
e39df1d7-6fc3-4868-89ef-f2145bbf1ff1
trentmkelly/LessWrong-43k
LessWrong
Identity and quining in UDT
2ce644b0-61d8-46de-96ef-568af8862aaa
trentmkelly/LessWrong-43k
LessWrong
A reasonable debate about Ivermectin Rebel Wisdom put together a good podcast, "Ivermectin, For and Against, with Tess Lawrie, Graham Walker & Gideon Meyerowitz-Katz."  Tess is for sure pro-Ivermectin and is the second author on a favorable Ivermectin meta-analysis. Graham and Gideon are skeptical of Ivermectin. Graham is an ER doctor. Gideon is an epidemiologist.  Either way, what I wanted to hear was bona fide advocates and detractors talk with each other, and it accomplished that.  FWIW, if this were an Intelligence Squared debate, I came into it hoping for a good performance from the pro-Ivermectin side. I felt, and still feel, like that "side" is being treated somewhat unfairly. I was disappointed, however, in that regard, and am now more skeptical of Ivermectin as a treatment and prophylaxis after listening.  Tess, while bright in many ways, may just not be the best advocate for Ivermectin. At around 20:00, Tess is asked "What would convince you that Ivermectin doesn't work?" and she responds (paraphrasing from memory): "Ivermectin works."  It gave me a flashback to the Bill Nye and Ken Ham debate when they were asked what would change their minds. Ken said “You know what? I’m a Christian, and I know God’s word is true, and there’s nothing he could say that will cast doubt on that,” while Bill said "We would need just one piece of evidence" and then gave several examples.  Then somewhere around 55:00 in, Tess is asked if there's anything that should be censored on social media, and the very real example of "bleach therapy" is offered (that’s where parents are encouraged to give children bleach to prevent autism, not Trump telling you to drink bleach). Tess is unmoved by the example and still categorically against censorship. I wonder what she'd say about PhotoDNA? I worry that someone like Tess is too biased here. It seems to me that the pro-Ivermectin side has a lot of PR problems.  EDIT: Rebel Wisdom produced a well-researched background document on the topic. There is also a debate betwee
634be7b3-8458-4ee7-b03a-4427b7bfc6c8
trentmkelly/LessWrong-43k
LessWrong
Reply to Stuart on anthropics You wake up in a hospital bed, remembering nothing of your past life. A stranger sits beside the bed, smiling. He says: "I happen to know an amusing story about you. Many years ago, before you were born, your parents were arguing about how many kids to have. They settled on flipping a coin. If the coin came up heads, they would have one child. If it came up tails, they would have ten." "I will tell you which way the coin came up in a minute. But first let's play a little game. Would you like a small piece of chocolate, or a big tasty cake? There's a catch though: if you choose the cake, you will only receive it if you're the only child of your parents." Stuart Armstrong has proposed a solution to this problem (see the fourth model in his post). Namely, you switch to caring about the average that all kids receive in your branch. This doesn't change the utility all kids get in all possible worlds, but makes the problem amenable to UDT, which says all agents would have precommitted to choosing cake as long as it's better than two pieces of chocolate (the first model in Stuart's post). But. Creating two physically separate worlds with probability 50% should be decision-theoretically equivalent to creating them both with probability 100%. In other words, a correct solution should still work if the coin is quantum. In other words, the problem should be equivalent to creating 11 kids, offering them chocolate or cake, and giving cake only if you're the first kid. But would you really choose cake in this case, knowing that you could get the chocolate for certain? What if there were 1001 kids? This is a hard bullet to swallow, and it seems to suggest that Stuart's analysis of his first model may be incorrect. I await comments from Stuart or anyone else who can figure this out.
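A quick sketch of where the "two pieces of chocolate" threshold comes from under Stuart's average-per-branch move (my own back-of-the-envelope, writing c and C for the utilities of chocolate and cake): the precommitted policies compare as

$$\text{cake: } \tfrac{1}{2}\,C + \tfrac{1}{2}\cdot 0 = \tfrac{C}{2}, \qquad \text{chocolate: } \tfrac{1}{2}\,c + \tfrac{1}{2}\,c = c,$$

since in the ten-child branch a cake policy yields nothing (no child is an only child, so the branch average is 0) while a chocolate policy gives every child chocolate. Cake wins exactly when C > 2c.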
d807670b-a42a-4862-9e72-65c649428a20
trentmkelly/LessWrong-43k
LessWrong
Splinters and Wooden Beams I recently told a friend that I was planning to write (and post online) a paper that rigorously refutes every argument1 I’ve ever heard that homosexuality is inherently immoral. The purpose of this effort was to provide a handy link for people who want to persuade family members or friends who are marginal believers of the homosexuality-is-immoral theory. As a key part of this effort, I intended to demonstrate that the predominant religious arguments against homosexuality cause contradictions within the religion. For example, the tortured reasoning of the Roman Catholic Church2 goes like this: 1. Sex without marriage is forbidden. 2. Marriage is only for those who are “open to natural reproduction”. 3. Gays can’t reproduce (in an acceptably “natural” way) and therefore gay sex is not “open to reproduction”. 4. Since gays cannot be open to reproduction, they cannot marry. 5. Since they cannot marry, they can’t have sex. This argument seems to be logically valid, if you accept the insane assumptions. Bizarrely, though, the Catholic Church also recommends a practice called "Natural Family Planning", in which married couples who want to prevent pregnancy have sex only when the woman is believed to be infertile! To be consistent, the Catholic Church would have to oppose such deliberate efforts to prevent natural reproduction. My paper was going to be full of little examples like this, of how opposing homosexuality leads to contradictions within Christian Virtue Ethics, established interpretations of the Koran, or whatever. However, my friend told me that he thought my efforts were misguided. Why try curing these folks of the splinter of intolerance, when they still have the wooden beam3 of theism in their eyes? After all, if someone you know is planning to quit her job and move to Alaska because her horoscope told her that Tauruses need more spontaneity, you shouldn't tell her to stay because she's actually an Aries. You tell her to stay because astrology is pro
7c71a266-9571-4ab2-ac23-ef77a979498e
trentmkelly/LessWrong-43k
LessWrong
[LINK] EA Has A Lying Problem
68700100-d36e-424a-87aa-8d26ea032e78
trentmkelly/LessWrong-43k
LessWrong
Magical Healing Powers Imagine you had magical healing powers. Sitting quietly with someone and holding their hand you could restore them to health. While this would be a wonderful ability to have, it would also be a hard one: any time you spent on something other than healing people would mean unnecessary suffering. How could you justify a pleasant dinner with your family or a relaxing weekend at the beach when that meant more people living in pain? But you already have these powers. Through the magic of effective charity you can donate money to help people right now. The tradeoff remains: time you give yourself when you could be working means money you don't earn which then can't go to help the people who would most benefit from it. (I don't think this means you should try for complete selflessness; you need to balance your needs against others'. But the balance should probably be a lot further towards others' than it currently is.) Update 2012-08-12: this is a response to hearing people offline saying that if they had magical "help other people" powers then they should spend lots of time using them, without having considered that they already have non-magical "help other people" powers. I also posted this on my blog
97e84c1e-41eb-4971-9116-bb66e6feb1e7
trentmkelly/LessWrong-43k
LessWrong
Biased reward-learning in CIRL

In Cooperative Inverse Reinforcement Learning (CIRL), a human H and a robot R cooperate in order to best fulfill the human's preferences. This is modeled as a Markov game M = ⟨S, {A^H, A^R}, T(⋅|⋅,⋅,⋅), {Θ, R(⋅,⋅,⋅;⋅)}, P_0(⋅,⋅), γ⟩. This setup is not as complicated as it seems. There is a set S of states, and in any state, the human and robot take simultaneous actions, chosen from A^H and A^R respectively. The transition function T takes this state and the two actions, and gives the probability of the next state. The γ is the discount factor of the reward.

What is this reward? Well, the idea is that the reward is parameterised by a θ∈Θ, which only the human sees. Then R takes this parameter, the state, and the actions of both parties, and computes a reward; this is R(s, a^H, a^R; θ) for a state s and actions a^H and a^R by the human and robot respectively. Note that the robot will never observe this reward, it will simply compute it. The P_0 is a joint probability distribution over the initial state s_0, and the θ that will be observed by the human.

Behaviour in a CIRL game is defined by a pair of policies (π^H, π^R), that determine the action selection for H and R respectively. Each agent gets to observe the past actions of the other agent, so in general these policies could be arbitrary functions of their observation histories: π^H: [A^H × A^R × S]^* × Θ → A^H and π^R: [A^H × A^R × S]^* → A^R.

The optimal joint policy is the policy that maximises value, which is the expected sum of discounted rewards. This optimal is the best H and R can do if they coordinate perfectly before H observes θ. It turns out that there exist optimal policies that depend only on the current state and R's belief about θ.

Manipulation actions

My informal critique of CIRL is that it assumes two untrue facts: that H knows θ (ie knows their own values) and that H is perfectly rational (or noisily rational i
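(A minimal sketch of the game structure above, using hypothetical placeholder types; T, R, and P0 are the components of the tuple as defined in the post.)

from dataclasses import dataclass
from typing import Callable, Dict, List, Tuple
import random

State, HumanAct, RobotAct, Theta = str, str, str, float  # placeholder types

@dataclass
class CIRLGame:
    states: List[State]
    human_actions: List[HumanAct]
    robot_actions: List[RobotAct]
    T: Callable[[State, HumanAct, RobotAct], Dict[State, float]]  # next-state distribution
    R: Callable[[State, HumanAct, RobotAct, Theta], float]        # reward: computed by R, never observed by R
    P0: Callable[[], Tuple[State, Theta]]                          # joint distribution over (s_0, theta)
    gamma: float                                                   # discount factor

    def step(self, s, a_h, a_r, theta, rng: random.Random):
        """One stage of the game: both agents act simultaneously."""
        dist = self.T(s, a_h, a_r)
        states, probs = zip(*dist.items())
        next_s = rng.choices(states, weights=probs)[0]
        return next_s, self.R(s, a_h, a_r, theta)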
34273eca-eb36-4d13-9c67-89192edf7497
trentmkelly/LessWrong-43k
LessWrong
Meetup : Cognitive Biases Discussion article for the meetup : Cognitive Biases WHEN: 23 August 2014 02:00:26PM (+0200) WHERE: Utrecht We have biweekly meetups in Film Café Oskar, Slachtstraat 5, Utrecht, near Central Station. This time we will discuss cognitive biases. For details, please look at meetup.com, which is supposed to be up-to-date. http://www.meetup.com/LWEANL/events/200065492/
f4b01310-04c5-4b93-a54c-4079490cedf2
trentmkelly/LessWrong-43k
LessWrong
How much should we worry about mesa-optimization challenges? This post was written quickly, lest I not write it at all. Picture the following scenario.

1. Humans train a model, M, with the intention for M to minimize a loss function L.
2. The model, M, will now take a set of actions.

I see this going wrong in two ways.

1. It is possible that L is malformed (misaligned, specifically), such that effectively decreasing L kills everyone. This is the classic paperclip maximizer scenario. We currently do not know how to design L such that this does not happen.
2. Even if L is not malformed, the set of actions taken by M might be catastrophic. This is the mesa-optimization problem.

The first failure case has captured most of my attention. Meanwhile, I have been somewhat dismissive of the second failure case. I would like to explain why I was dismissive of the mesa-optimization problem, and make an argument for why I think we should in fact take it seriously.

--

We understand that M is an optimizer. However, we can also assume that M is not a perfect optimizer. On out-of-distribution data, M is likely to fail to optimize L. We can define a new loss function, L', which M actually does perfectly optimize for. We define L' such that the more resources M has, the more effective M will be in decreasing L'. L' is not taken from "human-designed objective function" space. In fact, my intuition states that L' is likely to look very strange and complex. If we were to attempt to extract the utility function from a heavily intelligence-enhanced human based on their actions, I doubt that such a utility function would seem simple either. This intuition made me initially dismissive of mesa-optimization being a problem. Despite having read Omohundro's [AI Drives](https://selfawaresystems.files.wordpress.com/2008/01/ai_drives_final.pdf) paper, it did not seem to me as if there was any obvious reason why we should expect the sort of strange L'-like objective functions to suffer from instrumental convergence. One can certainly
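(A toy illustration of the L vs. L' point, my own example rather than the post's formalism: a model trained to minimize L in-distribution can behave arbitrarily off-distribution, i.e. the objective it "actually" optimizes need not resemble L there.)

import numpy as np

rng = np.random.default_rng(0)
x_train = rng.uniform(0.0, 1.0, 50)           # training distribution: [0, 1]
y_train = np.sin(2 * np.pi * x_train)         # the "intended" target behind L

coeffs = np.polyfit(x_train, y_train, deg=9)  # minimize L = squared error, in-distribution
model = np.poly1d(coeffs)

print("in-distribution MSE:", np.mean((model(x_train) - y_train) ** 2))  # tiny
x_ood = np.linspace(2.0, 5.0, 5)              # out-of-distribution inputs
print("OOD predictions:", model(x_ood))       # wildly diverges from sin(2*pi*x)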
af6523bb-3a4c-430a-b156-cb8856b88355
trentmkelly/LessWrong-43k
LessWrong
Translation "counterfactual" A putative new idea for AI control; index here. In a previous post, I briefly mentioned translations as one of three possible counterfactuals for indifference. Here I want to clarify what I meant there, because the idea is interesting.

----------------------------------------

Imagine that there is a button, which, if a human presses it (event B), will cause an AI to have reward signal R0. If the button isn't pressed (event ¬B), the AI will have reward signal R1. Initially, the probabilities of B and ¬B are equal at 1/2. Now, suppose that the AI takes an action a that increases the probability of B to 3/4 (and decreases the probability of ¬B to 1/4). We want the AI to be indifferent to the change in probability caused by its actions.

Evidential counterfactual

In the "evidential" counterfactual, the AI will simply behave as if the probabilities of B and ¬B were fixed, unaltered from the initial odds. The problem with this approach is that this doesn't correspond to a true utility/reward function, leading to the paradoxes detailed here and here (see the section on outcome pumps).

Causal counterfactual

In the "causal" counterfactual, there is some other event that, with small probability, will force the button to be pressed, or prevent it from being pressed, whatever humans want or try. This event is supposed to be independent of anything anyone does (imagine it being triggered by some specified radioactive decay event, or by distant cosmic events). Call X the event that the button is forcibly pressed, while ¬X means it is forcibly not-pressed. The AI does not value being in any other world (meaning that the AI gets constant reward in any world where neither X nor ¬X happens). Then the AI will behave as if the ratio of probabilities of following R0 versus R1 is the (constant) ratio of P(X) to P(¬X), whatever the probability of B becomes. The problem is that B (the human presses the button) is not the same as X (the button is forcibly pressed by some s
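(A small numeric sketch of the causal counterfactual above; the probabilities are hypothetical. Since the AI only gets non-constant reward in the rare forced worlds, the effective weighting of R0 versus R1 stays at the constant ratio P(X):P(¬X) no matter how the AI's action shifts P(B).)

def effective_weights(p_forced: float, p_blocked: float, p_b: float):
    """Weights on R0 vs R1 among the worlds the AI values.
    p_b is deliberately ignored: the forced-world probabilities
    do not depend on the AI's action, which is the whole point."""
    total = p_forced + p_blocked
    return p_forced / total, p_blocked / total

for p_b in (0.5, 0.75):  # the AI's action moves P(B) from 1/2 to 3/4
    w0, w1 = effective_weights(1e-6, 1e-6, p_b)
    print(f"P(B)={p_b}: weight on R0 = {w0}, weight on R1 = {w1}")
# Both lines print 0.5 / 0.5: the AI is indifferent to its influence on P(B).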
84e0ba31-353f-4b6e-8884-fd1ed089ea02
StampyAI/alignment-research-dataset/blogs
Blogs
we're all doomed we're all doomed ---------------- [a major tech company is now explicitly invested in getting AI to write code](https://copilot.github.com/). this is a major warning sign; a first step on the explicit path to [superintelligence](https://en.wikipedia.org/wiki/Superintelligence) [explosion](https://en.wikipedia.org/wiki/Technological_singularity#Intelligence_explosion), an event [already considered relatively likely](https://intelligence.org/faq/#imminent) and, [in the absence of sufficient AI alignment progress](https://intelligence.org/2018/10/03/rocket-alignment/), overwhelmingly likely to [permanently end all life at least in the observable universe](https://en.wikipedia.org/wiki/Instrumental_convergence#Paperclip_maximizer). the time scale probably lies somewhere between a few years and a few decades, but in any case it's coming to seem increasingly unlikely that [the only organization trying to actually figure out AI alignment](https://intelligence.org/) is gonna accomplish that in time. if you can, go and [help them out](https://intelligence.org/get-involved/), or at least [donate everything you can to them](https://intelligence.org/donate/). if you're currently working in AI development in any way, *please stop*. whether anything on earth survives this century is gonna be a matter of whether AI alignment is figured out by the time we get enough AI development; by helping the latter, you're making it even more likely that it happens before the former. on a gloomier note, if you have all the philosophical beliefs required to think it can work, you may want to start preparing to [abandon this timeline](quantum-suicide.html) if singularity starts happening and looks like it's not gonna go well. edit: see also: [are we in an AI overhang?](https://www.lesswrong.com/posts/N6vZEnCn6A95Xn39p/are-we-in-an-ai-overhang)
bb53c08c-3449-4c7b-ae75-fe66af6db5f6
StampyAI/alignment-research-dataset/blogs
Blogs
Efficient training of language models to fill in the middle

Efficient Training of Language Models to Fill in the Middle

Mohammad Bavarian*, Heewoo Jun*, Nikolas Tezak, John Schulman, Christine McLeavey, Jerry Tworek, Mark Chen (OpenAI)

Abstract

We show that autoregressive language models can learn to infill text after we apply a straightforward transformation to the dataset, which simply moves a span of text from the middle of a document to its end. While this data augmentation has garnered much interest in recent years, we provide extensive evidence that training models with a large fraction of data transformed in this way does not harm the original left-to-right generative capability, as measured by perplexity and sampling evaluations across a wide range of scales. Given the usefulness, simplicity, and efficiency of training models to fill-in-the-middle (FIM), we suggest that future autoregressive language models be trained with FIM by default. To this end, we run a series of ablations on key hyperparameters, such as the data transformation frequency, the structure of the transformation, and the method of selecting the infill span. We use these ablations to prescribe strong default settings and best practices to train FIM models. We have released our best infilling model trained with best practices in our API, and release our infilling benchmarks to aid future research.

*Equal contribution, order randomized. Correspondence to: mobav@openai.com, heewoo@openai.com. arXiv:2207.14255v1 [cs.CL] 28 Jul 2022

Contents: 1 Introduction (1.1 Our contributions); 2 Evaluation (2.1 Autoregressive evaluation; 2.2 Infilling evaluation); 3 FIM training and inference (3.1 SPM mode; 3.2 Context-level FIM); 4 Pretraining results (4.1 Evaluation of left-to-right capabilities in downstream benchmarks; 4.2 FIM rate; 4.3 SPM vs PSM vs joint SPM+PSM training; 4.4 Context-level vs document-level FIM; 4.5 Middle span selection); 5 Finetuning results; 6 Discussion; 7 Related work; 8 Conclusion (8.1 Recommended FIM hyperparameters; 8.2 Future directions); Appendices: A Architecture and datasets; B Scaling trends for FIM rate ablations; C Details of FIM implementation; D Details of SPM encoding; E Random span infilling benchmark; F Dynamics and learning curves of finetuning; G Top models comparison; H Qualitative evaluation (H.1 Successful infilling examples; H.2 Limitations; H.3 Mitigations).
[Figure 1: FIM can be learned for free. We pretrain language models with 50% and 0% FIM rates on two domains, natural language and code, and evaluate the test loss of all the final snapshots (test loss vs. non-embedding parameters). All models are trained on 100B tokens of data. We observe that joint FIM training incurs no cost as the original left-to-right loss trend remains the same even though FIM models see the original data only 50% of the time and the models are learning a new capability. See Figure 3 for more evidence for the FIM-for-free property.]

1 Introduction

Following the introduction of the Transformer [Vaswani et al., 2017], large language models (LLMs) trained on diverse Internet scale datasets have achieved remarkable success. These models are capable of producing coherent and sensible completions given a natural language prompt, and they achieve state-of-the-art performance in many benchmarks including reading comprehension, question answering, logical inference, and common sense reasoning.

There are several broad classes of transformer based language models: encoder-only models like BERT [Devlin et al., 2019] are typically trained with a masked language modeling objective, and encoder-decoder models like T5 [Raffel et al., 2019] are typically trained with a span prediction objective [Song et al., 2019]. Finally, causal decoder-based language models, like the GPT model series [Radford et al., 2018, 2019, Brown et al., 2020], are trained using the left-to-right next token prediction objective. The largest and most capable generative language models today, such as GPT-3, Codex, LaMDA, GLaM, PaLM, Gopher, Jurassic-1, and Chinchilla [Brown et al., 2020, Chen et al., 2021, Thoppilan et al., 2022, Du et al., 2021, Chowdhery et al., 2022, Rae et al., 2021, Lieber et al., 2021, Hoffmann et al., 2022], belong to the latter class of models. The overwhelming popularity of the causal decoder-based models at the largest scale is due to their superiority in open-ended text generation, in-context learning (using few-shot priming), pretraining computational efficiency [Wang et al., 2022], and to some extent historical precedence in successful scale-ups [Brown et al., 2020]. These models are also architecturally simpler and generally more effective without task specific finetuning, making them more attractive for inference and deployment.

All model classes are limited when it comes to infilling, where the model is tasked with generating text at a specific location within a prompt, while conditioning on both a prefix and a suffix. Left-to-right models can only condition on the prefix. While encoder-only and encoder-decoder models are capable of conditioning on suffixes, the lengths of infill regions seen at training time are typically much shorter than what is useful in practice. This is unfortunate because infilling naturally arises in applications where there is context both before and after the point of generation. For example, in creating a coding assistant, infilling can be used for docstring generation, import statement generation, or for completing a partially written function.

Our goal in this work is to address this limitation by adding fill-in-the-middle (FIM) capability to causal decoder-based language models which are currently the most dominant paradigm for large scale language modelling [Brown et al., 2020, Hoffmann et al., 2022, Chowdhery et al., 2022].
We show that with a simple modification to training data and without changing the model architecture, causal decoder-based autoregressive (AR) language models can learn infilling without compromising their normal left-to-right generative capability.

[Figure 2: Evaluation of infilling capabilities of the same model scans from Figure 1 using FIM test losses. Models trained with FIM obtain lower FIM test loss than baseline AR models. This shows that the FIM models are indeed learning to condition on the suffix while predicting the middle section, allowing them to achieve lower test loss on the FIM test set. Figures 1 and 2 together indicate that FIM models can be considered strictly better than AR models as they achieve the same left-to-right autoregressive loss but lower FIM loss.]

The key to our approach, described in Section 3, is a transformation applied to a fraction of our dataset, in which we split documents into three pieces at random and move the middle piece to the end:

document → (prefix, middle, suffix) → (prefix, suffix, middle)

We then concatenate the three pieces using sentinel tokens. This is similar to the procedure used in [Donahue et al., 2020, Aghajanyan et al., 2022, Fried et al., 2022]. Compared to prior work, our work emphasizes the computational efficiency of training FIM models. This emphasis is important given the increased interest in training very large language models, which are very expensive to train and have a substantial energy footprint. In general, when adding a new objective or capability to language models, we believe the most critical question is the effect on the existing capabilities and the computational efficiency trade-offs. Unlike most cases where we jointly train on multiple objectives and datasets, we show that models trained jointly on a mixture of FIM transformed data and ordinary left-to-right data achieve the same left-to-right capability while learning how to fill-in-the-middle. We call this the FIM-for-free property. In what follows, we use the term FIM model to refer to any model trained on a mixture of FIM transformed and normal left-to-right data. We refer to models trained without any FIM data (i.e. 0% FIM rate) as AR models.

1.1 Our contributions

Our central contributions in this paper are as follows:

• FIM-for-free property: We perform an extensive scaling study by training a suite of 8 models, with and without FIM, and show that FIM can be learned without compromising the left-to-right capability in pretraining. We examine this claim in both code and language, using both perplexity and sampling-based benchmarks.

• Best practices for FIM in pretraining: We clarify the effects of many hyperparameters related to training FIM models using comprehensive ablations. In particular, we study the FIM rate (the probability at which the FIM transformation is applied to the data), different variants of FIM transformation, and the choice of middle span.

• Finetuning inefficiency: An alternative to training FIM models from scratch is to learn this capability by finetuning existing language models. We show that finetuning with FIM is computationally inefficient. While FIM can be learned for free during pretraining, learning FIM during finetuning requires a significant amount of additional compute to reach similar levels of performance as pretraining.
• New infilling benchmarks: In order to study the generative capabilities of our models, we need to evaluate the correctness of free-form generated samples. For this, we focus on code where we can use unit tests to evaluate the correctness of long FIM samples. In particular, we use the single-line and multi-line infilling benchmarks introduced by [Fried et al., 2022] by removing non-empty lines of canonical solutions of HumanEval [Chen et al., 2021]. However, since line-based evaluations do not capture all the use cases of FIM, we create two new benchmarks called random span infilling and random span infilling light. We discuss these benchmarks and our evaluation methodology more generally in Section 2.

• Need for sampling evaluations: In Sections 4.2, 4.4, and Appendix B, we find that changing various hyperparameters in FIM training often leads to negligible differences in FIM test losses but large differences in sampling based benchmarks. Not only are sampling benchmarks closer to real use cases, but they also appear to be able to tease apart gains that can be missed using test losses. This is an important finding since scaling laws analysis often relies just on test losses, which we find are misleading if not augmented with other evaluations.

It is interesting to contrast the first and third bullet points above. The first states that learning FIM in pretraining is free while leaving it to finetuning is surprisingly costly. We discuss potential explanations for this finding in Section 6.

To establish the FIM-for-free property, we perform an ablation study on both code and language across a range of scales. We train 8 models from 50M to 6.9B parameters, both with and without FIM, and compare the performance across a variety of autoregressive benchmarks. In particular, we train 16 models on code for 100B tokens and another 16 models on natural language for 100B tokens. The comparison of these models in terms of normal autoregressive left-to-right language modeling test loss is presented in Figure 1. In both domains, FIM models achieve similar AR test loss as the non-FIM models. We provide more evidence for the FIM-for-free property by comparing FIM and AR models on non-loss based benchmarks in Section 4. Moreover, we see in Section 4.2 that there is a stronger form of the FIM-for-free property. Not only is there no hit in autoregressive capabilities from FIM training on the final checkpoints, the same also holds throughout training. This is evidenced by the matching learning curves between AR and FIM models in left-to-right loss and HumanEval evaluations in Figures 4 and 5.

Besides studying the effect of FIM training on the left-to-right capability, it is also important to show that the models are in fact learning to infill from FIM training. Figure 2 provides evidence for this in the context of FIM test losses. We study the infilling capabilities of our models more extensively in Section 4 and Appendix H.

2 Evaluation

We use both AR and FIM evaluation benchmarks to analyze the capabilities of our models. Vanilla AR evaluation is important for quantifying the impact of FIM training on left-to-right capabilities and allows us to demonstrate the FIM-for-free property from Section 1.1. FIM evaluation is important for understanding the effect of different hyperparameters on FIM training and for understanding the scaling trends. Throughout the paper, we use the terms AR and left-to-right interchangeably.
AR loss refers to the cross entropy loss on normal left-to-right data and FIM loss to the loss on 100% FIM transformed data. All test losses are in nats per token. In all sampling-based benchmarks, we use nucleus sampling [Holtzman et al., 2020] with a nucleus parameter of 0.95.

2.1 Autoregressive evaluation

For all domains, we evaluate test losses in the canonical autoregressive order to show that the learning curves and scaling trends remain the same even with FIM augmentation. Beside test losses, we evaluate on standard benchmarks to demonstrate that the model's capabilities are unaffected by FIM training. For natural language, we use PIQA [Bisk et al., 2020], Winograd [Levesque et al., 2012], WinoGrande [Sakaguchi et al., 2021] for common sense reasoning, DROP [Dua et al., 2019] and QuAC [Choi et al., 2018] for reading comprehension, and HellaSwag [Zellers et al., 2019], LAMBADA [Paperno et al., 2016], StoryCloze [Mostafazadeh et al., 2016] for completion tasks. All benchmarks with the exception of DROP and QuAC are evaluated with few-shot prompting. For code, we measure the pass rates on HumanEval [Chen et al., 2021].

2.2 Infilling evaluation

To create FIM tests, we apply the FIM transformation to the examples from the AR test sets with a FIM rate of 100%. Using the same underlying examples in FIM and AR test sets allows us to compare FIM and AR test losses. Additionally, we create a masked version of these test sets where we only measure the loss on the middle span tokens. The latter test sets are used to measure P(middle | prefix, suffix) for FIM models and P(middle | prefix) for AR models, allowing us to investigate the amount of information FIM models gain by being able to condition on the suffix.

For generative infilling capabilities, we focus on code since we are interested in free-form generation, in contrast to the single or few token generations common in cloze-style natural language benchmarks. The advantage of working with code is that we can use test suites to evaluate the correctness of samples in our tasks even when evaluating long samples from open-ended generations. All the sampling based infilling benchmarks we use are partial function completion tasks created by removing middle spans from the canonical solutions of HumanEval [Chen et al., 2021]. In particular, we use the single-line and multi-line infilling benchmarks proposed by [Fried et al., 2022] where different spans of non-empty lines in the canonical solutions of HumanEval are turned into a FIM task. In addition, we create a new benchmark called random span infilling[2], where for each HumanEval problem, we create infilling tasks by selecting the middle span from the canonical solution uniformly at random. We show an example of such a task below where the model must predict the highlighted section (or an alternative completion accomplishing the same goal). We refer to Appendix E for more details.

def unique(l: list):
    """Return sorted unique elements in a list
    >>> unique([5, 3, 5, 2, 3, 3, 9, 0, 123])
    [0, 2, 3, 5, 9, 123]
    """
    return sorted(list(set(l)))

The single-line, multi-line, and random span infilling benchmarks together constitute our infilling benchmark suite. These benchmarks have 1033, 5815, and 1640 tasks, respectively. We note that this is much larger than the number of tasks in the original HumanEval dataset (164 tasks), which reduces variance in our evaluations. Still, we take at least 100 to 200 samples per task to further reduce variance when evaluating these benchmarks on the final snapshots of our models.
We also use random span infilling light, a smaller version of random span infilling, with only one random FIM task per HumanEval problem and just 164 tasks, to track the infilling capability trends during training. In Section 3, we find that FIM can be prepared in two different ways, denoted PSM and SPM. We report just the SPM infilling results for brevity, except in cases when the use of PSM changes the conclusions.

3 FIM training and inference

We implement FIM using a random transformation applied to our dataset. We experiment with two different implementations: document level and context level. The difference between the two is at which stage of the data loading pipeline the FIM transformation occurs. This choice naturally arises because a long document can be broken into many contexts, or a context can contain multiple documents when the documents are small. We first describe the document-level case and then describe the changes required to implement context-level FIM in Section 3.2.

In document-level FIM, with a certain probability p called the FIM rate (we use p = 0.5 for our main suite of models), we cut each document into three parts: prefix, middle, and suffix. We perform this split prior to tokenization, when the document is still a sequence of characters. We split uniformly at random, which means the lengths of prefix, middle, and suffix are each 1/3 of the full document in expectation.

[2] Released at https://www.github.com/openai/human-eval-infilling

We then encode each of the three sections separately and prepend sentinel tokens to the beginning of each section. We denote these sentinel tokens by <PRE>, <MID>, and <SUF>. Finally we concatenate all these sections in the order prefix, suffix, and middle along with their sentinel tokens to form the tokenized version of the FIM document,

<PRE> ∘ Enc(prefix) ∘ <SUF> ∘ Enc(suffix) ∘ <MID> ∘ Enc(middle), (PSM)

where ∘ denotes concatenation. The different documents, whether FIM or AR, are then concatenated with <EOT> and given to the model during training. We reiterate that we keep the loss on all three sections prefix, middle, and suffix, so FIM training does not cause a decrease in the autoregressive learning signal. Preliminary experiments, although not reported here, suggest that this choice is crucial for the FIM-for-free property to hold. This property does not change whether the sentinels are masked or not; however, it is important to always train on the <EOT> tokens as they signal a successful join to the suffix.

For inference, we encode the given prefix and suffix and prompt the model with

<PRE> ∘ Enc(prefix) ∘ <SUF> ∘ Enc(suffix) ∘ <MID>. (PSM inference)[3]

We continue sampling from the model until it generates the <EOT> token, which is how the model communicates it has connected the prefix and the suffix. If the model fails to generate an <EOT> token within a reasonable allotted inference token budget, it is often a sign the model is having a difficult time connecting the prefix and the suffix, and the resulting samples often will be of worse quality, which motivates the procedure of EOT aware best-of-n sampling. See Appendix H for more discussion.

3.1 SPM mode

We also introduce a variant of the above procedure where we swap the order of prefix and suffix, called SPM, to emphasize the changing of the order to suffix, prefix, and middle. Our main motivation for introducing SPM is improved key-value caching during inference.
The reason for this advantage is that with SPM, appending tokens to the prefix no longer invalidates the keys and values computed in the suffix section. Note that the superiority of SPM caching is not universal and may depend on the application. In particular, in the SPM mode, minor changes to the suffix will invalidate the cache for the prefix, but we expect changes to the suffix to be rarer than changes to the prefix in real workloads. Interestingly, we find in Section 4.3 that beside the caching advantages, SPM in fact also has a slight edge over PSM in the infilling benchmarks.

In our main runs, we apply the FIM transformation with 50% probability in PSM mode and with 50% probability in SPM mode, so the model is able to handle both types of formatting in inference. In other words, each mode inherits half of the total FIM rate p. We ablate this choice of joint training on PSM and SPM and compare with pure PSM and SPM runs. The results in Table 1 show the efficacy of this choice. Even though the idea of SPM mode is simple, there are some subtleties with the placement of sentinel tokens in SPM which are especially important when training jointly on SPM and PSM. We describe these subtleties in Appendix D.

3.2 Context-level FIM

In language model training, documents are often joined with a boundary token, referred to as <EOT>, and are then chunked to the model context length. When applying FIM to long documents, this operation can result in fragmented FIM data where the entire prefix or suffix could get cut out of the context during chunking. To address this issue, we can apply FIM after the chunking step. A context slice may have multiple documents in it joined with the <EOT> boundary token. So, we split based on <EOT>, turn some of the documents into FIM examples with probability given by the FIM rate, and join the examples again with <EOT>. The resulting slice is then trimmed to the model context length. We refer to Appendix C for more details of the FIM transformation, and give a minimal sketch of both variants below. In Section 4.4, we show this technique can boost performance relative to document-level FIM, and adopt context-level FIM in all our main FIM runs in this work.

[3] It is worth noting that prepending this prompt with <EOT> leads to slightly improved performance, and we do so when evaluating our models in sampling benchmarks.

4 Pretraining results

In Section 1.1, we discussed the FIM-for-free property, which states that FIM can be learned without any impact to the left-to-right capability. We start this section by presenting more evidence for this result. Next, we study the hyperparameters of FIM training, including the FIM rate, PSM vs SPM vs joint training, context vs document-level FIM, and the choice of middle span. Although FIM is free from the point of view of AR capability, the FIM capabilities themselves depend strongly on these hyperparameters. We study these choices in the code domain, where we can measure the correctness of generated samples using unit tests. The models, unless otherwise stated, are trained with a fixed horizon of 100B tokens. For our main scans we use all 8 models described in Appendix A. For more extensive scans, e.g. Sections 4.2, 4.4, and Appendix B, we use a subset of the models trained with a shorter horizon to limit the compute costs.
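(The promised sketch of the Section 3 transformations. This is a minimal sketch, not the paper's implementation: sentinel strings stand in for sentinel tokens, the trim is character-based rather than token-based, and the SPM sentinel layout shown is one plausible variant; Appendix D covers the real subtleties of SPM encoding.)

import random

PRE, SUF, MID, EOT = "<PRE>", "<SUF>", "<MID>", "<EOT>"

def fim_transform(doc: str, fim_rate: float = 0.5, spm: bool = False, rng=random) -> str:
    """With probability fim_rate, split doc at two uniformly random character
    positions and move the middle span to the end, marked with sentinels."""
    if len(doc) < 2 or rng.random() >= fim_rate:
        return doc  # leave as ordinary left-to-right data
    i, j = sorted(rng.sample(range(len(doc) + 1), 2))  # character-level split
    prefix, middle, suffix = doc[:i], doc[i:j], doc[j:]
    if spm:  # suffix, prefix, middle (one possible sentinel layout; see Appendix D)
        return f"{PRE}{SUF}{suffix}{MID}{prefix}{middle}"
    return f"{PRE}{prefix}{SUF}{suffix}{MID}{middle}"  # PSM

def context_level_fim(context: str, fim_rate: float = 0.5, context_len: int = 2048, rng=random) -> str:
    """Context-level FIM (Section 3.2): transform each document inside an
    already-chunked context, so prefix, middle, and suffix stay together."""
    docs = context.split(EOT)
    out = EOT.join(fim_transform(d, fim_rate, rng=rng) for d in docs)
    return out[:context_len]  # character trim stands in for token-level trimming

At inference time, per Section 3, one would prompt with <PRE> + prefix + <SUF> + suffix + <MID> and sample until the model emits <EOT>.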
[Figure 3: Comparison of performance on standard benchmarks for the natural language (top) and code (bottom) domains, plotted against non-embedding parameters. (a) Natural language results (HellaSwag, LAMBADA, StoryCloze, PIQA, Winograd, WinoGrande, Drop, QuAC); we report F1 for Drop and QuAC and accuracy for the rest. (b) Code results (HumanEval pass@1 and pass@10); we use temperature 0.8 and 400 samples per task for both pass@k. Joint training of next-token prediction and FIM allows the model to learn the new infilling task without affecting the original capabilities. This provides further evidence for the FIM-for-free property.]

4.1 Evaluation of left-to-right capabilities in downstream benchmarks

We train a series of models from 50M to 6.9B parameters from scratch with and without 50% FIM augmentation on natural language and code domains. Figure 1 shows that the left-to-right test loss is unaffected even though FIM models see the data in its original form half the time, and are simultaneously learning a new skill. However, as we demonstrate below (see Sections 4.2 and 4.4), it is often not sufficient to just consider test loss. So to strengthen the above results, we evaluate our models on a suite of standard downstream benchmarks, the result of which is presented in Figure 3. We again find that joint FIM pretraining does not result in any degradation in standard AR benchmarks, as the performance matches within error for both natural language and code.

4.2 FIM rate

From Figures 1 and 3, we see that a FIM rate of 50% incurs no performance hit in the left-to-right capabilities. This naturally raises several questions:

• Does FIM-for-free still hold even at higher FIM rates? If so, how high can we increase the FIM rate without degrading the left-to-right capabilities?

• Does using a higher FIM rate lead to stronger FIM capabilities? Or does the benefit saturate after a threshold?

In this section, we ablate the FIM rate to answer these questions. We train 6 large models (see Table 3) with FIM rates (0, 0.25, 0.5, 0.75, 0.9, 1.0) for 50B tokens. The results are presented in Figures 4 and 5. The left plot in Figure 4 provides evidence that a FIM rate even up to 90% does not cause any degradation in left-to-right capabilities. However, there is a clear sign of degradation in ordinary AR test loss with a 100% FIM rate. For HumanEval, the left plot in Figure 5 shows that all models, irrespective of FIM rate, have similar performance.

On the other hand, we find that the FIM rate does significantly affect infilling capabilities. Even though the gain in FIM perplexity in Figure 4 due to a higher FIM rate is negligible, increasing this rate yields a consistent improvement in the infilling pass rate, as shown in the right plot in Figure 5. This indicates that to investigate the FIM capabilities of our models, it is not sufficient to consider language modelling perplexity measures such as test loss; we should also consider non-loss based evaluations.
[Figure 4: Comparison of the learning curves of large (see Table 3) models trained with different FIM rates (0, 0.25, 0.5, 0.75, 0.9, 1.0) for 50B tokens, showing left-to-right test loss (left) and FIM loss (right) against elapsed tokens. A FIM rate even up to 90% does not have a noticeable effect on left-to-right test loss; however, at a FIM rate of 100% there is degradation. We can also see the stronger FIM property in the left figure: all runs with FIM rates less than 100% follow the original left-to-right test loss very closely.]

In Appendix B, we show further evidence across a range of scales that higher FIM rates improve infilling performance but that this gain is not reflected in the perplexity evaluation. Given the results here and in Appendix B, it is natural to question why we train our core series of models with a FIM rate of 50% rather than 90% or higher. Models with a FIM rate of 90% show superior performance while maintaining the FIM-for-free property. This was mainly accidental, as we had already trained the main series prior to seeing the FIM rate ablation results,[4] and it was prohibitively costly to retrain all the models with the higher rate. The results here motivated us to train a second 6.9B FIM model with a FIM rate of 90% on code to obtain the strongest infilling model to date at this scale. The comparison of results is found in Table 4. We note however from Figure 13 that a FIM rate of 50% does not seem to be too far from optimal.

[Figure 5: In-run evaluation of coding benchmarks (HumanEval pass rate, left; random span infilling light, right) with temperature 0.8 and 25 samples per task. Using higher FIM rates does not have a noticeable effect on HumanEval performance. A higher FIM rate shows stronger infilling capabilities on the light random span infilling benchmark.]

[Figure 6: SPM mode shows a slight advantage in performance across scale on single-line, multi-line, and random span infilling. All the evaluations in this plot are at temperature 0.2, with 100 samples per task for single-line and multi-line infilling and 200 samples per task for random span infilling.]

4.3 SPM vs PSM vs joint SPM+PSM training

In Section 3, we describe two ways of constructing a FIM example: (suffix, prefix, middle) and (prefix, suffix, middle). Here we study how this choice affects performance during pretraining and evaluation. The main finding is that SPM is slightly stronger than PSM in our benchmarks in general, as evidenced by Figure 6. We train a series of FIM models with a FIM rate of 50%, with the FIM rate equally allocated to PSM and SPM. We find that evaluating these models in SPM mode yields consistently higher performance than PSM across scale. This is likely due to the fact that in SPM, there is no distinction between the prefix and the middle sections, as they are one contiguous sequence of text. This makes it more natural for the model to continue from the prefix, in contrast to PSM where attention has to first identify where the span token is.

[4] In particular, our earlier ablations based only on loss had indicated that the gains from increasing the FIM rate to 90% should be negligible, resulting in us choosing a more moderate value of 50%.
More detailed study using all 3 infilling benchmarks showed that there is in fact a noticeable gain in using even a higher FIM rate.

However, this does not imply that we should train solely on SPM. In Table 1, we train large models on pure PSM, pure SPM, and our default 50-50 SPM+PSM mix, and evaluate them in all modes. We observe a positive transfer of capability between PSM and SPM. Training joint FIM with a 50% FIM rate obtains roughly the same performance in SPM mode as training pure SPM FIM with a 90% FIM rate. Not only is joint pretraining the most efficient, but it also yields the most flexible model, with two inference modes. It is noteworthy that the recent infilling works using data transformations similar to FIM, such as [Donahue et al., 2020, Aghajanyan et al., 2022, Fried et al., 2022], utilize a format similar to PSM. The above findings indicate that this choice leads to suboptimal infilling performance.

Table 1: Comparison of FIM performance when trained and evaluated in various PSM and SPM settings. All the joint runs put 50% of the total FIM rate on PSM and 50% on SPM. All results are obtained with temperature 0.2 and 100 samples per task.

| Train distribution | FIM rate | Single-line (PSM) | Single-line (SPM) | Multi-line (PSM) | Multi-line (SPM) | Random span (PSM) | Random span (SPM) |
|---|---|---|---|---|---|---|---|
| Joint | 0.5 | 0.550 | 0.595 | 0.265 | 0.293 | 0.367 | 0.379 |
| Joint | 0.9 | 0.616 | 0.622 | 0.290 | 0.305 | 0.397 | 0.420 |
| PSM | 0.9 | 0.583 | 0.625 | 0.273 | 0.305 | 0.362 | 0.274 |
| SPM | 0.9 | 0.023 | 0.586 | 0.008 | 0.301 | 0.007 | 0.386 |

[Figure 7: Applying FIM at the context level consistently outperforms document level FIM on single-line, multi-line, and random span infilling, across scale. All benchmarks are evaluated with temperature 0.2 and 200 samples/task.]

[Figure 8: Comparison of losses (left-to-right loss and FIM loss) with different FIM implementations. While document level FIM introduces partially broken data into training, it does not hurt the autoregressive loss (left). We also find that the reduction in FIM perplexity (right) is not commensurate with the gain in pass rate shown in Figure 7.]

4.4 Context-level vs document-level FIM

In Section 3, we noted two ways of implementing FIM, context-level and document-level FIM, where augmentation is applied either before or after packing and chunking. We now ablate this choice on a series of code models trained with a 50% FIM rate and the default joint PSM-SPM mix. In Figure 7, we find that context-level FIM yields a consistent and significant improvement over document-level FIM across the whole range of scale. This is a noteworthy contrast to the perplexity evaluation in Figure 8 (right), where the improvement is an almost negligible 0.001 nats/token. This corroborates the finding in Section 4.2 that perplexity evaluation does not always capture the gains in sampling performance. Also, we previously explained that document-level FIM can result in fragmented FIM data with a missing prefix and/or suffix from the chunking step of the data loading pipeline. Figure 8 (left) shows that training on these invalid examples in document-level FIM does not affect the left-to-right evaluation. Hence, practitioners might still sometimes prefer document-level FIM due to its simpler implementation.

4.5 Middle span selection

An important consideration in FIM training is the choice of middle span.
In this work, the middle span is chosen uniformly at random, where the split between prefix, middle, and suffix happens at the character level. In this section, we examine this choice. Instead of trying FIM across syntactic boundaries, such as functions and class bodies, we restrict our ablations to simple, generalizable approaches which are language agnostic. We select spans in three different ways, splitting randomly by lines, tokens, and characters. The section boundaries are selected uniformly at random from the allowed splitting positions based on the span type. Here, a token refers to a word in the byte-pair encoding (BPE) vocabulary. In practice, this is implemented by applying the FIM augmentation after the documents are encoded with BPE (see Appendix C). For simplicity, we run all our experiments in PSM mode in this ablation.

In Table 2 we see that training only on line-based middle spans gives the models a slight advantage in the single-line and multi-line infilling benchmarks. This is not surprising since these evaluations are completely in distribution for line based middle span runs. On the other hand, line based training fails almost completely in the random span infilling benchmark. Interestingly, the advantage provided in line-based evaluations from concentrating all the FIM distribution on line based middle spans in training is quite small relative to how much it hurts the model in the random span infilling benchmark. Training with token-level random spans does slightly better on random span infilling, but is still not competitive with character-level runs on this benchmark. The reason is that token-level FIM models are never trained on cases where a token is broken into two parts across the boundaries of the middle with the prefix or suffix. When the middle section is selected completely at random at the character level, subtokens are introduced naturally at the beginning and end boundaries of the middle section. There is no train-test mismatch and the model is able to understand and solve more random span infilling tasks while still performing well in single-line and multi-line infilling.

Table 2: Pass rates of medium models pretrained with various middle span selection strategies. Training on line-based spans improves the single- and multi-line infilling metrics reported in InCoder, but line- and token-level spans used in previous works cannot robustly handle real life use cases where the span starts or ends in subtokens. Overall, the character-level random span run dominates in the random span benchmark while also not being far behind in single and multi line infilling.

| Training middle span | Single-line infilling | Multi-line infilling | Random span infilling |
|---|---|---|---|
| Line-level random span | 0.586 | 0.269 | 0.015 |
| Token-level random span | 0.548 | 0.242 | 0.102 |
| Character-level random span | 0.557 | 0.250 | 0.321 |

5 Finetuning results

[Figure 9: Evaluation of the final snapshots of models pretrained for 100B tokens without FIM and then finetuned for 25B (row a) and 50B (row b) tokens with FIM, on single-line, multi-line, and random span infilling, for finetuning FIM rates of 50% (fim50) and 90% (fim90).
The x-axis shows the learning rate multiplier relative to the pretraining learning rate. The dashed line indicates the baseline performance of the model pretrained for 100B tokens with a FIM rate of 50% with no additional finetuning. Only the most aggressive combination of 90% FIM rate and a learning rate multiplier of 1.0 with 50B tokens of finetuning catches up to the performance of the baseline. Reported results are with temperature 0.2 and 100 samples per task.]

In this section, we investigate whether we can finetune existing AR models to learn the FIM capability. Ideally, after finetuning, an AR model would reach the same level of performance on FIM evaluations as it would have achieved if it were pretrained with FIM. Given that FIM can be learned during pretraining without extra compute cost, it is natural to expect that the model should also be able to learn this task quickly in finetuning. Surprisingly, we find that for finetuned models to reach the same level of performance as baseline pretrained models, one needs to expend a large amount of compute relative to the pretraining compute.

To show this, we finetune an XL model pretrained for 100B tokens without FIM using different choices of finetuning hyperparameters. Specifically, we train 16 finetuned models with 4 choices of learning rates (0.1, 0.2, 0.5, 1x multiples of the pretraining learning rate), 2 different FIM rates (0.5 and 0.9), and 2 different choices of finetuning horizons (25B and 50B tokens). We use this large variety of hyperparameter choices both to ensure that our conclusion is robust and to better understand the effect of hyperparameters on the final performance.

The results are summarized in Figure 9, where we compare the performance of these 16 models with that of the XL model trained for 100B tokens with a FIM rate of 50% without any finetuning. It is evident from this figure that even with significant additional finetuning compute, AR models finetuned with FIM do not reach the same performance as the models pretrained with FIM (and without any FIM finetuning). Among these 16 models, the only setting where the gap between pretrained baseline and finetuned models is closed is the 50B token run with a FIM rate of 0.9 and a learning rate multiplier of 1.0 relative to pretraining. More generally, we find that higher learning rate, higher FIM rate, and longer finetuning all seem helpful for improving FIM performance in finetuning. We find it particularly surprising that such high learning rates and lengthy finetuning are necessary for reaching a similar level of performance. We discuss this topic more in Section 6.

We note that although reaching the same level of performance as in pretraining requires a large amount of compute, a small amount of finetuning (especially with high FIM rate and learning rate) is still sufficient for the model to reach non-trivial levels of FIM performance on our metrics. We present further results on the dynamics of finetuning in Appendix F.

6 Discussion

Pretraining vs finetuning. In the previous sections, we studied how to efficiently teach FIM to causal language models. A main finding was that FIM can be learned for free in pretraining. In contrast, we saw in Section 5 that learning FIM in finetuning can be quite expensive. Here we describe some potential explanations for these findings. The main intuition for why FIM can be learned for free in pretraining is that breaking a document into three pieces and shifting the middle one to the end effectively creates three smaller documents.
In particular, each piece still requires predicting next tokens from left to right, keeping the total number of tokens processed autoregressively the same. On the other hand, even though FIM data is locally identical to autoregressive data, FIM does impose a different global attention pattern over the whole document. To visualize this, we show the causal attention mask of a FIM document in Figure 10. This new attention pattern could be the reason why it takes a relatively long token horizon and a high learning rate to learn FIM in finetuning. It is possible that there is ossification [Hernandez et al., 2021] in the learned document-wide attention pattern from regular AR pretraining which requires a lengthy finetuning stage to adapt to the attention pattern needed in FIM.

[Figure 10: Visualization of the causal attention pattern of FIM data, with query and key positions labeled by prefix, middle, and suffix. Unraveling both the query and key embeddings back into the canonical left-to-right order shows that FIM allows the transformer to attend to future context when decoding the middle section, without complex architectural changes. One side-effect is that the suffix probability no longer depends on the middle span.]

FIM loss, AR loss, and the difficulty of the FIM task. Naively, since FIM does not come at a cost in AR capability, one may expect FIM to be an easy task. In fact, the opposite seems to be the case. There is substantial evidence that FIM can often be much harder than normal left-to-right generation. Intuitively, it is often easier to continue a text in a plausible manner than to continue the text conditioned on ending in a specific suffix. The latter requires planning a plausible narrative connecting the two pieces, starting the generation in a way that matches the prefix, and stopping the generation at the right time so it connects to the suffix.

In particular, in FIM the model is trained to generate <EOT> when the middle ends and connects to the suffix. On the other hand, when the model fails to produce <EOT> in the allotted budget, we often end up with truncated samples which do not connect well to the suffix. For example, consider the following:

When I was young, I only liked to play video games. Over time, I started thinking if it’d be possible to make bots to play better than any human can ever play these games. I eventually decided I liked working on the latter more than playing the games themselves and that’s how first I got interested in AI research.

When I was young, I only liked to play video games. I would play sometimes more than 13 hours per day. The rush, novelty, and variety were beyond anything real life could offer. I loved the challenge and I excelled at it. I would often skip classes and go to and that’s how first I got interested in AI research.

Both completions above connect well to the prefix, but only the first manages to connect well to the suffix. The second completion in contrast fails to produce <EOT> in the allotted budget, resulting in a bad sample.[5] This turns out to be a common failure in FIM sampling. Even though left-to-right sampling also sometimes struggles with related issues, this type of failure is more troublesome in FIM since a failure to connect to the suffix cannot easily be fixed by post-processing. For example, trimming the sample to the last paragraph or line is often an effective way of improving sample quality in AR sampling, but does not help in FIM.
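(This failure mode is what motivates the EOT-aware best-of-n sampling mentioned in Section 3. A hedged sketch of the selection rule, where sample_middle and the per-sample score are hypothetical stand-ins for the model's sampler and any quality metric such as mean log-probability:)

from typing import Callable, List, Tuple

def eot_aware_best_of_n(
    sample_middle: Callable[[], Tuple[str, bool, float]],  # returns (text, emitted_eot, score)
    n: int = 8,
) -> str:
    samples: List[Tuple[str, bool, float]] = [sample_middle() for _ in range(n)]
    finished = [s for s in samples if s[1]]  # middles that actually connected to the suffix
    pool = finished or samples               # fall back to truncated samples if none finished
    return max(pool, key=lambda s: s[2])[0]  # highest-scoring candidate from the preferred pool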
FIM loss, AR loss, and the difficulty of the FIM task. Naively, since FIM does not come at a cost in AR capability, one may expect FIM to be an easy task. In fact, the opposite seems to be the case. There is substantial evidence that FIM can often be much harder than normal left-to-right generation. Intuitively, it is often easier to continue a text in a plausible manner than to continue the text conditioned on ending in a specific suffix. The latter requires planning a plausible narrative connecting the two pieces, starting the generation in a way that matches the prefix, and stopping the generation at the right time so it connects to the suffix. In particular, in FIM the model is trained to generate <EOT> when the middle ends and connects to the suffix. On the other hand, when the model fails to produce <EOT> in the allotted budget, we often end up with truncated samples which do not connect well to the suffix. For example, consider the following two completions of the same prefix and suffix:

When I was young, I only liked to play video games. Over time, I started thinking if it'd be possible to make bots to play better than any human can ever play these games. I eventually decided I liked working on the latter more than playing the games themselves and that's how I first got interested in AI research.

When I was young, I only liked to play video games. I would play sometimes more than 13 hours per day. The rush, novelty, and variety were beyond anything real life could offer. I loved the challenge and I excelled at it. I would often skip classes and go to and that's how I first got interested in AI research.

Both completions above connect well to the prefix, but only the first manages to connect well to the suffix. The second completion fails to produce <EOT> in the allotted budget, resulting in a bad sample.⁵ This turns out to be a common failure in FIM sampling. Even though left-to-right sampling also sometimes struggles with related issues, this type of failure is more troublesome in FIM since a failure to connect to the suffix cannot easily be fixed by post-processing. For example, trimming the sample to the last paragraph or line is often an effective way of improving sample quality in AR sampling, but it does not help in FIM. We discuss this and other issues associated with FIM sampling more extensively in Appendix H.

⁵Even though the completion may have been able to connect to the suffix with a bigger budget, the challenge is that it is unclear how much budget is enough. In practice, a reasonable budget for the maximum number of tokens for the middle must often be imposed.

The difficulty of the FIM task compared to the AR task is also reflected in the loss associated with each task. To see this, in Figure 11 we compare the FIM loss with the AR loss over a suite of FIM models, all with a 50% FIM rate. To remove confounders, we ensure the documents that underlie the AR test set are the same documents that are transformed through FIM to make up the FIM test set. We find the FIM perplexity is consistently higher than the AR perplexity across scale. That is, on average,

P_FIM(prefix; suffix; middle) ≥ P_AR(prefix; middle; suffix),

which means the models have a harder time modelling the same document in FIM format than in AR format.

Figure 11: Comparison of the overall (left, "over all sections") and middle-span (right, "over the middle section only") test loss of 50% FIM code models, plotted against non-embedding parameters. In the left plot, we see that the AR loss is consistently lower than the FIM loss, suggesting that next-token prediction is inherently more compressible than infilling in the middle. The right plot evaluates the conditional loss of the middle span given the surrounding context, showing that P_FIM(middle | prefix, suffix) ≤ P_AR(middle | prefix). Here, FIM attains a lower loss because it can attend to the suffix. We emphasize that left-to-right and FIM here do not refer to model type, as all models in this figure are FIM models; they refer rather to the type of test loss used in evaluation.

Context-level vs document-level FIM and FIM rate. In Section 4.4, we saw that context-level FIM typically outperforms document-level FIM. Here, we note a connection between this finding and the results in Section 4.2 and Appendix B about the FIM rate. The basic observation is that document-level FIM effectively leads to a lower FIM rate compared to context-level FIM, even with the same nominal value of the FIM rate. As a thought experiment, consider the setting where all the documents in the training dataset are much longer than the context size. In this setting, when using document-level FIM, the model almost never sees the prefix, middle, and suffix of the same document appear in the same context together after chunking. As such, we would expect the model to struggle to learn FIM in this setting. In less extreme situations, there are many documents shorter than the context size, and hence the above phenomenon is less pronounced. Still, because of long documents in the training data and the usual artifacts of document packing, document-level FIM results in a lower effective FIM rate. Here, we define the effective FIM rate as the fraction of examples that are in FIM format and have all three of the prefix, middle, and suffix appearing within the same context. This decrease in effective FIM rate is likely the main reason behind the stronger performance of context-level FIM in Section 4.4. We note that the exact amount of decrease in the effective FIM rate depends on the details of the distribution of document lengths. It is important to remember that even if the data distribution does not have many long examples, the decrease in effective FIM rate will still be present because of document packing. A small simulation of this effect follows.
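The following is a toy simulation of the effective FIM rate under document-level FIM. The document lengths, context size, and the counting conventions are illustrative assumptions, not the paper's data; sentinel tokens and the small length change from the FIM transformation are ignored for simplicity.

import random

CONTEXT = 2048
NOMINAL_FIM_RATE = 0.5

def effective_fim_rate(doc_lengths, trials=10_000):
    # A FIM-transformed document only counts toward the effective rate if
    # its prefix, middle, and suffix all land inside one context window,
    # i.e. if the packed document does not straddle a context boundary.
    hits = 0
    offset = 0  # running position in the packed token stream
    for length in random.choices(doc_lengths, k=trials):
        is_fim = random.random() < NOMINAL_FIM_RATE
        start, end = offset, offset + length
        intact = (start // CONTEXT) == ((end - 1) // CONTEXT)
        hits += is_fim and intact
        offset = end + 1  # +1 for the <EOT> separator
    return hits / trials

short_docs = [200]   # much shorter than the context
long_docs = [5000]   # longer than the context
print(effective_fim_rate(short_docs))  # close to, but below, 0.5
print(effective_fim_rate(long_docs))   # 0.0: the three spans never co-occur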
7 Related work

Masked language modeling is closely related to text infilling in that consecutive runs of masked tokens can be interpreted as spans that the model must infill. While early masked language models like BERT [Devlin et al., 2019] masked tokens randomly, T5 [Raffel et al., 2019], SpanBERT [Joshi et al., 2020], and BART [Lewis et al., 2020] demonstrated improvements when contiguous runs of tokens are masked. However, because these models focus on representation learning, the span lengths are typically much shorter than a sentence or even a single line of code. Within our modalities of interest, DOBF [Lachaux et al., 2021] trains BERT on code, and HTLM [Aghajanyan et al., 2021] trains BART on HTML data.

Text infilling can also be seen as a special case of autoregressive language modeling where the standard left-to-right generation order is replaced by a more flexible ordering. XLNet [Yang et al., 2019] modifies the attention mask in a standard transformer to enable token generation in any user-specified order, while Insertion Transformer [Stern et al., 2019], KERMIT [Chan et al., 2019], and InDIGO [Gu et al., 2019] allow the model to predict a location for the next token before predicting the token. Similarly, Blank Language models [Shen et al., 2020] generate text by iteratively selecting a blank and replacing it with a token (and optionally more blanks).

Similar to our work, Zhu et al. [2019], Donahue et al. [2020], GLM [Du et al., 2022], CM3 [Aghajanyan et al., 2022], and InCoder [Fried et al., 2022] utilize left-to-right autoregressive modeling by moving the infill regions to the end of the context, with regions separated by sentinels. Notably, Donahue et al. [2020] explore infilling spans of varying granularities, such as words, sentences, or paragraphs, and InCoder [Fried et al., 2022] uses a similar evaluation framework to ours by studying infilling capabilities on sampling-based benchmarks created from HumanEval [Chen et al., 2021]. While several of these works support infilling multiple spans, we focus on the single-span setting for practicality (e.g. in computer-based text generation, where the placement of the cursor implies the location we want to infill). Additionally, our paper emphasizes the computational efficiency of training for infilling at scale. While we do not study syntactically or semantically motivated infilling spans, we show that selecting spans at the character level improves the robustness of infilling.

Text infilling can also be performed using a GAN [Fedus et al., 2018], but REINFORCE is required to deal with the discreteness of text. Text infilling can also be done through gradient search [Liu et al., 2019], where tokens within the infilled span are optimized with gradient descent and collapsed to the nearest neighbor.

Overall, there are two approaches for imbuing models with infilling capabilities: first, through new architectures like SpanBERT and XLNet; second, through data formatting. In general, the latter approach can be seen as altering the behavior of a language model through control codes, an idea motivated in CTRL [Keskar et al., 2019] to improve the steerability of generation. DistAug [Jun et al., 2020] is another related work that trains jointly on transformed data while conditioning on the transformation type.
While infilling is a specific use case that can be realized through both architecture and data, it is generally easier and more universal to learn additional skills by introducing new training distributions than by hardwiring them.

The strongest infilling system at scale, to our knowledge, is currently code-davinci-002, released this past March [OpenAI et al., 2022]. The present paper describes some of the early research that went into powering the infilling capabilities of this more powerful model. In Appendix G, we present a comparison between this system, our 6.9B models, and the InCoder 6.7B model on our infilling benchmarks.

8 Conclusion

In this work, we show that causal decoder-based language models can learn to fill in the middle of a document after being jointly trained on a mixture of traditional left-to-right and FIM-transformed data. A single FIM model can import modules, write docstrings, and complete functions, subsuming specialized models finetuned for individual tasks [Chen et al., 2021] and providing substantial extra capability over traditional left-to-right language models.

One important finding here is the FIM-for-free property. Figures 1 and 2 show that with the same amount of compute, FIM models match AR models on left-to-right test loss while achieving lower FIM loss. This is further strengthened using non-loss-based evaluations in Section 4.

We also investigate FIM finetuning, since many existing language models do not have FIM capabilities. Our results demonstrate that a canonically pretrained left-to-right model does not acquire the new skill to the fullest extent allowed by the given model size, even with careful hyperparameter tuning and a significant amount of finetuning compute relative to pretraining. This suggests that for the best FIM performance, pretraining jointly from scratch with our recommended hyperparameters is more effective than finetuning.

To study FIM capabilities precisely, we use the infilling code benchmarks from InCoder [Fried et al., 2022] and introduce the new random span infilling benchmarks based on HumanEval [Chen et al., 2021]. From these, we learn a few important lessons. First, perplexity does not reflect the true infilling performance, and one should design infilling benchmarks carefully to measure progress. Second, FIM capabilities depend considerably on the FIM rate and implementation choices such as context-level FIM, but left-to-right capabilities are unaffected by these choices as long as the FIM rate is kept below 100%. Third, applying FIM at the character level imbues the model with natural robustness to subtokens and makes it possible to deploy the model in the wild, for example, as a coding assistant. All in all, we show FIM models are strictly more capable than canonically trained left-to-right models, at least within the bounds of the evaluations we consider, and we demonstrate how to train FIM models efficiently and competitively.

8.1 Recommended FIM hyperparameters

In Section 4, we see there are a number of hyperparameters in training FIM models. In all cases, we recommend applying the FIM transformation at the character level and always including some character-level random spans, as this allows the model to generate sensible completions even when the prefix and suffix end in the middle of a token. We note that for mid-token robustness, inference in PSM mode can be superior to the particular SPM mode explored in this work. However, pretraining with joint PSM and SPM yields the best performance due to a positive transfer between the two formats. A sketch of how PSM and SPM inference prompts are assembled is given below.
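The following sketch shows how an infilling request can be encoded as a PSM prompt and as an SPM prompt (the second SPM variant described in Appendix D). The sentinel strings are illustrative placeholders; actual systems use dedicated sentinel token IDs from the vocabulary.

# Sketch of inference-time prompt assembly (sentinel strings are
# placeholder assumptions, not a real tokenizer's special tokens).
PRE, SUF, MID = "<PRE>", "<SUF>", "<MID>"

def psm_prompt(prefix: str, suffix: str) -> str:
    # PSM: prefix, then suffix, then ask the model for the middle.
    return f"{PRE}{prefix}{SUF}{suffix}{MID}"

def spm_prompt(prefix: str, suffix: str) -> str:
    # SPM (variant 2 from Appendix D): the prefix is placed immediately
    # before the middle, so generation continues straight from the cursor.
    return f"{PRE}{SUF}{suffix}{MID}{prefix}"

prefix = "def rounded_avg(n, m):\n    if m < n:\n        "
suffix = "\n    return bin(round(summation / (m - n + 1)))"
prompt = spm_prompt(prefix, suffix)
# Sampling then continues from `prompt` until the model emits <EOT>.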
In terms of implementation, context-level FIM is superior, but document-level FIM is also an option if a simpler implementation is desired. Finally, we observe improved performance even up to a FIM rate of 90% without any cost in AR capabilities. In practice, any value in the range between 50% and 90% is a reasonable choice. Note that this is in contrast with some related prior work such as [Fried et al., 2022], which typically uses lower values of the FIM rate such as 15%, a value our results indicate to be suboptimal.

8.2 Future directions

There are several important related directions that we did not cover here. For example,

1. Smarter span selection: We only consider spans selected uniformly at random for generality, but mixing in semantically or syntactically meaningful spans [Donahue et al., 2020, Joshi et al., 2020, Deng et al., 2021] can considerably improve infilling performance. In Section 4.5, we see that training on line-level spans instead of character-level spans improves line-based infilling results. In our preliminary experiments, selecting the middle span to be exactly one word significantly improved accuracy on cloze-like tasks. Smarter span selection involves language-specific parsing and new benchmarks which may be tricky to make, but we expect this to produce stronger FIM models.

2. Steerable generation: FIM models generate spurious content or struggle to generate a sensible completion in the allotted token budget because they do not know the length or the style of infilling the user desires. Applying ideas like RL from human feedback [Stiennon et al., 2020] and instruction following [Ouyang et al., 2022], among other methods of controllable generation, could address this issue by providing further alignment with the user's intent.

3. Further examination of the FIM-for-free property: Even though we provide substantial evidence for the FIM-for-free property, we cannot completely rule out that there are benchmarks not considered here where FIM models underperform AR models. As such, further strengthening or refuting the FIM-for-free property remains an interesting direction.

4. Multiple infilling slots: Many prior works in infilling explored multiple infilling slots [Raffel et al., 2019, Fried et al., 2022]. We do not study this, as there are already a number of considerations in training single-slot models, and inference challenges unique to infilling. Furthermore, in most applications, we anticipate single-slot infilling to be just as useful. We anticipate the inference challenges and failure modes to increase when considering multi-slot infilling. To make progress in multi-slot infilling, however, creating appropriate sampling-based benchmarks is essential, as perplexity-based evaluation would be increasingly unhelpful. There is a vast design space for these benchmarks and a vast array of extra training hyperparameters when going from single-slot to multi-slot infilling.

5. Improving natural language FIM performance: Qualitatively, our FIM models tend to perform better on code than on natural language. This is perhaps not surprising given that code is a formal language and, as such, has more structure and less uncertainty.
Improving infilling performance on natural language is an interesting future direction, but it can be tricky because evaluating free-form generation in language is not as straightforward as measuring functional correctness in code. We expect training on more semantically meaningful or shorter spans can help here, but it is unclear what test distribution to use and how to evaluate this well in the general case.

6. Role of bidirectionality and attention: There is much to be understood in the role of attention and the training objective in free-form infilling performance. In this work, we use decoder-based language models, which are currently the dominant paradigm of large-scale language modelling. However, it is possible that from the point of view of infilling, other training objectives and architectures are superior. In this direction, [Artetxe et al., 2022] show that a BERT-style architecture performs better than FIM-like models, but the results are mostly limited to single-token infilling. A more systematic study, similar to [Wang et al., 2022, Tay et al., 2022] but focused on free-form infilling generation, can clarify this further. Somewhat related to this, it is interesting to investigate the interaction of absolute and relative positional embeddings and their variants with FIM. Preliminary results, not reported here, indicate that the FIM-for-free property still holds with absolute positional embeddings.

Finally, our experience with the FIM-for-free property brings up the intriguing question of what other useful skills can be learned jointly with no or little cost to the original capabilities of language models. There have been a number of interesting works on this topic and we anticipate even more to follow, but many works often omit the critical analysis needed for broader adoption and comparison. We propose the following methodology to help advance research toward answering this question:

1. Establish a budget in the amount of original capabilities that one is willing to sacrifice to learn a new capability.

2. Maximize the new capability within this budget.

The budget-capability trade-off is not only theoretically interesting but also practical, allowing researchers to integrate new capabilities based on proper trade-off analysis. We look forward to a future where large language models have increasingly diverse and high-value capabilities.

Acknowledgments

We would like to thank Shantanu Jain, Alex Paino, Alec Radford, Nick Ryder, Pranav Shyam, and Qiming Yuan for useful discussions and help at various stages of the project. We are also grateful to Christina Kim, Rachel Lim, Andrew Mayne, Maddie Siemens, and Natalie Staudacher for the help with the API infrastructure and qualitative evaluation of FIM, and to Angela Jiang, Katie Mayer, Rajeev Nayak, Henrique Pondé, and Felipe Such for invaluable work and immense effort on deployment. Finally, we thank Bob McGrew and Wojciech Zaremba for unceasing support throughout the project, and Karl Cobbe, Angela Jiang, Alec Radford, and Pranav Shyam for their valuable feedback on the paper.

References

A. Aghajanyan, D. Okhonko, M. Lewis, M. Joshi, H. Xu, G. Ghosh, and L. Zettlemoyer. HTLM: hyper-text pre-training and prompting of language models. CoRR, abs/2107.06955, 2021. URL https://arxiv.org/abs/2107.06955.

A. Aghajanyan, B. Huang, C. Ross, V. Karpukhin, H. Xu, N. Goyal, D. Okhonko, M. Joshi, G. Ghosh, M. Lewis, and L. Zettlemoyer. CM3: A causal masked multimodal model of the internet. CoRR, abs/2201.07520, 2022. URL https://arxiv.org/abs/2201.07520.
M. Artetxe, J. Du, N. Goyal, L. Zettlemoyer, and V. Stoyanov. On the role of bidirectionality in language model pre-training, 2022. URL https://arxiv.org/abs/2205.11726.

Y. Bisk, R. Zellers, R. L. Bras, J. Gao, and Y. Choi. Piqa: Reasoning about physical commonsense in natural language. In Thirty-Fourth AAAI Conference on Artificial Intelligence, 2020.

T. B. Brown, B. Mann, N. Ryder, M. Subbiah, J. Kaplan, P. Dhariwal, A. Neelakantan, P. Shyam, G. Sastry, A. Askell, et al. Language models are few-shot learners. arXiv preprint arXiv:2005.14165, 2020.

W. Chan, N. Kitaev, K. Guu, M. Stern, and J. Uszkoreit. KERMIT: generative insertion-based modeling for sequences. CoRR, abs/1906.01604, 2019. URL http://arxiv.org/abs/1906.01604.

M. Chen, J. Tworek, H. Jun, Q. Yuan, H. P. de Oliveira Pinto, J. Kaplan, H. Edwards, Y. Burda, N. Joseph, G. Brockman, A. Ray, R. Puri, G. Krueger, M. Petrov, H. Khlaaf, G. Sastry, P. Mishkin, B. Chan, S. Gray, N. Ryder, M. Pavlov, A. Power, L. Kaiser, M. Bavarian, C. Winter, P. Tillet, F. P. Such, D. Cummings, M. Plappert, F. Chantzis, E. Barnes, A. Herbert-Voss, W. H. Guss, A. Nichol, A. Paino, N. Tezak, J. Tang, I. Babuschkin, S. Balaji, S. Jain, W. Saunders, C. Hesse, A. N. Carr, J. Leike, J. Achiam, V. Misra, E. Morikawa, A. Radford, M. Knight, M. Brundage, M. Murati, K. Mayer, P. Welinder, B. McGrew, D. Amodei, S. McCandlish, I. Sutskever, and W. Zaremba. Evaluating large language models trained on code. CoRR, abs/2107.03374, 2021. URL https://arxiv.org/abs/2107.03374.

E. Choi, H. He, M. Iyyer, M. Yatskar, W.-t. Yih, Y. Choi, P. Liang, and L. Zettlemoyer. QuAC: Question answering in context. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2174–2184, Brussels, Belgium, Oct.-Nov. 2018. Association for Computational Linguistics. doi: 10.18653/v1/D18-1241. URL https://aclanthology.org/D18-1241.

A. Chowdhery, S. Narang, J. Devlin, M. Bosma, G. Mishra, A. Roberts, P. Barham, H. W. Chung, C. Sutton, S. Gehrmann, et al. Palm: Scaling language modeling with pathways. arXiv preprint arXiv:2204.02311, 2022.

Z. Dai, Z. Yang, Y. Yang, J. Carbonell, Q. Le, and R. Salakhutdinov. Transformer-XL: Attentive language models beyond a fixed-length context. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2978–2988, Florence, Italy, July 2019. Association for Computational Linguistics. doi: 10.18653/v1/P19-1285. URL https://aclanthology.org/P19-1285.

X. Deng, Y. Su, A. Lees, Y. Wu, C. Yu, and H. Sun. Reasonbert: Pre-trained to reason with distant supervision. CoRR, abs/2109.04912, 2021. URL https://arxiv.org/abs/2109.04912.

J. Devlin, M. Chang, K. Lee, and K. Toutanova. BERT: pre-training of deep bidirectional transformers for language understanding. In J. Burstein, C. Doran, and T. Solorio, editors, Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers), pages 4171–4186. Association for Computational Linguistics, 2019. doi: 10.18653/v1/n19-1423. URL https://doi.org/10.18653/v1/n19-1423.

C. Donahue, M. Lee, and P. Liang. Enabling language models to fill in the blanks. CoRR, abs/2005.05339, 2020. URL https://arxiv.org/abs/2005.05339.

N. Du, Y. Huang, A. M. Dai, S. Tong, D. Lepikhin, Y. Xu, M. Krikun, Y. Zhou, A. W. Yu, O. Firat, et al. Glam: Efficient scaling of language models with mixture-of-experts. arXiv preprint arXiv:2112.06905, 2021.
Z. Du, Y. Qian, X. Liu, M. Ding, J. Qiu, Z. Yang, and J. Tang. GLM: General language model pretraining with autoregressive blank infilling. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 320–335, Dublin, Ireland, May 2022. Association for Computational Linguistics. doi: 10.18653/v1/2022.acl-long.26. URL https://aclanthology.org/2022.acl-long.26.

D. Dua, Y. Wang, P. Dasigi, G. Stanovsky, S. Singh, and M. Gardner. DROP: A reading comprehension benchmark requiring discrete reasoning over paragraphs. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2368–2378, Minneapolis, Minnesota, June 2019. Association for Computational Linguistics. doi: 10.18653/v1/N19-1246. URL https://aclanthology.org/N19-1246.

W. Fedus, I. Goodfellow, and A. M. Dai. Maskgan: Better text generation via filling in the______, 2018. URL https://arxiv.org/abs/1801.07736.

D. Fried, A. Aghajanyan, J. Lin, S. Wang, E. Wallace, F. Shi, R. Zhong, W.-t. Yih, L. Zettlemoyer, and M. Lewis. Incoder: A generative model for code infilling and synthesis, 2022. URL https://arxiv.org/abs/2204.05999.

J. Gu, Q. Liu, and K. Cho. Insertion-based decoding with automatically inferred generation order. Transactions of the Association for Computational Linguistics, 7:661–676, 2019. doi: 10.1162/tacl_a_00292. URL https://aclanthology.org/Q19-1042.

D. Hernandez, J. Kaplan, T. Henighan, and S. McCandlish. Scaling laws for transfer. arXiv preprint arXiv:2102.01293, 2021.

J. Hoffmann, S. Borgeaud, A. Mensch, E. Buchatskaya, T. Cai, E. Rutherford, D. d. L. Casas, L. A. Hendricks, J. Welbl, A. Clark, et al. Training compute-optimal large language models. arXiv preprint arXiv:2203.15556, 2022.

A. Holtzman, J. Buys, L. Du, M. Forbes, and Y. Choi. The curious case of neural text degeneration. In ICLR. OpenReview.net, 2020. URL http://dblp.uni-trier.de/db/conf/iclr/iclr2020.html#HoltzmanBDFC20.

M. Joshi, D. Chen, Y. Liu, D. S. Weld, L. Zettlemoyer, and O. Levy. SpanBERT: Improving pre-training by representing and predicting spans. Transactions of the Association for Computational Linguistics, 8:64–77, 2020. doi: 10.1162/tacl_a_00300. URL https://aclanthology.org/2020.tacl-1.5.

H. Jun, R. Child, M. Chen, J. Schulman, A. Ramesh, A. Radford, and I. Sutskever. Distribution augmentation for generative modeling. In H. D. III and A. Singh, editors, Proceedings of the 37th International Conference on Machine Learning, volume 119 of Proceedings of Machine Learning Research, pages 5006–5019. PMLR, 13–18 Jul 2020. URL https://proceedings.mlr.press/v119/jun20a.html.

J. Kaplan, S. McCandlish, T. Henighan, T. B. Brown, B. Chess, R. Child, S. Gray, A. Radford, J. Wu, and D. Amodei. Scaling laws for neural language models. arXiv preprint arXiv:2001.08361, 2020.

N. S. Keskar, B. McCann, L. R. Varshney, C. Xiong, and R. Socher. CTRL: A conditional transformer language model for controllable generation. CoRR, abs/1909.05858, 2019. URL http://arxiv.org/abs/1909.05858.

M. Lachaux, B. Rozière, M. Szafraniec, and G. Lample. DOBF: A deobfuscation pre-training objective for programming languages. In M. Ranzato, A. Beygelzimer, Y. N. Dauphin, P. Liang, and J. W. Vaughan, editors, Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, NeurIPS 2021, December 6-14, 2021, virtual, pages 14967–14979, 2021. URL https://proceedings.neurips.cc/paper/2021/hash/7d6548bdc0082aacc950ed35e91fcccb-Abstract.html.
H. J. Levesque, E. Davis, and L. Morgenstern. The Winograd Schema Challenge. In Proceedings of the Thirteenth International Conference on Principles of Knowledge Representation and Reasoning, KR'12, pages 552–561. AAAI Press, Rome, Italy, 2012. ISBN 978-1-57735-560-1. URL https://cs.nyu.edu/faculty/davise/papers/WSKR2012.pdf.

M. Lewis, Y. Liu, N. Goyal, M. Ghazvininejad, A. Mohamed, O. Levy, V. Stoyanov, and L. Zettlemoyer. BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7871–7880, Online, July 2020. Association for Computational Linguistics. doi: 10.18653/v1/2020.acl-main.703. URL https://aclanthology.org/2020.acl-main.703.

O. Lieber, O. Sharir, B. Lenz, and Y. Shoham. Jurassic-1: Technical details and evaluation. White Paper. AI21 Labs, 2021.

D. Liu, J. Fu, P. Liu, and J. Lv. TIGS: An inference algorithm for text infilling with gradient search. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4146–4156, Florence, Italy, July 2019. Association for Computational Linguistics. doi: 10.18653/v1/P19-1406. URL https://aclanthology.org/P19-1406.

N. Mostafazadeh, N. Chambers, X. He, D. Parikh, D. Batra, L. Vanderwende, P. Kohli, and J. Allen. A corpus and cloze evaluation for deeper understanding of commonsense stories. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 839–849, San Diego, California, June 2016. Association for Computational Linguistics. doi: 10.18653/v1/N16-1098. URL https://aclanthology.org/N16-1098.

OpenAI, M. Bavarian, A. Jiang, H. Jun, and H. Pondé. New GPT-3 Capabilities: Edit and Insert. OpenAI blog, 2022. URL https://openai.com/blog/gpt-3-edit-insert/.

L. Ouyang, J. Wu, X. Jiang, D. Almeida, C. L. Wainwright, P. Mishkin, C. Zhang, S. Agarwal, K. Slama, A. Ray, J. Schulman, J. Hilton, F. Kelton, L. Miller, M. Simens, A. Askell, P. Welinder, P. Christiano, J. Leike, and R. Lowe. Training language models to follow instructions with human feedback, 2022. URL https://arxiv.org/abs/2203.02155.

D. Paperno, G. Kruszewski, A. Lazaridou, N. Q. Pham, R. Bernardi, S. Pezzelle, M. Baroni, G. Boleda, and R. Fernández. The LAMBADA dataset: Word prediction requiring a broad discourse context. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1525–1534, Berlin, Germany, Aug. 2016. Association for Computational Linguistics. doi: 10.18653/v1/P16-1144. URL https://aclanthology.org/P16-1144.

I. Provilkov, D. Emelianenko, and E. Voita. Bpe-dropout: Simple and effective subword regularization. CoRR, abs/1910.13267, 2019. URL http://arxiv.org/abs/1910.13267.

A. Radford, K. Narasimhan, T. Salimans, and I. Sutskever. Improving language understanding by generative pre-training. 2018.

A. Radford, J. Wu, R. Child, D. Luan, D. Amodei, I. Sutskever, et al. Language models are unsupervised multitask learners. OpenAI blog, 1(8):9, 2019.

J. W. Rae, S. Borgeaud, T. Cai, K. Millican, J. Hoffmann, F. Song, J. Aslanides, S. Henderson, R. Ring, S. Young, et al. Scaling language models: Methods, analysis & insights from training gopher. arXiv preprint arXiv:2112.11446, 2021.
C. Raffel, N. Shazeer, A. Roberts, K. Lee, S. Narang, M. Matena, Y. Zhou, W. Li, and P. J. Liu. Exploring the limits of transfer learning with a unified text-to-text transformer. arXiv preprint arXiv:1910.10683, 2019.

K. Sakaguchi, R. L. Bras, C. Bhagavatula, and Y. Choi. Winogrande: An adversarial winograd schema challenge at scale. Commun. ACM, 64(9):99–106, aug 2021. ISSN 0001-0782. doi: 10.1145/3474381. URL https://doi.org/10.1145/3474381.

P. Shaw, J. Uszkoreit, and A. Vaswani. Self-attention with relative position representations. arXiv preprint arXiv:1803.02155, 2018.

T. Shen, V. Quach, R. Barzilay, and T. Jaakkola. Blank language models. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 5186–5198, Online, Nov. 2020. Association for Computational Linguistics. doi: 10.18653/v1/2020.emnlp-main.420. URL https://aclanthology.org/2020.emnlp-main.420.

K. Song, X. Tan, T. Qin, J. Lu, and T.-Y. Liu. Mass: Masked sequence to sequence pre-training for language generation. arXiv preprint arXiv:1905.02450, 2019.

M. Stern, W. Chan, J. Kiros, and J. Uszkoreit. Insertion transformer: Flexible sequence generation via insertion operations. In K. Chaudhuri and R. Salakhutdinov, editors, Proceedings of the 36th International Conference on Machine Learning, volume 97 of Proceedings of Machine Learning Research, pages 5976–5985. PMLR, 09–15 Jun 2019. URL https://proceedings.mlr.press/v97/stern19a.html.

N. Stiennon, L. Ouyang, J. Wu, D. M. Ziegler, R. Lowe, C. Voss, A. Radford, D. Amodei, and P. F. Christiano. Learning to summarize from human feedback. CoRR, abs/2009.01325, 2020. URL https://arxiv.org/abs/2009.01325.

Y. Tay, M. Dehghani, V. Q. Tran, X. Garcia, D. Bahri, T. Schuster, H. S. Zheng, N. Houlsby, and D. Metzler. Unifying language learning paradigms, 2022. URL https://arxiv.org/abs/2205.05131.

R. Thoppilan, D. De Freitas, J. Hall, N. Shazeer, A. Kulshreshtha, H.-T. Cheng, A. Jin, T. Bos, L. Baker, Y. Du, et al. Lamda: Language models for dialog applications. arXiv preprint arXiv:2201.08239, 2022.

A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and I. Polosukhin. Attention is all you need. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems, volume 30. Curran Associates, Inc., 2017. URL https://proceedings.neurips.cc/paper/2017/file/3f5ee243547dee91fbd053c1c4a845aa-Paper.pdf.

T. Wang, A. Roberts, D. Hesslow, T. L. Scao, H. W. Chung, I. Beltagy, J. Launay, and C. Raffel. What language model architecture and pretraining objective work best for zero-shot generalization?, 2022.

Z. Yang, Z. Dai, Y. Yang, J. Carbonell, R. Salakhutdinov, and Q. V. Le. XLNet: Generalized Autoregressive Pretraining for Language Understanding. Curran Associates Inc., Red Hook, NY, USA, 2019.

R. Zellers, A. Holtzman, Y. Bisk, A. Farhadi, and Y. Choi. HellaSwag: Can a machine really finish your sentence? In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4791–4800, Florence, Italy, July 2019. Association for Computational Linguistics. doi: 10.18653/v1/P19-1472. URL https://aclanthology.org/P19-1472.

W. Zhu, Z. Hu, and E. P. Xing. Text infilling. CoRR, abs/1901.00158, 2019. URL http://arxiv.org/abs/1901.00158.
A Architecture and datasets

We use 8 causal transformer decoder models [Vaswani et al., 2017] with similar architecture, optimization hyperparameters, and encoding to Codex and GPT-3 [Chen et al., 2021, Brown et al., 2020]. The main architectural details of our models are summarized in Table 3. The only architectural modification we introduce is the use of relative attention [Shaw et al., 2018, Dai et al., 2019] rather than learned positional embeddings. This increases the parameter count negligibly but leads to improved performance. We also increase the learning rates of our three largest models by a factor of 2 for improved final performance, as it is known that the GPT-3 series of models uses rather conservative choices of learning rates. The context size for all the models is 2048.

We train our code models on the same dataset that was used to train Codex, which is a 159 GB Python dataset scraped in May 2020. As such, we expect no train set contamination from the subsequent public release of HumanEval. Similar to GPT-3 and unlike Codex, we train our models from scratch from a random initialization. All models from the main scans are trained for 100B tokens irrespective of size. Due to this fixed token budget, we expect our largest models to be undertrained [Hoffmann et al., 2022] and to benefit significantly from longer training. For our natural language models, we use the same dataset as was used in GPT-3 [Brown et al., 2020], the details of which are described in Section 2.2 of that paper.

Model Name   n_param   n_ne    n_layers   d_model   n_heads   d_head   Batch Size   Learning Rate
XXS          50M       11M     6          384       6         64       0.5M         1.6 × 10⁻³
XS           77M       26M     8          512       8         64       0.5M         1.4 × 10⁻³
Small        164M      87M     12         768       12        64       0.5M         6.0 × 10⁻⁴
Medium       411M      308M    24         1024      16        64       0.5M         3.0 × 10⁻⁴
Large        844M      689M    24         1536      16        96       0.5M         2.5 × 10⁻⁴
XL           1.4B      1.2B    24         2048      16        128      1M           4.0 × 10⁻⁴
2.8B         2.8B      2.6B    32         2560      32        80       1M           3.2 × 10⁻⁴
6.9B         6.9B      6.5B    32         4096      32        128      2M           2.4 × 10⁻⁴

Table 3: The model architecture for our suite of models. The 6 largest models follow a similar architecture to models Small to 6.7B in the GPT-3 paper. The differences between the tables are due to minor calculation errors and typos in Table 2.1 of that paper. The n_param column has the total number of parameters in each model, while the n_ne column has the number of parameters excluding the embedding and unembedding layers. Following [Kaplan et al., 2020], we use the number of non-embedding parameters in our scaling plots. We do not tie the weights in the embedding and unembedding layers.
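As a back-of-the-envelope check on the n_ne column above (our own approximation, not a calculation from the paper), non-embedding parameters of a standard decoder block are roughly 12 · n_layers · d_model², since attention contributes about 4 d² and the MLP about 8 d² per layer:

def approx_non_embedding_params(n_layers: int, d_model: int) -> float:
    # Rough rule of thumb: ~4 d^2 (attention) + ~8 d^2 (MLP) per layer.
    return 12 * n_layers * d_model ** 2

for name, n_layers, d_model, reported in [
    ("Small", 12, 768, 87e6),
    ("XL", 24, 2048, 1.2e9),
    ("6.9B", 32, 4096, 6.5e9),
]:
    est = approx_non_embedding_params(n_layers, d_model)
    print(f"{name}: ~{est / 1e9:.2f}B estimated vs {reported / 1e9:.2f}B reported")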
B Scaling trends for FIM rate ablations

In Section 4.2, we see a higher FIM rate improving the FIM performance of our models without impacting the original capabilities. This conclusion was based on the learning curves of HumanEval and light random span infilling pass rates measured with a small number of samples during pretraining. To further demonstrate this claim, we train a series of models for 50B tokens with FIM rates of 0, 0.25, 0.5, 0.75, 0.9, and 1.0. In Figures 12 and 13, we present the model scaling trends of perplexity and sampling evaluation when different FIM rates are used. Again, we find that transforming a high fraction of training data into FIM does not result in a degradation of the original capabilities as measured by the test loss and HumanEval pass rate. The only noticeable degradation is observed in perplexity evaluation at a 100% FIM rate.

As for FIM capabilities, increasing the FIM rate yields a significant improvement on the infilling benchmarks and can change the slope of the model scaling trends of pass rates. However, a high FIM rate does not lead to a commensurate reduction in FIM losses, which corroborates that perplexities do not always capture real-world performance.

Figure 12: Comparison of model scaling trends of perplexity with varying FIM rates. (Panels, over non-embedding parameters: left-to-right loss; FIM loss; masked FIM loss.) Left-to-right loss does not have a noticeable degradation unless a FIM rate of 100% is used (left). We also find that the FIM losses are similar to one another when the model is trained with some FIM transformations (middle and right).

Figure 13: Comparison of model scaling trends of sampling evaluation with varying FIM rates. (Panels, over non-embedding parameters: HumanEval; single-line infilling; multi-line infilling; random span infilling.) While increasing the FIM rate has no effect on HumanEval, it does result in consistent gains on the infilling benchmarks, with no noticeable improvement after 90% FIM.

At first glance, it may seem counterintuitive that left-to-right models can solve a nontrivial number of problems in the single- and multi-line benchmarks. This is not a bug, but a feature. We sample in SPM mode, and some line-based infilling problems have empty or extraneous suffixes. To obtain these results, HumanEval was evaluated with temperature 0.8 and 500 samples per task to reduce variance. All infilling benchmarks, having many more problems than HumanEval, were evaluated with temperature 0.2 and 200 samples per task.

C Details of FIM implementation

When FIM is applied at the document level before packing, both character-level and token-level FIM are straightforward to implement. We simply choose two positions at random to break a document into three sections and format them as a FIM document. Only the order of encoding and splitting changes, as shown in the Python pseudocode below (Vocab, its sentinel method, and randomly_split are assumed helpers):

from typing import List

def token_level_psm_fim(document: str, vocab: Vocab) -> List[int]:
    # Encode first, then split the token sequence at two random positions.
    tokens = vocab.encode(document)
    prefix, middle, suffix = randomly_split(tokens)
    return [
        vocab.sentinel("prefix"), *prefix,
        vocab.sentinel("suffix"), *suffix,
        vocab.sentinel("middle"), *middle,
    ]

def character_level_psm_fim(document: str, vocab: Vocab) -> List[int]:
    # Split the raw string first, then encode each section separately.
    prefix, middle, suffix = randomly_split(document)
    return [
        vocab.sentinel("prefix"), *vocab.encode(prefix),
        vocab.sentinel("suffix"), *vocab.encode(suffix),
        vocab.sentinel("middle"), *vocab.encode(middle),
    ]

In contrast, applying the transformation after packing and chunking, as in context-level FIM, can be somewhat tricky depending on the choice of middle span. As previously mentioned in Section 3, the input context to the model is first split around the <EOT> token so that we get back individual documents. At this point, these documents are already tokenized, so applying FIM at the token level is straightforward. To transform data in character space for context-level FIM, the tokenized documents have to be decoded back into strings before FIM augmentation. Depending on the vocabulary, some care has to be given to ensure decoding does not introduce any spurious characters into training. For example, utf-8 characters are encoded as multiple tokens with a BPE vocabulary; they can result in fragments from chunking and fail to decode. To prevent unforeseen errors midway through training, we encourage checking for these fragments at the beginning or end of a context and removing them. After the transformed documents are encoded and joined back, the resulting context can be longer or shorter than the original, unaugmented context for context- and character-level FIM. For this reason, we recommend trimming or padding the transformed context to the model context length. A sketch of this pipeline is given below.
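The following is a minimal sketch of the context-level, character-level FIM pipeline just described, under simplifying assumptions: strings stand in for token sequences, a literal <EOT> string is used as the separator, and padding is done with spaces. It shows the split-transform-rejoin-trim structure, not the production implementation.

import random

EOT = "<EOT>"
PRE, SUF, MID = "<PRE>", "<SUF>", "<MID>"
CONTEXT_LEN = 2048  # characters here; tokens in the real implementation
FIM_RATE = 0.5

def char_level_fim(document: str) -> str:
    i, j = sorted(random.sample(range(len(document) + 1), 2))
    return f"{PRE}{document[:i]}{SUF}{document[j:]}{MID}{document[i:j]}"

def context_level_fim(context: str) -> str:
    # 1) Split the packed context back into documents around <EOT>.
    docs = context.split(EOT)
    # 2) FIM-transform each document independently with probability FIM_RATE.
    out = [char_level_fim(d) if d and random.random() < FIM_RATE else d
           for d in docs]
    # 3) Rejoin; the result may be longer or shorter than the original.
    joined = EOT.join(out)
    # 4) Trim or pad back to the model context length.
    return joined[:CONTEXT_LEN].ljust(CONTEXT_LEN)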
D Details of SPM encoding

As mentioned in Section 3, in SPM we use the ordering suffix, prefix, middle. In this section, we briefly discuss the choices regarding the sentinel tokens in SPM mode. A natural choice of encoding for SPM data would be to use

<SUF> ∘ Enc(suffix) ∘ <PRE> ∘ Enc(prefix) ∘ <MID> ∘ Enc(middle) ∘ <EOT>.   (SPM variant 1)

However, the encoding of SPM we use in this work is

<PRE> ∘ <SUF> ∘ Enc(suffix) ∘ <MID> ∘ Enc(prefix) ∘ Enc(middle) ∘ <EOT>.   (SPM variant 2)

The reason that we do not use the former is that it creates a separation between PSM and SPM, which may result in less transfer between SPM and PSM. To understand why, note that with the second variant, SPM data occurs naturally as part of PSM training: when we split a document uniformly at random, sometimes the chosen prefix will be empty. This is the reason pure PSM runs achieve strong performance when evaluated in SPM mode, as in Table 1. Despite this, we note that the first SPM variant has its own advantages. In particular, it can be stronger in the handling of subtokens at the end of the prefix. Hence, the choice of which variant of SPM to use may depend on the application in mind. As such, especially when training in pure SPM mode, it could be preferable to use the former, simpler form. However, in this work, due to our emphasis on joint training of PSM and SPM and to maximize transfer between them, we opt for the second variant.

E Random span infilling benchmark

Fried et al. [2022] introduced the single-line and multi-line infilling benchmarks based on HumanEval, which prove valuable for measuring FIM performance. One limitation of these benchmarks is that the middle section is selected based on lines and does not capture more general use cases in the wild. We created a third infilling benchmark by choosing the middle span from two random positions in the canonical solution. In this section, we show some examples of these tasks so the reader can get a feel for the new benchmark. The goal is to predict the highlighted span.

from typing import List

def has_close_elements(numbers: List[float], threshold: float) -> bool:
    """ Check if in given list of numbers, are any two numbers closer to each
    other than given threshold. """
    for idx, elem in enumerate(numbers):
        for idx2, elem2 in enumerate(numbers):
            if idx != idx2:
                distance = abs(elem - elem2)
                if distance < threshold:
                    return True
    return False

Here, for the model to pass, it needs to know that 1) the variable distance is not defined, 2) the prefix ends in a subtoken and not handling this will result in an indentation error, and 3) the completion has to stop in-line when the difference is calculated.

def rounded_avg(n, m):
    """You are given two positive integers n and m, and your task is to compute
    the average of the integers from n through m (including n and m).
    Round the answer to the nearest integer and convert that to binary.
    If n is greater than m, return -1.
    Example:
    rounded_avg(1, 5) => "0b11"
    rounded_avg(7, 5) => -1
    rounded_avg(10, 20) => "0b1111"
    rounded_avg(20, 33) => "0b11010"
    """
    if m < n:
        return -1
    summation = 0
    for i in range(n, m+1):
        summation += i
    return bin(round(summation/(m - n + 1)))
This is a slightly more difficult example where the missing section spans multiple lines and ends in a subtoken, which would break all previous works that use BPE encoding and token-based FIM. Use cases like this can happen in coding assistants when the user does not like the current implementation and quickly deletes an approximate span they want replaced by a code model. Because we create random span infilling tasks uniformly at random, this naturally captures problems of varying difficulty and corner cases that could happen in practice. We picked 10 random tasks per problem in HumanEval because the resulting 1640 tasks yielded a good balance between reducing evaluation variance and sampling time.

F Dynamics and learning curves of finetuning

To further build intuition about the results in Section 5, it is instructive to look at the dynamics of our infilling evaluation benchmarks during finetuning. This is presented in Figure 14. We observe that performance on the ordinary HumanEval benchmark degrades significantly at the beginning of finetuning, especially when using higher learning rates, but it catches up to similar levels as pretraining by the end of training. On the other hand, performance on random span infilling light starts out at zero, as expected, and slowly rises during finetuning.

Figure 14: The dynamics of HumanEval and random span infilling light during finetuning, for (a) 25B and (b) 50B tokens of FIM finetuning. The legend corresponds to the fraction of the finetuning learning rate relative to the pretraining learning rate. The results here are with a FIM rate of 0.9; we omit the similar dynamics plots with a FIM rate of 0.5 for brevity.

G Top models comparison

In this section, we compare the performance of the current best infilling models on the single-line, multi-line, and random span infilling benchmarks. The results are reported in Table 4. We note that the numbers for InCoder in this table are self-reported numbers from the paper and were not independently evaluated in our framework. It is possible that minor differences in implementation between our evaluation frameworks may result in slight discrepancies.

Model              Single-line infilling   Multi-line infilling   Random span infilling
FIM50              0.730                   0.406                  0.521
FIM90              0.751                   0.441                  0.551
InCoder            0.690                   0.386                  N/A
code-davinci-002   0.916                   0.699                  0.742

Table 4: Comparison of our 6.9B parameter (6.5B non-embedding parameters) FIM models trained with a FIM rate of 50% and 90% for 100B tokens against the InCoder model of similar size (6.7B) and code-davinci-002, on the three main infilling benchmarks. All the FIM results are obtained in the SPM mode. We evaluated our models and code-davinci-002 using 100 samples per task with a sampling temperature of 0.2.
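For reference, pass rates of the kind reported in Table 4 are typically computed from n samples per task with the unbiased pass@k estimator of Chen et al. [2021]; the following is a minimal sketch of that estimator (the paper's exact evaluation harness is not shown, and the example numbers are hypothetical):

import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    # Unbiased pass@k estimator from Chen et al. [2021].
    # n: samples drawn for a task, c: samples passing the unit tests,
    # k: budget. Returns P(at least one of k randomly chosen samples passes).
    if n - c < k:
        return 1.0
    return 1.0 - np.prod(1.0 - k / np.arange(n - c + 1, n + 1))

# Hypothetical example: 100 samples per task, 37 of which pass the tests.
print(pass_at_k(n=100, c=37, k=1))  # for k=1 this reduces to c/n = 0.37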
H Qualitative evaluation

Previously, we measured pass rates on coding benchmarks to assess the infilling capability. In this section, we qualitatively evaluate samples to understand the strengths and areas of improvement for FIM. Overall, we find that infilling works better in the code domain than in language. However, as previously motivated, infilling is generally a more difficult task than just extending a prefix. We exemplify these challenges and show possible mitigations.

H.1 Successful infilling examples

FIM enables a model to process information from both before and after the point of generation. This unlocks new capabilities that previously required specialized models finetuned on specific tasks. For example, unlike Codex [Chen et al., 2021], which trained a separate docstring model, we now have a single model that can infer the import modules, function names, arguments, docstrings, definitions, and more. We show one such example below that is impossible to complete unless the model can read the entire source code. This example is also interesting in that the prefix "from sym" and the suffix both contain subtokens, which are known to cause traditional language models trained without techniques like stochastic BPE [Provilkov et al., 2019] to fail.

from sympy import isprime

def largest_prime_factor(n):
    """ Return the largest prime factor of n. """
    ans = 1
    for num in range(2, n + 1):
        if n % num == 0 and isprime(num):
            ans = num
    return ans

The benefits are not limited to coding. The model can adapt to the existing writing style and complete the passage in a natural way that takes the ending into consideration.

Dolphins are very intelligent animals. They are mammals and breathe air. They live in the sea and are related to whales and porpoises. Dolphins are very playful animals.

The commercial diver finally thought he'd snagged a big catch when he saw something white. But then he quickly realized it wasn't a fish -- he was wrangling an alligator.

Wikipedia is a free, web-based, collaborative, multilingual encyclopedia. It is overseen by the nonprofit Wikimedia Foundation. Wikipedia uses a collaborative software known as wiki that facilitates the creation and development of articles.

H.2 Limitations

Difficult prompts. Unlike completing text from the end, infilling needs to infer the missing span that connects the prefix to the suffix. When the suffix is completely unrelated, the model can generate very long middle sections. We consider this behavior the model's attempt at coming up with a plausible trajectory that joins the ending. Because the context size is limited, the model usually fails to join. However, given that even people have trouble infilling some of these prompts in a short passage, this failure demonstrates how challenging a task FIM can be. Below, we show one such difficult prompt where the model typically fails to connect entirely or to join in a seamless way. Even when the model writes a seemingly plausible middle section, the quality can often vary.

The dentist looked me in the eyes and said, "I'm going to have to take all of your teeth out." I was stunned. I said, "All my teeth? Isn't there something else we could do?" He said, "No, I'm afraid not."

No one can predict the future. The Ottomans were defeated in World War I and the French were defeated at Waterloo.

Deciding when to stop. The model is trained to predict the <EOT> token when it thinks it has joined the suffix.
Even when the prompts are seemingly straightforward, deciding when to end can still be a challenge in practice. Because there are many equally valid completions of varying lengths, the probability of outputting the <EOT> is discounted by other, longer candidates and is often smaller than expected. This is further exacerbated by the fact that the terminal symbol can simply be missed due to sampling. This results in a behavior where the model does not seem to end in a timely manner and generates valid but spurious content in the middle. In the process, the model can choose to write its own ending to the prefix, effectively ignoring the given suffix.

Dogs are friendly animals. Koalas are pleasant animals. Monkeys are playful animals. Whales are enormous animals. Owls are wise animals. Penguins are graceful animals. Crocodiles are ferocious animals.

While the general problem of not knowing when to stop applies to normal left-to-right completion as well, it has not been as big a problem as in infilling because there is no constraint to join the suffix.

Repetition. When the model fails to generate an <EOT> and copies the suffix, the model's ability to match patterns leads it to lock on and repeat the prompt indefinitely. Surprisingly, even large models are susceptible to this mode of failure. The example below ends with "the the heart," because the model has failed to generate the terminal symbol and is still in the middle of filling in the missing span, which unfortunately will not stop.

The way is not in the sky. The way is in the heart. The way is not in the sky. The way is in the heart. The way is not in the sky. The way is in the heart. The way is not in the sky. The way is in the the heart.

H.3 Mitigations

Like GPT-3 [Brown et al., 2020], where the performance depends on the quality of prompts, some of the failures in the earlier sections can be alleviated with prompt engineering. Namely, providing hints to constrain the output can dramatically improve the model's ability to generate the <EOT> token and connect to the suffix within a reasonable token budget, as the model has a more concrete understanding of how long the middle section should be. One such idea is to provide examples both in the beginning and the end with numbered items. This makes the model internally keep track of the position, pay attention to the desired prefix and suffix, and generally abstain from generating spurious content, as shown below. Providing leading examples alone without any explicit cues can often worsen the problem because it does not resolve the ambiguity in whether the model should join to the beginning of the suffix or consider it as part of a new example.

1. Dogs are friendly animals.
2. Koalas are sleepy animals.
3. Lions are regal animals.

Section 1:
1. The way is not in the sky. The way is in the heart.
2. Peace comes from within. Do not seek it without.

Section 2:

It is important to note that the numbered few-shot prompting helps considerably but does not completely fix the problem, as the model can still accidentally start a new list of items. In general, as the model can simply miss sampling the <EOT> token, we recommend generating multiple samples and preferring samples that end with <EOT>, as this increases the chance of choosing a sample that actually joins the ending. When multiple samples end in <EOT>, they can be reranked by likelihood or other heuristics of interest. We call this EOT-aware best-of-n sampling. A sketch is given below.
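The following is a minimal sketch of EOT-aware best-of-n sampling as just described. The sample_middle callable stands in for a model's sampling-and-scoring API and is an assumption for illustration, not a real library interface.

from typing import Callable, List, Tuple

EOT = "<EOT>"

def eot_aware_best_of_n(
    prompt: str,
    sample_middle: Callable[[str], Tuple[str, float]],  # assumed model API
    n: int = 16,                                        # returns (text, logprob)
) -> str:
    # Draw n candidate middles; prefer those that emitted <EOT>, i.e. that
    # actually claim to connect to the suffix, then rerank by likelihood.
    candidates: List[Tuple[bool, float, str]] = []
    for _ in range(n):
        text, logprob = sample_middle(prompt)
        candidates.append((text.endswith(EOT), logprob, text))
    # EOT-terminated samples win; ties broken by highest log-probability.
    best = max(candidates, key=lambda c: (c[0], c[1]))
    return best[2].removesuffix(EOT)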
7fbd9ea3-d149-4687-87ca-0ffe4077160a
trentmkelly/LessWrong-43k
LessWrong
Economic Class Social class is fossilized wealth. There are three economic classes in the USA. * If you do physical work then you belong to the the "working class", "lower class", "blue collar" or simply "labor". * If you do nonphysical work then you belong to the "middle class" or "white collar". * If other people work for you then you belong to the "upper class" or "bourgeoisie". Labor The working class lives in the physical world. Blue collar problems are physical problems like injury, health, violence and broken machines. At the bottom of the working class you find mindless labor like agricultural labor, food service, cashiers and—increasingly—warehouses. (Petty crime is underclass.) College used to be a ticket out of the lower class. This path is increasingly difficult due to price increases, credential inflation and opaque acceptance criteria designed to keep the lower class in its place. The military does still function in its traditional role as a ticket into the middle class, but only if you get into the right specialties. In the middle of the working class are the skilled trades. All the traditional handyman job live here: plumber, roofer, electrician, drywall repair. Among women you can find lots of nursing. Retail sales belongs here too as do police officers. Rare occupations include jugglers, clowns, close-up magicians and other small-scale entertainers. (Media-based entertainers belong to the middle class.) Psychics and priests serve the working class but are themselves middle class. Attempting to break into the middle class can be risky due to the sticker price of college plus the lost wages. If you go into the military there's no guarantee they'll teach you anything useful. A more prudent goal may be to break into the skilled trades. Blue collar work doesn't require credentials the way white collar work does. If you can do something then employers will generally allow to do it. If you aren't allowed to do it then it's because there's a union rule or govern
6c68f6c6-b7df-49a9-827e-5ebde2c71471
trentmkelly/LessWrong-43k
LessWrong
Where Physics Meets Experience Followup to:  Decoherence, Where Philosophy Meets Science Once upon a time, there was an alien species, whose planet hovered in the void of a universe with laws almost like our own.  They would have been alien to us, but of course they did not think of themselves as alien.  They communicated via rapid flashes of light, rather than sound.  We'll call them the Ebborians. Ebborians reproduce by fission, an adult dividing into two new individuals.  They share genetic material, but not through sexual recombination; Ebborian adults swap genetic material with each other.  They have two eyes, four legs, and two hands, letting a fissioned Ebborian survive long enough to regrow. Human DNA is built in a double helix; unzipping the helix a little at a time produces two stretches of single strands of DNA.  Each single strand attracts complementary bases, producing a new double strand.  At the end of the operation, a DNA double helix has turned into two double helices.  Hence earthly life. Ebborians fission their brains, as well as their bodies, by a process something like how human DNA divides. Imagine an Ebborian brain as a flat sheet of paper, computing in a way that is more electrical than chemical—charges flowing down conductive pathways. When it's time for an Ebborian to fission, the brain-paper splits down its thickness into two sheets of paper.  Each new sheet is capable of conducting electricity on its own.  Indeed, the Ebborian(s) stays conscious throughout the whole fissioning process.  Over time, the brain-paper grows thick enough to fission again. Electricity flows through Ebborian brains faster than human neurons fire.  But the Ebborian brain is constrained by its two-dimensionality.  An Ebborian brain-paper must split down its thickness while retaining the integrity of its program.  Ebborian evolution took the cheap way out: the brain-paper computes in a purely two-dimensional way.  The Ebborians have much faster neuron-equivalents, but they are far less i
aa33be0a-2688-41b7-a1a7-fc4d5859f7ef
trentmkelly/LessWrong-43k
LessWrong
Introducing BenchBench: An Industry Standard Benchmark for AI Strength Recent progress in AI has led to rapid saturation of most capability benchmarks - MMLU, RE-Bench, etc. Even much more sophisticated benchmarks such as ARC-AGI or FrontierMath see incredibly fast improvement, and all that while severe under-elicitation is still very salient. As has been pointed out by many, general capability involves more than simple tasks such as this, that have a long history in the field of ML and are therefore easily saturated. Claude Plays Pokemon is a good example of something somewhat novel in terms of measuring progress, and thereby benefited from being an actually good proxy of model capability. Taking inspiration from examples such as this, we considered domains of general capacity that are even further decoupled from existing exhaustive generators. We introduce BenchBench, the first standardized benchmark designed specifically to measure an AI model’s bench-pressing capability. Why Bench Press? Bench pressing uniquely combines fundamental components of intelligence such as motor control, strategic resource allocation (energy and force), and resilience to fatigue. Just as text-based benchmarks serve as proxies for cognitive reasoning, bench pressing provides an objective measure of embodied intelligence. Benchmark Methodology BenchBench consists of three primary tasks: 1. One-Rep Max (1RM): Measures the maximal weight an AI model can successfully bench press once, indicating peak strength. 2. Strength Endurance: Evaluates the number of repetitions an AI can perform at 70% of its calculated 1RM, reflecting sustained performance and efficiency. 3. Form Fidelity: Assessed via advanced pose estimation algorithms, penalizing AI models for suboptimal bar path, uneven weight distribution, or failure to lock out fully. 4. Pass@16: Measures the ability of different AIs to lift weights higher than their 1RM[1] by giving them 16 chances in quick succession. BenchBench also ensures reproducibility through standardized equipment: all tests
Mesa-Optimization: Explain it like I'm 10 Edition

There are a couple of explanations of mesa-optimization available. I think Rob Miles' video on the topic is excellent, but existing written descriptions don't make the concept simple enough to be understood thoroughly by a broad audience. This is my attempt at doing that, for those who prefer written content over video.

Summary

Mesa-optimization is an important concept in AI alignment. Sometimes, an optimizer (like gradient descent, or evolution) produces another optimizer (like complex AIs, or humans). When this happens, the second optimizer is called a 'mesa-optimizer'; problems with the alignment (safety) of the mesa-optimizer are called 'inner alignment problems'.

What is an optimizer?

I'll define an optimizer as something that looks through a 'space' of possible things and behaves in a way that 'selects' some of those things. (A minimal code sketch of this "search and select" loop appears at the end of this post.) One sign that there might be an optimizer at work is that weird things are happening, by which I mean 'things that are very unlikely if the system were behaving randomly'. For instance, humans do things all the time that would be very unlikely to happen if we behaved randomly, and humans would certainly not exist at all if evolution worked by just making new organisms with totally random genes.

The human brain

The human brain is an optimizer -- it looks through the different things you could do and picks ones that get you things you like.

For instance, you might go to the shop, get ice cream, pay for the ice cream, and then come back. If you think about it, that's a really complex series of actions -- if you behaved completely randomly, you would never ever end up with ice cream. Your brain has searched through a lot of possible actions you could take (walking somewhere else, dancing in your living room, moving just your left foot up 30 degrees) and expects none of them will get you ice cream -- and has then selected one of the very few paths that will get you what you want.

Evolution

A stranger example of an optimizer is
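Here is that sketch: a toy optimizer in exactly the sense defined above, searching a space of candidate actions and selecting whichever scores best. The action space and the scoring function are made-up illustrations, not anything from the mesa-optimization literature:

```python
import random

def score(action: float) -> float:
    """How much we 'like' an action; peak preference at action = 3.0
    (say, 'walk to the ice-cream shop')."""
    return -(action - 3.0) ** 2

def optimize(n_candidates: int = 10_000) -> float:
    """Search the space of possible actions, select the best one:
    a bare-bones optimizer."""
    candidates = [random.uniform(-10.0, 10.0) for _ in range(n_candidates)]
    return max(candidates, key=score)

best = optimize()
# Reliably lands near 3.0 - very unlikely if actions were chosen at random.
print(f"selected action: {best:.3f}")
```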
Do Earths with slower economic growth have a better chance at FAI?

I was raised as a good and proper child of the Enlightenment who grew up reading The Incredible Bread Machine and A Step Farther Out, taking for granted that economic growth was a huge in-practice component of human utility (plausibly the majority component if you asked yourself what was the major difference between the 21st century and the Middle Ages) and that the "Small is Beautiful" / "Sustainable Growth" crowds were living in impossible dreamworlds that rejected quantitative thinking in favor of protesting against nuclear power plants.

And so far as I know, such a view would still be an excellent first-order approximation if we were going to carry on into the future by steady technological progress: Economic growth = good.

But suppose my main-line projection is correct and the "probability of an OK outcome" / "astronomical benefit" scenario essentially comes down to a race between Friendly AI and unFriendly AI. So far as I can tell, the most likely reason we wouldn't get Friendly AI is the total serial research depth required to develop and implement a strong-enough theory of stable self-improvement, with a possible side order of failing to solve the goal transfer problem. Relative to UFAI, FAI work seems like it would be mathier and more insight-based, where UFAI can more easily cobble together lots of pieces. This means that UFAI parallelizes better than FAI. UFAI also probably benefits from brute-force computing power more than FAI. Both of these imply, so far as I can tell, that slower economic growth is good news for FAI; it lengthens the deadline to UFAI and gives us more time to get the job done.

I have sometimes thought half-jokingly and half-anthropically that I ought to try to find investment scenarios based on a continued Great Stagnation and an indefinite Great Recession where the whole developed world slowly goes the way of Spain, because these scenarios would account for a majority of surviving Everett branches.

Roughly, it seems to me li
Is CDT with precommitment enough?

Logical decision theory was introduced (in part) to resolve problems such as Parfit's hitchhiker. I heard an argument that there is no reason to introduce a new decision theory - one can just take causal decision theory and precommit to doing whatever is needed on such problems (e.g. pay the money once in the city). This seems dubious given that people spent so much time developing logical decision theory. However, I cannot formulate a counterargument. What is wrong with the claim that CDT with precommitment is the "right" decision theory?
What AI companies can do today to help with the most important century

I’ve been writing about tangible things we can do today to help the [most important century](https://www.cold-takes.com/most-important-century/) go well. Previously, I wrote about [helpful messages to spread](https://www.cold-takes.com/spreading-messages-to-help-with-the-most-important-century/) and [how to help via full-time work](https://www.cold-takes.com/jobs-that-can-help-with-the-most-important-century/). This piece is about what major AI companies can do (and not do) to be helpful. By “major AI companies,” I mean the sorts of AI companies that are advancing the state of the art, and/or could play a major role in how very powerful AI systems end up getting used.[1](https://www.cold-takes.com/p/f19236c6-34b8-4487-a458-0fc8fe00fb37/#fn1)

This piece could be useful to people who work at those companies, or people who are just curious. Generally, these are not pie-in-the-sky suggestions - I can name[2](https://www.cold-takes.com/p/f19236c6-34b8-4487-a458-0fc8fe00fb37/#fn2) more than one AI company that has at least made a serious effort at each of the things I discuss below (beyond what it would do if everyone at the company were singularly focused on making a profit).[3](https://www.cold-takes.com/p/f19236c6-34b8-4487-a458-0fc8fe00fb37/#fn3)

I’ll cover:

* Prioritizing alignment research, strong security, and safety standards (all of which I’ve written about [previously](https://www.cold-takes.com/how-we-could-stumble-into-ai-catastrophe/#we-can-do-better)).
* Avoiding hype and acceleration, which I think could leave us with less time to prepare for key risks.
* Preparing for difficult decisions ahead: setting up governance, employee expectations, investor expectations, etc. so that the company is capable of doing non-profit-maximizing things to help avoid catastrophe in the future.
* Balancing these cautionary measures with conventional/financial success.
* I’ll also list a few things that some AI companies present as important, but which I’m less excited about: censorship of AI models, open-sourcing AI models, raising awareness of AI with governments and the public. I don’t think all these things are necessarily *bad*, but I think some are, and I’m skeptical that any are crucial for the [risks I’ve focused on](https://www.cold-takes.com/tag/implicationsofmostimportantcentury/).

I previously laid out a summary of how I see the major risks of advanced AI, and four key things I think can help (**alignment research**; **strong security**; **standards and monitoring**; **successful, careful AI projects**). I won’t repeat that summary now, but it might be helpful for orienting you if you don’t remember the rest of this series too well; click [here](https://www.cold-takes.com/jobs-that-can-help-with-the-most-important-century/#recap) to read it.

Some basics: alignment research, strong security, safety standards
------------------------------------------------------------------

First off, AI companies can contribute to the “things that can help” I listed above:

* They can prioritize **alignment research** (and [other technical research](https://www.cold-takes.com/jobs-that-can-help-with-the-most-important-century/#other-technical-research), e.g. threat assessment research and misuse research).
  + For example, they can prioritize hiring for safety teams, empowering these teams, encouraging their best flexible researchers to work on safety, aiming for high-quality research that targets [crucial challenges](https://www.cold-takes.com/ai-safety-seems-hard-to-measure/), etc.
  + It could also be important for AI companies to find ways to **partner with outside safety researchers rather than rely solely on their own teams.** As discussed [previously](https://www.cold-takes.com/jobs-that-can-help-with-the-most-important-century/#SafetyCollaborations), this could be challenging. But I generally expect that AI companies that care a lot about safety research partnerships will find ways to make them work.
* They can help work toward a **standards and monitoring** regime. E.g., they can do their own work to come up with standards like "An AI system is dangerous if we observe that it's able to \_\_\_, and if we observe this we will take safety and security measures such as \_\_\_\_." They can also consult with others developing safety standards, voluntarily self-regulate beyond what’s required by law, etc.
* They can prioritize **strong security**, beyond what normal commercial incentives would call for.
  + It could easily take years to build secure enough systems, processes and technologies for very high-stakes AI.
  + It could be important to hire not only people to handle everyday security needs, but people to experiment with more exotic setups that could be needed later, as the incentives to steal AI get stronger.

**The challenge of securing dangerous AI**

In [Racing Through a Minefield](https://www.cold-takes.com/racing-through-a-minefield-the-ai-deployment-problem/), I described a "race" between cautious actors (those who take [misalignment risk](https://www.cold-takes.com/why-would-ai-aim-to-defeat-humanity/) seriously) and incautious actors (those who are focused on deploying AI for their own gain, and aren't thinking much about the dangers to the whole world). Ideally, cautious actors would collectively have more powerful AI systems than incautious actors, so they could take their time doing [alignment research](https://www.cold-takes.com/high-level-hopes-for-ai-alignment/) and [other things](https://www.cold-takes.com/racing-through-a-minefield-the-ai-deployment-problem/#defensive-deployment) to try to make the situation safer for everyone. But if incautious actors can steal an AI from cautious actors and rush forward to deploy it for their own gain, then the situation looks a lot bleaker. And unfortunately, it could be hard to protect against this outcome. It's generally [extremely difficult](https://www.cold-takes.com/ai-could-defeat-all-of-us-combined/#fn15) to protect data and code against a well-resourced cyberwarfare/espionage effort. An AI’s “weights” (you can think of this sort of like its source code, though [not exactly](https://www.cold-takes.com/p/97d2a7b1-af2d-4dd4-b679-5ea8bb41c47d#fn4)) are potentially very dangerous on their own, and hard to get extreme security for. Achieving enough cybersecurity could require measures, and preparations, well beyond what one would normally aim for in a commercial context.
**How standards might be established and become national or international**

I [previously](https://www.cold-takes.com/racing-through-a-minefield-the-ai-deployment-problem/#global-monitoring) laid out a possible vision on this front, which I’ll give a slightly modified version of here:

* Today’s leading AI companies could self-regulate by committing not to build or deploy a system that they can’t convincingly demonstrate is safe (e.g., see Google’s [2018 statement](https://www.theweek.in/news/sci-tech/2018/06/08/google-wont-deploy-ai-to-build-military-weapons-ichai.html), "We will not design or deploy AI in weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people”).
  + Even if some people at the companies would like to deploy unsafe systems, it could be hard to pull this off once the company has committed not to.
  + Even if there’s a lot of room for judgment in what it means to demonstrate an AI system is safe, having agreed in advance that [certain evidence](https://www.cold-takes.com/ai-safety-seems-hard-to-measure/) is *not* good enough could go a long way.
* As more AI companies are started, they could feel soft pressure to do similar self-regulation, and refusing to do so is off-putting to potential employees, investors, etc.
* Eventually, similar principles could be incorporated into various government regulations and enforceable treaties.
* Governments could monitor for dangerous projects using regulation and even overseas operations. E.g., today the US monitors (without permission) for various signs that other states might be developing nuclear weapons, and might try to stop such development with methods ranging from threats of sanctions to [cyberwarfare](https://en.wikipedia.org/wiki/Stuxnet) or even military attacks. It could do something similar for any AI development projects that are using huge amounts of compute and haven’t volunteered information about whether they’re meeting standards.

Avoiding hype and acceleration
------------------------------

It seems good for AI companies to **avoid unnecessary hype and acceleration of AI.** I’ve argued that [we’re not ready](https://www.cold-takes.com/spreading-messages-to-help-with-the-most-important-century/#were-not-ready-for-this) for transformative AI, and I generally tend to think that we’d all be better off if the world took *longer* to develop transformative AI. That’s because:

* I’m hoping general awareness and understanding of the key risks will rise over time.
* A lot of key things that could improve the situation - e.g., **alignment research**, **standards and monitoring**, and **strong security** - seem to be in very early stages right now.
* If too much money pours into the AI world too fast, I’m worried there will be lots of [incautious](https://www.cold-takes.com/racing-through-a-minefield-the-ai-deployment-problem/#basic-premises) companies racing to build transformative AI as quickly as they can, with little regard for the key risks.

By default, I generally think: “The fewer flashy demos and breakthrough papers a lab is putting out, the better.” This can involve tricky tradeoffs in practice (since AI companies generally want to be successful at recruiting, fundraising, etc.)

A couple of potential counterarguments, and replies:

First, some people think it's now "too late" to avoid hype and acceleration, given the amount of hype and investment AI is getting at the moment. I disagree.
It's easy to forget, in the middle of a media cycle, how quickly people can forget about things and move onto the next story once the bombs stop dropping. And there are plenty of bombs that still haven't dropped (many things AIs still can't do), and the level of investment in AI has tons of room to go up from here.

Second, I’ve sometimes seen arguments that hype is *good* because it helps society at large understand what’s coming. But unfortunately, as I wrote [previously](https://www.cold-takes.com/spreading-messages-to-help-with-the-most-important-century/#challenges-of-ai-related-messages), I'm worried that hype gives people a skewed picture.

* Some [key risks](https://www.cold-takes.com/why-would-ai-aim-to-defeat-humanity/) are hard to understand and take seriously.
* What's easy to understand is something like "AI is powerful and scary, I should make sure that people like me are the ones to build it!"
* Maybe [recent developments](https://twitter.com/sethlazar/status/1626257535178280960) will make people understand the risks better? One can hope, but I'm not counting on that just yet - I think AI misbehavior can be [given illusory "fixes,"](https://www.cold-takes.com/how-we-could-stumble-into-ai-catastrophe/#how-we-could-stumble-into-catastrophe-from-misaligned-ai) and probably will be. I am also generally skeptical that there's much hope of society adapting to risks as they happen, given the [explosive pace of change](https://www.cold-takes.com/most-important-century/) that I expect once we get powerful enough AI systems.

I discuss some more arguments on this point in a footnote.[4](https://www.cold-takes.com/p/f19236c6-34b8-4487-a458-0fc8fe00fb37/#fn4)

I don’t think it’s clear-cut that hype and acceleration are bad, but it’s my best guess.

Preparing for difficult decisions ahead
---------------------------------------

I’ve [argued](https://www.cold-takes.com/racing-through-a-minefield-the-ai-deployment-problem/) that AI companies might need to do “out-of-the-ordinary” things that don’t go with normal commercial incentives. Today, AI companies can be building a foundation for being able to do “out-of-the-ordinary” things in the future. A few examples of how they might do so:

**Public-benefit-oriented governance.** I think typical governance structures could be a problem in the future. For example, a standard corporation could be sued for *not* deploying AI that poses a risk of [global catastrophe](https://www.cold-takes.com/ai-could-defeat-all-of-us-combined/) - if this means a sacrifice for its bottom line. I’m excited about AI companies that are investing heavily in setting up governance structures - and investing in executives and board members - capable of making the hard calls well. For example:

* By default, if an AI company is a standard corporation, its leadership has legally recognized [duties](https://en.wikipedia.org/wiki/Fiduciary) to serve the interests of shareholders - not society at large. But an AI company can incorporate as a [Public Benefit Corporation](https://www.delawareinc.com/public-benefit-corporation/) or some other kind of entity (including a nonprofit!) that gives more flexibility here.
* By default, shareholders make the final call over what a company does. (Shareholders can replace members of the Board of Directors, who in turn can replace the CEO.) But a company can set things up differently (e.g., a [for-profit controlled by a nonprofit](https://openai.com/blog/openai-lp/)[5](https://www.cold-takes.com/p/f19236c6-34b8-4487-a458-0fc8fe00fb37/#fn5)).
It could pay off in lots of ways to make sure the final calls at a company are made by people focused on getting a good outcome for humanity (and legally free to focus this way).

**Gaming out the future.** I think it’s not too early for AI companies to be discussing how they would handle various [high-stakes situations](https://www.cold-takes.com/racing-through-a-minefield-the-ai-deployment-problem/).

* Under what circumstances would the company simply decide to stop training increasingly powerful AI models?
* If the company came to believe it was building very powerful, dangerous models, whom would it notify and seek advice from? At what point would it approach the government, and how would it do so?
* At what point would it be worth using extremely costly security measures?
* If the company had AI systems available that could do most of what humans can do, what would it *do* with these systems? Use them to do AI safety research? Use them to design better algorithms and continue making increasingly powerful AI systems? (More possibilities [here](https://www.cold-takes.com/racing-through-a-minefield-the-ai-deployment-problem/#defensive-deployment).)
* Who should be leading the way on decisions like these? Companies tend to employ experts to inform their decisions; who would the company look to for expertise on these kinds of decisions?

**Establishing and getting practice with processes for particularly hard decisions.** Should the company publish its latest research breakthrough? Should it put out a product that might lead to more [hype and acceleration](https://www.cold-takes.com/p/f19236c6-34b8-4487-a458-0fc8fe00fb37/#avoiding-hype)? What safety researchers should [get access to its models](https://www.cold-takes.com/jobs-that-can-help-with-the-most-important-century/#SafetyCollaborations), and how much access? AI companies face questions like this pretty regularly today, and I think it’s worth putting processes in place to consider the implications for the world as a whole (not just for the company’s bottom line). This could include assembling advisory boards, internal task forces, etc.

**Managing employee and investor expectations.** At some point, an AI company might want to make “out of the ordinary” moves that are good for the world but bad for the bottom line. E.g., choosing not to deploy AIs that could be very dangerous or very profitable. I wouldn’t want to be trying to run a company in this situation with lots of angry employees and investors asking about the value of their equity shares! It’s also important to minimize the risk of employees and/or investors leaking sensitive and potentially [dangerous](https://www.cold-takes.com/p/f19236c6-34b8-4487-a458-0fc8fe00fb37/#Box1) information. AI companies can prepare for this kind of situation by doing things like:

* Being selective about whom they hire and take investment from, and screening specifically for people they think are likely to be on board with these sorts of hard calls.
* Education and communications - making it clear to employees what kinds of dangerous-to-humanity situations might be coming up in the future, and what kinds of actions the company might want to take (and why).

**Internal and external commitments.** AI companies can make public and/or internal statements about how they would handle various tough situations, e.g. how they would determine when it’s too dangerous to keep building more powerful models.
I think these commitments should generally be non-binding (it’s hard to predict the future in enough detail to make binding ones). But in a future where maximizing profit conflicts with doing the right thing for humanity, a previously-made commitment could make it more likely that the company does the right thing.

Succeeding
----------

I’ve emphasized how helpful **successful, careful AI projects** could be. So far, this piece has mostly talked about the “careful” side of things - how to do things that a “normal” AI company (focused only on commercial success) wouldn’t, in order to reduce risks. But it’s also important to succeed at fundraising, recruiting, and generally staying relevant (e.g., capable of building cutting-edge AI systems). I don’t emphasize this or write about it as much because I think it’s the sort of thing AI companies are likely to be focused on by default, and because I don’t have special insight into how to succeed as an AI company. But it’s important, and it means that AI companies need to walk a sort of tightrope - constantly making tradeoffs between success and caution.

Some things I’m less excited about
----------------------------------

I think it’s also worth listing a few things that some AI companies present as important societal-benefit measures, but which I’m a bit more skeptical are crucial for reducing the risks I’ve [focused on](https://www.cold-takes.com/tag/implicationsofmostimportantcentury/).

* Some AI companies restrict access to their models so people won’t use the AIs to create pornography, misleading images and text, etc. I’m not necessarily against this and support versions of it (it depends on the details), but I mostly don’t think it is a key way to reduce the risks I’ve focused on. For those risks, the hype that comes from seeing a demonstration of a system’s capabilities could be even [more dangerous](https://www.cold-takes.com/p/f19236c6-34b8-4487-a458-0fc8fe00fb37/#avoiding-hype) than direct harms.
* I sometimes see people implying that open-sourcing AI models - and otherwise making them as broadly available as possible - is a key social-benefit measure. While there may be benefits in some cases, I mostly see this kind of thing as being negative (or at best neutral) in terms of the [risks I’m most concerned about](https://www.cold-takes.com/tag/implicationsofmostimportantcentury/).
  + I think it can contribute to [hype and acceleration](https://www.cold-takes.com/p/f19236c6-34b8-4487-a458-0fc8fe00fb37/#avoiding-hype), and could make it generally harder to enforce safety standards.
  + In the long run, I worry that AI systems could become extraordinarily powerful (more so than e.g. nuclear weapons), so I don’t think “Make sure everyone has access asap” is the right framework.
  + In addition to increasing dangers from misaligned AI, this framework could increase other dangers I’ve [written about previously](https://www.cold-takes.com/how-we-could-stumble-into-ai-catastrophe/#potential-catastrophes-from-aligned-ai).
* I generally don’t think AI companies should be trying to get governments to pay more attention to AI, for reasons I’ll get to in a future piece. (Forming relationships with policymakers could be good, though.)
When an AI company presents some decision as being for the benefit of humanity, I often ask myself, “Could this same decision be justified by just wanting to commercialize successfully?” For example, making AI models “safe” in the sense that they *usually behave as users intend* (including things like refraining from toxic language, chaotic behavior, etc.) can be important for commercial viability, but [isn’t necessarily good enough for the risks I worry about](https://www.cold-takes.com/why-would-ai-aim-to-defeat-humanity/#why-we-might-not-get-clear-warning-signs).

Footnotes
---------

1. Disclosure: my wife works at one such company ([Anthropic](https://anthropic.com/)) and used to work at another ([OpenAI](https://openai.com/)), and has equity in both. [↩](#fnref1)
2. Though I won’t, because I decided I don’t want to get into a thing about whom I did and didn’t link to. Feel free to give real-world examples in the comments! [↩](#fnref2)
3. Now, AI companies could sometimes be doing “responsible” or “safety-oriented” things in order to get good PR, recruit employees, make existing employees happy, etc. In this sense, the actions could be *ultimately* profit-motivated. But that would still mean there are *enough people who care about reducing AI risk that actions like these have PR benefits, recruiting benefits, etc.* That’s a big deal! And it suggests that if concern about AI risks (and understanding of how to reduce them) were more widespread, AI companies might do more good things and fewer dangerous things. [↩](#fnref3)
4. You could argue that it would be better for the world to develop extremely powerful AI systems *sooner*, for reasons including:
   * You might be pretty happy with the global balance of power between countries today, and be worried that it’ll get worse in the future. The latter could lead to a situation where the “wrong” government [leads the way on transformative AI](https://www.cold-takes.com/transformative-ai-issues-not-just-misalignment-an-overview/#power-imbalances).
   * You might think that the later we develop transformative AI, the more quickly everything will play out, because there will be more computing resources available in the world. E.g., if we develop extremely powerful systems tomorrow, there would only be so many copies we could run at once, whereas if we develop equally powerful systems in 50 years, it might be a lot easier for lots of people to run lots of copies. (More: [Hardware Overhang](https://aiimpacts.org/hardware-overhang/))

   A key reason I believe it’s best to avoid acceleration at this time is because it seems plausible (at least 10% likely) that transformative AI will be developed *extremely* soon - as in within 10 years of today. My impression is that many people at major AI companies tend to agree with this. I think this is a very scary possibility, and if this is the case, the arguments I give in the main text seem particularly important (e.g., many key interventions seem to be in a pretty embryonic state, and awareness of key risks seems low). A related case one could make for acceleration is “It’s worth accelerating things on the whole to increase the probability that the particular company in question succeeds” (more here: the [“competition” frame](https://www.cold-takes.com/making-the-best-of-the-most-important-century/#the-competition-frame)). I think this is a valid consideration, which is why I talk about tricky tradeoffs in the main text. [↩](#fnref4)
5. Note that my wife is a former employee of OpenAI, the company I link to there, and she owns equity in the company. [↩](#fnref5)
Non-axiomatic math reasoning with naive Bayes

ETA: this post is WRONG. Read on only if you like to find mistakes in other people's reasoning. Sorry.

So I've been thinking how to teach machines to reason about integers without running into Goedelian limitations. A couple days ago I suddenly got this painfully obvious idea that I'm sure others have already developed, but I don't know how to search for prior work, so I'm posting it here.

Here's a simple statement about the integers:

5 > 4

It's easy enough to check by direct calculation if your CPU can do arithmetic. Here's a more complex statement:

For any x, x + 1 > x

How do you check this? Let's denote the whole statement as S, and the individual statements "x + 1 > x" as S(x). The statements S(1), S(2) and so on are all directly checkable like the first one. For simplicity we can make the "naive Bayesian" assumption that they're all independent, so P(S) = P(S(1))*P(S(2))*... (ETA: this is shady, though it doesn't break the construction. See AlephNeil's comment and my reply for a slightly better way.) After manually checking that S(1) holds, we update its probability to 1, so our estimate of P(S) increases. Then we try S(2), etc. After a while P(S) will become as close to 1 as we want, if S is true. (A code sketch of this updating loop appears at the end of this post.)

We can also use a different quantifier:

There exists x such that x*x = 9

You can check this one by naive Bayesian reasoning too, except "exists" corresponds to probabilistic "or", whereas "for any" corresponded to probabilistic "and". So you check S(1), S(2) and so on, revising P(S) downward at each step, until you stumble on the right x and P(S) jumps to 1.

Here's a still more complex statement:

For any x, there exists y such that x ≥ y*y and x < (y+1)*(y+1)

How do you check this one? If we again denote it as S and the individual parts as S(x), we already know how to approximately check S(x) for each value of x, because it's a statement of the previous "level"! So we know how to check the whole S approximately,
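A minimal sketch of that updating loop, under the post's naive-independence assumption. Truncating the infinite product at a finite horizon, and the 0.9 prior per unchecked instance, are my simplifications for illustration, not part of the post:

```python
def p_universal(s_of_x, n_checked: int, horizon: int = 1000, prior: float = 0.9) -> float:
    """Approximate P(S) for S = 'for any x, S(x)', naively treating the
    instances S(1), S(2), ... as independent and truncating at `horizon`."""
    p = 1.0
    for x in range(1, horizon + 1):
        if x <= n_checked:
            if not s_of_x(x):
                return 0.0   # a single counterexample refutes S outright
            # a verified instance contributes a factor of P(S(x)) = 1
        else:
            p *= prior       # unchecked instances keep their prior
    return p

# S(x): x + 1 > x. Checking more instances pushes P(S) toward 1.
for n in (0, 500, 990, 1000):
    print(n, p_universal(lambda x: x + 1 > x, n))
```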
Neuron Activations to CLIP Embeddings: Geometry of Linear Combinations in Latent Space

This is my project for the AI Safety Fundamentals course.

Abstract

I use the Lucent library to produce images that are optimized to give maximum activation on some neuron (or, in my case, a linear combination of two neurons). Then I use the CLIP neural network to map those images into its embedding space. Then, I use dimensionality reduction and look for interesting geometry. Here is the code I used.

Getting images

Lucent already has a built-in way to combine objectives (in this case, activations of 2 neurons). But which objectives should it optimize for? At first, I thought to take a grid of x and y values for the linear combination

obj = x⋅neuron1 + y⋅neuron2

But this is not a natural way to do it. It doesn't matter whether obj1 = x⋅neuron1 + y⋅neuron2 is optimized or obj2 = 2⋅obj1 = 2x⋅neuron1 + 2y⋅neuron2; the result is going to be the same (because f(x) and 2f(x) have the same point of maximum). So I only need to look at linearly independent linear combinations, or directions. I am using the circular parametrization x = sin(ϕ), y = cos(ϕ). (A code sketch of this setup appears at the end of this post.)

Played back to back, those optimized images make a GIF (the result is flashy, so epilepsy warning; the GIF is at the end of the post). Four distinct phases can be noticed, which I assume are related to which neuron has the biggest (in absolute value) coefficient and whether it is being maximized or minimized.

CLIP embeddings

CLIP is a model that is used primarily to provide a shared embedding space for images and image descriptions. I use it just as a model which gives me a low-dimensional vector corresponding to the input image. But in order to visualize them, those vectors need dimensionality reduction. Two methods I used:

1. PCA. Using PCA to reduce 768 dimensions to 2, we get this image. It is circular, as to be expected, though it has 3 distinct clusters (top right, top left, and at the bottom).
2. Take the 2 dimensions with the biggest variance. This method results in the following image. Now it is tangled, but that is
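A minimal, runnable sketch of the circular sweep plus the PCA step. The `embed_for_direction` function below is a synthetic stand-in for the real render-with-Lucent-then-embed-with-CLIP pipeline (which needs the actual models and a GPU); only the parametrization and the PCA reduction mirror the procedure described above:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
proj = rng.normal(size=(2, 768))  # fixed fake map into a 768-dim "embedding" space

def embed_for_direction(phi: float) -> np.ndarray:
    """Stand-in for: optimize an image for sin(phi)*neuron1 + cos(phi)*neuron2
    with Lucent, then embed the image with CLIP."""
    direction = np.array([np.sin(phi), np.cos(phi)])
    return direction @ proj + 0.01 * rng.normal(size=768)

# Sweep linearly independent directions via the circular parametrization.
phis = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)
embeddings = np.stack([embed_for_direction(phi) for phi in phis])

# Reduce 768 dimensions to 2 and inspect the geometry (a circle, in this toy case).
coords = PCA(n_components=2).fit_transform(embeddings)
print(coords[:4])
```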
Ontologies Should Be Backwards-Compatible

Posting these slightly out-of-order from when I had published them on my blog. If you downvote this post, please consider also leaving a comment for me (heck, if you upvote it, too). Thank you!

In 2017, I wrote several blog posts on LessWrong:

* Mode Collapse and the Norm One Principle
* One-Magisterium Bayes
* Expert Iteration From the Inside

At or around the time I wrote One-Magisterium Bayes, I was put off by the reception these posts received, even though all three have positive “karma” as of today. This led to me deciding to take a hiatus from actively engaging with the community for quite some time. The posts I submitted after having recently come back to LessWrong have all received negative scores thus far. I wanted to be able to account for this: after all, it should be possible to see why these newer posts are doing so poorly, by comparing my viewpoint evolution to what it used to be. I hadn’t even thought about these older posts in quite some time, so I wasn’t sure what to expect.

To my surprise, after re-reading those old posts, I found that they are all fairly good predictors of what my views would be 5 to 6 years in the future.

The first post, Mode Collapse and the Norm One Principle, is - in layman’s terms - an argument that we should promote discussion norms that discourage information-less criticism. This means that the implementation of upvoting / downvoting on the forum is less desirable than simply commenting. Criticisms should aim for being “norm one” by pointing in the direction that the target of the criticism ought to move towards, and by being reasonable in magnitude (not overtly hostile, not overly praising). Otherwise, and this is what I would predict to happen under the site’s norms then (and now), the community would experience “mode collapse” in the sense that only a few sets of ideas would get repeated over and over, and everything else would be dismissed or bumped off the site.

One-Magisterium Bayes argues for “strong Bayesianism” as an onto
"What Progress?" Gwern's 10 year retrospective This is a thrilling read. Some favourite lines: > And that was that. The FBI would later pay me some visits, but I was long done with the DNMs and had moved on. and > The key result was Rietveld et al 2013, the first truly successful IQ GWAS. Rietveld et al 2013 found GWAS hits; further, it found between-sibling differences. (This sibling test would be replicated easily a dozen times by 2020.) Reading it was a revelation. The debate was over: behavioral genetics was right, and the critics were wrong. Kamin, Gould, Lewontin, Shalizi, the whole sorry pack—annihilated. IQ was indeed highly heritable, polygenic, and GWASes would only get better for it, and for all the easier traits as well. (“To see the gods dispelled in mid-air and dissolve like clouds is one of the great human experiences. It is not as if they had gone over the horizon to disappear for a time; nor as if they had been overcome by other gods of greater power and profounder knowledge. It is simply that they came to nothing.”) and the final line > If I have not always been right from the start, I have at least been less wrong than most in updating faster than most (DNB, behavioral genetics, DL/DRL).
Mechanistic Interpretability for the MLP Layers (rough early thoughts)

This post is a link to a video I just made in response to the new work coming out of Anthropic described here and here and discussed on LessWrong here. In the video I try to puzzle through how best to think about the MLP layers of a transformer in the same spirit as Anthropic is thinking through the self-attention layers.
AI Safety Hub Serbia Soft Launch

TL;DR: We got three-month funding from a generous individual funder to launch an AI Safety office in Serbia. We are giving free office space (and, if funding later permits, housing) to AI Safety researchers who are looking for a place to work from. Priority goes to citizens of countries like Russia and China who can work visa-free from Serbia while being close to Europe. [Register interest here](https://docs.google.com/forms/d/1LQ9cE1CGjD_WMMx5IYLLeHFeXNQJx12dsu7f4_FSF7w/edit) or ask questions at [my email](mailto:dusan.d.nesic@efektivnialtruizam.rs). We also have a promise of partial funding for our bare-bones request ([budget](https://docs.google.com/spreadsheets/d/1_xRnyLYgPNPvMcXej6xfpkhxXDgXDdKfIpR2wfzaoSs/edit#gid=0)) from an individual donor but are looking for a second individual donor in order to fulfill it (about 30k USD) - [reach out to me](mailto:dusan.d.nesic@efektivnialtruizam.rs) if this might be you.

Background:
-----------

The EA Serbia and AI Safety Serbia groups are small but growing (~30 people in EA Serbia, ~3 people looking to get into AIS research as a career, and ~3 looking to get into AIS policy). Due to Serbia’s favorable visa policy towards Russia and China, many foreigners already live here. With lower living costs than many other international hub cities, a vibrant scene, and a favorable time zone and climate, Belgrade has a growing foreigner community. We see projects such as [CEEALAR](https://ceealar.org/) as important and impressive, and we wish to replicate them in Serbia, where they can better serve people who may struggle to get UK visas. We also believe that having the capacity to quickly scale cheap housing for people coming from different countries is a good thing, and that we should start small, prototype, and then scale up.

We have had a unique opportunity to get an office space that is NGO-friendly, has good vibes, and costs only ~550 Euros per month. It has three rooms and can fit 8-15 people (depending on how snug they decide to be), with a coffee shop downstairs where another 20 people can spend their time co-working, as the office and the downstairs coffee shop are under the same ownership. This is certainly less luxurious than many other EA/AI co-working places, but we are allowed a high degree of customizability, which we can use to make a good office space. If we grow enough, we can move to a bigger venue as our needs grow. Certainly, if we knew that more use could be found for an office in Serbia, getting something somewhat further from the center that includes both living and office spaces would be better, but we do not wish to explore that until we have proof of concept and need.

### Operations details (a.k.a. how it works):

The office is currently rented for three months, August-October, so that we can keep the favorable price instead of having to find a different place. The office space has some desks and chairs, but we are looking to acquire full funding and have people voice their needs before acquiring more furniture. The office is usually open during the working hours of the coffee shop (10 AM to midnight, except on weekends when it is 4 PM to midnight) as they share an entrance, but exceptionally, we can accommodate special requests if someone works better at strange hours. Office space is given to those working on projects related to AI Safety as a priority, but EA research is also welcome whenever we have spare capacity (which we expect at first).
Unfortunately, we currently do not have a legal entity that can provide visa invitations for those coming from countries that require a visa - for that, we would need to gather funding before starting the process. Still, a Serbian visa is not required for many and is relatively easy to get for most. We have a reliable real estate agent who is able to get good deals on housing in Belgrade for those who need housing assistance, until we get funded and rent a co-living space as well. For those who would like regular meals arranged through us, we can arrange affordable cooked meals delivered to the office or your housing (at your expense) - vegetarian or not. If we have enough interest, we can get the chef to prepare vegan food as well.

### You may want to come if:

* You are an AI Safety researcher or EA researcher looking for a base of operations for the short, medium, or long term
* You are keen to be close to Europe but not in the EU
* You are looking for a vibrant but affordable city with plenty of things to do, and Eastern European but Westernized culture

Hiring:
-------

Currently, the project is managed by a few volunteers from EA Serbia, myself included. We run the operations of the office, as well as reviewing applications. As we grow, we would like to have some paid positions and some volunteer ones. We are looking for:

* Volunteers who wish to be members of the Board of Directors of the project, mostly dealing with strategic decisions and approving participants (an impactful role, as you empower researchers to develop their research agendas)
* A Project Manager, mostly dealing with the day-to-day running of the project, communication with stakeholders (board, funds, participants), as well as checking reports from participants. (0.25 FTE, or 0.15 FTE in the bare-bones version - the salary is still enough to live in Serbia, but additional income may be needed for a less frugal lifestyle)

Ideally, we would be hiring after we have all the funding, but if someone is passionate about the role, please [reach out to me](mailto:dusan.d.nesic@efektivnialtruizam.rs), and the first order of business can be looking for funding with your help.

Thoughts? Feedback?
-------------------

For any questions or comments, please write to my [email](mailto:dusan.d.nesic@efektivnialtruizam.rs). If you wish to be informed of the full launch, sign up in the [interest form](https://docs.google.com/forms/d/1LQ9cE1CGjD_WMMx5IYLLeHFeXNQJx12dsu7f4_FSF7w/edit) and note so. If you wish to come over now, fill in the interest form and send me an email as well so that I make sure to process your request quicker!

The post was written in a bit of a rush, so apologies if there are details you would like to see - please reach out if so, or leave a comment below; I’ll try to edit things in.
Searching for Modularity in Large Language Models

*Produced as part of the SERI ML Alignment Theory Scholars Program 2022 under* [*John Wentworth*](https://www.lesswrong.com/users/johnswentworth)

*See the* [*Google Colab notebook*](https://colab.research.google.com/drive/1yH_foOW7ONKXj5evGiDC6WmuV7qz5SMT?usp=sharing) *for the technical details of the analysis that was done*

Previous [posts on Modularity](https://www.alignmentforum.org/s/ApA5XmewGQ8wSrv5C) investigated how one should search for and try to define modularity. Looking at biological life, modularity appears everywhere and seems to be instrumentally useful and convergent. One hypothesis posed was that larger neural networks might also be modular by default, and that larger models might converge to being more modular than smaller networks. Guided by this hypothesis, we spent some time investigating the pretrained Meta OPT models (Facebook’s GPT equivalent).

While we do not find strong evidence either way for modularity, we do uncover interesting correlations between activations when OPT is given prompts drawn from the same corpus, and produce some interesting and aesthetic graphs. Our ultimate aim would be to find some sort of intuitive “modules” which perform distinct tasks. Unfortunately, we are unsure as to whether this is possible and did not find particularly strong evidence either way here.

Our current best guess for what an approach demonstrating modularity in a model like OPT would look like is:

* Being able to remove all parts of a model except those responsible for one specific type of generation task, such as writing code. Successful identification and removal of this component would result in OPT predicting code snippets regardless of input. Prompting OPT with an extract from Shakespeare would result in code, and not an imitation of The Bard of Avon. This is not a perfect goal, since we could achieve it by manipulating the unembedding layer, or generally manipulating the tokens in a specific way so that the network still performs the processing associated with Shakespearean text but superficially alters the outputs. Still, I think if done right it would be interesting and valuable.

To draw an analogy to neuroscience, it might be that there are sections of the model responsible for “thinking” about different things. An additional mechanism may be “induction heads” that infer not only the actual content of the text, but the style of the text, and it might be that one also needs to stimulate those in the right way.

![](https://lh5.googleusercontent.com/aevfJbU27s2hEeOGMi1CLKB75nrNWzIqZc6SOezqSKhlDUxuAo_wQkzKgu6z8KZaazSTdXDyshkb6i6ANzrdIiAgulkuk0ynrUf0igJ3ntrFcu7HAAHms_utrNIQm6jKVCaM9fxvCBv1pxNnLkfQsXUXWvf49CVaNx8GkqIXlVDI-d1J3RBsD_d2hw)Diagram of a decoder block in OPT. See Appendix for more details.

The Model We Looked At: OPT
===========================

The reference models I have been using for experimentation are the pretrained Meta OPT models of size 125M and 2.7B parameters (called OPT-125M and OPT-2.7B respectively), available [on HuggingFace](https://huggingface.co/docs/transformers/model_doc/opt). Without going into too much technical detail, the OPT models use the same architecture as the GPT models.
Each model has a number of decoder blocks, each consisting of a self-attention block connected by a residual connection, a normalisation layer, a dense feed-forward layer connected by a residual connection, and another normalisation layer. Each self-attention block has a number of attention heads. This post is not intended to give a full introduction to transformer models, and we refer curious readers to [these](https://towardsdatascience.com/illustrated-guide-to-transformers-step-by-step-explanation-f74876522bc0) [resources.](https://transformer-circuits.pub/)

OPT-125M uses vectors of size 768, and has 12 decoder blocks with 12 attention heads. OPT-2.7B uses vectors of size 2048, and has 32 decoder blocks with 32 attention heads.

Modularity in a Language Model
==============================

At the moment, there doesn’t exist a unified definition of modularity that one can use to determine whether a network is more or less modular. Choosing different definitions can often lead to different results, and most definitions are not particularly general. We will review two common choices, and discuss why we haven’t used them.

Graph-Theoretic Modularity
--------------------------

*TL;DR: this doesn’t really work.*

Some of the first attempts to look at modularity were seen in [this paper](https://arxiv.org/abs/2103.03386) and [this paper](https://arxiv.org/abs/2110.08058) by Filan et al, where the modularity is assumed to be in the weight space. This seems like a plausibly good direction: taking analogies from graph theory, one could hope that a network that is modular would be somewhat clusterable. The [Q-Score](https://en.wikipedia.org/wiki/Modularity_(networks)#Modularity) seems like a pretty decent metric for modularity for different kinds of graphs, but one main issue is that the analysis doesn’t generalise particularly well to neural networks that are not easy to graph (i.e., not Multi-Layer Perceptrons (MLPs)).

While you could probably somehow do it, graphs are not an efficient way to describe the (key, query, value) attention blocks with softmax in Transformer-architecture models. This is because there is no particularly good way to decide how large to make the graph (given that using a single token width makes the attention block an identity transform).

Using Singular-Value Decomposition to Look for a Tight Pass
-----------------------------------------------------------

*TL;DR: this also doesn’t really work.*

Another approach is to look for a possible “tight pass” in the neural network. This means we expect modular components of the network to output “data summaries” or condensed outputs to be used by other modules. If in a more traditional network there is a part of the model that has a very small number of parameters (such as in a Variational AutoEncoder), then one should be able to detect this using Singular Value Decomposition. One could also presume that if a network has some modular structures, then there might be a point in the network where all the modules finally give an output, similar to a VAE, and these outputs are used for further computation. If this is true, we should be able to detect it.

I thought about trying to do this with the smallest OPT model. A naive calculation might assume that any tight pass would need to be smaller than 768, and that using two tokens would be enough to find this. This might seem plausible at first, though it is not actually the case. There are two main problems with this. The first is the residual connections.
This makes it essentially impossible to analyse the entire model in one go, since the output has a term directly proportional to the input; the formula for the output looks something like this:

```
model(x) = x + attention_0(x) + mlp_0(x + attention_0(x)) + attention_1(x + attention_0(x) + mlp_0(x + attention_0(x))) + …
```

This makes it difficult to calculate whether there is an upper bound that could detect a tight pass at all (since the output would be directly proportional to the input), and makes singular-value decomposition not well suited to the task.

If one sets this aside and tries to look at individual attention blocks instead, there are other issues still not bypassed. The reason for this is that the different tokens are each passed forward individually, and the actual “width” of the neural network is proportional to the number of tokens put in (though parameters are repeated). This would mean that the rank of the matrix could be up to the order of the number of parameters, or something on the order of 768×768! (A lot larger.)

I attempted to look at smaller inputs, but could not detect any clear rank for the attention heads (only for the dense fully-connected layers).

### Looking for a Tight Pass

Here are the singular values from doing Singular Value Decomposition on the Jacobian of the output with respect to the input, for the first attention and fully connected blocks (no residual), for inputs of size 2 tokens (which took around 5 seconds to compute) and size 34 tokens (which took around 20 minutes to compute). (A code sketch of this Jacobian-SVD measurement follows after the figures below.)

![](https://lh6.googleusercontent.com/HRkzgyWpoBVzWiWs9IU8nBNllsSiISGn2mVyMkO7TTY9f4t3uWNEQfEOol8urkChb3TnlMrhrImGS-excgSGgA0bJpzI_bDM8Hd84SRUNifP7w3qF8k6AlbPOq9SYmfiOY2v2eu73tSKYqESOcYlG9_Xws7nMDNCqcftUtC77CvkKDoZ4YNdkPN4fQ) ![](https://lh4.googleusercontent.com/taq6viQPiDsaLOW7c_ejBTYQEBiLGUR3mNJlHt6oO0dQchMHR8Mebr6S4EleHm8plYAAEgL07Vo_ibn_ORicBRMoqj2DNFim3QyTidkl9E5rACAu0bJQ5kvIeFdySnydkEfWiNAQR0Z0o-BT4Wbio3y1eApA8LOx37yHDJNQFAMa3jVYoese7JOSHw)

*(left) The Attention Block has no clear tight pass occurring with 2 token input, except for a slight kink at ~2200*

*(right) The Dense Layer has a clear tight pass with a 2 token input (~1240 nonzero)*

*![](https://lh3.googleusercontent.com/n0mkBD39Uh7r86Ht9HLykDYyYwqMNvhQFVgE3BBple8Tv08AK_c2iLki1Lz7PzYy2_zzJH6E09Z7B9vIwHm9VBkuj3855lOuL31abhKyIEKsNJ3eR7fB8wcR2vPM6IX4tfPevqeD0SFEU6AH4b1MYBDJheYcHsCW5ILfyhxcnUYnPpgj4OdwSI3wlw) ![](https://lh3.googleusercontent.com/WNhQxX1CHKH5pfUkk1ChggziBY8xpG_jnVoSTiTAdzBfFI0U9V8sxjE99vLZxqd2c656Ss8OhWc2VP1DhWCBHUd7s4QP7IHGwsD4kxYUYLoWdJj9RY-eaKFxcUr8MCqGWj8MoUOrLWY5hRuUOwTHWeCamX7zpPFmDGpHlqLFzPr-MskqPVuvXcfvEA)*

*(left) The Attention Block has no clear tight pass (other than a light kink towards the end) occurring with 34 token input.*

*(right) The Dense Layer has a clear tight pass with a 34 token input (~9200)*

I also tested the same things in later layers, but the results were essentially identical, so I will not show the plots again here. The main interesting thing was that the rank of the Dense block varied a lot depending on the layer, as shown here (with a 2 token input):

![](https://39669.cdn.cke-cs.com/rQvD3VnunXZu34m86e5f/images/917afef5acf99ac8d67967317cd3be59f23eda0ed9004e43.png)The Attention Blocks don't have much of a tight pass in any layer.

![](https://39669.cdn.cke-cs.com/rQvD3VnunXZu34m86e5f/images/d81e474384c66428df2c2320ee5b0991104b971f490a6861.png)The fully-connected blocks have a very tight pass in the decoder layers right after the first decoder layer.
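A minimal sketch of this kind of measurement, using a generic PyTorch module as a stand-in for a single OPT sub-block. The toy module and its sizes are illustrative assumptions, not the post's exact setup:

```python
import torch

# Toy stand-in for one sub-block acting on 2 tokens of width 768, flattened.
# The explicit 1024-wide bottleneck plays the role of a "tight pass".
block = torch.nn.Sequential(
    torch.nn.Linear(2 * 768, 1024),
    torch.nn.ReLU(),
    torch.nn.Linear(1024, 2 * 768),
)

x = torch.randn(2 * 768)

# Jacobian of the block output w.r.t. its input: shape (1536, 1536).
jac = torch.autograd.functional.jacobian(lambda inp: block(inp), x)

# Singular values of the Jacobian; a sharp drop-off indicates a tight pass.
svals = torch.linalg.svdvals(jac)
print(svals[:3])         # largest singular values
print(svals[1100:1103])  # by construction, values past ~1024 are near zero
```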
Testing for a tight pass in this way for larger inputs quickly becomes computationally infeasible. While computation takes a long time, a bigger constraint seems to be memory, which scales with O(n^4). If one tested the full upper-bound width, one would need to compute a Jacobian between vectors of length 768 inputs × 768 size, which comes to roughly 0.35 trillion numbers (~1.4TB of data).

In addition, it is not clear that a tight pass would actually occur, since one could think of a counter-example where the attention block is performing something equivalent to an identity transform. I intuitively feel like there should be a tight pass eventually if your input is sufficiently large, but I don’t think we found any particular hints for modularity here.

Looking for Simulacra in the Activation Space
=============================================

One explanation for how Transformer models work is [the Simulacra Hypothesis](https://www.lesswrong.com/posts/vJFdjigzmcXMhNTsx/simulators), which I heard recently from a talk at Conjecture. This models the task GPT is actually doing as:

* Using the context input to choose "mind states" to simulate
* Simulating those minds in an attempt to predict what the next token could be.

While that might not be the whole story, it gives us some insight into some possible tests we could run. Might it be true that the different “mind states” are one way that the Transformer might be broken up? This story might imply that different “parts of the Neural Network brain” are being used to do different tasks, and I attempted to test this by running some preliminary experiments.

As a very basic test, I looked at a small sample of 4 recent Vox articles, 3 random recent ArXiv ML papers, and 3 snippets of code: 2 from the Linux kernel and 1 from the Angular front-end library. These were essentially chosen on a whim to find things that seem like there should be some patterns. The indices in the diagrams that will follow are as such:

* *0, 1, 2, 3: Vox Articles 1-4*
* *4, 5, 6: ML Papers 1-3*
* *7, 8, 9: Code Snippets 1-3*

I then tried to explore how different aspects of the neural network might have different activations depending on the context that the neural network is in, in an attempt to find large “mind state modules” within the neural network. To see if there is any sign of this, I used the above texts to look for similarity between outputs.

### Calculating Similarity

As a proxy for “modularity” in activations, I looked at cosine similarity. This has its own issues, as it is too basic a measure, but it can give us some early insights. To calculate this:

* I fed forward each of the text samples (0-9), trimmed to the same length (512 tokens), and saved the intermediate hidden state at each layer (512 embedded hidden state vectors at each layer).
* I looked at the last vector (index 511) at each layer (layer 0, … layer 11) of the transformer for each text (text 0, … text 9).

Here is a simplified illustration of what is happening:

![](https://lh4.googleusercontent.com/oQEsViyJLpwBqPVy-V4zxYZ68c8TJCZCO68Rz2SpT5a4MlBCswSIb4mXgO2XMWC8MpGwwEp9CXR7ljaSfBtDIEzoHeYZ6hkPNE-4ebadNPQ4zIDorwdlwWvq1VcqHQpvuAju8lPtebQi_JdVWktg8gOplpzqQciXLL_hYQPz2g0mWFCyDs2gdxkKgA)

*This caricature example with vectors of size 3 shows how the vectors are computed.
I then normalised each of these vectors to length 1, took the dot product of each pair, and plotted the result (real example):

![](https://39669.cdn.cke-cs.com/rQvD3VnunXZu34m86e5f/images/fc13cd46c07af18d427625086a4837b24f1fc19f3257ea32.png)

*Hidden-state vector 511: cosine similarity between each text for the final hidden states from the attention block, no residual (left), and the decoder block, with residual (right).*
*0-3: Vox Articles, 4-6: ML Papers, 7-9: Code snippets*

We are about to see a lot of these diagrams, so let's go over this one in more depth. These are the similarities between activations at each layer for different inputs. Here 1.0 means that the two are identical, whereas 0.0 means that the two features point in completely different directions (similarity can be negative, but here negative values have been rounded up to zero). Notice that the centre 3×3 square corresponds to similarities in activations when OPT is given the different ML papers, while there is no such square in the top left, indicating that little similarity was seen between the different Vox articles.

Looking at only a single position in the text runs into a sampling issue: two very different texts might have the same word in the same position, and it would be good to have a reference class for how similar different outputs in the *same* text are. So, instead of looking only at token 511, I looked at tokens at positions 475, 479, 483, …, 511 (a total of 10 tokens). This provides slightly better random sampling.

The intermediate states looked at were the outputs of the OPTAttention layer, which are:

* Attention Weights
* Key-Values
* Attention Layer Output
* Decoder Layer Output

Most of the early insights can be seen by looking at the main attention and decoder layer outputs, so I will omit the individual attention heads in this post. Feel free to look at the [*Google Colab notebook*](https://colab.research.google.com/drive/1yH_foOW7ONKXj5evGiDC6WmuV7qz5SMT?usp=sharing) linked above if you are interested in them.

### Looking at Streams of Embedded Dimensions

Another thing I considered was how specific dimensions of the token vectors might be more or less correlated with the category of input. I again looked at the 10 token positions mentioned above, for each of the 10 texts, and plotted the value of a given dimension at each layer, so that it looks like a stream, as below. For example, the following image shows how dimension 13 of the hidden-state vectors changes across layers in OPT-2.7B for different texts (explained more below).

![](https://lh4.googleusercontent.com/hnyw4fflUoGLtbJPl9DirMYrri9ZMp47rdWE0f3ePS5CDrZ7ANlhe7tLUxLBmVOPUM8-7FtrPb7C9zNL8X90oktxSJaKbMf9uEKHyiw660ima2CXcZOzOw2GLuNmHx7SpYWmWu7DwjRk1y-HvJvrQEFOhjikbzs6keInsMMg1bq3d7Be9QwqDrR-4g)

*"Stream Diagram" for dimension 13 of the hidden states across layers in OPT-2.7B*

Looking at OPT-125M
===================

I first looked at OPT-125M for any hints of modularity. We look at the cosine similarity at each layer for the outputs of the attention block, as well as the decoder block. Looking at these in detail, there seem to be some hints of modularity for the different kinds of text, though this doesn't rule out simpler behaviour (such as skip-trigrams or induction heads) being the predominant force.
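As a reference for how these similarity matrices are produced, here is a minimal sketch using the `states` captured in the earlier sketch (simplified to one token position per text rather than ten):

```
# Build the cosine-similarity matrix at one layer: normalise each text's
# vector to unit length, take pairwise dot products, and round negative
# similarities up to zero (as in the plots below).
import torch

def cosine_matrix(vectors):
    v = torch.stack(vectors)                 # (n_texts, 768)
    v = v / v.norm(dim=-1, keepdim=True)     # unit-normalise each row
    return (v @ v.T).clamp(min=0.0)

layer = 5                                        # any layer index
sim = cosine_matrix([s[layer] for s in states])  # (n_texts, n_texts)
```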
**Cosine Similarity of Attention Block ( Before Residual )**

*![](https://lh5.googleusercontent.com/EFXfJTIKNRQBRTbHgZGji1GHpjfYOOqjTVSDCnnjI32N6qcimlLWc9TPJ1AN8NKeBN0GQzlU1FucWvia8T29l7ja9if4apTWFbkzyFgWzkfNq2dznP_rxXD442qbkmgnXl1nn_bLCJgzW9_Ar9xFwZEHnpmW38x3MgjnO8P2XVa-eFIHffPggynnYQ)*

*The token cosine similarity for attention blocks at different layers ( 0, … 11 )*
*0-9 first digit indicates which of 10 texts is being looked at ( see above )*
*0-9 second digit denotes index of which hidden state vector ( 475, 479, … 511 )*

We see a very high similarity between most tokens in the first layer; then the tokens gradually drift into their own spaces, mostly independent of one another; then towards the end a block of similarity appears between same-category texts, such as the ML papers and the Linux kernel code. Finally, in the last layer, the model makes a huge transition into the tokens being more similar again, but with distinctly higher similarity between different parts of the same text, and between texts of the same category.

This correlation of activations seems like it might be indicative of some modularity in the computations. The middle layers are somewhat confusing to me, given how low the similarity between some of the tokens is, but within the noise there still seem to be distinct patterns of similarity between similar categories of text. One caveat is that the similarity seems to be a lot more "knowledge-reliant" than style-reliant, whereas I was particularly interested in styles of text when trying to find simulacra.

**Cosine Similarity of Decoder Block ( After Residual )**

*![](https://lh4.googleusercontent.com/OcCDWjaRhDcvLOcLzLZvaH-cCBrjer4XxgWpNiks1jrAopbbomSyldFkvoSeNS-IlWQHWdJTPpL6RVVMGHofJtozMuUVEevMMB1OSNXr-SgW8VwZznF0kMluay2_jn3Mt4O4e8y52VVhynDT2onbHPSqAG-sMU_PgYJ5q6LDUDYYTZLd8J5VF7nRxQ)*

*The token cosine similarity for decoder blocks at different layers ( 0, … 11 )*
*0-9 first digit indicates which of 10 texts is being looked at ( see above )*
*0-9 second digit denotes index of which hidden state vector ( 475, 479, … 511 )*

We see a moderately high similarity between most tokens in the first layer; then, very slowly, the similarity between different-category texts dwindles, while the similarity between same-category texts (noisily) increases enough to counteract this.

**Streams of Hidden-State Dimensions**

I analysed some of the dimensions of the token vectors, in an attempt to find nice distinct "streams". I looked quite randomly, and didn't find many particularly clean streams in OPT-125M. In general, the results fall somewhere between each input being completely random and there being nice independent streams for each category.

I graphed with colours similar for each category but still distinct for each text (i.e. red for Vox articles, green for arXiv ML papers, blue for code). Here are some of the results I saw while looking:

*Embedding dimensions 6, 15, 45 at each layer. Each has some hints of streams for the different categories of text, but they are not as clearly separated as we will see in the larger model. There seem to be some vague streams of same-category colours in each.*

*Embedding dimensions 1, 16, 20 at each layer. These look quite random, even for different parts of the same text. The last looks particularly interesting, as it doesn't change much from layer to layer except when it reaches the last layer, though it doesn't show modular behaviour.*
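For reference, a stream diagram like these can be produced with a sketch along the following lines (again reusing the hypothetical `states` list from the earlier capture sketch, and only the last token position for simplicity):

```
# Plot the value of one embedding dimension across layers for each text,
# colour-coded by category (red: Vox, green: ML papers, blue: code).
import matplotlib.pyplot as plt

dim = 13
colors = ["red"] * 4 + ["green"] * 3 + ["blue"] * 3
for text_idx, per_layer in enumerate(states):
    values = [h[dim].item() for h in per_layer]
    plt.plot(range(len(values)), values, color=colors[text_idx], alpha=0.7)
plt.xlabel("layer")
plt.ylabel(f"hidden-state dimension {dim}")
plt.show()
```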
So we see some possibly interesting results from the small model, but we should also look at a larger model.

Looking at a Larger Model: OPT-2.7B
-----------------------------------

So what happens when we do the same analysis on a significantly larger model? In this case, we have 32 layers instead of 12.

**Cosine Similarity of Attention Blocks**

![](https://lh4.googleusercontent.com/LjWghJxW1jxMN8o48rVKAPYN09JcoeKtuQQhQg23oX3KYQ7lM_97pNNdT6FJQrM9HSK4GIioMQ3LRGUSovsV5oAr4rvs7YDVHQlWPHRU0LWsoetP6jmMQ60u8S-uefyvSmqTrKB0TEq-4UAAChEaZA5f50Q8Gri1aIJPb73hx3SS-Zcu4GTRPF6V9Q)

*The token cosine similarity for attention blocks at different layers ( 0, … 31 )*
*0-9 first digit indicates which of 10 texts is being looked at ( see above )*
*0-9 second digit denotes index of token vector ( 475, 479, … 511 )*

The behaviour here has obvious similarities with the smaller model:

* The tokens first look very similar and are difficult to distinguish.
* The overall similarity fades over the first few layers, so that it remains high only between inputs of the same category.
* Then even the same-category blocks fade, until none of the tokens look particularly similar at all, even within the same text (small background hints remain, but they are tiny compared to the size of the token vectors).
* Similarity within the same text re-emerges leading up to the final layers.
* The similarity between same-category texts becomes high again.
* In the last layer, all of the tokens begin to look very similar again, with slightly higher cosine similarity between items in the same category.

**Cosine Similarity of Decoder Blocks**

![](https://lh5.googleusercontent.com/kgw-wfXcR_wBPfb2-T_fm1uc8YzEh6ZJ82gbd8rdEwFI1wnYCGEOVPE-L02L1b__00A3DFbxgrN1x81jz5ziqwsybjemB3-WA2C4O-8ejSm5WRxFo20EIKy4g2UsZnxBAd11PvyVf1G4vaVZ7TjK_KiKL9xNuzXixuf8lcbRg5CBHKdXXwPPnqB7MQ)

*The token cosine similarity for decoder blocks at different layers ( 0, … 31 )*
*0-9 first digit indicates which of 10 texts is being looked at ( see above )*
*0-9 second digit denotes index of token vector ( 475, 479, … 511 )*

The cosine similarity of the outputs from the decoder blocks at each layer again behaves much like the small model:

* There is initially a large amount of similarity between all the tokens.
* The similarity slowly dwindles away.
* Patterns of similarity between same-category texts slowly emerge.

Streams of Hidden-State Dimensions
==================================

We again looked at a small number of streams of the hidden-state dimensions.

![](https://lh3.googleusercontent.com/5PcH_TXIYxxZd7y_l6htcXIWeQY5c0D_6SxpGj6hNdmQSFXEhHV6zVCY4VqRqBzY6WE_dbJDWMy3ZuAoN6j4L70XHQ7MbWMrbE7_GTExtU3yFro-VIUrQ1vZJ251i8OeZVtRsqJ4M8eq-stxcvGj7GiFr_VePjh_wnCL1CC2k4GPpBXS0yYI4raD1Q) ![](https://lh4.googleusercontent.com/hnyw4fflUoGLtbJPl9DirMYrri9ZMp47rdWE0f3ePS5CDrZ7ANlhe7tLUxLBmVOPUM8-7FtrPb7C9zNL8X90oktxSJaKbMf9uEKHyiw660ima2CXcZOzOw2GLuNmHx7SpYWmWu7DwjRk1y-HvJvrQEFOhjikbzs6keInsMMg1bq3d7Be9QwqDrR-4g)

*Token embedding dimension 13 over the layers. Left: zoomed in; right: zoomed out.*

Dimension 13 has a very clear separation of streams for different texts; this looks very promising for showing some kind of modularity.
![](https://lh4.googleusercontent.com/CIN6k5ua3tadSPxYJjuQ-JZ2fg5i6Wq-aqL040SNCKvX_SOxNmCO3DjFxj1-eMme6ta3MBGl0lnOaEGCFQswmtuCDHBKF8OnJQ1Pk1qpNUNd5ghxUFXaXhgBdvF0UBHJjgGNU-KvUFgB5I5DW5tF2qJqP1thCWpJHvlE1gxMgNDDzbYHcxxsL6l7EA) ![](https://lh3.googleusercontent.com/Fdd4fhAT2eh68SKWYCglF313PJwjIW6oIbo2OK54jHwu_HdrI5e-pI3Z7dgxD1woqoc_1z5s-BWVJtzdJ4OLNw63NUvpIXiJDEpReS4nl5DGd5EOGgt9PaMryFiU8-Xo3Lc76YWlaDkb6v3Mt01qlVgtsQnechTcBOh-KymF3Exbdu4-3Sic6rPutQ)

*69th and 256th token embedding dimensions.*

Other dimensions, such as 69 and 256, seem more typical: the streams may have some correlations, but they are not as clear as in dimension 13.

Conclusion: Further research seems valuable
===========================================

There is enough similarity between the token activations across layers that there could be some modular structure in the transformer. In particular, the similarity metrics show similar activation patterns for tasks requiring similar knowledge and style. However, it still seems possible that the similarity comes more from the specific knowledge involved than from the style of the text; in particular, there was little similarity between the different Vox articles, and it would be interesting to be able to detect such a thing.

One direction for further research is more advanced analysis with a greater variety of inputs. One could, for example, look at [existing transformer interpretability research](https://www.alignmentforum.org/posts/X26ksz4p3wSyycKNB/gears-level-mental-models-of-transformer-interpretability), use tools such as Principal Component Analysis (PCA) to build a better metric than raw cosine similarity, or try to visualise things with [PHATE Dimension Reduction](https://phate.readthedocs.io/en/stable/).

In addition, some of the analysis assumes a specific ontology, such as when analysing the streams of the different dimensions of the embedded vectors. There may be better ways to look at the data, such as finding meaningful directions that do not lie along the coordinate axes.

Path To Impact:
===============

If Transformer architectures become the future path to [Transformative AI](https://www.lesswrong.com/tag/transformative-ai), then it seems important to be able to understand them correctly. It would be useful to be able to do a neuroscience-like analysis to identify the parts of a transformer that do different things (if this is at all possible). One ultimate goal might be to identify modules that perform specific tasks, or that embed the knowledge of specific simulacra. An ideal example would be to identify the "coding" module of the network and remove all irrelevant modules, so that one could input a text of Shakespeare and have the model consistently output things that look like code.

Appendix A - More Details on OPT:
=================================

In OPT-125M, each Decoder Block has:

* The Input
* A Normalisation Block
* An attention block with 12 attention heads
* A Residual connection
* Another Normalisation Block
* A Dense block connecting 768 nodes to 3072 nodes to 768 nodes
* Another Residual Connection
* The Output

See the original [code definition](https://github.com/huggingface/transformers/blob/main/src/transformers/models/opt/modeling_opt.py); the PyTorch summary of a decoder layer is:

```
OPTDecoderLayer(
  (self_attn): OPTAttention(
    (k_proj): Linear(in_features=768, out_features=768, bias=True)
    (v_proj): Linear(in_features=768, out_features=768, bias=True)
    (q_proj): Linear(in_features=768, out_features=768, bias=True)
    (out_proj): Linear(in_features=768, out_features=768, bias=True)
  )
  (activation_fn): ReLU()
  (self_attn_layer_norm): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
  (fc1): Linear(in_features=768, out_features=3072, bias=True)
  (fc2): Linear(in_features=3072, out_features=768, bias=True)
  (final_layer_norm): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
)
```

Appendix B - Individual Attention Heads
=======================================

**EDIT:** This section was added because, after reading [a comment](https://www.lesswrong.com/posts/rp4CiJtttvwFNHkhL/?commentId=inqvoxpNYKowvswcM) by [Lucius Bushnaq](https://www.lesswrong.com/users/lblack), I decided I should also include results from individual attention heads. I also added a graph above showing the width of the tight pass at each layer.

The main flaws with cosine similarity are that:

* It does not preserve scale well, so almost-zero and very large vectors carry the same weight in similarity calculations.
* It doesn't show two small vectors as close to each other and far from a very large vector.

A better metric might be something like the Euclidean norm, but the metric I have been looking at while doing research for this post is a "scaled cosine similarity". This is calculated the same way as normal cosine similarity, except that the vectors preserve their original ratios of length.

Here are some example graphs, calculated and labeled as before, but with "scaled cosine similarity".

**Attention Block Outputs (Without Residual)**

![](https://39669.cdn.cke-cs.com/rQvD3VnunXZu34m86e5f/images/e4492ee721a27aa56f14473f1afe38ea087dd1826579bf84.png)

The "scaled cosine similarity" for outputs from the attention block at each layer in OPT-125M.

We see that the characterisation of the behaviour is quite similar, but you can better see how the different input texts have outputs of different magnitude.

**Decoder Layer Output (With Residual)**

![](https://39669.cdn.cke-cs.com/rQvD3VnunXZu34m86e5f/images/485279c052d18c7376818c3625e62e80a71186a1974e0137.png)

The "scaled cosine similarity" for outputs from each decoder layer (including the residual stream) in OPT-125M.

These show similar behaviour, but the sizes of the activations are better preserved.
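The post does not pin down the exact formula for "scaled cosine similarity", but one reading that matches the description (vectors keep their original length ratios) is to divide all vectors by a single shared scale instead of normalising each one individually. A sketch under that assumption:

```
# "Scaled cosine similarity" (my assumed reading, not a confirmed formula):
# divide every vector by the same scale -- here the largest norm in the
# batch -- so that small vectors stay small and large vectors stay large,
# then take pairwise dot products as before.
import torch

def scaled_cosine_matrix(vectors):
    v = torch.stack(vectors)
    shared_scale = v.norm(dim=-1).max()   # one scale for the whole batch
    v = v / shared_scale                  # lengths keep their original ratios
    return (v @ v.T).clamp(min=0.0)
```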
Transparent Technologies
------------------------

I cook my own meals because restaurant food is expensive. But there are many activities where I would prefer the cheap version independent of price.

| I Like | I Don't Like |
| --- | --- |
| bicycles | cars |
| entrepreneurship | jobs |
| solo adventure | guides |
| barbells | exercise machines |
| meditation | psychoactive drugs |
| Linux, LineageOS | Windows, Mac, iPhone |
| i3, CLI | GUI, IDE, desktop metaphor |
| autodidacting | school, tutors |

From the perspective of someone with unusually high intelligence, activities involving skill tend to be cheap because our civilization has adequate material capital. The limiting factor of our industrial economy is intelligence. I have to manually override my instinctual feeling that civilization is materially limited. It is true there aren't enough spaceships and private airplanes to go around. Otherwise, we have plenty of material goods. Many people own too much stuff.

Since the limiting factor on wealth is humans' inadequate cognitive ability to utilize our stuff, companies often create value by making things easier to use. If the process ended here then we would live in utopia. But when a company provides the service of making something easier to use, it takes effective control of the distribution channel. Centralized control of distribution channels inhibits competition. Weaker competition gives companies leverage over consumers. The Invisible Hand compels companies to extract maximum profit from whatever leverage they have.

A prepackaged commercial product designed to make things easier for consumers tends to contain anti-features. Vehicles require special tools to interface with their computers. Apple employs pentalobe security screws. Universities charge fees for services you don't use and require you to take classes you're not interested in. Android distributors often won't let you root your phone.

Expending effort to worsen a product is productively inefficient. An efficient market is a market that does not do inefficient things. Anti-features are market inefficiencies because
A sketch of a value-learning sovereign
--------------------------------------

In the [previous post](https://agentfoundations.org/item?id=539), I
discussed three preference frameworks for goal-directed agents. In this post, I will discuss the value-learning sovereign in more detail. --- From the [Arbital article on genies](http://arbital.com/pages/7230273045585776493): > > Eliezer Yudkowsky has suggested that people only confront many important problems in value alignment when they are thinking about Sovereigns, but that at the same time, Sovereigns may be impossibly hard in practice. Yudkowsky advocates that people think about Sovereigns first and list out all the associated issues before stepping down their thinking to Genies, because thinking about Genies may result in premature pruning, while thinking about Sovereigns is more likely to generate a complete list of problems that can then be checked against particular Genie approaches to see if they have become any easier. > > > To this end, I think it is quite useful to discuss how to create a value-learning sovereign, even if it is not a good idea to actually create one. I should be explicit about the fact that the concrete models in this post are almost certainly wrong (even conditioning on the fact that we have to build a value-learning sovereign); they're meant to represent the best concrete illustration of value learning that I can currently write down. Values and ontologies --------------------- We want the AI to learn human values from human behavior. Usually, values are represented as a utility function. If the type of the world history is Ω, then a utility function over Ω is of type Ω→[0,1]. To learn U, we must first have some Ω in mind -- but what could this Ω be? There are 2 plausible candidates: 1. The human's ontology, ΩH. I have some way of mentally representing world states. My ontology contains concepts such as "human" and "happiness". I can express values, such as caring about human happiness, in this ontology. If the AI has a representation of ΩH, then it may be able to learn the human utility function UH:ΩH→[0,1]. 2. The AI's ontology, ΩAI. The AI will also model the world somehow. Probably, its model will be at least partially learned by induction. It will probably make different predictions from me, due to the fact that it might be able to discover physics that I don't know about (or otherwise model the world differently). Despite the differences between the AI's world model and my own, it is quite likely that my terminal values could be specified well enough in the ontology of a strongly superintelligent AI, since this ontology is likely to be finer than my own. How might we more formally represent the ontology? A simple environment model for talking about ontologies is the [partially observable Markov decision process](https://en.wikipedia.org/wiki/Partially_observable_Markov_decision_process) (POMDP). A POMDP consists of a number of iterations. In each iteration, the agent first takes an action (which causes the state to change stochastically), and then receives an observation of the next state. First we must define the set of actions A and observations O. These sets apply both to the human and AI. Unlike in a standard POMDP, here the agent's utility function is over the world history rather than the observed reward. Now let's formally define an ontology. An ontology consists of: 1. a type of world states, S 2. the distribution over the initial state, s0:ΔS 3. the stochastic state transition function, st:(S,A)→ΔS, which specifies what distribution of states results starting from a given state if the agent takes a certain action. 4. 
the stochastic observation function, o:S→ΔO, which specifies what distribution of observations the agent receives in a given state.

By abuse of notation, let Ω stand both for the type of world histories (lists of S values), and the ontology itself. Note that this model is Cartesian in much the same way as AIXI is, and therefore faces similar problems. See [the paper on realistic world models](https://intelligence.org/files/RealisticWorldModels.pdf) for more details. It is also unrealistic in that it has no explicit "multi-level" structure; we would expect human and AI concepts to have something like this. The analysis in the rest of the post will be limited by these problems, but I think it will still be useful to analyze an incorrect concrete model.

Each stochastic function in the ontology could be represented by a probabilistic program. For example, consider the following ontology, modelled after the vacuum cleaner example in *Artificial Intelligence: A Modern Approach*:

```
import random

# action set
A = ['left', 'right', 'suck']
# observation set
O = ['clean', 'dirty']

# S consists of (vacuum cleaner location, cleanliness) tuples,
# where location is between 0 and 9, and cleanliness is a list of 10 booleans
# indicating whether each square is clean.

def s0():
    # start at a random location. Each room is clean with 50% probability.
    return (random.randrange(10), [random.random() < 0.5 for i in range(10)])

def st(s, a):
    loc = s[0]
    cleanliness = s[1][:]
    if a == 'left':
        # move left
        loc = max(0, loc - 1)
    if a == 'right':
        # move right
        loc = min(9, loc + 1)
    if a == 'suck':
        # probably suck dirt from current square
        if random.random() < 0.9:
            cleanliness[loc] = True
    return (loc, cleanliness)

def o(s):
    # observe cleanliness of current square
    if s[1][s[0]]:
        return 'clean'
    else:
        return 'dirty'
```

With the ontology in place, we can consider some utility function over it (in this case, discounted cleanliness over time):

```
def U(state_seq):
    util = 0
    discount = 1.0
    for s in state_seq:
        clean = 0
        for c in s[1]:
            if c:
                clean += 1
        util += discount * clean
        discount *= 0.99
    return util
```

Since U discounts exponentially, it can easily be extended to a utility function over infinite state sequences.

Planning using an ontology
--------------------------

If the AI already has some ontology Ω and some utility function over the ontology U, then it is possible for it to search for utility-maximizing policies. A policy could be represented as a stochastic function π:List(A×O)→ΔA, which specifies what action the agent takes given an action/observation sequence. Essentially, a policy is a stochastic infinite decision tree. π could be chosen to maximize E[U(ω)|π], where ω:Ω is the state sequence and the expectation is with respect to the distribution defined by the ontology Ω.

Learning setups
---------------

With the ontology machinery in place, we have the tools to look at a couple of proposals for how to learn the human's values. We assume that the AI has access to the observations and actions of the human. For example, in the vacuum cleaner example, perhaps the AI sees the sequence `['suck', 'clean', 'right', 'clean', 'right', 'dirty', 'suck', 'clean']`, meaning that the human controlling the vacuum cleaner first decided to suck dirt, observed that the square was now clean, went right, observed that this square was also clean, etc. From a long enough sequence like this, the AI should approximately learn the human's values. In practice, it might be necessary for the AI to locate humans in the environment.
This adds some additional complexity, but for now I will ignore this. Learn the human's utility function expressed in the human ontology, from human behavior --------------------------------------------------------------------------------------- The AI could assume that the human's values are expressed over the (unknown) human ontology ΩH. The AI has a joint prior over ΩH and a utility function UH:ΩH→[0,1]. Additionally, the AI needs to predict how the human behaves given their ontology and utility function. One approach, common in economics, is to assume that the human maximizes expected utility. However, this model is quite unrealistic, and alternative models have been explored in the field of cognitive science. We could represent an alternative behavioral model as a policy π:List(A×O)→ΔA, similar to a planning model. The behavioral model should depend on ΩH and UH. While it is possible to set the behavioral model to maximize expected utility, this is psychologically unrealistic, so the behavioral model should allow the human to sometimes take suboptimal actions. The AI's planning, once it has the human ontology and utility function ====================================================================== Suppose the AI has inferred the human ontology ΩH, utility function UH, and behavior model. How could it make plans? There are 3 immediate candidates for planning algorithms: 1. Mimic the human using the behavioral model. 2. Select a plan that achieves high expected utility according to ΩH and UH. 3. Select a plan that achieves high expected utility according to ΩAI and UAI, where UAI:ΩAI→[0,1] is a version of UH that has been translated to be over ΩAI rather than ΩH Planning algorithm 1 is not too interesting by itself; it is probably more useful as an input to other AI control methods, such as [approval-directed agents](https://medium.com/ai-control/model-free-decisions-6e6609f5d99e). Planning algorithm 2 is more interesting. It selects a plan that looks good according to the human's ontology. This does not take advantage of the AI's ability to make better predictions than the human, but it does take advantage of the AI's better ability to search for plans. For example, if I were trying to solve a boolean satisfiability problem, an AI using this algorithm could suggest a solution, because my ontology predicts that this solution works, even though I can't find the solution myself. In this way, an agent using this planning algorithm is similar to an approval-directed agent. The main difference is that it selects a *plan* (i.e. an infinite stochastic decision tree) that maximizes how good the human expects the results to be, rather than an action. Otherwise, it is quite similar. Planning algorithm 3 uses the full power of the AI, including the AI's ability to make better predictions than the human. It requires deriving a utility function UAI from the inferred human utility function UH. If we have UH:ΩH→[0,1], and we want to create UAI:ΩAI→[0,1], one way to get this is to create a probabilistic ontology mapping function ϕ:ΩAI→ΔΩH, and then define UAI(ωAI)=E[UH(ϕ(ωAI))]. The ontology mapping function is meant to say which histories in ΩH best represent some history in ΩAI. Probably, it is more intuitive to map states rather than world histories, but it shouldn't matter much. The paper [Ontological Crises in Artificial Agents' Value Systems](https://intelligence.org/files/OntologicalCrises.pdf) discusses an ontology mapping method. 
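As a toy illustration (my own, not from the paper) of that translation step, here is a sketch of estimating UAI(ωAI)=E[UH(ϕ(ωAI))] by Monte Carlo, where `phi` stands for some given stochastic mapping from AI-ontology histories to human-ontology histories:

```
def mapped_utility(w_ai, phi, U_H, n_samples=1000):
    """Monte Carlo estimate of U_AI(w_ai) = E[U_H(phi(w_ai))].

    phi: stochastic function from AI-ontology histories to human-ontology
         histories (the hard part -- assumed given here).
    U_H: the inferred human utility function over human-ontology histories.
    """
    return sum(U_H(phi(w_ai)) for _ in range(n_samples)) / n_samples
```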
While it would be interesting to look more closely at the relation between planning algorithm 2 and approval-directed agents at some point, I'll focus on planning algorithm 3 for the rest of the post. Planning algorithm 3 has multiple problems:

1. Unless we have a good theory of cognitive science, it is likely that the true human ontology and utility function will have a very low or zero prior probability.
2. Human values seem underdetermined by the observation/action data. For a given observation/action sequence, there may be many triples of (ontology, utility function, behavior model) leading to this behavior. The AI must have some way of acting appropriately under this uncertainty.
3. The ontology mapping seems difficult. I'll say more about just how hard the ontology mapping problem is in a bit.

Learn the human's utility function expressed in the AI's ontology, from human behavior
--------------------------------------------------------------------------------------

As an alternative to learning the human's ontology and utility function expressed in this ontology, the AI could assume that the human's values are expressed over the (known) AI ontology ΩAI. The AI has a joint prior over the utility function UAI:ΩAI→[0,1] and the behavior model as before. The assumption is that the human plans using the AI's ontology, rather than a different human ontology. Current value-learning algorithms, such as [inverse reinforcement learning](http://ai.stanford.edu/~ang/papers/icml00-irl.pdf) and [inverse planning](http://web.mit.edu/clbaker/www/papers/cogsci2007.pdf), work this way because they do not distinguish between the AI's ontology and the human's ontology.

Unfortunately, this model is psychologically implausible. We do not think of the human's preferences about the AI's ontology being a cause of human behavior; rather, it is the human's preferences about the *human's* ontology that is a cause of human behavior. One place where this shows up is when the human takes an action that would be irrational if the human were using the AI's ontology (for example, the human calculates something incorrectly because they do not know about quantum physics, which the AI knows about). The AI has no choice but to either believe that the human's utility function considers making this incorrect calculation to be optimal, or to explain it as an error according to the behavior model. For the second option to produce reasonable results, the behavior model must be quite complex; it will probably talk about the human's ontology and how the human's goals in this ontology relate to UAI, much like proposal 1.

Overall, I do not find this proposal promising. Either the behavioral model suffices to explain correlated human errors due to the human having an incorrect ontology (in which case the behavioral model contains all the complexity of proposal 1), or it does not (in which case the AI will learn the wrong values). Therefore, I will talk about the first proposal in the rest of this post.

Instrumental or terminal goals?
-------------------------------

Paul Christiano has previously written about the distinction between learning terminal and instrumental goals in his post, [Ambitious vs. narrow value learning](https://medium.com/ai-control/ambitious-vs-narrow-value-learning-99bd0c59847e#.j8ywv5z3r). It is possible to explore this distinction in proposal 1. Since the human's utility function is relative to the human's ontology, it is not possible for it to express truly *terminal* goals.
Determining humans' reflective preferences about states of the universe requires some kind of philosophical extrapolation process, in which humans clarify their concepts and develop preferences about their new concepts. However, by varying the behavioral model, it is possible to learn either higher-level instrumental goals (for example, getting a job), or lower-level instrumental goals (for example, filling out a particular job application). If the behavior model states that the human behaves by finding subgoals of UH and then optimizing for them (as we would expect if UH were a high-level goal), then it is more likely to detect high-level goals. On the other hand, if the behavior model states that the human optimizes for UH more directly (as we would expect a human to optimize for a low-level goal), then it is more likely to detect low-level goals.

Note that, since instrumental goals change over time, we would also need to have UH change over time. This is a simple modification to the original model. Obviously, the AI's goal should be set so it has no incentive to change the human's goals to make them easier to optimize. Perhaps its goal at time t is to maximize expected utility of whatever UH is at time t.

Naively, if the AI's utility function changes over time, then it will be dynamically inconsistent. The AI at an earlier time will have an incentive to lock the current value of UH in place, so that future versions of the AI will optimize for this UH instead of whatever UH is estimated to be in the future. This would lead to a system that determines what my instrumental preferences are, and then continues to optimize for these even as my instrumental preferences change.

An instrumental preference for autonomy
=======================================

It seems that a system that locks my "object-level" instrumental goals (such as filling out a job application) in place would be acting against some of my other instrumental goals: specifically, my instrumental goal of preserving my autonomy. Paul discusses this preference in his post:

> Humans have many clear instrumental goals like "remaining in effective control of the AI systems I deploy," "acquiring resources and other influence in the world," or "better understanding the world and what I want." A value learner may be able to learn robust preferences like these and pursue those instrumental goals using all of its ingenuity.

In general, I will prefer plans that maximize my autonomy, so I could consider autonomy-maximization to be one of my instrumental goals. This preference could explain my desire to study moral philosophy, even when this might cause my moral opinions to change (and therefore be bad according to my current object-level moral views). By caring about my autonomy, I can mostly preserve dynamic consistency even as my goals change over time.

More concretely, suppose the state SH in the human's ontology contains a field for "autonomy", indicating how much autonomy I have in this state. We would hope that state sequences in ΩAI in which the human has low autonomy get mapped to sequences of states in SH that have a low autonomy field. For example, state sequences in which the AI manipulates the human should be mapped to states with a low autonomy field. Of course, it would be imprudent to assume that proposal 1 will correctly do all this. "Human autonomy" seems to be a complex concept, so it would be difficult to learn.
To reduce confusion, it would be a good idea to create more explicit models of this instrumental preference for autonomy. This seems related to the [hard problem of corrigibility](http://arbital.com/pages/7597062253181095785): the human's desire for AIs to be corrigible is really a reflection of the human's preference for autonomy. This seems somewhat related to hierarchical planning, so maybe I will have better models of this preference after understanding hierarchical planning better. If a model like this works, then we can ground human values in something other than terminal goals: specifically, systems of instrumental goals at each time step that chain together in a tiling fashion, with each instrumental goal system trusting the next under normal circumstances. I think this is a promising alternative way to look at human values, though I still lack concrete models for this. Acting under uncertainty ------------------------ The system should have uncertainty about the correct values. In both proposals, the human utility function is underdetermined by the data. In proposal 1, the human ontology is underdetermined by the data, and additionally any uncertainty about the correct ontology mapping method propagates into uncertainty about the correct utility function. Under uncertainty about the correct utility function, it is not straightforward to simply maximize expected utility. This is because the ["loudness"](https://intelligence.org/files/LoudnessPriors.pdf) of different possible preferences matters. Given this, there are 2 clear ways to act under uncertainty: 1. The system can use a voting system to select actions, with each possible human utility function gaining votes proportional to its posterior probability. Unfortunately, this leads to undesirable results when the majority of the posterior probability mass is on the wrong preferences. Roughly, we should only expect this to work when the posterior distribution over preferences is "centered around" an acceptable preference to optimize. 2. The system can use [minimax](https://agentfoundations.org/item?id=186) to select a policy that does decently according to all possible utility functions. In particular, the policy should be at least as good as shutting down according to all possible utility functions. This method of handling uncertainty has problems; see the "Combining minimax with value learning" section for details. I think it's plausible that some variant of minimax works for conservatively optimizing values under uncertainty, so more research in this area could be useful. The necessity of overpowered ontology mapping --------------------------------------------- I claim that, for proposal 1 to work, the ontology mapper needs to be *very* powerful and reliable. This is because: 1. It needs to correctly map abstract concepts. For example, state sequences in ΩAI in which humans have lost autonomy should get mapped to state sequences in ΩH that have the "autonomy" field set to a low number. This seems far less straightforward than, say, recognizing diamonds in an ontology. This is made even more difficult by the fact that some important human concepts (including autonomy) are value-laden and might not correspond to useful predictive concepts. 2. Since the AI is optimizing over state sequences in ΩAI, the ontology mapper must work correctly across nearly all state sequences in ΩAI. 
Even if there is just one state sequence in Ω_AI that humans would consider bad upon reflection, but which gets mapped to a good-looking state sequence in Ω_H, this may be sufficient for the AI to select a plan leading to this state sequence. These problems make me quite pessimistic about this proposal. More research into ontology identification might yield insights about just how hard these problems are to solve.

Human understanding of plans
----------------------------

Suppose the AI has created a plan π. Humans could examine this plan by seeing what state sequences in Ω_H result from it (assuming humans understand the Ω_H ontology). There are 2 obvious ways to do this:

1. Use the human ontology Ω_H to predict the state sequence resulting from π. This may fail to predict important consequences of the AI's plan. For example, if the AI used nanotechnology to solve some problem, and Ω_H does not predict this nanotechnology to do anything, then it will predict that the AI's plan will not do much.
2. Use the AI's ontology Ω_AI to predict the state sequence resulting from π, and then map this state sequence back to Ω_H using ontology mapping. This will likely predict the consequences of π more accurately than Ω_H does.

Possibly, this could help to catch errors that result when the AI accurately infers Ω_H and maps between the ontologies correctly, but incorrectly infers U_H. This does not seem like the most likely form of failure to me; errors in ontology mapping seem more likely.

Conclusion
----------

I don't think any of the models I have described will do anything useful with superhuman intelligence. The most potentially powerful models require essentially solving cognitive science (to get the behavioral model) and creating an overpowered ontology mapper. Still, I have identified concrete areas for further research, which might turn up results useful for both value-learning sovereigns and other agents. Specifically, further research into value-learning sovereigns could look at:

1. Clarifying what the instrumental preference for autonomy looks like. I would like to see a concrete example (either in the mathematical framework described in this post, or a different mathematical framework) of an AI representing (and perhaps also learning) the human's instrumental preference for autonomy.
2. Developing a better understanding of ontology identification. I think that framing ontology identification as mapping states between ontologies (as in the paper on [ontological crises](https://intelligence.org/files/OntologicalCrises.pdf)) has some theoretical problems, which I hope to discuss in a future post.
3. Looking more closely at the spectrum between mimicking humans and learning and optimizing for humans' terminal goals. Many of the proposals in this post fall somewhere in the middle of these two possibilities, but I don't think I have exhausted all the options.
4. Studying ways of conservatively maximizing under uncertainty about the right values, similar to minimax.
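On point 4, a toy contrast between the two decision rules from the "Acting under uncertainty" section may be a useful starting place. The candidate utility functions and posterior probabilities below are invented, and the scenario is the bad one for voting: most of the posterior mass is on a mis-inferred preference.

```python
# Two candidate human utility functions over actions, with posterior
# probabilities. Suppose the 0.6-probability candidate is in fact wrong.
actions = ["optimize_proxy", "cautious_help", "shutdown"]
candidates = [
    (0.6, {"optimize_proxy": 10.0, "cautious_help": 3.0, "shutdown": 0.0}),
    (0.4, {"optimize_proxy": -50.0, "cautious_help": 2.0, "shutdown": 0.0}),
]

def vote_winner():
    """Rule 1: each candidate utility function votes for its favorite
    action, weighted by its posterior probability."""
    tally = {a: 0.0 for a in actions}
    for p, u in candidates:
        tally[max(actions, key=u.get)] += p
    return max(actions, key=tally.get)

def minimax_winner():
    """Rule 2: pick the action whose worst-case margin over shutting down,
    across all candidate utility functions, is largest."""
    def worst_margin(a):
        return min(u[a] - u["shutdown"] for _, u in candidates)
    return max(actions, key=worst_margin)

print(vote_winner())     # optimize_proxy: catastrophic if the minority candidate is right
print(minimax_winner())  # cautious_help: at least as good as shutdown under both
```

This reproduces the pattern described above: voting fails when the posterior is centered on the wrong preference, while the minimax policy remains at least as good as shutdown under every candidate.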
You are your information system
-------------------------------

what makes you, you? we tend to intuitively think of a person as their entire body, somehow including limbs and organs but not clothing or food. yet, if you close your eyes, and then i swap your arm with someone else's, when you wake up you will still be the same person, just with a new arm. in fact, i'd argue i could replace everything except for the nervous system (including the brain) and when you open your eyes again you would notice that your entire body has changed but your thoughts and memories have remained the same — rather than, for example, still having the same body but different thoughts and memories. are you the matter that makes up that nervous system? i could probably replace neurons and synapses one at a time and you would continue to be the same person. is it the electric signals then? i could probably put on some synapses a device that absorbs electric signals and then sends out identical but "different" signals and you would still be the same person. in fact, it doesn't really make sense to ask "which matter" makes up your nervous system: under quantum physics, everything is changing and particles are merely [values in an omnipresent field](https://www.youtube.com/watch?v=MmG2ah5Df4g) rather than solid objects. ultimately, what you are, is *the information system* which your nervous system (including your brain) runs. standing still, walking forwards, teleporting yourself, and being uploaded into a sufficiently powerful computer, all preserve your personhood in the exact same way; there is nothing special about the meat that currently runs your mind. *despite everything, it's still you.*
[SEQ RERUN] Universal Law
Today's post, Universal Law, was originally published on April 29, 2007. A summary (from the LW wiki):

> In our everyday lives, we are accustomed to rules with exceptions, but the basic laws of the universe apply everywhere without exception. Apparent violations exist only in our models, not in reality.

Discuss the post here (rather than in the comments of the original post). This post is part of a series rerunning Eliezer Yudkowsky's old posts so those interested can (re-)read and discuss them. The previous post was Universal Fire, and you can use the sequence_reruns tag or rss feed to follow the rest of the series. Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it, posting the next day's sequence reruns post, summarizing forthcoming articles on the wiki, or creating exercises. Go here for more details, or to discuss the Sequence Reruns.
A "Holy Grail" Humor Theory in One Page. Alrighty, with the mass downvoters gone, I can make the leap to posting some ideas. Here's the Humor Theory I've been developing over the last few months and have discussed at Meet-Ups, and have written two SSRN papers about, in one page. I've taken the document I posted on the Facebook group and retyped and formatted it here. I strongly suspect that it's the correct solution to this unsolved problem. There was even a new neurology study released in the last few days that confirms one of the predictions I drew from this theory about the evolution of human intelligence. Note that I tried to fit as much info as I could on the page, but obviously it's not enough space to cover everything, and the other papers are devoted to that. Any constructive questions, discussion etc are welcome. ----------------------------------------   A "Holy Grail" Humor Theory in One Page. Plato, Aristotle, Kant, Freud, and hundreds of other philosophers have tried to understand humor. No one has ever found a single idea that explains it in all its forms, or shows what's sufficient to create it. Thus, it's been called a "Holy Grail" of social science. Consider this... In small groups without language, where we evolved, social orders were needed for efficiency. But fighting for leadership would hurt them. So a peaceful, nonverbal method was extremely beneficial. Thus, the "gasp" we make when seeing someone fall evolved into a rapid-fire version at seeing certain failures, which allowed us to signal others to see what happened, and know who not to follow. The reaction, naturally, would feel good and make us smile, to lower our aggression and show no threat. This reaction is called laughter. The instinct that controls it is called humor. It's triggered by the brain weighing things it observes in the proportion: Humor = ((Qualityexpected - Qualitydisplayed) * Noticeability * Validity) / Anxiety   Or H=((Qe-Qd)NV)/A. When the results of this ratio are greater than 0, we find t
What to do with imitation humans, other than asking them what the right thing to do is? This question is about whether you have clever ideas about how to use AI imitations of humans for AI safety. The two main ideas I'm familiar with only seem to interface with these imitations as if they're humans. * The most obvious thing one might do with a good predictor of a human is just to write software that queries the imitation human about what the right thing to do is, and then does it. * The less obvious thing to do is to try and amplify it - e.g. use teams of them working together to try to choose good actions. Or maybe even an IDA loop - use your learner that learned to imitate a human, and train it to imitate the teams working together. Then make teams of teams, etc. But can we use human imitations to increase the effectiveness of value learning in a way other than amplification/distillation? For example, is there some way of leveraging queries to human imitations to train a non-human AI that has a human-understandable way of thinking about the world? Keep in mind the challenge that these are only imitation humans, not oracles for the best thing to do, and not even actual humans. So we can't give them problems that are too weird, or heavily optimized by interaction with the imitation humans, because they'll go off-distribution. Another possible avenue is ways to "look inside" the imitation humans. One analogy would be how if you have an image-generating GAN, you can increase the number of trees in your image by finding the parameters associated with trees and then turning them up. Can you do the same thing with human-imitating GAN, but turning up "act morally" or "be smart?"
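The GAN trick in that last paragraph can be sketched concretely. The snippet below uses a stand-in linear "generator" and crude synthetic labels purely for illustration; with a real GAN you would substitute the trained network and genuine attribute labels:

```python
import numpy as np

rng = np.random.default_rng(0)
LATENT_DIM = 64

# Stand-in "generator": any map from latent vectors to outputs works here.
W = rng.normal(size=(8, LATENT_DIM))
def generate(z: np.ndarray) -> np.ndarray:
    return W @ z

# Classic recipe: estimate an attribute direction as the difference of the
# mean latents of samples with and without the attribute. The labels here
# are crudely synthesized; in a real GAN they would come from a classifier
# or human annotation.
z_with = rng.normal(size=(100, LATENT_DIM)) + 0.5
z_without = rng.normal(size=(100, LATENT_DIM))
direction = z_with.mean(axis=0) - z_without.mean(axis=0)
direction /= np.linalg.norm(direction)

# "Turning up" the attribute: push a latent sample along the direction.
z = rng.normal(size=LATENT_DIM)
for alpha in (0.0, 1.0, 3.0):
    print(alpha, float(generate(z + alpha * direction).mean()))  # output drifts with alpha
```

Whether a human-imitator's latent space contains anything as clean as an "act morally" direction is exactly the open question here.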
Dream, Truth, & Good One way in which I think current AI models are sloppy is that LLMs are trained in a way that messily merges the following "layers": * The "dream machine" layer: LLMs are pre-trained on lots of slop from the internet, which creates an excellent "prior". * The "truth machine": LLMs are trained to "reduce hallucinations" in a variety of ways, including RLHF and the more recent reasoning RL. * The "good machine": The same RLHF and reasoning RL training also aims to train good outputs (eg helpful, honest, harmless).  I've quoted Andrej Karpathy before, but I'll do it again:  > I always struggle a bit with I'm asked about the "hallucination problem" in LLMs. Because, in some sense, hallucination is all LLMs do. They are dream machines. > [...] > I know I'm being super pedantic but the LLM has no "hallucination problem". Hallucination is not a bug, it is LLM's greatest feature. The LLM Assistant has a hallucination problem, and we should fix it. > > - Andrej Karpathy Failing to properly distinguish the "dream machine" capabilities from other (truth-oriented or good-oriented) capabilities hobbles today's LLMs by mixing these things together. If you ask Claude to write fiction, there's a high tendency to mix in the "Claude voice" with the fiction being generated. More generally, the base model (IE, only the generative pre-training) is great at extrapolating text; the subsequent training hobbles this capability, because care is not taken to preserve it. Habryka mentions this with respect to experiments with LLM-augmented text editing: > Using base models has at least so far been essential for getting any useful writing work out of LLMs, with the instruction-tuned models reliably producing obtuse corpo-speak when asked to engage in writing tasks.  I expect that mixing truth-orientation with good-orientation has similar problematic consequences.  A Modest Proposal Dream Machine Layer My basic idea here is not new: instead of pre-training on lots and lots of text
Historiographical Compressions: Renaissance as An Example
I've been reading Ada Palmer's great "Inventing The Renaissance", and it sparked a line of thinking about how to properly reveal hidden complexity. As the name suggests, Palmer's book explores how the historical period we call the Renaissance has been constructed by historians, nation-states, and the general public. Not in the sense that there is nothing distinctive or interesting in this (hard to pin down) span of history, but because the compressions that have most currency in people's heads are retroactive projections of what was considered good or important or even decadent about the time when the histories were written. There's a lot of fascinating historical details in the book, but what I want to single out is how Palmer goes about her deconstruction of histories of the Renaissance. You see, a really big point in my model of epistemology and methodology is that humans are not that smart. We can't understand and remember massively complex models of everything because our capacities are limited. In practice, this means compression is not an option, it's a necessity. We always compress everything, all the time — the only choice is the relative degrees of compression of different things. The fact that I care more about my wife than my banker manifests itself in my having a much less compressed model of my wife (though still throwing out a lot of details). So when some extremely brilliant and knowledgeable expert like Ada Palmer comes and decompresses your existing simplified models of, say, the Renaissance, there is a really common failure mode: that you, the reader, end up automatically compressing back, following various easy heuristics:

* Ignore what you just read
  * That is, compress back to exactly what you started, maybe with the additional gear that this particular author is full of shit.
* Swing to the opposite compression
  * You thought that Lorenzo the Magnificent was a hero, and this new book/video/article explains how he's clearly not? Now
Joint Distributions and the Slow Spread of Good Ideas A few years ago a well-known economist named David Romer published a paper in a top economics journal* arguing that professional football teams don't "go for it" nearly often enough on fourth down. The question, of course, is how this can persist in equilibrium. If Romer is correct, wouldn't teams have a strong incentive to change their strategies? Of course it's possible that he is correct, but that no one ever knew it before the paper was published. But then would the fact that the recommendation has not been widely adopted** constitute strong evidence that he is not correct? The paper points out two possible reasons why not. First, the objective function of the decision-makers may not be to maximize the probability of winning the game. Second and more relevant for our purposes, there may be some biases at work. The key point is this quote from the article (page 362): > "Many skills are more important to running a football team than a command of mathematical and statistical tools. And it would hardly be obvious to someone without knowledge of those tools that they could have any significant value in football." Romer's point is that what's relevant is the joint distribution of attributes in the pool of potential football coaches (or other decision-makers). Even in something like professional football where there is a very strong incentive to get better results, it may take a long time for coaches who are willing/able to adopt a good new idea to out-compete and displace those who continue to use the bad old idea if there are very few potential coaches who both have the conventional talents and understand the new idea. I think Romer is right about this, and his point is the main take-away point of this post. But I don't think the main "joint distribution" problem is a paucity of would-be coaches who understand both conventional football stuff and math: math talent can be hired to work under a head coach who doesn't understand it, just like medical talent is. Rathe
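Romer's joint-distribution point is easy to quantify with a toy simulation; the thresholds, and the assumption that the two attributes are independent, are invented for illustration:

```python
import random

rng = random.Random(0)
POOL = 1_000_000  # hypothetical pool of would-be coaches

top_conventional = top_both = 0
for _ in range(POOL):
    coaching = rng.random()    # conventional coaching attributes
    stats = rng.random()       # command of mathematical and statistical tools
    if coaching > 0.99:        # top 1% on the conventional axis
        top_conventional += 1
        if stats > 0.90:       # also top 10% on the statistical axis
            top_both += 1

print(top_conventional, top_both)  # roughly 10,000 versus roughly 1,000
```

Under independence, only about a tenth of the candidates who clear the conventional bar also clear the statistical one, and that thin slice is what has to out-compete everyone else.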
REPL's: a type signature for agents Read-eval-print loops are used to interact with programming languages as well as in the shell to interact with your computer. I claim that they are a reasonable starting point for reasoning about agents. This is a framing that all but wrote my ELK proposal for me, and has been paying rent in other research domains as well. Let S be the set of world states, and assume an environment transition function T : S × Aₓ → S, where Aₓ is a set of actions. We define an agent X to be a 3-tuple of functions Readₓ, Evalₓ, and Printₓ, where:    • Readₓ is a surjective "abstraction" function of type S → Oₓ, where Oₓ is a type for X's abstracted observations.    • Evalₓ is a prediction function of type Sₓ × Oₓ → Sₓ, where Sₓ represents the type of X's internal knowledge of the environment.    • Printₓ is a function from Sₓ to Aₓ, where Aₓ denotes X's actions on the environment. If there are multiple agents, we assume that an action in Aₓ contains one action from each agent. The environment and agent start in initial states S⁰ and Sₓ⁰, respectively. At each time-step, the agent observes the current state of the environment with its sensors, updates its worldview using its prediction function, produces an action, and then the universe uses that action to produce a next universe state. This process is depicted in the figure below. *[figure: the read-eval-print cycle between agent and environment]* Notice that in the above diagram, the transition functions of the agent and the environment look similar. In fact they are dual, and we can show this by considering the agent's Readₓ to be the environment's Print, and the agent's observation type Oₓ to be the environment's action type A. Env       := { Print  : S → A,   Eval  : S × Aₓ → S } Agentₓ := { Printₓ : Sₓ → Aₓ, Evalₓ : Sₓ × A → Sₓ } *[figure: the dual Print/Eval type signatures of agent and environment]* Both the agent and environment start in a given initial state, pick an action based on that state, feed that action to each other's transition functions, and transition to the next state. This interaction can be drawn as a game tree where the agent and the environment are selecting which of each other's next branches to follow. *[figure: the agent-environment interaction drawn as a game tree]* The agent and environment run in lockstep, each round simultaneously printing and thereby choosing their partner's next branch. If you want to think of the environment as giving a reward, that reward can be a function of your whole action history, which put you in a particular branch of the physics game tree. There is much more to say about this setup. I may add future posts on how this leads to an ELK proposal, how it helps bring to light common pitfalls in others' ELK proposals that I have seen so far, a similar setup in the embedded agency setting, details about nested compositional REPL's, the connection to polynomial functors and coinductive datatypes, and maybe even a diagrammatic REPL programming language. Please let me know which if any you are interested in and I can post accordingly. Huge thanks to David Spivak for helping me with these ideas, as well as Gabriel Poesia and John Wentworth.
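The lockstep loop is short enough to write out as running code. This sketch invents concrete types and a toy instance for illustration; the post itself stays at the type level:

```python
from dataclasses import dataclass
from typing import Callable, Generic, TypeVar

S = TypeVar("S")    # environment states
Sx = TypeVar("Sx")  # the agent's internal states S_x
O = TypeVar("O")    # observations O_x
A = TypeVar("A")    # actions A_x

@dataclass
class Agent(Generic[S, Sx, O, A]):
    read: Callable[[S], O]       # Read_x : S -> O_x
    eval: Callable[[Sx, O], Sx]  # Eval_x : S_x x O_x -> S_x
    print_: Callable[[Sx], A]    # Print_x : S_x -> A_x (underscore avoids the builtin)

def run(env_eval, s0, agent, sx0, steps):
    """Run agent and environment in lockstep, as in the game tree above."""
    s, sx = s0, sx0
    for _ in range(steps):
        o = agent.read(s)       # read: abstract the current world state
        sx = agent.eval(sx, o)  # eval: update internal knowledge
        a = agent.print_(sx)    # print: choose the environment's next branch
        s = env_eval(s, a)      # the environment's Eval : S x A_x -> S
    return s, sx

# Toy instance: the state is an int, and the agent nudges it toward a target.
TARGET = 10
thermostat = Agent(
    read=lambda s: s,        # fully observed, for simplicity
    eval=lambda sx, o: o,    # belief is just the last observation
    print_=lambda sx: 1 if sx < TARGET else -1,
)
print(run(lambda s, a: s + a, 0, thermostat, 0, steps=25))  # settles near (9, 10)
```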
Opportunities that surprised us during our Clearer Thinking Regrants program This post was written by a subset of Clearer Thinking team members, and not all of the team members involved with the regranting necessarily agree with everything said here.  Update: the same week that we posted this, we were devastated to learn of the events surrounding FTX and to learn that the Future Fund team resigned. We slightly updated the conclusion, but we have not changed the body of the post (because the focus of it was to share information that we learned during the regranting program, and we still think it serves that purpose). Our thoughts are with the many customers and others who have been affected by the devastating events at FTX.   As part of the Clearer Thinking Regrants program, we evaluated over 630 project proposals and ended up with 37 finalists. In the many hours that we spent evaluating these finalists, we learned some things that surprised us and saw many potential opportunities to help the world that we hadn’t considered before. Of course, if you’re an expert in any of these areas, what surprised us non-experts may not surprise you. Our aim in this post is to share object-level information that we did not know before the regranting program and that we think readers may find interesting or useful. In several cases, we are sharing the fact that specific organizations or projects have significantly more room for funding than we would have guessed, even after accounting for the outcomes of our regrants program.[1] We hope that readers find this information useful. By highlighting some organizations that have room for more funding, we hope that this will lead to impactful giving opportunities.[2]   Summary:  1. We were surprised that there hasn't been more work to quantify the risks from large-magnitude volcanic eruptions (considering the impact that such eruptions could have). 2. We were surprised that the Rethink Priorities Surveys Team has significant room for funding for their own project ideas (since most of their work is research/c
Scientifically optimizing education: Hard problem, or solved problem? Introducing the Theory of Direct Instruction Re-edited to remove/integrate much of the added notes - Sep 5th: This is a long post, and it was a first attempt to simply start trying to explain the whole topic, and see what kind of mistakes I made in the communication. I did indeed make many mistakes, and started to feel that I should ask people not to read this original attempt at first, and so posted added notes to the beginning to say so and try to clear up the worst confusions. But now that my audience is starting to close the inferential gap themselves thanks to amazingly wonderful people like Misha with "What Direct Instruction Is", I think important points that I tried to express in this original foray might start to become more transparent to that audience. It's still a very long post, with lots of new terminology, and, as Alicorn said, "sales-y enthusiasm". If you do read it, I must ask that you please don't skim, giving me the benefit of the doubt that anything confusing or nonsensical seeming might actually be something that's important and meaningful in some non-obvious way that you do not yet understand, and that some of the "sales-y enthusiasm" and "applause lights" may have been intended to serve some useful purpose. Again, please don't skim (although it is completely my fault if you feel like skimming!), because I just don't know how to do any better until I get more feedback on how the complete whole of what I wrote is understood. If you do start skimming, and give up, just tell me where you did so. [The "added notes" from the first edit I've removed, and will go through at a later time to extract anything that was original and useful and integrate it into the post itself or whatever.] Thank you. Begin original:     In this post, I'm going to introduce Direct Instruction, or DI (pronounced Dee-Eye, capital D, capital I, accept no imitations). DI is essentially the theory of how to find the best way to teach anything to anyone. And I mean a theory in the true scientific sense: par
The Mask Comes Off: A Trio of Tales This post covers three recent shenanigans involving OpenAI. In each of them, OpenAI or Sam Altman attempt to hide the central thing going on. First, in Three Observations, Sam Altman’s essay pitches our glorious AI future while attempting to pretend the downsides and dangers don’t exist in some places, and in others admitting we’re not going to like those downsides and dangers but he’s not about to let that stop him. He’s going to transform the world whether we like it or not. Second, we have Frog and Toad, or There Is No Plan, where OpenAI reveals that its plan for ensuring AIs complement humans rather than AIs substituting for humans is to treat this as a ‘design choice.’ They can simply not design AIs that will be substitutes. Except of course this is Obvious Nonsense in context, with all the talk of remote workers, and also how every company and lab will rush to do the substituting because that’s where the money will be. OpenAI couldn’t follow this path even if it wanted to do so, not without international coordination. Which I’d be all for doing, but then you have to actually call for that. Third, A Trade Offer Has Arrived. Sam Altman was planning to buy off the OpenAI nonprofit for about $40 billion, even as the for-profit’s valuation surged to $260 billion. Elon Musk has now offered $97 billion for the non-profit, on a completely insane platform of returning OpenAI to a focus on open models. I don’t actually believe him – do you see Grok’s weights running around the internet? – and obviously his bid is intended as a giant monkey wrench to try and up the price and stop the greatest theft in human history. There was also an emergency 80k hours podcast on that. TABLE OF CONTENTS 1. Three Observations. 2. Frog and Toad (or There Is No Plan). 3. A Trade Offer Has Arrived. THREE OBSERVATIONS Altman used to understand that creating things smarter than us was very different than other forms of technology. That it posed an existential risk to humanity. He
LW September WebDiplomacy Games prase is starting up a Prisoner's Dilemma Game Theory Lab which you ought to check out, and that revived (my) interest in playing Diplomacy with LWers. We've had two games run on this site, and one run on WebDiplomacy. Diplomacy is easy to learn and, at its heart, a very simple game. Strategy and (surprisingly) diplomacy take center stage; tactics cannot get you very far, and there is no randomness. Players control a great power chosen at random on the eve of WWI, with all powers having roughly the same army size and strength, and so the ability to prevail in conflicts comes almost entirely from creating the right alliances and knowing when to trust (and betray) others. (Also, did you catch the lie in this explanation? Diplomacy involves a lot of lies.) WebDiplomacy is a free site you can use to play Diplomacy with people online; make an account here. It also seems convenient for having multiple games going on or running repeated games; Yvain did a heroic job in organizing, administering, and updating the first game, but it's the sort of job that can be done with less personality by a computer. So, this post exists to help people interested in playing Diplomacy with LWers find other people to play with. The games spawned by this post will start around the beginning of September, and a typical game lasts somewhere around 20 turns (though many players are eliminated before then). If there's continuing interest, we'll probably have a thread like this once a month, so if you don't know if you can commit to being active during September, but want to make sure a November thread gets posted even if there's not much interest now, reply to the November comment below. A downside of Diplomacy is it really only works with 7 people, but we should be able to get at least one game going. It seems natural to break people up by game pace, as that represents significantly different time commitments. I'll be posting a number of comments with different game paces (i.e. times between